* [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default
@ 2016-10-19 4:11 Rasesh Mody
From: Rasesh Mody @ 2016-10-19 4:11 UTC
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Hi,
This patch set updates the base driver to work with the newer
FW 8.10.9.0, adds new features, includes enhancements and code cleanup,
provides bug fixes, and updates documentation for the QEDE poll mode
driver.
It enables the QEDE PMD in the DPDK config by default. The dependency on
the external libz library has been addressed.
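For reference, the default enablement amounts to flipping the PMD's
build flag in config/common_base (a one-line sketch; the option name
CONFIG_RTE_LIBRTE_QEDE_PMD is assumed from the existing config layout):

  CONFIG_RTE_LIBRTE_QEDE_PMD=y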
The patch set updates the QEDE PMD to 1.2.0.1.
Review comments received for v3 have been addressed.
Please apply to the DPDK tree for the v16.11 release.
Thanks!
Rasesh
Harish Patil (14):
net/qede: change signature of MCP command API
net/qede: serialize access to MFW mbox
net/qede: add NIC selftest and query sensor info support
net/qede: fix port (re)configuration issue
net/qede/base: allow MTU change via vport-update
net/qede: add missing 100G link speed capability
net/qede: remove unused/dead code
net/qede: fixes for VLAN filters
net/qede: add enable/disable VLAN filtering
net/qede: fix RSS related issues
net/qede/base: add support to initiate PF FLR
net/qede: skip slowpath polling for 100G VF device
net/qede: fix driver version string
net/qede: fix status block index for VF queues
Rasesh Mody (16):
net/qede/base: add new init files and rearrange the code
net/qede/base: formatting changes
net/qede: use FW CONFIG defines as needed
net/qede/base: add HSI changes and register defines
net/qede/base: add attention formatting string
net/qede/base: additional formatting/comment changes
net/qede: fix 32 bit compilation
net/qede/base: update base driver
net/qede/base: rename structure and defines
net/qede/base: comment enhancements
net/qede/base: add MFW crash dump support
net/qede/base: change Rx Tx queue start APIs
net/qede: add support for queue statistics
net/qede: remove zlib dependency and enable PMD by default
doc: update qede pmd documentation
net/qede: update driver version
Sony Chacko (2):
net/qede: enable support for unequal number of Rx/Tx queues
net/qede: add scatter gather support
config/common_base | 2 +-
doc/guides/nics/features/qede.ini | 4 +
doc/guides/nics/features/qede_vf.ini | 4 +
doc/guides/nics/qede.rst | 32 +-
drivers/net/qede/Makefile | 6 +-
drivers/net/qede/base/bcm_osal.c | 23 +
drivers/net/qede/base/bcm_osal.h | 10 +
drivers/net/qede/base/common_hsi.h | 956 ++++++++++-
drivers/net/qede/base/ecore.h | 631 +++----
drivers/net/qede/base/ecore_chain.h | 51 +-
drivers/net/qede/base/ecore_cxt.c | 387 ++++-
drivers/net/qede/base/ecore_cxt.h | 52 +-
drivers/net/qede/base/ecore_cxt_api.h | 25 +-
drivers/net/qede/base/ecore_dcbx.c | 589 ++++++-
drivers/net/qede/base/ecore_dcbx.h | 18 +-
drivers/net/qede/base/ecore_dcbx_api.h | 154 +-
drivers/net/qede/base/ecore_dev.c | 1813 +++++++++++++-------
drivers/net/qede/base/ecore_dev_api.h | 238 ++-
drivers/net/qede/base/ecore_gtt_reg_addr.h | 30 +-
drivers/net/qede/base/ecore_gtt_values.h | 20 +-
drivers/net/qede/base/ecore_hsi_common.h | 1358 +++++++++------
drivers/net/qede/base/ecore_hsi_debug_tools.h | 1025 ++++++++++++
drivers/net/qede/base/ecore_hsi_eth.h | 997 ++++++++---
drivers/net/qede/base/ecore_hsi_init_func.h | 132 ++
drivers/net/qede/base/ecore_hsi_init_tool.h | 454 +++++
drivers/net/qede/base/ecore_hsi_tools.h | 1081 ------------
drivers/net/qede/base/ecore_hw.c | 222 ++-
drivers/net/qede/base/ecore_hw.h | 75 +-
drivers/net/qede/base/ecore_hw_defs.h | 39 +-
drivers/net/qede/base/ecore_init_fw_funcs.c | 400 +++--
drivers/net/qede/base/ecore_init_fw_funcs.h | 250 ++-
drivers/net/qede/base/ecore_init_ops.c | 11 +-
drivers/net/qede/base/ecore_init_ops.h | 14 +-
drivers/net/qede/base/ecore_int.c | 446 +++--
drivers/net/qede/base/ecore_int.h | 23 +-
drivers/net/qede/base/ecore_int_api.h | 11 +
drivers/net/qede/base/ecore_iov_api.h | 519 ++----
drivers/net/qede/base/ecore_iro.h | 234 ++-
drivers/net/qede/base/ecore_iro_values.h | 140 +-
drivers/net/qede/base/ecore_l2.c | 531 +++---
drivers/net/qede/base/ecore_l2.h | 85 +-
drivers/net/qede/base/ecore_l2_api.h | 167 +-
drivers/net/qede/base/ecore_mcp.c | 881 ++++++++--
drivers/net/qede/base/ecore_mcp.h | 141 +-
drivers/net/qede/base/ecore_mcp_api.h | 220 ++-
drivers/net/qede/base/ecore_proto_if.h | 63 +-
drivers/net/qede/base/ecore_rt_defs.h | 869 +++++-----
drivers/net/qede/base/ecore_sp_api.h | 15 +-
drivers/net/qede/base/ecore_sp_commands.c | 99 +-
drivers/net/qede/base/ecore_sp_commands.h | 38 +-
drivers/net/qede/base/ecore_spq.c | 237 +--
drivers/net/qede/base/ecore_spq.h | 162 +-
drivers/net/qede/base/ecore_sriov.c | 1826 +++++++++++++--------
drivers/net/qede/base/ecore_sriov.h | 247 +--
drivers/net/qede/base/ecore_status.h | 18 +-
drivers/net/qede/base/ecore_vf.c | 759 +++++----
drivers/net/qede/base/ecore_vf.h | 258 +--
drivers/net/qede/base/ecore_vf_api.h | 100 +-
drivers/net/qede/base/ecore_vfpf_if.h | 436 +++--
drivers/net/qede/base/eth_common.h | 439 +++--
drivers/net/qede/base/mcp_public.h | 825 +++++++---
drivers/net/qede/base/nvm_cfg.h | 2183 +++++++++++++++----------
drivers/net/qede/base/reg_addr.h | 36 +
drivers/net/qede/qede_eth_if.c | 75 +-
drivers/net/qede/qede_eth_if.h | 16 +-
drivers/net/qede/qede_ethdev.c | 487 ++++--
drivers/net/qede/qede_ethdev.h | 83 +-
drivers/net/qede/qede_if.h | 12 +-
drivers/net/qede/qede_main.c | 84 +-
drivers/net/qede/qede_rxtx.c | 763 +++++----
drivers/net/qede/qede_rxtx.h | 25 +-
mk/rte.app.mk | 2 +-
72 files changed, 15642 insertions(+), 9016 deletions(-)
create mode 100644 drivers/net/qede/base/ecore_hsi_debug_tools.h
create mode 100644 drivers/net/qede/base/ecore_hsi_init_func.h
create mode 100644 drivers/net/qede/base/ecore_hsi_init_tool.h
delete mode 100644 drivers/net/qede/base/ecore_hsi_tools.h
--
1.8.3.1
* [dpdk-dev] [PATCH v4 01/32] net/qede/base: add new init files and rearrange the code
@ 2016-10-19 4:11 Rasesh Mody
From: Rasesh Mody @ 2016-10-19 4:11 UTC
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Added the new ecore_hsi_debug_tools.h, ecore_hsi_init_func.h and
ecore_hsi_init_tool.h files. Rearranged code from ecore_hsi_common.h and
ecore_hsi_tools.h into the new files. Removed unused code.
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
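Reviewer note (not part of the commit message): the new headers keep the
usual HSI convention of packing fields into little-endian words with
*_MASK/*_SHIFT pairs. A minimal sketch of reading such a field, assuming
the GET_FIELD helper and the OSAL_LE16_TO_CPU conversion already
provided by the base driver (the struct and its VAL field come from the
new ecore_hsi_debug_tools.h below):

  /* sketch only -- 'mapping' is a hypothetical pointer to a
   * struct dbg_attn_bit_mapping
   */
  u16 raw = OSAL_LE16_TO_CPU(mapping->data);
  u16 val = GET_FIELD(raw, DBG_ATTN_BIT_MAPPING_VAL);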
drivers/net/qede/base/ecore.h | 17 +-
drivers/net/qede/base/ecore_dev.c | 73 +-
drivers/net/qede/base/ecore_hsi_common.h | 226 ------
drivers/net/qede/base/ecore_hsi_debug_tools.h | 1025 +++++++++++++++++++++++
drivers/net/qede/base/ecore_hsi_init_func.h | 132 +++
drivers/net/qede/base/ecore_hsi_init_tool.h | 454 +++++++++++
drivers/net/qede/base/ecore_hsi_tools.h | 1081 -------------------------
drivers/net/qede/base/ecore_init_fw_funcs.c | 67 +-
drivers/net/qede/base/ecore_init_ops.c | 2 +-
drivers/net/qede/base/ecore_int.c | 141 +---
10 files changed, 1678 insertions(+), 1540 deletions(-)
create mode 100644 drivers/net/qede/base/ecore_hsi_debug_tools.h
create mode 100644 drivers/net/qede/base/ecore_hsi_init_func.h
create mode 100644 drivers/net/qede/base/ecore_hsi_init_tool.h
delete mode 100644 drivers/net/qede/base/ecore_hsi_tools.h
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index d682a78..db72f03 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -10,7 +10,9 @@
#define __ECORE_H
#include "ecore_hsi_common.h"
-#include "ecore_hsi_tools.h"
+#include "ecore_hsi_debug_tools.h"
+#include "ecore_hsi_init_func.h"
+#include "ecore_hsi_init_tool.h"
#include "ecore_proto_if.h"
#include "mcp_public.h"
@@ -556,14 +558,15 @@ struct ecore_dev {
#define ECORE_DEV_TYPE_AH (1 << 0)
/* Translate type/revision combo into the proper conditions */
#define ECORE_IS_BB(dev) ((dev)->type == ECORE_DEV_TYPE_BB)
-#define ECORE_IS_BB_A0(dev) (ECORE_IS_BB(dev) && \
- CHIP_REV_IS_A0(dev))
-#define ECORE_IS_BB_B0(dev) (ECORE_IS_BB(dev) && \
- CHIP_REV_IS_B0(dev))
+#define ECORE_IS_BB_A0(dev) (ECORE_IS_BB(dev) && CHIP_REV_IS_A0(dev))
+#ifndef ASIC_ONLY
+#define ECORE_IS_BB_B0(dev) ((ECORE_IS_BB(dev) && CHIP_REV_IS_B0(dev)) || \
+ (CHIP_REV_IS_TEDIBEAR(dev)))
+#else
+#define ECORE_IS_BB_B0(dev) (ECORE_IS_BB(dev) && CHIP_REV_IS_B0(dev))
+#endif
#define ECORE_IS_AH(dev) ((dev)->type == ECORE_DEV_TYPE_AH)
#define ECORE_IS_K2(dev) ECORE_IS_AH(dev)
-#define ECORE_GET_TYPE(dev) (ECORE_IS_BB_A0(dev) ? CHIP_BB_A0 : \
- ECORE_IS_BB_B0(dev) ? CHIP_BB_B0 : CHIP_K2)
u16 vendor_id;
u16 device_id;
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 0a68969..89faa35 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -281,13 +281,6 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
for (i = 0; i < num_ports; i++) {
p_qm_port = &qm_info->qm_port_params[i];
p_qm_port->active = 1;
- /* @@@TMP - was NUM_OF_PHYS_TCS; Changed until dcbx will
- * be in place
- */
- if (num_ports == 4)
- p_qm_port->num_active_phys_tcs = 2;
- else
- p_qm_port->num_active_phys_tcs = 5;
p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
}
@@ -599,19 +592,15 @@ static void ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
{
int hw_mode = 0;
- switch (ECORE_GET_TYPE(p_hwfn->p_dev)) {
- case CHIP_BB_A0:
+ if (ECORE_IS_BB_A0(p_hwfn->p_dev)) {
hw_mode |= 1 << MODE_BB_A0;
- break;
- case CHIP_BB_B0:
+ } else if (ECORE_IS_BB_B0(p_hwfn->p_dev)) {
hw_mode |= 1 << MODE_BB_B0;
- break;
- case CHIP_K2:
+ } else if (ECORE_IS_AH(p_hwfn->p_dev)) {
hw_mode |= 1 << MODE_K2;
- break;
- default:
- DP_NOTICE(p_hwfn, true, "Can't initialize chip ID %d\n",
- ECORE_GET_TYPE(p_hwfn->p_dev));
+ } else {
+ DP_NOTICE(p_hwfn, true, "Unknown chip type %#x\n",
+ p_hwfn->p_dev->type);
return;
}
@@ -690,37 +679,6 @@ static void ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
ecore_wr(p_hwfn, p_ptt, MISCS_REG_RESET_PL_HV_2, 0x3ffffff);
- /* initialize interrupt masks */
- for (i = 0;
- i <
- attn_blocks[BLOCK_MISCS].chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].
- num_of_int_regs; i++)
- ecore_wr(p_hwfn, p_ptt,
- attn_blocks[BLOCK_MISCS].
- chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].int_regs[i]->
- mask_addr, 0);
-
- if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev) || !ECORE_IS_AH(p_hwfn->p_dev))
- ecore_wr(p_hwfn, p_ptt,
- attn_blocks[BLOCK_CNIG].
- chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].int_regs[0]->
- mask_addr, 0);
- ecore_wr(p_hwfn, p_ptt,
- attn_blocks[BLOCK_PGLCS].
- chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].int_regs[0]->
- mask_addr, 0);
- ecore_wr(p_hwfn, p_ptt,
- attn_blocks[BLOCK_CPMU].
- chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].int_regs[0]->
- mask_addr, 0);
- /* Currently A0 and B0 interrupt bits are the same in pglue_b;
- * If this changes, need to set this according to chip type. <14/09/23>
- */
- ecore_wr(p_hwfn, p_ptt,
- attn_blocks[BLOCK_PGLUE_B].
- chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].int_regs[0]->
- mask_addr, 0x80000);
-
/* initialize port mode to 4x10G_E (10G with 4x10 SERDES) */
/* CNIG_REG_NW_PORT_MODE is same for A0 and B0 */
if (!CHIP_REV_IS_EMUL(p_hwfn->p_dev) || !ECORE_IS_AH(p_hwfn->p_dev))
@@ -1227,25 +1185,6 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
* &ctrl);
*/
-#ifndef ASIC_ONLY
- /*@@TMP - On B0 build 1, need to mask the datapath_registers parity */
- if (CHIP_REV_IS_EMUL_B0(p_hwfn->p_dev) &&
- (p_hwfn->p_dev->chip_metal == 1)) {
- u32 reg_addr, tmp;
-
- reg_addr =
- attn_blocks[BLOCK_PGLUE_B].
- chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].prty_regs[0]->
- mask_addr;
- DP_NOTICE(p_hwfn, false,
- "Masking datapath registers parity on"
- " B0 emulation [build 1]\n");
- tmp = ecore_rd(p_hwfn, p_ptt, reg_addr);
- tmp |= (1 << 0); /* Was PRTY_MASK_DATAPATH_REGISTERS */
- ecore_wr(p_hwfn, p_ptt, reg_addr, tmp);
- }
-#endif
-
rc = ecore_hw_init_pf_doorbell_bar(p_hwfn, p_ptt);
if (rc)
return rc;
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index e341b95..9cd55c4 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -1319,172 +1319,6 @@ struct atten_status_block {
__le32 reserved1;
};
-enum block_addr {
- GRCBASE_GRC = 0x50000,
- GRCBASE_MISCS = 0x9000,
- GRCBASE_MISC = 0x8000,
- GRCBASE_DBU = 0xa000,
- GRCBASE_PGLUE_B = 0x2a8000,
- GRCBASE_CNIG = 0x218000,
- GRCBASE_CPMU = 0x30000,
- GRCBASE_NCSI = 0x40000,
- GRCBASE_OPTE = 0x53000,
- GRCBASE_BMB = 0x540000,
- GRCBASE_PCIE = 0x54000,
- GRCBASE_MCP = 0xe00000,
- GRCBASE_MCP2 = 0x52000,
- GRCBASE_PSWHST = 0x2a0000,
- GRCBASE_PSWHST2 = 0x29e000,
- GRCBASE_PSWRD = 0x29c000,
- GRCBASE_PSWRD2 = 0x29d000,
- GRCBASE_PSWWR = 0x29a000,
- GRCBASE_PSWWR2 = 0x29b000,
- GRCBASE_PSWRQ = 0x280000,
- GRCBASE_PSWRQ2 = 0x240000,
- GRCBASE_PGLCS = 0x0,
- GRCBASE_DMAE = 0xc000,
- GRCBASE_PTU = 0x560000,
- GRCBASE_TCM = 0x1180000,
- GRCBASE_MCM = 0x1200000,
- GRCBASE_UCM = 0x1280000,
- GRCBASE_XCM = 0x1000000,
- GRCBASE_YCM = 0x1080000,
- GRCBASE_PCM = 0x1100000,
- GRCBASE_QM = 0x2f0000,
- GRCBASE_TM = 0x2c0000,
- GRCBASE_DORQ = 0x100000,
- GRCBASE_BRB = 0x340000,
- GRCBASE_SRC = 0x238000,
- GRCBASE_PRS = 0x1f0000,
- GRCBASE_TSDM = 0xfb0000,
- GRCBASE_MSDM = 0xfc0000,
- GRCBASE_USDM = 0xfd0000,
- GRCBASE_XSDM = 0xf80000,
- GRCBASE_YSDM = 0xf90000,
- GRCBASE_PSDM = 0xfa0000,
- GRCBASE_TSEM = 0x1700000,
- GRCBASE_MSEM = 0x1800000,
- GRCBASE_USEM = 0x1900000,
- GRCBASE_XSEM = 0x1400000,
- GRCBASE_YSEM = 0x1500000,
- GRCBASE_PSEM = 0x1600000,
- GRCBASE_RSS = 0x238800,
- GRCBASE_TMLD = 0x4d0000,
- GRCBASE_MULD = 0x4e0000,
- GRCBASE_YULD = 0x4c8000,
- GRCBASE_XYLD = 0x4c0000,
- GRCBASE_PRM = 0x230000,
- GRCBASE_PBF_PB1 = 0xda0000,
- GRCBASE_PBF_PB2 = 0xda4000,
- GRCBASE_RPB = 0x23c000,
- GRCBASE_BTB = 0xdb0000,
- GRCBASE_PBF = 0xd80000,
- GRCBASE_RDIF = 0x300000,
- GRCBASE_TDIF = 0x310000,
- GRCBASE_CDU = 0x580000,
- GRCBASE_CCFC = 0x2e0000,
- GRCBASE_TCFC = 0x2d0000,
- GRCBASE_IGU = 0x180000,
- GRCBASE_CAU = 0x1c0000,
- GRCBASE_UMAC = 0x51000,
- GRCBASE_XMAC = 0x210000,
- GRCBASE_DBG = 0x10000,
- GRCBASE_NIG = 0x500000,
- GRCBASE_WOL = 0x600000,
- GRCBASE_BMBN = 0x610000,
- GRCBASE_IPC = 0x20000,
- GRCBASE_NWM = 0x800000,
- GRCBASE_NWS = 0x700000,
- GRCBASE_MS = 0x6a0000,
- GRCBASE_PHY_PCIE = 0x620000,
- GRCBASE_MISC_AEU = 0x8000,
- GRCBASE_BAR0_MAP = 0x1c00000,
- MAX_BLOCK_ADDR
-};
-
-enum block_id {
- BLOCK_GRC,
- BLOCK_MISCS,
- BLOCK_MISC,
- BLOCK_DBU,
- BLOCK_PGLUE_B,
- BLOCK_CNIG,
- BLOCK_CPMU,
- BLOCK_NCSI,
- BLOCK_OPTE,
- BLOCK_BMB,
- BLOCK_PCIE,
- BLOCK_MCP,
- BLOCK_MCP2,
- BLOCK_PSWHST,
- BLOCK_PSWHST2,
- BLOCK_PSWRD,
- BLOCK_PSWRD2,
- BLOCK_PSWWR,
- BLOCK_PSWWR2,
- BLOCK_PSWRQ,
- BLOCK_PSWRQ2,
- BLOCK_PGLCS,
- BLOCK_DMAE,
- BLOCK_PTU,
- BLOCK_TCM,
- BLOCK_MCM,
- BLOCK_UCM,
- BLOCK_XCM,
- BLOCK_YCM,
- BLOCK_PCM,
- BLOCK_QM,
- BLOCK_TM,
- BLOCK_DORQ,
- BLOCK_BRB,
- BLOCK_SRC,
- BLOCK_PRS,
- BLOCK_TSDM,
- BLOCK_MSDM,
- BLOCK_USDM,
- BLOCK_XSDM,
- BLOCK_YSDM,
- BLOCK_PSDM,
- BLOCK_TSEM,
- BLOCK_MSEM,
- BLOCK_USEM,
- BLOCK_XSEM,
- BLOCK_YSEM,
- BLOCK_PSEM,
- BLOCK_RSS,
- BLOCK_TMLD,
- BLOCK_MULD,
- BLOCK_YULD,
- BLOCK_XYLD,
- BLOCK_PRM,
- BLOCK_PBF_PB1,
- BLOCK_PBF_PB2,
- BLOCK_RPB,
- BLOCK_BTB,
- BLOCK_PBF,
- BLOCK_RDIF,
- BLOCK_TDIF,
- BLOCK_CDU,
- BLOCK_CCFC,
- BLOCK_TCFC,
- BLOCK_IGU,
- BLOCK_CAU,
- BLOCK_UMAC,
- BLOCK_XMAC,
- BLOCK_DBG,
- BLOCK_NIG,
- BLOCK_WOL,
- BLOCK_BMBN,
- BLOCK_IPC,
- BLOCK_NWM,
- BLOCK_NWS,
- BLOCK_MS,
- BLOCK_PHY_PCIE,
- BLOCK_MISC_AEU,
- BLOCK_BAR0_MAP,
- MAX_BLOCK_ID
-};
-
/*
* Igu cleanup bit values to distinguish between clean or producer consumer
*/
@@ -1561,43 +1395,12 @@ struct dmae_cmd {
__le16 xsum8 /* checksum8 result */;
};
-struct fw_ver_num {
- u8 major /* Firmware major version number */;
- u8 minor /* Firmware minor version number */;
- u8 rev /* Firmware revision version number */;
- u8 eng /* Firmware engineering version number (for bootleg versions) */
- ;
-};
-
-struct fw_ver_info {
- __le16 tools_ver /* Tools version number */;
- u8 image_id /* FW image ID (e.g. main, l2b, kuku) */;
- u8 reserved1;
- struct fw_ver_num num /* FW version number */;
- __le32 timestamp /* FW Timestamp in unix time (sec. since 1970) */;
- __le32 reserved2;
-};
-
struct storm_ram_section {
__le16 offset
/* The offset of the section in the RAM (in 64 bit units) */;
__le16 size /* The size of the section (in 64 bit units) */;
};
-struct fw_info {
- struct fw_ver_info ver /* FW version information */;
- struct storm_ram_section fw_asserts_section
- /* The FW Asserts offset/size in Storm RAM */;
- __le32 reserved;
-};
-
-struct fw_info_location {
- __le32 grc_addr /* GRC address where the fw_info struct is located. */;
- __le32 size
- /* Size of the fw_info structure (thats located at the grc_addr). */
- ;
-};
-
/*
* IGU cleanup command
*/
@@ -1672,35 +1475,6 @@ struct igu_msix_vector {
#define IGU_MSIX_VECTOR_RESERVED1_SHIFT 24
};
-enum init_modes {
- MODE_BB_A0,
- MODE_BB_B0,
- MODE_K2,
- MODE_ASIC,
- MODE_EMUL_REDUCED,
- MODE_EMUL_FULL,
- MODE_FPGA,
- MODE_CHIPSIM,
- MODE_SF,
- MODE_MF_SD,
- MODE_MF_SI,
- MODE_PORTS_PER_ENG_1,
- MODE_PORTS_PER_ENG_2,
- MODE_PORTS_PER_ENG_4,
- MODE_100G,
- MODE_EAGLE_ENG1_WORKAROUND,
- MAX_INIT_MODES
-};
-
-enum init_phases {
- PHASE_ENGINE,
- PHASE_PORT,
- PHASE_PF,
- PHASE_VF,
- PHASE_QM_PF,
- MAX_INIT_PHASES
-};
-
struct mstorm_core_conn_ag_ctx {
u8 byte0 /* cdu_validation */;
u8 byte1 /* state */;
diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h
new file mode 100644
index 0000000..e82b0d4
--- /dev/null
+++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h
@@ -0,0 +1,1025 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_HSI_DEBUG_TOOLS__
+#define __ECORE_HSI_DEBUG_TOOLS__
+/****************************************/
+/* Debug Tools HSI constants and macros */
+/****************************************/
+
+
+enum block_addr {
+ GRCBASE_GRC = 0x50000,
+ GRCBASE_MISCS = 0x9000,
+ GRCBASE_MISC = 0x8000,
+ GRCBASE_DBU = 0xa000,
+ GRCBASE_PGLUE_B = 0x2a8000,
+ GRCBASE_CNIG = 0x218000,
+ GRCBASE_CPMU = 0x30000,
+ GRCBASE_NCSI = 0x40000,
+ GRCBASE_OPTE = 0x53000,
+ GRCBASE_BMB = 0x540000,
+ GRCBASE_PCIE = 0x54000,
+ GRCBASE_MCP = 0xe00000,
+ GRCBASE_MCP2 = 0x52000,
+ GRCBASE_PSWHST = 0x2a0000,
+ GRCBASE_PSWHST2 = 0x29e000,
+ GRCBASE_PSWRD = 0x29c000,
+ GRCBASE_PSWRD2 = 0x29d000,
+ GRCBASE_PSWWR = 0x29a000,
+ GRCBASE_PSWWR2 = 0x29b000,
+ GRCBASE_PSWRQ = 0x280000,
+ GRCBASE_PSWRQ2 = 0x240000,
+ GRCBASE_PGLCS = 0x0,
+ GRCBASE_DMAE = 0xc000,
+ GRCBASE_PTU = 0x560000,
+ GRCBASE_TCM = 0x1180000,
+ GRCBASE_MCM = 0x1200000,
+ GRCBASE_UCM = 0x1280000,
+ GRCBASE_XCM = 0x1000000,
+ GRCBASE_YCM = 0x1080000,
+ GRCBASE_PCM = 0x1100000,
+ GRCBASE_QM = 0x2f0000,
+ GRCBASE_TM = 0x2c0000,
+ GRCBASE_DORQ = 0x100000,
+ GRCBASE_BRB = 0x340000,
+ GRCBASE_SRC = 0x238000,
+ GRCBASE_PRS = 0x1f0000,
+ GRCBASE_TSDM = 0xfb0000,
+ GRCBASE_MSDM = 0xfc0000,
+ GRCBASE_USDM = 0xfd0000,
+ GRCBASE_XSDM = 0xf80000,
+ GRCBASE_YSDM = 0xf90000,
+ GRCBASE_PSDM = 0xfa0000,
+ GRCBASE_TSEM = 0x1700000,
+ GRCBASE_MSEM = 0x1800000,
+ GRCBASE_USEM = 0x1900000,
+ GRCBASE_XSEM = 0x1400000,
+ GRCBASE_YSEM = 0x1500000,
+ GRCBASE_PSEM = 0x1600000,
+ GRCBASE_RSS = 0x238800,
+ GRCBASE_TMLD = 0x4d0000,
+ GRCBASE_MULD = 0x4e0000,
+ GRCBASE_YULD = 0x4c8000,
+ GRCBASE_XYLD = 0x4c0000,
+ GRCBASE_PRM = 0x230000,
+ GRCBASE_PBF_PB1 = 0xda0000,
+ GRCBASE_PBF_PB2 = 0xda4000,
+ GRCBASE_RPB = 0x23c000,
+ GRCBASE_BTB = 0xdb0000,
+ GRCBASE_PBF = 0xd80000,
+ GRCBASE_RDIF = 0x300000,
+ GRCBASE_TDIF = 0x310000,
+ GRCBASE_CDU = 0x580000,
+ GRCBASE_CCFC = 0x2e0000,
+ GRCBASE_TCFC = 0x2d0000,
+ GRCBASE_IGU = 0x180000,
+ GRCBASE_CAU = 0x1c0000,
+ GRCBASE_UMAC = 0x51000,
+ GRCBASE_XMAC = 0x210000,
+ GRCBASE_DBG = 0x10000,
+ GRCBASE_NIG = 0x500000,
+ GRCBASE_WOL = 0x600000,
+ GRCBASE_BMBN = 0x610000,
+ GRCBASE_IPC = 0x20000,
+ GRCBASE_NWM = 0x800000,
+ GRCBASE_NWS = 0x700000,
+ GRCBASE_MS = 0x6a0000,
+ GRCBASE_PHY_PCIE = 0x620000,
+ GRCBASE_LED = 0x6b8000,
+ GRCBASE_MISC_AEU = 0x8000,
+ GRCBASE_BAR0_MAP = 0x1c00000,
+ MAX_BLOCK_ADDR
+};
+
+
+enum block_id {
+ BLOCK_GRC,
+ BLOCK_MISCS,
+ BLOCK_MISC,
+ BLOCK_DBU,
+ BLOCK_PGLUE_B,
+ BLOCK_CNIG,
+ BLOCK_CPMU,
+ BLOCK_NCSI,
+ BLOCK_OPTE,
+ BLOCK_BMB,
+ BLOCK_PCIE,
+ BLOCK_MCP,
+ BLOCK_MCP2,
+ BLOCK_PSWHST,
+ BLOCK_PSWHST2,
+ BLOCK_PSWRD,
+ BLOCK_PSWRD2,
+ BLOCK_PSWWR,
+ BLOCK_PSWWR2,
+ BLOCK_PSWRQ,
+ BLOCK_PSWRQ2,
+ BLOCK_PGLCS,
+ BLOCK_DMAE,
+ BLOCK_PTU,
+ BLOCK_TCM,
+ BLOCK_MCM,
+ BLOCK_UCM,
+ BLOCK_XCM,
+ BLOCK_YCM,
+ BLOCK_PCM,
+ BLOCK_QM,
+ BLOCK_TM,
+ BLOCK_DORQ,
+ BLOCK_BRB,
+ BLOCK_SRC,
+ BLOCK_PRS,
+ BLOCK_TSDM,
+ BLOCK_MSDM,
+ BLOCK_USDM,
+ BLOCK_XSDM,
+ BLOCK_YSDM,
+ BLOCK_PSDM,
+ BLOCK_TSEM,
+ BLOCK_MSEM,
+ BLOCK_USEM,
+ BLOCK_XSEM,
+ BLOCK_YSEM,
+ BLOCK_PSEM,
+ BLOCK_RSS,
+ BLOCK_TMLD,
+ BLOCK_MULD,
+ BLOCK_YULD,
+ BLOCK_XYLD,
+ BLOCK_PRM,
+ BLOCK_PBF_PB1,
+ BLOCK_PBF_PB2,
+ BLOCK_RPB,
+ BLOCK_BTB,
+ BLOCK_PBF,
+ BLOCK_RDIF,
+ BLOCK_TDIF,
+ BLOCK_CDU,
+ BLOCK_CCFC,
+ BLOCK_TCFC,
+ BLOCK_IGU,
+ BLOCK_CAU,
+ BLOCK_UMAC,
+ BLOCK_XMAC,
+ BLOCK_DBG,
+ BLOCK_NIG,
+ BLOCK_WOL,
+ BLOCK_BMBN,
+ BLOCK_IPC,
+ BLOCK_NWM,
+ BLOCK_NWS,
+ BLOCK_MS,
+ BLOCK_PHY_PCIE,
+ BLOCK_LED,
+ BLOCK_MISC_AEU,
+ BLOCK_BAR0_MAP,
+ MAX_BLOCK_ID
+};
+
+
+/*
+ * binary debug buffer types
+ */
+enum bin_dbg_buffer_type {
+ BIN_BUF_DBG_MODE_TREE /* init modes tree */,
+ BIN_BUF_DBG_DUMP_REG /* GRC Dump registers */,
+ BIN_BUF_DBG_DUMP_MEM /* GRC Dump memories */,
+ BIN_BUF_DBG_IDLE_CHK_REGS /* Idle Check registers */,
+ BIN_BUF_DBG_IDLE_CHK_IMMS /* Idle Check immediates */,
+ BIN_BUF_DBG_IDLE_CHK_RULES /* Idle Check rules */,
+ BIN_BUF_DBG_IDLE_CHK_PARSING_DATA /* Idle Check parsing data */,
+ BIN_BUF_DBG_ATTN_BLOCKS /* Attention blocks */,
+ BIN_BUF_DBG_ATTN_REGS /* Attention registers */,
+ BIN_BUF_DBG_ATTN_INDEXES /* Attention indexes */,
+ BIN_BUF_DBG_ATTN_NAME_OFFSETS /* Attention name offsets */,
+ BIN_BUF_DBG_PARSING_STRINGS /* Debug Tools parsing strings */,
+ MAX_BIN_DBG_BUFFER_TYPE
+};
+
+
+/*
+ * Attention bit mapping
+ */
+struct dbg_attn_bit_mapping {
+ __le16 data;
+/* The index of an attention in the blocks attentions list
+ * (if is_unused_idx_cnt=0), or a number of consecutive unused attention bits
+ * (if is_unused_idx_cnt=1)
+ */
+#define DBG_ATTN_BIT_MAPPING_VAL_MASK 0x7FFF
+#define DBG_ATTN_BIT_MAPPING_VAL_SHIFT 0
+/* if set, the val field indicates the number of consecutive unused attention
+ * bits
+ */
+#define DBG_ATTN_BIT_MAPPING_IS_UNUSED_BIT_CNT_MASK 0x1
+#define DBG_ATTN_BIT_MAPPING_IS_UNUSED_BIT_CNT_SHIFT 15
+};
+
+
+/*
+ * Attention block per-type data
+ */
+struct dbg_attn_block_type_data {
+/* Offset of this block attention names in the debug attention name offsets
+ * array
+ */
+ __le16 names_offset;
+ __le16 reserved1;
+ u8 num_regs /* Number of attention registers in this block */;
+ u8 reserved2;
+/* Offset of this blocks attention registers in the attention registers array
+ * (in dbg_attn_reg units)
+ */
+ __le16 regs_offset;
+};
+
+/*
+ * Block attentions
+ */
+struct dbg_attn_block {
+/* attention block per-type data. Count must match the number of elements in
+ * dbg_attn_type.
+ */
+ struct dbg_attn_block_type_data per_type_data[2];
+};
+
+
+/*
+ * Attention register result
+ */
+struct dbg_attn_reg_result {
+ __le32 data;
+/* STS attention register GRC address (in dwords) */
+#define DBG_ATTN_REG_RESULT_STS_ADDRESS_MASK 0xFFFFFF
+#define DBG_ATTN_REG_RESULT_STS_ADDRESS_SHIFT 0
+/* Number of attention indexes in this register */
+#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_MASK 0xFF
+#define DBG_ATTN_REG_RESULT_NUM_ATTN_IDX_SHIFT 24
+/* Offset of this registers block attention indexes (values in the range
+ * 0..number of block attentions)
+ */
+ __le16 attn_idx_offset;
+ __le16 reserved;
+ __le32 sts_val /* Value read from the STS attention register */;
+ __le32 mask_val /* Value read from the MASK attention register */;
+};
+
+/*
+ * Attention block result
+ */
+struct dbg_attn_block_result {
+ u8 block_id /* Registers block ID */;
+ u8 data;
+/* Value from dbg_attn_type enum */
+#define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_MASK 0x3
+#define DBG_ATTN_BLOCK_RESULT_ATTN_TYPE_SHIFT 0
+/* Number of registers in the blok in which at least one attention bit is set */
+#define DBG_ATTN_BLOCK_RESULT_NUM_REGS_MASK 0x3F
+#define DBG_ATTN_BLOCK_RESULT_NUM_REGS_SHIFT 2
+/* Offset of this registers block attention names in the attention name offsets
+ * array
+ */
+ __le16 names_offset;
+/* result data for each register in the block in which at least one attention
+ * bit is set
+ */
+ struct dbg_attn_reg_result reg_results[15];
+};
+
+
+
+/*
+ * mode header
+ */
+struct dbg_mode_hdr {
+ __le16 data;
+/* indicates if a mode expression should be evaluated (0/1) */
+#define DBG_MODE_HDR_EVAL_MODE_MASK 0x1
+#define DBG_MODE_HDR_EVAL_MODE_SHIFT 0
+/* offset (in bytes) in modes expression buffer. valid only if eval_mode is
+ * set.
+ */
+#define DBG_MODE_HDR_MODES_BUF_OFFSET_MASK 0x7FFF
+#define DBG_MODE_HDR_MODES_BUF_OFFSET_SHIFT 1
+};
+
+/*
+ * Attention register
+ */
+struct dbg_attn_reg {
+ struct dbg_mode_hdr mode /* Mode header */;
+/* Offset of this registers block attention indexes (values in the range
+ * 0..number of block attentions)
+ */
+ __le16 attn_idx_offset;
+ __le32 data;
+/* STS attention register GRC address (in dwords) */
+#define DBG_ATTN_REG_STS_ADDRESS_MASK 0xFFFFFF
+#define DBG_ATTN_REG_STS_ADDRESS_SHIFT 0
+/* Number of attention indexes in this register */
+#define DBG_ATTN_REG_NUM_ATTN_IDX_MASK 0xFF
+#define DBG_ATTN_REG_NUM_ATTN_IDX_SHIFT 24
+/* STS_CLR attention register GRC address (in dwords) */
+ __le32 sts_clr_address;
+/* MASK attention register GRC address (in dwords) */
+ __le32 mask_address;
+};
+
+
+
+/*
+ * attention types
+ */
+enum dbg_attn_type {
+ ATTN_TYPE_INTERRUPT,
+ ATTN_TYPE_PARITY,
+ MAX_DBG_ATTN_TYPE
+};
+
+
+/*
+ * condition header for registers dump
+ */
+struct dbg_dump_cond_hdr {
+ struct dbg_mode_hdr mode /* Mode header */;
+ u8 block_id /* block ID */;
+ u8 data_size /* size in dwords of the data following this header */;
+};
+
+
+/*
+ * memory data for registers dump
+ */
+struct dbg_dump_mem {
+ __le32 dword0;
+/* register address (in dwords) */
+#define DBG_DUMP_MEM_ADDRESS_MASK 0xFFFFFF
+#define DBG_DUMP_MEM_ADDRESS_SHIFT 0
+#define DBG_DUMP_MEM_MEM_GROUP_ID_MASK 0xFF /* memory group ID */
+#define DBG_DUMP_MEM_MEM_GROUP_ID_SHIFT 24
+ __le32 dword1;
+/* register size (in dwords) */
+#define DBG_DUMP_MEM_LENGTH_MASK 0xFFFFFF
+#define DBG_DUMP_MEM_LENGTH_SHIFT 0
+#define DBG_DUMP_MEM_RESERVED_MASK 0xFF
+#define DBG_DUMP_MEM_RESERVED_SHIFT 24
+};
+
+
+/*
+ * register data for registers dump
+ */
+struct dbg_dump_reg {
+ __le32 data;
+/* register address (in dwords) */
+#define DBG_DUMP_REG_ADDRESS_MASK 0xFFFFFF
+#define DBG_DUMP_REG_ADDRESS_SHIFT 0
+#define DBG_DUMP_REG_LENGTH_MASK 0xFF /* register size (in dwords) */
+#define DBG_DUMP_REG_LENGTH_SHIFT 24
+};
+
+
+/*
+ * split header for registers dump
+ */
+struct dbg_dump_split_hdr {
+ __le32 hdr;
+/* size in dwords of the data following this header */
+#define DBG_DUMP_SPLIT_HDR_DATA_SIZE_MASK 0xFFFFFF
+#define DBG_DUMP_SPLIT_HDR_DATA_SIZE_SHIFT 0
+#define DBG_DUMP_SPLIT_HDR_SPLIT_TYPE_ID_MASK 0xFF /* split type ID */
+#define DBG_DUMP_SPLIT_HDR_SPLIT_TYPE_ID_SHIFT 24
+};
+
+
+/*
+ * condition header for idle check
+ */
+struct dbg_idle_chk_cond_hdr {
+ struct dbg_mode_hdr mode /* Mode header */;
+/* size in dwords of the data following this header */
+ __le16 data_size;
+};
+
+
+/*
+ * Idle Check condition register
+ */
+struct dbg_idle_chk_cond_reg {
+ __le32 data;
+/* Register GRC address (in dwords) */
+#define DBG_IDLE_CHK_COND_REG_ADDRESS_MASK 0xFFFFFF
+#define DBG_IDLE_CHK_COND_REG_ADDRESS_SHIFT 0
+/* value from block_id enum */
+#define DBG_IDLE_CHK_COND_REG_BLOCK_ID_MASK 0xFF
+#define DBG_IDLE_CHK_COND_REG_BLOCK_ID_SHIFT 24
+ __le16 num_entries /* number of registers entries to check */;
+ u8 entry_size /* size of registers entry (in dwords) */;
+ u8 start_entry /* index of the first entry to check */;
+};
+
+
+/*
+ * Idle Check info register
+ */
+struct dbg_idle_chk_info_reg {
+ __le32 data;
+/* Register GRC address (in dwords) */
+#define DBG_IDLE_CHK_INFO_REG_ADDRESS_MASK 0xFFFFFF
+#define DBG_IDLE_CHK_INFO_REG_ADDRESS_SHIFT 0
+/* value from block_id enum */
+#define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_MASK 0xFF
+#define DBG_IDLE_CHK_INFO_REG_BLOCK_ID_SHIFT 24
+ __le16 size /* register size in dwords */;
+ struct dbg_mode_hdr mode /* Mode header */;
+};
+
+
+/*
+ * Idle Check register
+ */
+union dbg_idle_chk_reg {
+ struct dbg_idle_chk_cond_reg cond_reg /* condition register */;
+ struct dbg_idle_chk_info_reg info_reg /* info register */;
+};
+
+
+/*
+ * Idle Check result header
+ */
+struct dbg_idle_chk_result_hdr {
+ __le16 rule_id /* Failing rule index */;
+ __le16 mem_entry_id /* Failing memory entry index */;
+ u8 num_dumped_cond_regs /* number of dumped condition registers */;
+ u8 num_dumped_info_regs /* number of dumped condition registers */;
+ u8 severity /* from dbg_idle_chk_severity_types enum */;
+ u8 reserved;
+};
+
+
+/*
+ * Idle Check result register header
+ */
+struct dbg_idle_chk_result_reg_hdr {
+ u8 data;
+/* indicates if this register is a memory */
+#define DBG_IDLE_CHK_RESULT_REG_HDR_IS_MEM_MASK 0x1
+#define DBG_IDLE_CHK_RESULT_REG_HDR_IS_MEM_SHIFT 0
+/* register index within the failing rule */
+#define DBG_IDLE_CHK_RESULT_REG_HDR_REG_ID_MASK 0x7F
+#define DBG_IDLE_CHK_RESULT_REG_HDR_REG_ID_SHIFT 1
+ u8 start_entry /* index of the first checked entry */;
+ __le16 size /* register size in dwords */;
+};
+
+
+/*
+ * Idle Check rule
+ */
+struct dbg_idle_chk_rule {
+ __le16 rule_id /* Idle Check rule ID */;
+ u8 severity /* value from dbg_idle_chk_severity_types enum */;
+ u8 cond_id /* Condition ID */;
+ u8 num_cond_regs /* number of condition registers */;
+ u8 num_info_regs /* number of info registers */;
+ u8 num_imms /* number of immediates in the condition */;
+ u8 reserved1;
+/* offset of this rules registers in the idle check register array
+ * (in dbg_idle_chk_reg units)
+ */
+ __le16 reg_offset;
+/* offset of this rules immediate values in the immediate values array
+ * (in dwords)
+ */
+ __le16 imm_offset;
+};
+
+
+/*
+ * Idle Check rule parsing data
+ */
+struct dbg_idle_chk_rule_parsing_data {
+ __le32 data;
+/* indicates if this register has a FW message */
+#define DBG_IDLE_CHK_RULE_PARSING_DATA_HAS_FW_MSG_MASK 0x1
+#define DBG_IDLE_CHK_RULE_PARSING_DATA_HAS_FW_MSG_SHIFT 0
+/* Offset of this rules strings in the debug strings array (in bytes) */
+#define DBG_IDLE_CHK_RULE_PARSING_DATA_STR_OFFSET_MASK 0x7FFFFFFF
+#define DBG_IDLE_CHK_RULE_PARSING_DATA_STR_OFFSET_SHIFT 1
+};
+
+
+/*
+ * idle check severity types
+ */
+enum dbg_idle_chk_severity_types {
+/* idle check failure should cause an error */
+ IDLE_CHK_SEVERITY_ERROR,
+/* idle check failure should cause an error only if theres no traffic */
+ IDLE_CHK_SEVERITY_ERROR_NO_TRAFFIC,
+/* idle check failure should cause a warning */
+ IDLE_CHK_SEVERITY_WARNING,
+ MAX_DBG_IDLE_CHK_SEVERITY_TYPES
+};
+
+
+
+/*
+ * Debug Bus block data
+ */
+struct dbg_bus_block_data {
+/* Indicates if the block is enabled for recording (0/1) */
+ u8 enabled;
+ u8 hw_id /* HW ID associated with the block */;
+ u8 line_num /* Debug line number to select */;
+ u8 right_shift /* Number of units to right the debug data (0-3) */;
+ u8 cycle_en /* 4-bit value: bit i set -> unit i is enabled. */;
+/* 4-bit value: bit i set -> unit i is forced valid. */
+ u8 force_valid;
+/* 4-bit value: bit i set -> unit i frame bit is forced. */
+ u8 force_frame;
+ u8 reserved;
+};
+
+
+/*
+ * Debug Bus Clients
+ */
+enum dbg_bus_clients {
+ DBG_BUS_CLIENT_RBCN,
+ DBG_BUS_CLIENT_RBCP,
+ DBG_BUS_CLIENT_RBCR,
+ DBG_BUS_CLIENT_RBCT,
+ DBG_BUS_CLIENT_RBCU,
+ DBG_BUS_CLIENT_RBCF,
+ DBG_BUS_CLIENT_RBCX,
+ DBG_BUS_CLIENT_RBCS,
+ DBG_BUS_CLIENT_RBCH,
+ DBG_BUS_CLIENT_RBCZ,
+ DBG_BUS_CLIENT_OTHER_ENGINE,
+ DBG_BUS_CLIENT_TIMESTAMP,
+ DBG_BUS_CLIENT_CPU,
+ DBG_BUS_CLIENT_RBCY,
+ DBG_BUS_CLIENT_RBCQ,
+ DBG_BUS_CLIENT_RBCM,
+ DBG_BUS_CLIENT_RBCB,
+ DBG_BUS_CLIENT_RBCW,
+ DBG_BUS_CLIENT_RBCV,
+ MAX_DBG_BUS_CLIENTS
+};
+
+
+/*
+ * Debug Bus constraint operation types
+ */
+enum dbg_bus_constraint_ops {
+ DBG_BUS_CONSTRAINT_OP_EQ /* equal */,
+ DBG_BUS_CONSTRAINT_OP_NE /* not equal */,
+ DBG_BUS_CONSTRAINT_OP_LT /* less than */,
+ DBG_BUS_CONSTRAINT_OP_LTC /* less than (cyclic) */,
+ DBG_BUS_CONSTRAINT_OP_LE /* less than or equal */,
+ DBG_BUS_CONSTRAINT_OP_LEC /* less than or equal (cyclic) */,
+ DBG_BUS_CONSTRAINT_OP_GT /* greater than */,
+ DBG_BUS_CONSTRAINT_OP_GTC /* greater than (cyclic) */,
+ DBG_BUS_CONSTRAINT_OP_GE /* greater than or equal */,
+ DBG_BUS_CONSTRAINT_OP_GEC /* greater than or equal (cyclic) */,
+ MAX_DBG_BUS_CONSTRAINT_OPS
+};
+
+
+/*
+ * Debug Bus memory address
+ */
+struct dbg_bus_mem_addr {
+ __le32 lo;
+ __le32 hi;
+};
+
+/*
+ * Debug Bus PCI buffer data
+ */
+struct dbg_bus_pci_buf_data {
+ struct dbg_bus_mem_addr phys_addr /* PCI buffer physical address */;
+ struct dbg_bus_mem_addr virt_addr /* PCI buffer virtual address */;
+ __le32 size /* PCI buffer size in bytes */;
+};
+
+/*
+ * Debug Bus Storm EID range filter params
+ */
+struct dbg_bus_storm_eid_range_params {
+ u8 min /* Minimal event ID to filter on */;
+ u8 max /* Maximal event ID to filter on */;
+};
+
+/*
+ * Debug Bus Storm EID mask filter params
+ */
+struct dbg_bus_storm_eid_mask_params {
+ u8 val /* Event ID value */;
+ u8 mask /* Event ID mask. 1s in the mask = dont care bits. */;
+};
+
+/*
+ * Debug Bus Storm EID filter params
+ */
+union dbg_bus_storm_eid_params {
+/* EID range filter params */
+ struct dbg_bus_storm_eid_range_params range;
+/* EID mask filter params */
+ struct dbg_bus_storm_eid_mask_params mask;
+};
+
+/*
+ * Debug Bus Storm data
+ */
+struct dbg_bus_storm_data {
+/* Indicates if the Storm is enabled for fast debug recording (0/1) */
+ u8 fast_enabled;
+/* Fast debug Storm mode, valid only if fast_enabled is set */
+ u8 fast_mode;
+/* Indicates if the Storm is enabled for slow debug recording (0/1) */
+ u8 slow_enabled;
+/* Slow debug Storm mode, valid only if slow_enabled is set */
+ u8 slow_mode;
+ u8 hw_id /* HW ID associated with the Storm */;
+ u8 eid_filter_en /* Indicates if EID filtering is performed (0/1) */;
+/* 1 = EID range filter, 0 = EID mask filter. Valid only if eid_filter_en is
+ * set,
+ */
+ u8 eid_range_not_mask;
+ u8 cid_filter_en /* Indicates if CID filtering is performed (0/1) */;
+/* EID filter params to filter on. Valid only if eid_filter_en is set. */
+ union dbg_bus_storm_eid_params eid_filter_params;
+ __le16 reserved;
+/* CID to filter on. Valid only if cid_filter_en is set. */
+ __le32 cid;
+};
+
+/*
+ * Debug Bus data
+ */
+struct dbg_bus_data {
+ __le32 app_version /* The tools version number of the application */;
+ u8 state /* The current debug bus state */;
+ u8 hw_dwords /* HW dwords per cycle */;
+ u8 next_hw_id /* Next HW ID to be associated with an input */;
+ u8 num_enabled_blocks /* Number of blocks enabled for recording */;
+ u8 num_enabled_storms /* Number of Storms enabled for recording */;
+ u8 target /* Output target */;
+ u8 next_trigger_state /* ID of next trigger state to be added */;
+/* ID of next filter/trigger constraint to be added */
+ u8 next_constraint_id;
+ u8 one_shot_en /* Indicates if one-shot mode is enabled (0/1) */;
+ u8 grc_input_en /* Indicates if GRC recording is enabled (0/1) */;
+/* Indicates if timestamp recording is enabled (0/1) */
+ u8 timestamp_input_en;
+ u8 filter_en /* Indicates if the recording filter is enabled (0/1) */;
+/* Indicates if the recording trigger is enabled (0/1) */
+ u8 trigger_en;
+/* If true, the next added constraint belong to the filter. Otherwise,
+ * it belongs to the last added trigger state. Valid only if either filter or
+ * triggers are enabled.
+ */
+ u8 adding_filter;
+/* Indicates if the recording filter should be applied before the trigger.
+ * Valid only if both filter and trigger are enabled (0/1)
+ */
+ u8 filter_pre_trigger;
+/* Indicates if the recording filter should be applied after the trigger.
+ * Valid only if both filter and trigger are enabled (0/1)
+ */
+ u8 filter_post_trigger;
+/* If true, all inputs are associated with HW ID 0. Otherwise, each input is
+ * assigned a different HW ID (0/1)
+ */
+ u8 unify_inputs;
+/* Indicates if the other engine sends it NW recording to this engine (0/1) */
+ u8 rcv_from_other_engine;
+/* Debug Bus PCI buffer data. Valid only when the target is
+ * DBG_BUS_TARGET_ID_PCI.
+ */
+ struct dbg_bus_pci_buf_data pci_buf;
+ __le16 reserved;
+/* Debug Bus data for each block */
+ struct dbg_bus_block_data blocks[80];
+/* Debug Bus data for each block */
+ struct dbg_bus_storm_data storms[6];
+};
+
+
+/*
+ * Debug bus filter types
+ */
+enum dbg_bus_filter_types {
+ DBG_BUS_FILTER_TYPE_OFF /* filter always off */,
+ DBG_BUS_FILTER_TYPE_PRE /* filter before trigger only */,
+ DBG_BUS_FILTER_TYPE_POST /* filter after trigger only */,
+ DBG_BUS_FILTER_TYPE_ON /* filter always on */,
+ MAX_DBG_BUS_FILTER_TYPES
+};
+
+
+/*
+ * Debug bus frame modes
+ */
+enum dbg_bus_frame_modes {
+ DBG_BUS_FRAME_MODE_0HW_4ST = 0 /* 0 HW dwords, 4 Storm dwords */,
+ DBG_BUS_FRAME_MODE_4HW_0ST = 3 /* 4 HW dwords, 0 Storm dwords */,
+ DBG_BUS_FRAME_MODE_8HW_0ST = 4 /* 8 HW dwords, 0 Storm dwords */,
+ MAX_DBG_BUS_FRAME_MODES
+};
+
+
+/*
+ * Debug bus input types
+ */
+enum dbg_bus_input_types {
+ DBG_BUS_INPUT_TYPE_STORM,
+ DBG_BUS_INPUT_TYPE_BLOCK,
+ MAX_DBG_BUS_INPUT_TYPES
+};
+
+
+
+/*
+ * Debug bus other engine mode
+ */
+enum dbg_bus_other_engine_modes {
+ DBG_BUS_OTHER_ENGINE_MODE_NONE,
+ DBG_BUS_OTHER_ENGINE_MODE_DOUBLE_BW_TX,
+ DBG_BUS_OTHER_ENGINE_MODE_DOUBLE_BW_RX,
+ DBG_BUS_OTHER_ENGINE_MODE_CROSS_ENGINE_TX,
+ DBG_BUS_OTHER_ENGINE_MODE_CROSS_ENGINE_RX,
+ MAX_DBG_BUS_OTHER_ENGINE_MODES
+};
+
+
+
+/*
+ * Debug bus post-trigger recording types
+ */
+enum dbg_bus_post_trigger_types {
+ DBG_BUS_POST_TRIGGER_RECORD /* start recording after trigger */,
+ DBG_BUS_POST_TRIGGER_DROP /* drop data after trigger */,
+ MAX_DBG_BUS_POST_TRIGGER_TYPES
+};
+
+
+/*
+ * Debug bus pre-trigger recording types
+ */
+enum dbg_bus_pre_trigger_types {
+ DBG_BUS_PRE_TRIGGER_START_FROM_ZERO /* start recording from time 0 */,
+/* start recording some chunks before trigger */
+ DBG_BUS_PRE_TRIGGER_NUM_CHUNKS,
+ DBG_BUS_PRE_TRIGGER_DROP /* drop data before trigger */,
+ MAX_DBG_BUS_PRE_TRIGGER_TYPES
+};
+
+
+/*
+ * Debug bus SEMI frame modes
+ */
+enum dbg_bus_semi_frame_modes {
+/* 0 slow dwords, 4 fast dwords */
+ DBG_BUS_SEMI_FRAME_MODE_0SLOW_4FAST = 0,
+/* 4 slow dwords, 0 fast dwords */
+ DBG_BUS_SEMI_FRAME_MODE_4SLOW_0FAST = 3,
+ MAX_DBG_BUS_SEMI_FRAME_MODES
+};
+
+
+/*
+ * Debug bus states
+ */
+enum dbg_bus_states {
+ DBG_BUS_STATE_IDLE /* debug bus idle state (not recording) */,
+/* debug bus is ready for configuration and recording */
+ DBG_BUS_STATE_READY,
+ DBG_BUS_STATE_RECORDING /* debug bus is currently recording */,
+ DBG_BUS_STATE_STOPPED /* debug bus recording has stopped */,
+ MAX_DBG_BUS_STATES
+};
+
+
+
+
+
+
+/*
+ * Debug Bus Storm modes
+ */
+enum dbg_bus_storm_modes {
+ DBG_BUS_STORM_MODE_PRINTF /* store data (fast debug) */,
+ DBG_BUS_STORM_MODE_PRAM_ADDR /* pram address (fast debug) */,
+ DBG_BUS_STORM_MODE_DRA_RW /* DRA read/write data (fast debug) */,
+ DBG_BUS_STORM_MODE_DRA_W /* DRA write data (fast debug) */,
+ DBG_BUS_STORM_MODE_LD_ST_ADDR /* load/store address (fast debug) */,
+ DBG_BUS_STORM_MODE_DRA_FSM /* DRA state machines (fast debug) */,
+ DBG_BUS_STORM_MODE_RH /* recording handlers (fast debug) */,
+ DBG_BUS_STORM_MODE_FOC /* FOC: FIN + DRA Rd (slow debug) */,
+ DBG_BUS_STORM_MODE_EXT_STORE /* FOC: External Store (slow) */,
+ MAX_DBG_BUS_STORM_MODES
+};
+
+
+/*
+ * Debug bus target IDs
+ */
+enum dbg_bus_targets {
+/* records debug bus to DBG block internal buffer */
+ DBG_BUS_TARGET_ID_INT_BUF,
+ DBG_BUS_TARGET_ID_NIG /* records debug bus to the NW */,
+ DBG_BUS_TARGET_ID_PCI /* records debug bus to a PCI buffer */,
+ MAX_DBG_BUS_TARGETS
+};
+
+
+/*
+ * GRC Dump data
+ */
+struct dbg_grc_data {
+/* Value of each GRC parameter. Array size must match enum dbg_grc_params. */
+ __le32 param_val[40];
+/* Indicates for each GRC parameter if it was set by the user (0/1).
+ * Array size must match the enum dbg_grc_params.
+ */
+ u8 param_set_by_user[40];
+};
+
+
+/*
+ * Debug GRC params
+ */
+enum dbg_grc_params {
+ DBG_GRC_PARAM_DUMP_TSTORM /* dump Tstorm memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_MSTORM /* dump Mstorm memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_USTORM /* dump Ustorm memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_XSTORM /* dump Xstorm memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_YSTORM /* dump Ystorm memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_PSTORM /* dump Pstorm memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_REGS /* dump non-memory registers (0/1) */,
+ DBG_GRC_PARAM_DUMP_RAM /* dump Storm internal RAMs (0/1) */,
+ DBG_GRC_PARAM_DUMP_PBUF /* dump Storm passive buffer (0/1) */,
+ DBG_GRC_PARAM_DUMP_IOR /* dump Storm IORs (0/1) */,
+ DBG_GRC_PARAM_DUMP_VFC /* dump VFC memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_CM_CTX /* dump CM contexts (0/1) */,
+ DBG_GRC_PARAM_DUMP_PXP /* dump PXP memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_RSS /* dump RSS memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_CAU /* dump CAU memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_QM /* dump QM memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_MCP /* dump MCP memories (0/1) */,
+ DBG_GRC_PARAM_RESERVED /* reserved */,
+ DBG_GRC_PARAM_DUMP_CFC /* dump CFC memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_IGU /* dump IGU memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_BRB /* dump BRB memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_BTB /* dump BTB memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_BMB /* dump BMB memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_NIG /* dump NIG memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_MULD /* dump MULD memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_PRS /* dump PRS memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_DMAE /* dump PRS memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_TM /* dump TM (timers) memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_SDM /* dump SDM memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_DIF /* dump DIF memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_STATIC /* dump static debug data (0/1) */,
+ DBG_GRC_PARAM_UNSTALL /* un-stall Storms after dump (0/1) */,
+ DBG_GRC_PARAM_NUM_LCIDS /* number of LCIDs (0..320) */,
+ DBG_GRC_PARAM_NUM_LTIDS /* number of LTIDs (0..320) */,
+/* preset: exclude all memories from dump (1 only) */
+ DBG_GRC_PARAM_EXCLUDE_ALL,
+/* preset: include memories for crash dump (1 only) */
+ DBG_GRC_PARAM_CRASH,
+/* perform dump only if MFW is responding (0/1) */
+ DBG_GRC_PARAM_PARITY_SAFE,
+ DBG_GRC_PARAM_DUMP_CM /* dump CM memories (0/1) */,
+ DBG_GRC_PARAM_DUMP_PHY /* dump PHY memories (0/1) */,
+ MAX_DBG_GRC_PARAMS
+};
+
+
+/*
+ * Debug reset registers
+ */
+enum dbg_reset_regs {
+ DBG_RESET_REG_MISCS_PL_UA,
+ DBG_RESET_REG_MISCS_PL_HV,
+ DBG_RESET_REG_MISCS_PL_HV_2,
+ DBG_RESET_REG_MISC_PL_UA,
+ DBG_RESET_REG_MISC_PL_HV,
+ DBG_RESET_REG_MISC_PL_PDA_VMAIN_1,
+ DBG_RESET_REG_MISC_PL_PDA_VMAIN_2,
+ DBG_RESET_REG_MISC_PL_PDA_VAUX,
+ MAX_DBG_RESET_REGS
+};
+
+
+/*
+ * Debug status codes
+ */
+enum dbg_status {
+ DBG_STATUS_OK,
+ DBG_STATUS_APP_VERSION_NOT_SET,
+ DBG_STATUS_UNSUPPORTED_APP_VERSION,
+ DBG_STATUS_DBG_BLOCK_NOT_RESET,
+ DBG_STATUS_INVALID_ARGS,
+ DBG_STATUS_OUTPUT_ALREADY_SET,
+ DBG_STATUS_INVALID_PCI_BUF_SIZE,
+ DBG_STATUS_PCI_BUF_ALLOC_FAILED,
+ DBG_STATUS_PCI_BUF_NOT_ALLOCATED,
+ DBG_STATUS_TOO_MANY_INPUTS,
+ DBG_STATUS_INPUT_OVERLAP,
+ DBG_STATUS_HW_ONLY_RECORDING,
+ DBG_STATUS_STORM_ALREADY_ENABLED,
+ DBG_STATUS_STORM_NOT_ENABLED,
+ DBG_STATUS_BLOCK_ALREADY_ENABLED,
+ DBG_STATUS_BLOCK_NOT_ENABLED,
+ DBG_STATUS_NO_INPUT_ENABLED,
+ DBG_STATUS_NO_FILTER_TRIGGER_64B,
+ DBG_STATUS_FILTER_ALREADY_ENABLED,
+ DBG_STATUS_TRIGGER_ALREADY_ENABLED,
+ DBG_STATUS_TRIGGER_NOT_ENABLED,
+ DBG_STATUS_CANT_ADD_CONSTRAINT,
+ DBG_STATUS_TOO_MANY_TRIGGER_STATES,
+ DBG_STATUS_TOO_MANY_CONSTRAINTS,
+ DBG_STATUS_RECORDING_NOT_STARTED,
+ DBG_STATUS_DATA_DIDNT_TRIGGER,
+ DBG_STATUS_NO_DATA_RECORDED,
+ DBG_STATUS_DUMP_BUF_TOO_SMALL,
+ DBG_STATUS_DUMP_NOT_CHUNK_ALIGNED,
+ DBG_STATUS_UNKNOWN_CHIP,
+ DBG_STATUS_VIRT_MEM_ALLOC_FAILED,
+ DBG_STATUS_BLOCK_IN_RESET,
+ DBG_STATUS_INVALID_TRACE_SIGNATURE,
+ DBG_STATUS_INVALID_NVRAM_BUNDLE,
+ DBG_STATUS_NVRAM_GET_IMAGE_FAILED,
+ DBG_STATUS_NON_ALIGNED_NVRAM_IMAGE,
+ DBG_STATUS_NVRAM_READ_FAILED,
+ DBG_STATUS_IDLE_CHK_PARSE_FAILED,
+ DBG_STATUS_MCP_TRACE_BAD_DATA,
+ DBG_STATUS_MCP_TRACE_NO_META,
+ DBG_STATUS_MCP_COULD_NOT_HALT,
+ DBG_STATUS_MCP_COULD_NOT_RESUME,
+ DBG_STATUS_DMAE_FAILED,
+ DBG_STATUS_SEMI_FIFO_NOT_EMPTY,
+ DBG_STATUS_IGU_FIFO_BAD_DATA,
+ DBG_STATUS_MCP_COULD_NOT_MASK_PRTY,
+ DBG_STATUS_FW_ASSERTS_PARSE_FAILED,
+ DBG_STATUS_REG_FIFO_BAD_DATA,
+ DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA,
+ DBG_STATUS_DBG_ARRAY_NOT_SET,
+ DBG_STATUS_MULTI_BLOCKS_WITH_FILTER,
+ MAX_DBG_STATUS
+};
+
+
+/*
+ * Debug Storms IDs
+ */
+enum dbg_storms {
+ DBG_TSTORM_ID,
+ DBG_MSTORM_ID,
+ DBG_USTORM_ID,
+ DBG_XSTORM_ID,
+ DBG_YSTORM_ID,
+ DBG_PSTORM_ID,
+ MAX_DBG_STORMS
+};
+
+
+/*
+ * Idle Check data
+ */
+struct idle_chk_data {
+ __le32 buf_size /* Idle check buffer size in dwords */;
+/* Indicates if the idle check buffer size was set (0/1) */
+ u8 buf_size_set;
+ u8 reserved1;
+ __le16 reserved2;
+};
+
+/*
+ * Debug Tools data (per HW function)
+ */
+struct dbg_tools_data {
+ struct dbg_grc_data grc /* GRC Dump data */;
+ struct dbg_bus_data bus /* Debug Bus data */;
+ struct idle_chk_data idle_chk /* Idle Check data */;
+ u8 mode_enable[40] /* Indicates if a mode is enabled (0/1) */;
+/* Indicates if a block is in reset state (0/1) */
+ u8 block_in_reset[80];
+ u8 chip_id /* Chip ID (from enum chip_ids) */;
+ u8 platform_id /* Platform ID (from enum platform_ids) */;
+ u8 initialized /* Indicates if the data was initialized */;
+ u8 reserved;
+};
+
+
+#endif /* __ECORE_HSI_DEBUG_TOOLS__ */
diff --git a/drivers/net/qede/base/ecore_hsi_init_func.h b/drivers/net/qede/base/ecore_hsi_init_func.h
new file mode 100644
index 0000000..fca7479
--- /dev/null
+++ b/drivers/net/qede/base/ecore_hsi_init_func.h
@@ -0,0 +1,132 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_HSI_INIT_FUNC__
+#define __ECORE_HSI_INIT_FUNC__
+/********************************/
+/* HSI Init Functions constants */
+/********************************/
+
+/* Number of VLAN priorities */
+#define NUM_OF_VLAN_PRIORITIES 8
+
+
+/*
+ * BRB RAM init requirements
+ */
+struct init_brb_ram_req {
+ __le32 guranteed_per_tc /* guaranteed size per TC, in bytes */;
+ __le32 headroom_per_tc /* headroom size per TC, in bytes */;
+ __le32 min_pkt_size /* min packet size, in bytes */;
+ __le32 max_ports_per_engine /* min packet size, in bytes */;
+ u8 num_active_tcs[MAX_NUM_PORTS] /* number of active TCs per port */;
+};
+
+
+/*
+ * ETS per-TC init requirements
+ */
+struct init_ets_tc_req {
+/* if set, this TC participates in the arbitration with a strict priority
+ * (the priority is equal to the TC ID)
+ */
+ u8 use_sp;
+/* if set, this TC participates in the arbitration with a WFQ weight
+ * (indicated by the weight field)
+ */
+ u8 use_wfq;
+/* An arbitration weight. Valid only if use_wfq is set. */
+ __le16 weight;
+};
+
+/*
+ * ETS init requirements
+ */
+struct init_ets_req {
+ __le32 mtu /* Max packet size (in bytes) */;
+/* ETS initialization requirements per TC. */
+ struct init_ets_tc_req tc_req[NUM_OF_TCS];
+};
+
+
+
+/*
+ * NIG LB RL init requirements
+ */
+struct init_nig_lb_rl_req {
+/* Global MAC+LB RL rate (in Mbps). If set to 0, the RL will be disabled. */
+ __le16 lb_mac_rate;
+/* Global LB RL rate (in Mbps). If set to 0, the RL will be disabled. */
+ __le16 lb_rate;
+ __le32 mtu /* Max packet size (in bytes) */;
+/* RL rate per physical TC (in Mbps). If set to 0, the RL will be disabled. */
+ __le16 tc_rate[NUM_OF_PHYS_TCS];
+};
+
+
+/*
+ * NIG TC mapping for each priority
+ */
+struct init_nig_pri_tc_map_entry {
+ u8 tc_id /* the mapped TC ID */;
+ u8 valid /* indicates if the mapping entry is valid */;
+};
+
+
+/*
+ * NIG priority to TC map init requirements
+ */
+struct init_nig_pri_tc_map_req {
+ struct init_nig_pri_tc_map_entry pri[NUM_OF_VLAN_PRIORITIES];
+};
+
+
+/*
+ * QM per-port init parameters
+ */
+struct init_qm_port_params {
+ u8 active /* Indicates if this port is active */;
+/* Vector of valid bits for active TCs used by this port */
+ u8 active_phys_tcs;
+/* number of PBF command lines that can be used by this port */
+ __le16 num_pbf_cmd_lines;
+/* number of BTB blocks that can be used by this port */
+ __le16 num_btb_blocks;
+ __le16 reserved;
+};
+
+
+/*
+ * QM per-PQ init parameters
+ */
+struct init_qm_pq_params {
+ u8 vport_id /* VPORT ID */;
+ u8 tc_id /* TC ID */;
+ u8 wrr_group /* WRR group */;
+/* Indicates if a rate limiter should be allocated for the PQ (0/1) */
+ u8 rl_valid;
+};
+
+
+/*
+ * QM per-vport init parameters
+ */
+struct init_qm_vport_params {
+/* rate limit in Mb/sec units. a value of 0 means dont configure. ignored if
+ * VPORT RL is globally disabled.
+ */
+ __le32 vport_rl;
+/* WFQ weight. A value of 0 means dont configure. ignored if VPORT WFQ is
+ * globally disabled.
+ */
+ __le16 vport_wfq;
+/* the first Tx PQ ID associated with this VPORT for each TC. */
+ __le16 first_tx_pq_id[NUM_OF_TCS];
+};
+
+#endif /* __ECORE_HSI_INIT_FUNC__ */
diff --git a/drivers/net/qede/base/ecore_hsi_init_tool.h b/drivers/net/qede/base/ecore_hsi_init_tool.h
new file mode 100644
index 0000000..410b0bc
--- /dev/null
+++ b/drivers/net/qede/base/ecore_hsi_init_tool.h
@@ -0,0 +1,454 @@
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
+#ifndef __ECORE_HSI_INIT_TOOL__
+#define __ECORE_HSI_INIT_TOOL__
+/**************************************/
+/* Init Tool HSI constants and macros */
+/**************************************/
+
+/* Width of GRC address in bits (addresses are specified in dwords) */
+#define GRC_ADDR_BITS 23
+#define MAX_GRC_ADDR ((1 << GRC_ADDR_BITS) - 1)
+
+/* indicates an init that should be applied to any phase ID */
+#define ANY_PHASE_ID 0xffff
+
+/* Max size in dwords of a zipped array */
+#define MAX_ZIPPED_SIZE 8192
+
+
+struct fw_asserts_ram_section {
+/* The offset of the section in the RAM in RAM lines (64-bit units) */
+ __le16 section_ram_line_offset;
+/* The size of the section in RAM lines (64-bit units) */
+ __le16 section_ram_line_size;
+/* The offset of the asserts list within the section in dwords */
+ u8 list_dword_offset;
+/* The size of an assert list element in dwords */
+ u8 list_element_dword_size;
+ u8 list_num_elements /* The number of elements in the asserts list */;
+/* The offset of the next list index field within the section in dwords */
+ u8 list_next_index_dword_offset;
+};
+
+
+struct fw_ver_num {
+ u8 major /* Firmware major version number */;
+ u8 minor /* Firmware minor version number */;
+ u8 rev /* Firmware revision version number */;
+/* Firmware engineering version number (for bootleg versions) */
+ u8 eng;
+};
+
+struct fw_ver_info {
+ __le16 tools_ver /* Tools version number */;
+ u8 image_id /* FW image ID (e.g. main, l2b, kuku) */;
+ u8 reserved1;
+ struct fw_ver_num num /* FW version number */;
+ __le32 timestamp /* FW Timestamp in unix time (sec. since 1970) */;
+ __le32 reserved2;
+};
+
+struct fw_info {
+ struct fw_ver_info ver /* FW version information */;
+/* Info regarding the FW asserts section in the Storm RAM */
+ struct fw_asserts_ram_section fw_asserts_section;
+};
+
+
+struct fw_info_location {
+/* GRC address where the fw_info struct is located. */
+ __le32 grc_addr;
+/* Size of the fw_info structure (thats located at the grc_addr). */
+ __le32 size;
+};
+
+
+
+
+enum init_modes {
+ MODE_BB_A0,
+ MODE_BB_B0,
+ MODE_K2,
+ MODE_ASIC,
+ MODE_EMUL_REDUCED,
+ MODE_EMUL_FULL,
+ MODE_FPGA,
+ MODE_CHIPSIM,
+ MODE_SF,
+ MODE_MF_SD,
+ MODE_MF_SI,
+ MODE_PORTS_PER_ENG_1,
+ MODE_PORTS_PER_ENG_2,
+ MODE_PORTS_PER_ENG_4,
+ MODE_100G,
+ MODE_40G,
+ MODE_EAGLE_ENG1_WORKAROUND,
+ MAX_INIT_MODES
+};
+
+
+enum init_phases {
+ PHASE_ENGINE,
+ PHASE_PORT,
+ PHASE_PF,
+ PHASE_VF,
+ PHASE_QM_PF,
+ MAX_INIT_PHASES
+};
+
+
+enum init_split_types {
+ SPLIT_TYPE_NONE,
+ SPLIT_TYPE_PORT,
+ SPLIT_TYPE_PF,
+ SPLIT_TYPE_PORT_PF,
+ SPLIT_TYPE_VF,
+ MAX_INIT_SPLIT_TYPES
+};
+
+
+/*
+ * Binary buffer header
+ */
+struct bin_buffer_hdr {
+/* buffer offset in bytes from the beginning of the binary file */
+ __le32 offset;
+ __le32 length /* buffer length in bytes */;
+};
+
+
+/*
+ * binary init buffer types
+ */
+enum bin_init_buffer_type {
+ BIN_BUF_INIT_FW_VER_INFO /* fw_ver_info struct */,
+ BIN_BUF_INIT_CMD /* init commands */,
+ BIN_BUF_INIT_VAL /* init data */,
+ BIN_BUF_INIT_MODE_TREE /* init modes tree */,
+ BIN_BUF_INIT_IRO /* internal RAM offsets */,
+ MAX_BIN_INIT_BUFFER_TYPE
+};
+
+
+/*
+ * init array header: raw
+ */
+struct init_array_raw_hdr {
+ __le32 data;
+/* Init array type, from init_array_types enum */
+#define INIT_ARRAY_RAW_HDR_TYPE_MASK 0xF
+#define INIT_ARRAY_RAW_HDR_TYPE_SHIFT 0
+/* init array params */
+#define INIT_ARRAY_RAW_HDR_PARAMS_MASK 0xFFFFFFF
+#define INIT_ARRAY_RAW_HDR_PARAMS_SHIFT 4
+};
+
+/*
+ * init array header: standard
+ */
+struct init_array_standard_hdr {
+ __le32 data;
+/* Init array type, from init_array_types enum */
+#define INIT_ARRAY_STANDARD_HDR_TYPE_MASK 0xF
+#define INIT_ARRAY_STANDARD_HDR_TYPE_SHIFT 0
+/* Init array size (in dwords) */
+#define INIT_ARRAY_STANDARD_HDR_SIZE_MASK 0xFFFFFFF
+#define INIT_ARRAY_STANDARD_HDR_SIZE_SHIFT 4
+};
+
+/*
+ * init array header: zipped
+ */
+struct init_array_zipped_hdr {
+ __le32 data;
+/* Init array type, from init_array_types enum */
+#define INIT_ARRAY_ZIPPED_HDR_TYPE_MASK 0xF
+#define INIT_ARRAY_ZIPPED_HDR_TYPE_SHIFT 0
+/* Init array zipped size (in bytes) */
+#define INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE_MASK 0xFFFFFFF
+#define INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE_SHIFT 4
+};
+
+/*
+ * init array header: pattern
+ */
+struct init_array_pattern_hdr {
+ __le32 data;
+/* Init array type, from init_array_types enum */
+#define INIT_ARRAY_PATTERN_HDR_TYPE_MASK 0xF
+#define INIT_ARRAY_PATTERN_HDR_TYPE_SHIFT 0
+/* pattern size in dword */
+#define INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE_MASK 0xF
+#define INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE_SHIFT 4
+/* pattern repetitions */
+#define INIT_ARRAY_PATTERN_HDR_REPETITIONS_MASK 0xFFFFFF
+#define INIT_ARRAY_PATTERN_HDR_REPETITIONS_SHIFT 8
+};
+
+/*
+ * init array header union
+ */
+union init_array_hdr {
+ struct init_array_raw_hdr raw /* raw init array header */;
+/* standard init array header */
+ struct init_array_standard_hdr standard;
+ struct init_array_zipped_hdr zipped /* zipped init array header */;
+ struct init_array_pattern_hdr pattern /* pattern init array header */;
+};
+
+
+
+
+
+/*
+ * init array types
+ */
+enum init_array_types {
+ INIT_ARR_STANDARD /* standard init array */,
+ INIT_ARR_ZIPPED /* zipped init array */,
+ INIT_ARR_PATTERN /* a repeated pattern */,
+ MAX_INIT_ARRAY_TYPES
+};
+
+
+
+/*
+ * init operation: callback
+ */
+struct init_callback_op {
+ __le32 op_data;
+/* Init operation, from init_op_types enum */
+#define INIT_CALLBACK_OP_OP_MASK 0xF
+#define INIT_CALLBACK_OP_OP_SHIFT 0
+#define INIT_CALLBACK_OP_RESERVED_MASK 0xFFFFFFF
+#define INIT_CALLBACK_OP_RESERVED_SHIFT 4
+ __le16 callback_id /* Callback ID */;
+ __le16 block_id /* Blocks ID */;
+};
+
+
+/*
+ * init operation: delay
+ */
+struct init_delay_op {
+ __le32 op_data;
+/* Init operation, from init_op_types enum */
+#define INIT_DELAY_OP_OP_MASK 0xF
+#define INIT_DELAY_OP_OP_SHIFT 0
+#define INIT_DELAY_OP_RESERVED_MASK 0xFFFFFFF
+#define INIT_DELAY_OP_RESERVED_SHIFT 4
+ __le32 delay /* delay in us */;
+};
+
+
+/*
+ * init operation: if_mode
+ */
+struct init_if_mode_op {
+ __le32 op_data;
+/* Init operation, from init_op_types enum */
+#define INIT_IF_MODE_OP_OP_MASK 0xF
+#define INIT_IF_MODE_OP_OP_SHIFT 0
+#define INIT_IF_MODE_OP_RESERVED1_MASK 0xFFF
+#define INIT_IF_MODE_OP_RESERVED1_SHIFT 4
+/* Commands to skip if the modes don't match */
+#define INIT_IF_MODE_OP_CMD_OFFSET_MASK 0xFFFF
+#define INIT_IF_MODE_OP_CMD_OFFSET_SHIFT 16
+ __le16 reserved2;
+/* offset (in bytes) in modes expression buffer */
+ __le16 modes_buf_offset;
+};
+
+
+/*
+ * init operation: if_phase
+ */
+struct init_if_phase_op {
+ __le32 op_data;
+/* Init operation, from init_op_types enum */
+#define INIT_IF_PHASE_OP_OP_MASK 0xF
+#define INIT_IF_PHASE_OP_OP_SHIFT 0
+/* Indicates if DMAE is enabled in this phase */
+#define INIT_IF_PHASE_OP_DMAE_ENABLE_MASK 0x1
+#define INIT_IF_PHASE_OP_DMAE_ENABLE_SHIFT 4
+#define INIT_IF_PHASE_OP_RESERVED1_MASK 0x7FF
+#define INIT_IF_PHASE_OP_RESERVED1_SHIFT 5
+/* Commands to skip if the phases don't match */
+#define INIT_IF_PHASE_OP_CMD_OFFSET_MASK 0xFFFF
+#define INIT_IF_PHASE_OP_CMD_OFFSET_SHIFT 16
+ __le32 phase_data;
+#define INIT_IF_PHASE_OP_PHASE_MASK 0xFF /* Init phase */
+#define INIT_IF_PHASE_OP_PHASE_SHIFT 0
+#define INIT_IF_PHASE_OP_RESERVED2_MASK 0xFF
+#define INIT_IF_PHASE_OP_RESERVED2_SHIFT 8
+#define INIT_IF_PHASE_OP_PHASE_ID_MASK 0xFFFF /* Init phase ID */
+#define INIT_IF_PHASE_OP_PHASE_ID_SHIFT 16
+};
+
+
+/*
+ * init mode operators
+ */
+enum init_mode_ops {
+ INIT_MODE_OP_NOT /* init mode not operator */,
+ INIT_MODE_OP_OR /* init mode or operator */,
+ INIT_MODE_OP_AND /* init mode and operator */,
+ MAX_INIT_MODE_OPS
+};
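
These operators make the modes tree (BIN_BUF_INIT_MODE_TREE) a byte-coded prefix expression: a token below MAX_INIT_MODE_OPS is an operator, anything else is a mode bit index offset by MAX_INIT_MODE_OPS. A hedged sketch of an evaluator (hypothetical helper, not the actual ecore routine):

#include <stdint.h>

static uint8_t eval_modes_expr(const uint8_t *buf, uint16_t *offset,
			       uint32_t init_modes)
{
	uint8_t tok = buf[(*offset)++];
	uint8_t arg1, arg2;

	switch (tok) {
	case INIT_MODE_OP_NOT:
		return eval_modes_expr(buf, offset, init_modes) ^ 1;
	case INIT_MODE_OP_OR:
		arg1 = eval_modes_expr(buf, offset, init_modes);
		arg2 = eval_modes_expr(buf, offset, init_modes);
		return arg1 | arg2;
	case INIT_MODE_OP_AND:
		arg1 = eval_modes_expr(buf, offset, init_modes);
		arg2 = eval_modes_expr(buf, offset, init_modes);
		return arg1 & arg2;
	default: /* leaf: a mode bit index */
		return (init_modes >> (tok - MAX_INIT_MODE_OPS)) & 1;
	}
}
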
+
+
+/*
+ * init operation: raw
+ */
+struct init_raw_op {
+ __le32 op_data;
+/* Init operation, from init_op_types enum */
+#define INIT_RAW_OP_OP_MASK 0xF
+#define INIT_RAW_OP_OP_SHIFT 0
+#define INIT_RAW_OP_PARAM1_MASK 0xFFFFFFF /* init param 1 */
+#define INIT_RAW_OP_PARAM1_SHIFT 4
+ __le32 param2 /* Init param 2 */;
+};
+
+/*
+ * init array params
+ */
+struct init_op_array_params {
+ __le16 size /* array size in dwords */;
+ __le16 offset /* array start offset in dwords */;
+};
+
+/*
+ * Write init operation arguments
+ */
+union init_write_args {
+/* value to write, used when init source is INIT_SRC_INLINE */
+ __le32 inline_val;
+/* number of zeros to write, used when init source is INIT_SRC_ZEROS */
+ __le32 zeros_count;
+/* array offset to write, used when init source is INIT_SRC_ARRAY */
+ __le32 array_offset;
+/* runtime array params to write, used when init source is INIT_SRC_RUNTIME */
+ struct init_op_array_params runtime;
+};
+
+/*
+ * init operation: write
+ */
+struct init_write_op {
+ __le32 data;
+/* init operation, from init_op_types enum */
+#define INIT_WRITE_OP_OP_MASK 0xF
+#define INIT_WRITE_OP_OP_SHIFT 0
+/* init source type, taken from init_source_types enum */
+#define INIT_WRITE_OP_SOURCE_MASK 0x7
+#define INIT_WRITE_OP_SOURCE_SHIFT 4
+#define INIT_WRITE_OP_RESERVED_MASK 0x1
+#define INIT_WRITE_OP_RESERVED_SHIFT 7
+/* indicates if the register is wide-bus */
+#define INIT_WRITE_OP_WIDE_BUS_MASK 0x1
+#define INIT_WRITE_OP_WIDE_BUS_SHIFT 8
+/* internal (absolute) GRC address, in dwords */
+#define INIT_WRITE_OP_ADDRESS_MASK 0x7FFFFF
+#define INIT_WRITE_OP_ADDRESS_SHIFT 9
+ union init_write_args args /* Write init operation arguments */;
+};
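
The SOURCE nibble decides which member of init_write_args is meaningful; a small hedged decoder sketch (illustrative only):

static const char *init_write_source_name(const struct init_write_op *op)
{
	/* op->data assumed already converted to host byte order */
	switch (GET_FIELD(op->data, INIT_WRITE_OP_SOURCE)) {
	case INIT_SRC_INLINE:
		return "args.inline_val";
	case INIT_SRC_ZEROS:
		return "args.zeros_count";
	case INIT_SRC_ARRAY:
		return "args.array_offset";
	case INIT_SRC_RUNTIME:
		return "args.runtime";
	default:
		return "unknown";
	}
}

(enum init_source_types is defined further down in this header.)
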
+
+/*
+ * init operation: read
+ */
+struct init_read_op {
+ __le32 op_data;
+/* init operation, from init_op_types enum */
+#define INIT_READ_OP_OP_MASK 0xF
+#define INIT_READ_OP_OP_SHIFT 0
+/* polling type, from init_poll_types enum */
+#define INIT_READ_OP_POLL_TYPE_MASK 0xF
+#define INIT_READ_OP_POLL_TYPE_SHIFT 4
+#define INIT_READ_OP_RESERVED_MASK 0x1
+#define INIT_READ_OP_RESERVED_SHIFT 8
+/* internal (absolute) GRC address, in dwords */
+#define INIT_READ_OP_ADDRESS_MASK 0x7FFFFF
+#define INIT_READ_OP_ADDRESS_SHIFT 9
+/* expected polling value, used only when polling is done */
+ __le32 expected_val;
+};
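
How expected_val is applied depends on the poll type selected in POLL_TYPE; a minimal sketch of the three conventional comparators (matching the EQ/OR/AND semantics noted on enum init_poll_types below; the helper names are illustrative):

#include <stdbool.h>
#include <stdint.h>

static bool comp_eq(uint32_t val, uint32_t expected)
{
	return val == expected;
}

static bool comp_or(uint32_t val, uint32_t expected)
{
	return (val | expected) > 0;
}

static bool comp_and(uint32_t val, uint32_t expected)
{
	return (val & expected) == expected;
}
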
+
+/*
+ * Init operations union
+ */
+union init_op {
+ struct init_raw_op raw /* raw init operation */;
+ struct init_write_op write /* write init operation */;
+ struct init_read_op read /* read init operation */;
+ struct init_if_mode_op if_mode /* if_mode init operation */;
+ struct init_if_phase_op if_phase /* if_phase init operation */;
+ struct init_callback_op callback /* callback init operation */;
+ struct init_delay_op delay /* delay init operation */;
+};
+
+
+
+/*
+ * Init command operation types
+ */
+enum init_op_types {
+ INIT_OP_READ /* GRC read init command */,
+ INIT_OP_WRITE /* GRC write init command */,
+/* Skip init commands if the init modes expression doesn't match */
+ INIT_OP_IF_MODE,
+/* Skip init commands if the init phase doesn't match */
+ INIT_OP_IF_PHASE,
+ INIT_OP_DELAY /* delay init command */,
+ INIT_OP_CALLBACK /* callback init command */,
+ MAX_INIT_OP_TYPES
+};
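
Taken together, an init engine walks the command buffer and dispatches on this op nibble; a condensed, hypothetical sketch (the handler functions are placeholders, assumed provided elsewhere):

/* hypothetical handlers */
extern void do_write(const struct init_write_op *op);
extern void do_read_poll(const struct init_read_op *op);
extern int mode_matches(const struct init_if_mode_op *op);

static void run_init_cmds(union init_op *ops, uint32_t num_ops)
{
	uint32_t i = 0;

	while (i < num_ops) {
		union init_op *op = &ops[i++];

		/* op_data assumed already in host byte order */
		switch (GET_FIELD(op->raw.op_data, INIT_RAW_OP_OP)) {
		case INIT_OP_WRITE:
			do_write(&op->write);
			break;
		case INIT_OP_READ:
			do_read_poll(&op->read);
			break;
		case INIT_OP_IF_MODE:
			/* skip CMD_OFFSET commands on a mode mismatch */
			if (!mode_matches(&op->if_mode))
				i += GET_FIELD(op->if_mode.op_data,
					       INIT_IF_MODE_OP_CMD_OFFSET);
			break;
		default:
			break;
		}
	}
}
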
+
+
+/*
+ * init polling types
+ */
+enum init_poll_types {
+ INIT_POLL_NONE /* No polling */,
+ INIT_POLL_EQ /* poll until the read value equals the expected value */,
+ INIT_POLL_OR /* poll until (read value | expected value) is non-zero */,
+ INIT_POLL_AND /* poll until (read value & expected value) equals expected */,
+ MAX_INIT_POLL_TYPES
+};
+
+
+
+
+/*
+ * init source types
+ */
+enum init_source_types {
+ INIT_SRC_INLINE /* init value is included in the init command */,
+ INIT_SRC_ZEROS /* init value is all zeros */,
+ INIT_SRC_ARRAY /* init value is an array of values */,
+ INIT_SRC_RUNTIME /* init value is provided during runtime */,
+ MAX_INIT_SOURCE_TYPES
+};
+
+
+
+
+/*
+ * Internal RAM Offsets macro data
+ */
+struct iro {
+ __le32 base /* RAM field offset */;
+ __le16 m1 /* multiplier 1 */;
+ __le16 m2 /* multiplier 2 */;
+ __le16 m3 /* multiplier 3 */;
+ __le16 size /* RAM field size */;
+};
+
+#endif /* __ECORE_HSI_INIT_TOOL__ */
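
For context, the iro table above is what the generated IRO access macros consume: a field's internal RAM address is its base plus per-index multipliers. A hedged sketch of that computation (parameter names are illustrative):

#include <stdint.h>

/* resolve an internal RAM offset; id1/id2/id3 are the caller's indices
 * (e.g. queue/PF ids) scaled by the entry's multipliers; fields are
 * assumed already converted to host byte order */
static uint32_t iro_offset(const struct iro *e,
			   uint16_t id1, uint16_t id2, uint16_t id3)
{
	return e->base + e->m1 * id1 + e->m2 * id2 + e->m3 * id3;
}
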
diff --git a/drivers/net/qede/base/ecore_hsi_tools.h b/drivers/net/qede/base/ecore_hsi_tools.h
deleted file mode 100644
index 18eea76..0000000
--- a/drivers/net/qede/base/ecore_hsi_tools.h
+++ /dev/null
@@ -1,1081 +0,0 @@
-/*
- * Copyright (c) 2016 QLogic Corporation.
- * All rights reserved.
- * www.qlogic.com
- *
- * See LICENSE.qede_pmd for copyright and licensing details.
- */
-
-#ifndef __ECORE_HSI_TOOLS__
-#define __ECORE_HSI_TOOLS__
-/**********************************/
-/* Tools HSI constants and macros */
-/**********************************/
-
-/*********************************** Init ************************************/
-
-/* Width of GRC address in bits (addresses are specified in dwords) */
-#define GRC_ADDR_BITS 23
-#define MAX_GRC_ADDR ((1 << GRC_ADDR_BITS) - 1)
-
-/* indicates an init that should be applied to any phase ID */
-#define ANY_PHASE_ID 0xffff
-
-/* init pattern size in bytes */
-#define INIT_PATTERN_SIZE_BITS 4
-#define MAX_INIT_PATTERN_SIZE (1 << INIT_PATTERN_SIZE_BITS)
-
-/* Max size in dwords of a zipped array */
-#define MAX_ZIPPED_SIZE 8192
-
-/* Global PXP window */
-#define NUM_OF_PXP_WIN 19
-#define PXP_WIN_DWORD_SIZE_BITS 10
-#define PXP_WIN_DWORD_SIZE (1 << PXP_WIN_DWORD_SIZE_BITS)
-#define PXP_WIN_BYTE_SIZE_BITS (PXP_WIN_DWORD_SIZE_BITS + 2)
-#define PXP_WIN_BYTE_SIZE (PXP_WIN_DWORD_SIZE * 4)
-
-/********************************* GRC Dump **********************************/
-
-/* width of GRC dump register sequence length in bits */
-#define DUMP_SEQ_LEN_BITS 8
-#define DUMP_SEQ_LEN_MAX_VAL ((1 << DUMP_SEQ_LEN_BITS) - 1)
-
-/* width of GRC dump memory length in bits */
-#define DUMP_MEM_LEN_BITS 18
-#define DUMP_MEM_LEN_MAX_VAL ((1 << DUMP_MEM_LEN_BITS) - 1)
-
-/* width of register type ID in bits */
-#define REG_TYPE_ID_BITS 6
-#define REG_TYPE_ID_MAX_VAL ((1 << REG_TYPE_ID_BITS) - 1)
-
-/* width of block ID in bits */
-#define BLOCK_ID_BITS 8
-#define BLOCK_ID_MAX_VAL ((1 << BLOCK_ID_BITS) - 1)
-
-/******************************** Idle Check *********************************/
-
-/* max number of idle check predicate immediates */
-#define MAX_IDLE_CHK_PRED_IMM 3
-
-/* max number of idle check argument registers */
-#define MAX_IDLE_CHK_READ_REGS 3
-
-/* max number of idle check loops */
-#define MAX_IDLE_CHK_LOOPS 0x10000
-
-/* max idle check address increment */
-#define MAX_IDLE_CHK_INCREMENT 0x10000
-
-/* indicates an undefined idle check line index */
-#define IDLE_CHK_UNDEFINED_LINE_IDX 0xffffff
-
-/* max number of register values following the idle check header for LSI */
-#define IDLE_CHK_MAX_LSI_DUMP_REGS 2
-
-/* arguments for IDLE_CHK_MACRO_TYPE_QM_RD_WR */
-#define IDLE_CHK_QM_RD_WR_PTR 0
-#define IDLE_CHK_QM_RD_WR_BANK 1
-
-/**************************************/
-/* HSI Functions constants and macros */
-/**************************************/
-
-/* Number of VLAN priorities */
-#define NUM_OF_VLAN_PRIORITIES 8
-
-/* the MCP Trace metadata signature is duplicated in the
- * perl script that generates the NVRAM images
- */
-#define MCP_TRACE_META_IMAGE_SIGNATURE 0x669955aa
-
-/* Maximal number of RAM lines occupied by FW Asserts data */
-#define MAX_FW_ASSERTS_RAM_LINES 800
-
-/*
- * Binary buffer header
- */
-struct bin_buffer_hdr {
- __le32 offset
- /* buffer offset in bytes from the beginning of the binary file */;
- __le32 length /* buffer length in bytes */;
-};
-
-/*
- * binary buffer types
- */
-enum bin_buffer_type {
- BIN_BUF_FW_VER_INFO /* fw_ver_info struct */,
- BIN_BUF_INIT_CMD /* init commands */,
- BIN_BUF_INIT_VAL /* init data */,
- BIN_BUF_INIT_MODE_TREE /* init modes tree */,
- BIN_BUF_IRO /* internal RAM offsets array */,
- MAX_BIN_BUFFER_TYPE
-};
-
-/*
- * Chip IDs
- */
-enum chip_ids {
- CHIP_BB_A0 /* BB A0 chip ID */,
- CHIP_BB_B0 /* BB B0 chip ID */,
- CHIP_K2 /* AH chip ID */,
- MAX_CHIP_IDS
-};
-
-/*
- * memory dump descriptor
- */
-struct dbg_dump_mem_desc {
- __le32 dword0;
-#define DBG_DUMP_MEM_DESC_ADDRESS_MASK 0xFFFFFF
-#define DBG_DUMP_MEM_DESC_ADDRESS_SHIFT 0
-#define DBG_DUMP_MEM_DESC_ASIC_CHIP_MASK_MASK 0xF
-#define DBG_DUMP_MEM_DESC_ASIC_CHIP_MASK_SHIFT 24
-#define DBG_DUMP_MEM_DESC_SIM_CHIP_MASK_MASK 0xF
-#define DBG_DUMP_MEM_DESC_SIM_CHIP_MASK_SHIFT 28
- __le32 dword1;
-#define DBG_DUMP_MEM_DESC_LENGTH_MASK 0x3FFFF
-#define DBG_DUMP_MEM_DESC_LENGTH_SHIFT 0
-#define DBG_DUMP_MEM_DESC_REG_TYPE_ID_MASK 0x3F
-#define DBG_DUMP_MEM_DESC_REG_TYPE_ID_SHIFT 18
-#define DBG_DUMP_MEM_DESC_BLOCK_ID_MASK 0xFF
-#define DBG_DUMP_MEM_DESC_BLOCK_ID_SHIFT 24
-};
-
-/*
- * registers dump descriptor: chip
- */
-struct dbg_dump_regs_chip_desc {
- __le32 data;
-#define DBG_DUMP_REGS_CHIP_DESC_IS_CHIP_MASK_MASK 0x1
-#define DBG_DUMP_REGS_CHIP_DESC_IS_CHIP_MASK_SHIFT 0
-#define DBG_DUMP_REGS_CHIP_DESC_ASIC_CHIP_MASK_MASK 0x7FFFFF
-#define DBG_DUMP_REGS_CHIP_DESC_ASIC_CHIP_MASK_SHIFT 1
-#define DBG_DUMP_REGS_CHIP_DESC_SIM_CHIP_MASK_MASK 0xFF
-#define DBG_DUMP_REGS_CHIP_DESC_SIM_CHIP_MASK_SHIFT 24
-};
-
-/*
- * registers dump descriptor: raw
- */
-struct dbg_dump_regs_raw_desc {
- __le32 data;
-#define DBG_DUMP_REGS_RAW_DESC_IS_CHIP_MASK_MASK 0x1
-#define DBG_DUMP_REGS_RAW_DESC_IS_CHIP_MASK_SHIFT 0
-#define DBG_DUMP_REGS_RAW_DESC_PARAM1_MASK 0x7FFFFF
-#define DBG_DUMP_REGS_RAW_DESC_PARAM1_SHIFT 1
-#define DBG_DUMP_REGS_RAW_DESC_PARAM2_MASK 0xFF
-#define DBG_DUMP_REGS_RAW_DESC_PARAM2_SHIFT 24
-};
-
-/*
- * registers dump descriptor: sequence
- */
-struct dbg_dump_regs_seq_desc {
- __le32 data;
-#define DBG_DUMP_REGS_SEQ_DESC_IS_CHIP_MASK_MASK 0x1
-#define DBG_DUMP_REGS_SEQ_DESC_IS_CHIP_MASK_SHIFT 0
-#define DBG_DUMP_REGS_SEQ_DESC_ADDRESS_MASK 0x7FFFFF
-#define DBG_DUMP_REGS_SEQ_DESC_ADDRESS_SHIFT 1
-#define DBG_DUMP_REGS_SEQ_DESC_LENGTH_MASK 0xFF
-#define DBG_DUMP_REGS_SEQ_DESC_LENGTH_SHIFT 24
-};
-
-/*
- * registers dump descriptor
- */
-union dbg_dump_regs_desc {
- struct dbg_dump_regs_raw_desc raw /* dumped registers raw descriptor */
- ;
- struct dbg_dump_regs_seq_desc seq /* dumped registers seq descriptor */
- ;
- struct dbg_dump_regs_chip_desc chip
- /* dumped registers chip descriptor */;
-};
-
-/*
- * idle check macro types
- */
-enum idle_chk_macro_types {
- IDLE_CHK_MACRO_TYPE_COMPARE /* parametric register comparison */,
- IDLE_CHK_MACRO_TYPE_QM_RD_WR /* compare QM r/w pointers and banks */,
- MAX_IDLE_CHK_MACRO_TYPES
-};
-
-/*
- * Idle Check result header
- */
-struct idle_chk_result_hdr {
- __le16 rule_idx /* Idle check rule index in CSV file */;
- __le16 loop_idx /* the loop index in which the failure occurred */;
- __le16 num_fw_values;
- __le16 data;
-#define IDLE_CHK_RESULT_HDR_NUM_LSI_VALUES_MASK 0xF
-#define IDLE_CHK_RESULT_HDR_NUM_LSI_VALUES_SHIFT 0
-#define IDLE_CHK_RESULT_HDR_LOOP_VALID_MASK 0x1
-#define IDLE_CHK_RESULT_HDR_LOOP_VALID_SHIFT 4
-#define IDLE_CHK_RESULT_HDR_SEVERITY_MASK 0x7
-#define IDLE_CHK_RESULT_HDR_SEVERITY_SHIFT 5
-#define IDLE_CHK_RESULT_HDR_MACRO_TYPE_MASK 0xF
-#define IDLE_CHK_RESULT_HDR_MACRO_TYPE_SHIFT 8
-#define IDLE_CHK_RESULT_HDR_MACRO_TYPE_ARG_MASK 0xF
-#define IDLE_CHK_RESULT_HDR_MACRO_TYPE_ARG_SHIFT 12
-};
-
-/*
- * Idle Check rule
- */
-struct idle_chk_rule {
- __le32 data;
-#define IDLE_CHK_RULE_ASIC_CHIP_MASK_MASK 0xF
-#define IDLE_CHK_RULE_ASIC_CHIP_MASK_SHIFT 0
-#define IDLE_CHK_RULE_SIM_CHIP_MASK_MASK 0xF
-#define IDLE_CHK_RULE_SIM_CHIP_MASK_SHIFT 4
-#define IDLE_CHK_RULE_BLOCK_ID_MASK 0xFF
-#define IDLE_CHK_RULE_BLOCK_ID_SHIFT 8
-#define IDLE_CHK_RULE_MACRO_TYPE_MASK 0xF
-#define IDLE_CHK_RULE_MACRO_TYPE_SHIFT 16
-#define IDLE_CHK_RULE_SEVERITY_MASK 0x7
-#define IDLE_CHK_RULE_SEVERITY_SHIFT 20
-#define IDLE_CHK_RULE_RESERVED_MASK 0x1
-#define IDLE_CHK_RULE_RESERVED_SHIFT 23
-#define IDLE_CHK_RULE_PRED_ID_MASK 0xFF
-#define IDLE_CHK_RULE_PRED_ID_SHIFT 24
- __le16 loop;
- __le16 increment
- /* address increment of first argument register on each iteration */
- ;
- __le32 reg_addr[3];
- __le32 pred_imm[3]
- /* immediate values passed as arguments to the idle check rule */;
-};
-
-/*
- * idle check severity types
- */
-enum idle_chk_severity_types {
- IDLE_CHK_SEVERITY_ERROR /* idle check failure should cause an error */,
- IDLE_CHK_SEVERITY_ERROR_NO_TRAFFIC
- ,
- IDLE_CHK_SEVERITY_WARNING
- /* idle check failure should cause a warning */,
- MAX_IDLE_CHK_SEVERITY_TYPES
-};
-
-/*
- * init array header: raw
- */
-struct init_array_raw_hdr {
- __le32 data;
-#define INIT_ARRAY_RAW_HDR_TYPE_MASK 0xF
-#define INIT_ARRAY_RAW_HDR_TYPE_SHIFT 0
-#define INIT_ARRAY_RAW_HDR_PARAMS_MASK 0xFFFFFFF
-#define INIT_ARRAY_RAW_HDR_PARAMS_SHIFT 4
-};
-
-/*
- * init array header: standard
- */
-struct init_array_standard_hdr {
- __le32 data;
-#define INIT_ARRAY_STANDARD_HDR_TYPE_MASK 0xF
-#define INIT_ARRAY_STANDARD_HDR_TYPE_SHIFT 0
-#define INIT_ARRAY_STANDARD_HDR_SIZE_MASK 0xFFFFFFF
-#define INIT_ARRAY_STANDARD_HDR_SIZE_SHIFT 4
-};
-
-/*
- * init array header: zipped
- */
-struct init_array_zipped_hdr {
- __le32 data;
-#define INIT_ARRAY_ZIPPED_HDR_TYPE_MASK 0xF
-#define INIT_ARRAY_ZIPPED_HDR_TYPE_SHIFT 0
-#define INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE_MASK 0xFFFFFFF
-#define INIT_ARRAY_ZIPPED_HDR_ZIPPED_SIZE_SHIFT 4
-};
-
-/*
- * init array header: pattern
- */
-struct init_array_pattern_hdr {
- __le32 data;
-#define INIT_ARRAY_PATTERN_HDR_TYPE_MASK 0xF
-#define INIT_ARRAY_PATTERN_HDR_TYPE_SHIFT 0
-#define INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE_MASK 0xF
-#define INIT_ARRAY_PATTERN_HDR_PATTERN_SIZE_SHIFT 4
-#define INIT_ARRAY_PATTERN_HDR_REPETITIONS_MASK 0xFFFFFF
-#define INIT_ARRAY_PATTERN_HDR_REPETITIONS_SHIFT 8
-};
-
-/*
- * init array header union
- */
-union init_array_hdr {
- struct init_array_raw_hdr raw /* raw init array header */;
- struct init_array_standard_hdr standard /* standard init array header */
- ;
- struct init_array_zipped_hdr zipped /* zipped init array header */;
- struct init_array_pattern_hdr pattern /* pattern init array header */;
-};
-
-/*
- * init array types
- */
-enum init_array_types {
- INIT_ARR_STANDARD /* standard init array */,
- INIT_ARR_ZIPPED /* zipped init array */,
- INIT_ARR_PATTERN /* a repeated pattern */,
- MAX_INIT_ARRAY_TYPES
-};
-
-/*
- * init operation: callback
- */
-struct init_callback_op {
- __le32 op_data;
-#define INIT_CALLBACK_OP_OP_MASK 0xF
-#define INIT_CALLBACK_OP_OP_SHIFT 0
-#define INIT_CALLBACK_OP_RESERVED_MASK 0xFFFFFFF
-#define INIT_CALLBACK_OP_RESERVED_SHIFT 4
- __le16 callback_id /* Callback ID */;
- __le16 block_id /* Blocks ID */;
-};
-
-/*
- * init operation: delay
- */
-struct init_delay_op {
- __le32 op_data;
-#define INIT_DELAY_OP_OP_MASK 0xF
-#define INIT_DELAY_OP_OP_SHIFT 0
-#define INIT_DELAY_OP_RESERVED_MASK 0xFFFFFFF
-#define INIT_DELAY_OP_RESERVED_SHIFT 4
- __le32 delay /* delay in us */;
-};
-
-/*
- * init operation: if_mode
- */
-struct init_if_mode_op {
- __le32 op_data;
-#define INIT_IF_MODE_OP_OP_MASK 0xF
-#define INIT_IF_MODE_OP_OP_SHIFT 0
-#define INIT_IF_MODE_OP_RESERVED1_MASK 0xFFF
-#define INIT_IF_MODE_OP_RESERVED1_SHIFT 4
-#define INIT_IF_MODE_OP_CMD_OFFSET_MASK 0xFFFF
-#define INIT_IF_MODE_OP_CMD_OFFSET_SHIFT 16
- __le16 reserved2;
- __le16 modes_buf_offset
- /* offset (in bytes) in modes expression buffer */;
-};
-
-/*
- * init operation: if_phase
- */
-struct init_if_phase_op {
- __le32 op_data;
-#define INIT_IF_PHASE_OP_OP_MASK 0xF
-#define INIT_IF_PHASE_OP_OP_SHIFT 0
-#define INIT_IF_PHASE_OP_DMAE_ENABLE_MASK 0x1
-#define INIT_IF_PHASE_OP_DMAE_ENABLE_SHIFT 4
-#define INIT_IF_PHASE_OP_RESERVED1_MASK 0x7FF
-#define INIT_IF_PHASE_OP_RESERVED1_SHIFT 5
-#define INIT_IF_PHASE_OP_CMD_OFFSET_MASK 0xFFFF
-#define INIT_IF_PHASE_OP_CMD_OFFSET_SHIFT 16
- __le32 phase_data;
-#define INIT_IF_PHASE_OP_PHASE_MASK 0xFF
-#define INIT_IF_PHASE_OP_PHASE_SHIFT 0
-#define INIT_IF_PHASE_OP_RESERVED2_MASK 0xFF
-#define INIT_IF_PHASE_OP_RESERVED2_SHIFT 8
-#define INIT_IF_PHASE_OP_PHASE_ID_MASK 0xFFFF
-#define INIT_IF_PHASE_OP_PHASE_ID_SHIFT 16
-};
-
-/*
- * init mode operators
- */
-enum init_mode_ops {
- INIT_MODE_OP_NOT /* init mode not operator */,
- INIT_MODE_OP_OR /* init mode or operator */,
- INIT_MODE_OP_AND /* init mode and operator */,
- MAX_INIT_MODE_OPS
-};
-
-/*
- * init operation: raw
- */
-struct init_raw_op {
- __le32 op_data;
-#define INIT_RAW_OP_OP_MASK 0xF
-#define INIT_RAW_OP_OP_SHIFT 0
-#define INIT_RAW_OP_PARAM1_MASK 0xFFFFFFF
-#define INIT_RAW_OP_PARAM1_SHIFT 4
- __le32 param2 /* Init param 2 */;
-};
-
-/*
- * init array params
- */
-struct init_op_array_params {
- __le16 size /* array size in dwords */;
- __le16 offset /* array start offset in dwords */;
-};
-
-/*
- * Write init operation arguments
- */
-union init_write_args {
- __le32 inline_val
- /* value to write, used when init source is INIT_SRC_INLINE */;
- __le32 zeros_count;
- __le32 array_offset
- /* array offset to write, used when init source is INIT_SRC_ARRAY */
- ;
- struct init_op_array_params runtime;
-};
-
-/*
- * init operation: write
- */
-struct init_write_op {
- __le32 data;
-#define INIT_WRITE_OP_OP_MASK 0xF
-#define INIT_WRITE_OP_OP_SHIFT 0
-#define INIT_WRITE_OP_SOURCE_MASK 0x7
-#define INIT_WRITE_OP_SOURCE_SHIFT 4
-#define INIT_WRITE_OP_RESERVED_MASK 0x1
-#define INIT_WRITE_OP_RESERVED_SHIFT 7
-#define INIT_WRITE_OP_WIDE_BUS_MASK 0x1
-#define INIT_WRITE_OP_WIDE_BUS_SHIFT 8
-#define INIT_WRITE_OP_ADDRESS_MASK 0x7FFFFF
-#define INIT_WRITE_OP_ADDRESS_SHIFT 9
- union init_write_args args /* Write init operation arguments */;
-};
-
-/*
- * init operation: read
- */
-struct init_read_op {
- __le32 op_data;
-#define INIT_READ_OP_OP_MASK 0xF
-#define INIT_READ_OP_OP_SHIFT 0
-#define INIT_READ_OP_POLL_TYPE_MASK 0xF
-#define INIT_READ_OP_POLL_TYPE_SHIFT 4
-#define INIT_READ_OP_RESERVED_MASK 0x1
-#define INIT_READ_OP_RESERVED_SHIFT 8
-#define INIT_READ_OP_ADDRESS_MASK 0x7FFFFF
-#define INIT_READ_OP_ADDRESS_SHIFT 9
- __le32 expected_val
- /* expected polling value, used only when polling is done */;
-};
-
-/*
- * Init operations union
- */
-union init_op {
- struct init_raw_op raw /* raw init operation */;
- struct init_write_op write /* write init operation */;
- struct init_read_op read /* read init operation */;
- struct init_if_mode_op if_mode /* if_mode init operation */;
- struct init_if_phase_op if_phase /* if_phase init operation */;
- struct init_callback_op callback /* callback init operation */;
- struct init_delay_op delay /* delay init operation */;
-};
-
-/*
- * Init command operation types
- */
-enum init_op_types {
- INIT_OP_READ /* GRC read init command */,
- INIT_OP_WRITE /* GRC write init command */,
- INIT_OP_IF_MODE
- /* Skip init commands if the init modes expression doesn't match */,
- INIT_OP_IF_PHASE
- /* Skip init commands if the init phase doesn't match */,
- INIT_OP_DELAY /* delay init command */,
- INIT_OP_CALLBACK /* callback init command */,
- MAX_INIT_OP_TYPES
-};
-
-/*
- * init polling types
- */
-enum init_poll_types {
- INIT_POLL_NONE /* No polling */,
- INIT_POLL_EQ /* poll until the read value equals the expected value */,
- INIT_POLL_OR /* poll until (read value | expected value) is non-zero */,
- INIT_POLL_AND /* poll until (read value & expected value) equals expected */,
- MAX_INIT_POLL_TYPES
-};
-
-/*
- * init source types
- */
-enum init_source_types {
- INIT_SRC_INLINE /* init value is included in the init command */,
- INIT_SRC_ZEROS /* init value is all zeros */,
- INIT_SRC_ARRAY /* init value is an array of values */,
- INIT_SRC_RUNTIME /* init value is provided during runtime */,
- MAX_INIT_SOURCE_TYPES
-};
-
-/*
- * Internal RAM Offsets macro data
- */
-struct iro {
- __le32 base /* RAM field offset */;
- __le16 m1 /* multiplier 1 */;
- __le16 m2 /* multiplier 2 */;
- __le16 m3 /* multiplier 3 */;
- __le16 size /* RAM field size */;
-};
-
-/*
- * register descriptor
- */
-struct reg_desc {
- __le32 data;
-#define REG_DESC_ADDRESS_MASK 0xFFFFFF
-#define REG_DESC_ADDRESS_SHIFT 0
-#define REG_DESC_SIZE_MASK 0xFF
-#define REG_DESC_SIZE_SHIFT 24
-};
-
-/*
- * Debug Bus block data
- */
-struct dbg_bus_block_data {
- u8 enabled /* Indicates if the block is enabled for recording (0/1) */;
- u8 hw_id /* HW ID associated with the block */;
- u8 line_num /* Debug line number to select */;
- u8 right_shift /* Number of units to right the debug data (0-3) */;
- u8 cycle_en /* 4-bit value: bit i set -> unit i is enabled. */;
- u8 force_valid /* 4-bit value: bit i set -> unit i is forced valid. */;
- u8 force_frame
- /* 4-bit value: bit i set -> unit i frame bit is forced. */;
- u8 reserved;
-};
-
-/*
- * Debug Bus Clients
- */
-enum dbg_bus_clients {
- DBG_BUS_CLIENT_RBCN,
- DBG_BUS_CLIENT_RBCP,
- DBG_BUS_CLIENT_RBCR,
- DBG_BUS_CLIENT_RBCT,
- DBG_BUS_CLIENT_RBCU,
- DBG_BUS_CLIENT_RBCF,
- DBG_BUS_CLIENT_RBCX,
- DBG_BUS_CLIENT_RBCS,
- DBG_BUS_CLIENT_RBCH,
- DBG_BUS_CLIENT_RBCZ,
- DBG_BUS_CLIENT_OTHER_ENGINE,
- DBG_BUS_CLIENT_TIMESTAMP,
- DBG_BUS_CLIENT_CPU,
- DBG_BUS_CLIENT_RBCY,
- DBG_BUS_CLIENT_RBCQ,
- DBG_BUS_CLIENT_RBCM,
- DBG_BUS_CLIENT_RBCB,
- DBG_BUS_CLIENT_RBCW,
- DBG_BUS_CLIENT_RBCV,
- MAX_DBG_BUS_CLIENTS
-};
-
-/*
- * Debug Bus constraint operation types
- */
-enum dbg_bus_constraint_ops {
- DBG_BUS_CONSTRAINT_OP_EQ /* equal */,
- DBG_BUS_CONSTRAINT_OP_NE /* not equal */,
- DBG_BUS_CONSTRAINT_OP_LT /* less than */,
- DBG_BUS_CONSTRAINT_OP_LTC /* less than (cyclic) */,
- DBG_BUS_CONSTRAINT_OP_LE /* less than or equal */,
- DBG_BUS_CONSTRAINT_OP_LEC /* less than or equal (cyclic) */,
- DBG_BUS_CONSTRAINT_OP_GT /* greater than */,
- DBG_BUS_CONSTRAINT_OP_GTC /* greater than (cyclic) */,
- DBG_BUS_CONSTRAINT_OP_GE /* greater than or equal */,
- DBG_BUS_CONSTRAINT_OP_GEC /* greater than or equal (cyclic) */,
- MAX_DBG_BUS_CONSTRAINT_OPS
-};
-
-/*
- * Debug Bus memory address
- */
-struct dbg_bus_mem_addr {
- __le32 lo;
- __le32 hi;
-};
-
-/*
- * Debug Bus PCI buffer data
- */
-struct dbg_bus_pci_buf_data {
- struct dbg_bus_mem_addr phys_addr /* PCI buffer physical address */;
- struct dbg_bus_mem_addr virt_addr /* PCI buffer virtual address */;
- __le32 size /* PCI buffer size in bytes */;
-};
-
-/*
- * Debug Bus Storm EID range filter params
- */
-struct dbg_bus_storm_eid_range_params {
- u8 min /* Minimal event ID to filter on */;
- u8 max /* Maximal event ID to filter on */;
-};
-
-/*
- * Debug Bus Storm EID mask filter params
- */
-struct dbg_bus_storm_eid_mask_params {
- u8 val /* Event ID value */;
- u8 mask /* Event ID mask. 1s in the mask = don't care bits. */;
-};
-
-/*
- * Debug Bus Storm EID filter params
- */
-union dbg_bus_storm_eid_params {
- struct dbg_bus_storm_eid_range_params range
- /* EID range filter params */;
- struct dbg_bus_storm_eid_mask_params mask /* EID mask filter params */;
-};
-
-/*
- * Debug Bus Storm data
- */
-struct dbg_bus_storm_data {
- u8 fast_enabled;
- u8 fast_mode
- /* Fast debug Storm mode, valid only if fast_enabled is set */;
- u8 slow_enabled;
- u8 slow_mode
- /* Slow debug Storm mode, valid only if slow_enabled is set */;
- u8 hw_id /* HW ID associated with the Storm */;
- u8 eid_filter_en /* Indicates if EID filtering is performed (0/1) */;
- u8 eid_range_not_mask;
- u8 cid_filter_en /* Indicates if CID filtering is performed (0/1) */;
- union dbg_bus_storm_eid_params eid_filter_params;
- __le16 reserved;
- __le32 cid /* CID to filter on. Valid only if cid_filter_en is set. */;
-};
-
-/*
- * Debug Bus data
- */
-struct dbg_bus_data {
- __le32 app_version /* The tools version number of the application */;
- u8 state /* The current debug bus state */;
- u8 hw_dwords /* HW dwords per cycle */;
- u8 next_hw_id /* Next HW ID to be associated with an input */;
- u8 num_enabled_blocks /* Number of blocks enabled for recording */;
- u8 num_enabled_storms /* Number of Storms enabled for recording */;
- u8 target /* Output target */;
- u8 next_trigger_state /* ID of next trigger state to be added */;
- u8 next_constraint_id
- /* ID of next filter/trigger constraint to be added */;
- u8 one_shot_en /* Indicates if one-shot mode is enabled (0/1) */;
- u8 grc_input_en /* Indicates if GRC recording is enabled (0/1) */;
- u8 timestamp_input_en
- /* Indicates if timestamp recording is enabled (0/1) */;
- u8 filter_en /* Indicates if the recording filter is enabled (0/1) */;
- u8 trigger_en /* Indicates if the recording trigger is enabled (0/1) */
- ;
- u8 adding_filter;
- u8 filter_pre_trigger;
- u8 filter_post_trigger;
- u8 unify_inputs;
- u8 rcv_from_other_engine;
- struct dbg_bus_pci_buf_data pci_buf;
- __le16 reserved;
- struct dbg_bus_block_data blocks[80] /* Debug Bus data for each block */
- ;
- struct dbg_bus_storm_data storms[6] /* Debug Bus data for each Storm */
- ;
-};
-
-/*
- * Debug bus filter types
- */
-enum dbg_bus_filter_types {
- DBG_BUS_FILTER_TYPE_OFF /* filter always off */,
- DBG_BUS_FILTER_TYPE_PRE /* filter before trigger only */,
- DBG_BUS_FILTER_TYPE_POST /* filter after trigger only */,
- DBG_BUS_FILTER_TYPE_ON /* filter always on */,
- MAX_DBG_BUS_FILTER_TYPES
-};
-
-/*
- * Debug bus frame modes
- */
-enum dbg_bus_frame_modes {
- DBG_BUS_FRAME_MODE_0HW_4ST = 0 /* 0 HW dwords, 4 Storm dwords */,
- DBG_BUS_FRAME_MODE_4HW_0ST = 3 /* 4 HW dwords, 0 Storm dwords */,
- DBG_BUS_FRAME_MODE_8HW_0ST = 4 /* 8 HW dwords, 0 Storm dwords */,
- MAX_DBG_BUS_FRAME_MODES
-};
-
-/*
- * Debug bus input types
- */
-enum dbg_bus_input_types {
- DBG_BUS_INPUT_TYPE_STORM,
- DBG_BUS_INPUT_TYPE_BLOCK,
- MAX_DBG_BUS_INPUT_TYPES
-};
-
-/*
- * Debug bus other engine mode
- */
-enum dbg_bus_other_engine_modes {
- DBG_BUS_OTHER_ENGINE_MODE_NONE,
- DBG_BUS_OTHER_ENGINE_MODE_DOUBLE_BW_TX,
- DBG_BUS_OTHER_ENGINE_MODE_DOUBLE_BW_RX,
- DBG_BUS_OTHER_ENGINE_MODE_CROSS_ENGINE_TX,
- DBG_BUS_OTHER_ENGINE_MODE_CROSS_ENGINE_RX,
- MAX_DBG_BUS_OTHER_ENGINE_MODES
-};
-
-/*
- * Debug bus post-trigger recording types
- */
-enum dbg_bus_post_trigger_types {
- DBG_BUS_POST_TRIGGER_RECORD /* start recording after trigger */,
- DBG_BUS_POST_TRIGGER_DROP /* drop data after trigger */,
- MAX_DBG_BUS_POST_TRIGGER_TYPES
-};
-
-/*
- * Debug bus pre-trigger recording types
- */
-enum dbg_bus_pre_trigger_types {
- DBG_BUS_PRE_TRIGGER_START_FROM_ZERO /* start recording from time 0 */,
- DBG_BUS_PRE_TRIGGER_NUM_CHUNKS
- /* start recording some chunks before trigger */,
- DBG_BUS_PRE_TRIGGER_DROP /* drop data before trigger */,
- MAX_DBG_BUS_PRE_TRIGGER_TYPES
-};
-
-/*
- * Debug bus SEMI frame modes
- */
-enum dbg_bus_semi_frame_modes {
- DBG_BUS_SEMI_FRAME_MODE_0SLOW_4FAST =
- 0 /* 0 slow dwords, 4 fast dwords */,
- DBG_BUS_SEMI_FRAME_MODE_4SLOW_0FAST =
- 3 /* 4 slow dwords, 0 fast dwords */,
- MAX_DBG_BUS_SEMI_FRAME_MODES
-};
-
-/*
- * Debug bus states
- */
-enum dbg_bus_states {
- DBG_BUS_STATE_BEFORE_RECORD /* before the debug bus recording starts */
- ,
- DBG_BUS_STATE_DURING_RECORD /* during debug bus recording */,
- DBG_BUS_STATE_AFTER_RECORD /* after debug bus recording */,
- MAX_DBG_BUS_STATES
-};
-
-/*
- * Debug Bus Storm modes
- */
-enum dbg_bus_storm_modes {
- DBG_BUS_STORM_MODE_PRINTF /* store data (fast debug) */,
- DBG_BUS_STORM_MODE_PRAM_ADDR /* pram address (fast debug) */,
- DBG_BUS_STORM_MODE_DRA_RW /* DRA read/write data (fast debug) */,
- DBG_BUS_STORM_MODE_DRA_W /* DRA write data (fast debug) */,
- DBG_BUS_STORM_MODE_LD_ST_ADDR /* load/store address (fast debug) */,
- DBG_BUS_STORM_MODE_DRA_FSM /* DRA state machines (fast debug) */,
- DBG_BUS_STORM_MODE_RH /* recording handlers (fast debug) */,
- DBG_BUS_STORM_MODE_FOC /* FOC: FIN + DRA Rd (slow debug) */,
- DBG_BUS_STORM_MODE_EXT_STORE /* FOC: External Store (slow) */,
- MAX_DBG_BUS_STORM_MODES
-};
-
-/*
- * Debug bus target IDs
- */
-enum dbg_bus_targets {
- DBG_BUS_TARGET_ID_INT_BUF
- /* records debug bus to DBG block internal buffer */,
- DBG_BUS_TARGET_ID_NIG /* records debug bus to the NW */,
- DBG_BUS_TARGET_ID_PCI /* records debug bus to a PCI buffer */,
- MAX_DBG_BUS_TARGETS
-};
-
-/*
- * GRC Dump data
- */
-struct dbg_grc_data {
- u8 is_updated /* Indicates if the GRC Dump data is updated (0/1) */;
- u8 chip_id /* Chip ID */;
- u8 chip_mask /* Chip mask */;
- u8 reserved;
- __le32 max_dump_dwords /* Max GRC Dump size in dwords */;
- __le32 param_val[40];
- u8 param_set_by_user[40];
-};
-
-/*
- * Debug GRC params
- */
-enum dbg_grc_params {
- DBG_GRC_PARAM_DUMP_TSTORM /* dump Tstorm memories (0/1) */,
- DBG_GRC_PARAM_DUMP_MSTORM /* dump Mstorm memories (0/1) */,
- DBG_GRC_PARAM_DUMP_USTORM /* dump Ustorm memories (0/1) */,
- DBG_GRC_PARAM_DUMP_XSTORM /* dump Xstorm memories (0/1) */,
- DBG_GRC_PARAM_DUMP_YSTORM /* dump Ystorm memories (0/1) */,
- DBG_GRC_PARAM_DUMP_PSTORM /* dump Pstorm memories (0/1) */,
- DBG_GRC_PARAM_DUMP_REGS /* dump non-memory registers (0/1) */,
- DBG_GRC_PARAM_DUMP_RAM /* dump Storm internal RAMs (0/1) */,
- DBG_GRC_PARAM_DUMP_PBUF /* dump Storm passive buffer (0/1) */,
- DBG_GRC_PARAM_DUMP_IOR /* dump Storm IORs (0/1) */,
- DBG_GRC_PARAM_DUMP_VFC /* dump VFC memories (0/1) */,
- DBG_GRC_PARAM_DUMP_CM_CTX /* dump CM contexts (0/1) */,
- DBG_GRC_PARAM_DUMP_PXP /* dump PXP memories (0/1) */,
- DBG_GRC_PARAM_DUMP_RSS /* dump RSS memories (0/1) */,
- DBG_GRC_PARAM_DUMP_CAU /* dump CAU memories (0/1) */,
- DBG_GRC_PARAM_DUMP_QM /* dump QM memories (0/1) */,
- DBG_GRC_PARAM_DUMP_MCP /* dump MCP memories (0/1) */,
- DBG_GRC_PARAM_RESERVED /* reserved */,
- DBG_GRC_PARAM_DUMP_CFC /* dump CFC memories (0/1) */,
- DBG_GRC_PARAM_DUMP_IGU /* dump IGU memories (0/1) */,
- DBG_GRC_PARAM_DUMP_BRB /* dump BRB memories (0/1) */,
- DBG_GRC_PARAM_DUMP_BTB /* dump BTB memories (0/1) */,
- DBG_GRC_PARAM_DUMP_BMB /* dump BMB memories (0/1) */,
- DBG_GRC_PARAM_DUMP_NIG /* dump NIG memories (0/1) */,
- DBG_GRC_PARAM_DUMP_MULD /* dump MULD memories (0/1) */,
- DBG_GRC_PARAM_DUMP_PRS /* dump PRS memories (0/1) */,
- DBG_GRC_PARAM_DUMP_DMAE /* dump DMAE memories (0/1) */,
- DBG_GRC_PARAM_DUMP_TM /* dump TM (timers) memories (0/1) */,
- DBG_GRC_PARAM_DUMP_SDM /* dump SDM memories (0/1) */,
- DBG_GRC_PARAM_DUMP_STATIC /* dump static debug data (0/1) */,
- DBG_GRC_PARAM_UNSTALL /* un-stall Storms after dump (0/1) */,
- DBG_GRC_PARAM_NUM_LCIDS /* number of LCIDs (0..320) */,
- DBG_GRC_PARAM_NUM_LTIDS /* number of LTIDs (0..320) */,
- DBG_GRC_PARAM_EXCLUDE_ALL
- /* preset: exclude all memories from dump (1 only) */,
- DBG_GRC_PARAM_CRASH
- /* preset: include memories for crash dump (1 only) */,
- DBG_GRC_PARAM_PARITY_SAFE
- /* perform dump only if MFW is responding (0/1) */,
- DBG_GRC_PARAM_DUMP_CM /* dump CM memories (0/1) */,
- MAX_DBG_GRC_PARAMS
-};
-
-/*
- * Debug reset registers
- */
-enum dbg_reset_regs {
- DBG_RESET_REG_MISCS_PL_UA,
- DBG_RESET_REG_MISCS_PL_HV,
- DBG_RESET_REG_MISC_PL_UA,
- DBG_RESET_REG_MISC_PL_HV,
- DBG_RESET_REG_MISC_PL_PDA_VMAIN_1,
- DBG_RESET_REG_MISC_PL_PDA_VMAIN_2,
- DBG_RESET_REG_MISC_PL_PDA_VAUX,
- MAX_DBG_RESET_REGS
-};
-
-/*
- * @DPDK Debug status codes
- */
-enum dbg_status {
- DBG_STATUS_OK,
- DBG_STATUS_APP_VERSION_NOT_SET,
- DBG_STATUS_UNSUPPORTED_APP_VERSION,
- DBG_STATUS_DBG_BLOCK_NOT_RESET,
- DBG_STATUS_INVALID_ARGS,
- DBG_STATUS_OUTPUT_ALREADY_SET,
- DBG_STATUS_INVALID_PCI_BUF_SIZE,
- DBG_STATUS_PCI_BUF_ALLOC_FAILED,
- DBG_STATUS_PCI_BUF_NOT_ALLOCATED,
- DBG_STATUS_TOO_MANY_INPUTS,
- DBG_STATUS_INPUT_OVERLAP,
- DBG_STATUS_HW_ONLY_RECORDING,
- DBG_STATUS_STORM_ALREADY_ENABLED,
- DBG_STATUS_STORM_NOT_ENABLED,
- DBG_STATUS_BLOCK_ALREADY_ENABLED,
- DBG_STATUS_BLOCK_NOT_ENABLED,
- DBG_STATUS_NO_INPUT_ENABLED,
- DBG_STATUS_NO_FILTER_TRIGGER_64B,
- DBG_STATUS_FILTER_ALREADY_ENABLED,
- DBG_STATUS_TRIGGER_ALREADY_ENABLED,
- DBG_STATUS_TRIGGER_NOT_ENABLED,
- DBG_STATUS_CANT_ADD_CONSTRAINT,
- DBG_STATUS_TOO_MANY_TRIGGER_STATES,
- DBG_STATUS_TOO_MANY_CONSTRAINTS,
- DBG_STATUS_RECORDING_NOT_STARTED,
- DBG_STATUS_NO_DATA_TRIGGERED,
- DBG_STATUS_NO_DATA_RECORDED,
- DBG_STATUS_DUMP_BUF_TOO_SMALL,
- DBG_STATUS_DUMP_NOT_CHUNK_ALIGNED,
- DBG_STATUS_UNKNOWN_CHIP,
- DBG_STATUS_VIRT_MEM_ALLOC_FAILED,
- DBG_STATUS_BLOCK_IN_RESET,
- DBG_STATUS_INVALID_TRACE_SIGNATURE,
- DBG_STATUS_INVALID_NVRAM_BUNDLE,
- DBG_STATUS_NVRAM_GET_IMAGE_FAILED,
- DBG_STATUS_NON_ALIGNED_NVRAM_IMAGE,
- DBG_STATUS_NVRAM_READ_FAILED,
- DBG_STATUS_IDLE_CHK_PARSE_FAILED,
- DBG_STATUS_MCP_TRACE_BAD_DATA,
- DBG_STATUS_MCP_TRACE_NO_META,
- DBG_STATUS_MCP_COULD_NOT_HALT,
- DBG_STATUS_MCP_COULD_NOT_RESUME,
- DBG_STATUS_DMAE_FAILED,
- DBG_STATUS_SEMI_FIFO_NOT_EMPTY,
- DBG_STATUS_IGU_FIFO_BAD_DATA,
- DBG_STATUS_MCP_COULD_NOT_MASK_PRTY,
- DBG_STATUS_FW_ASSERTS_PARSE_FAILED,
- DBG_STATUS_REG_FIFO_BAD_DATA,
- DBG_STATUS_PROTECTION_OVERRIDE_BAD_DATA,
- MAX_DBG_STATUS
-};
-
-/*
- * Debug Storms IDs
- */
-enum dbg_storms {
- DBG_TSTORM_ID,
- DBG_MSTORM_ID,
- DBG_USTORM_ID,
- DBG_XSTORM_ID,
- DBG_YSTORM_ID,
- DBG_PSTORM_ID,
- MAX_DBG_STORMS
-};
-
-/*
- * Idle Check data
- */
-struct idle_chk_data {
- __le32 buf_size /* Idle check buffer size in dwords */;
- u8 buf_size_set
- /* Indicates if the idle check buffer size was set (0/1) */;
- u8 reserved1;
- __le16 reserved2;
-};
-
-/*
- * MCP Trace data
- */
-struct mcp_trace_data {
- __le32 buf_size /* MCP Trace buffer size in dwords */;
- u8 buf_size_set
- /* Indicates if the MCP Trace buffer size was set (0/1) */;
- u8 reserved1;
- __le16 reserved2;
-};
-
-/*
- * Debug Tools data (per HW function)
- */
-struct dbg_tools_data {
- struct dbg_grc_data grc /* GRC Dump data */;
- struct dbg_bus_data bus /* Debug Bus data */;
- struct idle_chk_data idle_chk /* Idle Check data */;
- struct mcp_trace_data mcp_trace /* MCP Trace data */;
- u8 block_in_reset[80] /* Indicates if a block is in reset state (0/1) */
- ;
- u8 chip_id /* Chip ID (from enum chip_ids) */;
- u8 chip_mask
- /* Chip mask = bit index chip_id is set, the rest are cleared */;
- u8 initialized /* Indicates if the data was initialized */;
- u8 reset_state_updated
- /* Indicates if blocks reset state is updated (0/1) */;
-};
-
-/*
- * BRB RAM init requirements
- */
-struct init_brb_ram_req {
- __le32 guranteed_per_tc /* guaranteed size per TC, in bytes */;
- __le32 headroom_per_tc /* headroom size per TC, in bytes */;
- __le32 min_pkt_size /* min packet size, in bytes */;
- __le32 max_ports_per_engine /* max number of ports per engine */;
- u8 num_active_tcs[MAX_NUM_PORTS] /* number of active TCs per port */;
-};
-
-/*
- * ETS per-TC init requirements
- */
-struct init_ets_tc_req {
- u8 use_sp;
- u8 use_wfq;
- __le16 weight /* An arbitration weight. Valid only if use_wfq is set. */
- ;
-};
-
-/*
- * ETS init requirements
- */
-struct init_ets_req {
- __le32 mtu /* Max packet size (in bytes) */;
- struct init_ets_tc_req tc_req[NUM_OF_TCS]
- /* ETS initialization requirements per TC. */;
-};
-
-/*
- * NIG LB RL init requirements
- */
-struct init_nig_lb_rl_req {
- __le16 lb_mac_rate;
- __le16 lb_rate;
- __le32 mtu /* Max packet size (in bytes) */;
- __le16 tc_rate[NUM_OF_PHYS_TCS];
-};
-
-/*
- * NIG TC mapping for each priority
- */
-struct init_nig_pri_tc_map_entry {
- u8 tc_id /* the mapped TC ID */;
- u8 valid /* indicates if the mapping entry is valid */;
-};
-
-/*
- * NIG priority to TC map init requirements
- */
-struct init_nig_pri_tc_map_req {
- struct init_nig_pri_tc_map_entry pri[NUM_OF_VLAN_PRIORITIES];
-};
-
-/*
- * QM per-port init parameters
- */
-struct init_qm_port_params {
- u8 active /* Indicates if this port is active */;
- u8 num_active_phys_tcs /* number of physical TCs used by this port */;
- __le16 num_pbf_cmd_lines
- /* number of PBF command lines that can be used by this port */;
- __le16 num_btb_blocks
- /* number of BTB blocks that can be used by this port */;
- __le16 reserved;
-};
-
-/*
- * QM per-PQ init parameters
- */
-struct init_qm_pq_params {
- u8 vport_id /* VPORT ID */;
- u8 tc_id /* TC ID */;
- u8 wrr_group /* WRR group */;
- u8 reserved;
-};
-
-/*
- * QM per-vport init parameters
- */
-struct init_qm_vport_params {
- __le32 vport_rl;
- __le16 vport_wfq;
- __le16 first_tx_pq_id[NUM_OF_TCS]
- /* the first Tx PQ ID associated with this VPORT for each TC. */;
-};
-
-#endif /* __ECORE_HSI_TOOLS__ */
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 5324e05..5440731 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -12,7 +12,8 @@
#include "reg_addr.h"
#include "ecore_rt_defs.h"
#include "ecore_hsi_common.h"
-#include "ecore_hsi_tools.h"
+#include "ecore_hsi_init_func.h"
+#include "ecore_hsi_init_tool.h"
#include "ecore_init_fw_funcs.h"
/* @DPDK CmInterfaceEnum */
@@ -187,7 +188,7 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
struct init_qm_port_params
port_params[MAX_NUM_PORTS])
{
- u8 tc, voq, port_id;
+ u8 tc, voq, port_id, num_tcs_in_port;
bool eagle_workaround = ENABLE_EAGLE_ENG1_WORKAROUND(p_hwfn);
/* clear PBF lines for all VOQs */
for (voq = 0; voq < MAX_NUM_VOQS; voq++)
@@ -201,18 +202,22 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
if (eagle_workaround)
phys_lines -= PBF_CMDQ_EAGLE_WORKAROUND_LINES;
/* find #lines per active physical TC */
- phys_lines_per_tc =
- phys_lines /
- port_params[port_id].num_active_phys_tcs;
+ num_tcs_in_port = 0;
+ for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+ if (((port_params[port_id].active_phys_tcs >>
+ tc) & 0x1) == 1)
+ num_tcs_in_port++;
+ }
+ phys_lines_per_tc = phys_lines / num_tcs_in_port;
/* init registers per active TC */
- for (tc = 0;
- tc < port_params[port_id].num_active_phys_tcs;
- tc++) {
- voq =
- PHYS_VOQ(port_id, tc,
- max_phys_tcs_per_port);
- ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
- phys_lines_per_tc);
+ for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
+ if (((port_params[port_id].active_phys_tcs >>
+ tc) & 0x1) == 1) {
+ voq = PHYS_VOQ(port_id, tc,
+ max_phys_tcs_per_port);
+ ecore_cmdq_lines_voq_rt_init(p_hwfn,
+ voq, phys_lines_per_tc);
+ }
}
/* init registers for pure LB TC */
ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
@@ -255,7 +260,7 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
struct init_qm_port_params
port_params[MAX_NUM_PORTS])
{
- u8 tc, voq, port_id;
+ u8 tc, voq, port_id, num_tcs_in_port;
u32 usable_blocks, pure_lb_blocks, phys_blocks;
bool eagle_workaround = ENABLE_EAGLE_ENG1_WORKAROUND(p_hwfn);
for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
@@ -266,9 +271,15 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
BTB_HEADROOM_BLOCKS;
if (eagle_workaround)
usable_blocks -= BTB_EAGLE_WORKAROUND_BLOCKS;
+
+ num_tcs_in_port = 0;
+ for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
+ if (((port_params[port_id].active_phys_tcs >>
+ tc) & 0x1) == 1)
+ num_tcs_in_port++;
pure_lb_blocks =
(usable_blocks * BTB_PURE_LB_FACTOR) /
- (port_params[port_id].num_active_phys_tcs *
+ (num_tcs_in_port *
BTB_PURE_LB_FACTOR + BTB_PURE_LB_RATIO);
pure_lb_blocks =
OSAL_MAX_T(u32, BTB_JUMBO_PKT_BLOCKS,
@@ -276,17 +287,19 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
phys_blocks =
(usable_blocks -
pure_lb_blocks) /
- port_params[port_id].num_active_phys_tcs;
+ num_tcs_in_port;
/* init physical TCs */
for (tc = 0;
- tc < port_params[port_id].num_active_phys_tcs;
+ tc < NUM_OF_PHYS_TCS;
tc++) {
- voq =
- PHYS_VOQ(port_id, tc,
- max_phys_tcs_per_port);
- STORE_RT_REG(p_hwfn,
+ if (((port_params[port_id].active_phys_tcs >>
+ tc) & 0x1) == 1) {
+ voq = PHYS_VOQ(port_id, tc,
+ max_phys_tcs_per_port);
+ STORE_RT_REG(p_hwfn,
PBF_BTB_GUARANTEED_RT_OFFSET(voq),
phys_blocks);
+ }
}
/* init pure LB TC */
STORE_RT_REG(p_hwfn,
@@ -610,18 +623,6 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
(QM_OPPOR_PQ_EMPTY_DEF <<
QM_RF_OPPORTUNISTIC_MASK_QUEUEEMPTY_SHIFT);
STORE_RT_REG(p_hwfn, QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET, mask);
- /* check eagle workaround */
- for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
- if (port_params[port_id].active &&
- port_params[port_id].num_active_phys_tcs >
- EAGLE_WORKAROUND_TC &&
- ENABLE_EAGLE_ENG1_WORKAROUND(p_hwfn)) {
- DP_NOTICE(p_hwfn, true,
- "Can't config 8 TCs with Eagle"
- " eng1 workaround");
- return -1;
- }
- }
/* enable/disable PF RL */
ecore_enable_pf_rl(p_hwfn, pf_rl_en);
/* enable/disable PF WFQ */
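
The hunks above stop relying on a pre-computed num_active_phys_tcs and instead walk the new active_phys_tcs bitmask; the added loops reduce to this idiom (condensed sketch; a population count would serve equally well):

#include <stdint.h>

static uint8_t count_active_tcs(uint8_t active_phys_tcs)
{
	uint8_t tc, num_tcs_in_port = 0;

	for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
		if ((active_phys_tcs >> tc) & 0x1)
			num_tcs_in_port++;

	return num_tcs_in_port;
}
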
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index 326eb92..e6e4c36 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -575,7 +575,7 @@ enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
buf_hdr = (struct bin_buffer_hdr *)(uintptr_t)data;
- offset = buf_hdr[BIN_BUF_FW_VER_INFO].offset;
+ offset = buf_hdr[BIN_BUF_INIT_FW_VER_INFO].offset;
fw->fw_ver_info = (struct fw_ver_info *)((uintptr_t)(data + offset));
offset = buf_hdr[BIN_BUF_INIT_CMD].offset;
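
For context on the rename: the firmware image begins with an array of bin_buffer_hdr entries indexed by enum bin_init_buffer_type, so each section is located by adding its offset to the image base. A minimal sketch (fw_image is a hypothetical pointer to the loaded file):

#include <stdint.h>

static const void *fw_section(const uint8_t *fw_image,
			      enum bin_init_buffer_type type)
{
	const struct bin_buffer_hdr *hdrs =
		(const struct bin_buffer_hdr *)fw_image;

	/* offset is __le32; little-endian host assumed in this sketch */
	return fw_image + hdrs[type].offset;
}
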
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index ea0fd7a..bed9ea3 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -284,16 +284,7 @@ out:
#define ECORE_PGLUE_ATTENTION_ILT_VALID (1 << 23)
static enum _ecore_status_t ecore_pglub_rbc_attn_cb(struct ecore_hwfn *p_hwfn)
{
- u32 tmp, reg_addr;
-
- reg_addr =
- attn_blocks[BLOCK_PGLUE_B].chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].
- int_regs[0]->mask_addr;
-
- /* Mask unnecessary attentions -@TBD move to MFW */
- tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr);
- tmp |= (1 << 19); /* Was PGL_PCIE_ATTN */
- ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr, tmp);
+ u32 tmp;
tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
PGLUE_B_REG_TX_ERR_WR_DETAILS2);
@@ -407,32 +398,6 @@ static enum _ecore_status_t ecore_pglub_rbc_attn_cb(struct ecore_hwfn *p_hwfn)
return ECORE_SUCCESS;
}
-static enum _ecore_status_t ecore_nig_attn_cb(struct ecore_hwfn *p_hwfn)
-{
- u32 tmp, reg_addr;
-
- /* Mask unnecessary attentions -@TBD move to MFW */
- reg_addr =
- attn_blocks[BLOCK_NIG].chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].
- int_regs[3]->mask_addr;
- tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr);
- tmp |= (1 << 0); /* Was 3_P0_TX_PAUSE_TOO_LONG_INT */
- tmp |= NIG_REG_INT_MASK_3_P0_LB_TC1_PAUSE_TOO_LONG_INT;
- ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr, tmp);
-
- reg_addr =
- attn_blocks[BLOCK_NIG].chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].
- int_regs[5]->mask_addr;
- tmp = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr);
- tmp |= (1 << 0); /* Was 5_P1_TX_PAUSE_TOO_LONG_INT */
- ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, reg_addr, tmp);
-
- /* TODO - a bit risky to return success here; But alternative is to
- * actually read the multitdue of interrupt register of the block.
- */
- return ECORE_SUCCESS;
-}
-
static enum _ecore_status_t ecore_fw_assertion(struct ecore_hwfn *p_hwfn)
{
DP_NOTICE(p_hwfn, false, "FW assertion!\n");
@@ -559,7 +524,7 @@ static struct aeu_invert_reg aeu_descs[NUM_ATTN_REGS] = {
{"MSTAT per-path", ATTENTION_PAR_INT, OSAL_NULL, MAX_BLOCK_ID},
{"Reserved %d", (6 << ATTENTION_LENGTH_SHIFT), OSAL_NULL,
MAX_BLOCK_ID},
- {"NIG", ATTENTION_PAR_INT, ecore_nig_attn_cb, BLOCK_NIG},
+ {"NIG", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_NIG},
{"BMB/OPTE/MCP", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BMB},
{"BTB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BTB},
{"BRB", ATTENTION_PAR_INT, OSAL_NULL, BLOCK_BRB},
@@ -839,43 +804,10 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
rc = p_aeu->cb(p_hwfn);
}
- /* Handle HW block interrupt registers */
- if (p_aeu->block_index != MAX_BLOCK_ID) {
- u16 chip_type = ECORE_GET_TYPE(p_hwfn->p_dev);
- struct attn_hw_block *p_block;
- int i;
-
- p_block = &attn_blocks[p_aeu->block_index];
-
- /* Handle each interrupt register */
- for (i = 0;
- i < p_block->chip_regs[chip_type].num_of_int_regs; i++) {
- struct attn_hw_reg *p_reg_desc;
- u32 sts_addr;
-
- p_reg_desc = p_block->chip_regs[chip_type].int_regs[i];
-
- /* In case of fatal attention, don't clear the status
- * so it would appear in idle check.
- */
- if (rc == ECORE_SUCCESS)
- sts_addr = p_reg_desc->sts_clr_addr;
- else
- sts_addr = p_reg_desc->sts_addr;
-
- val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, sts_addr);
- mask = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
- p_reg_desc->mask_addr);
- ecore_int_deassertion_print_bit(p_hwfn, p_reg_desc,
- p_block,
- ECORE_ATTN_TYPE_ATTN,
- val, mask);
-
-#ifndef REMOVE_DBG
- interrupts[i] = val;
-#endif
- }
- }
+ /* Print HW block interrupt registers */
+ if (p_aeu->block_index != MAX_BLOCK_ID)
+ DP_NOTICE(p_hwfn->p_dev, false, "[block_id %d type %d]\n",
+ p_aeu->block_index, ATTN_TYPE_INTERRUPT);
/* Reach assertion if attention is fatal */
if (rc != ECORE_SUCCESS) {
@@ -905,33 +837,6 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
return rc;
}
-static void ecore_int_parity_print(struct ecore_hwfn *p_hwfn,
- struct aeu_invert_reg_bit *p_aeu,
- struct attn_hw_block *p_block, u8 bit_index)
-{
- u16 chip_type = ECORE_GET_TYPE(p_hwfn->p_dev);
- int i;
-
- for (i = 0; i < p_block->chip_regs[chip_type].num_of_prty_regs; i++) {
- struct attn_hw_reg *p_reg_desc;
- u32 val, mask;
-
- p_reg_desc = p_block->chip_regs[chip_type].prty_regs[i];
-
- val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
- p_reg_desc->sts_clr_addr);
- mask = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt,
- p_reg_desc->mask_addr);
- DP_VERBOSE(p_hwfn, ECORE_MSG_INTR,
- "%s[%d] - parity register[%d] is %08x [mask is %08x]\n",
- p_aeu->bit_name, bit_index, i, val, mask);
- ecore_int_deassertion_print_bit(p_hwfn, p_reg_desc,
- p_block,
- ECORE_ATTN_TYPE_PARITY,
- val, mask);
- }
-}
-
/**
* @brief ecore_int_deassertion_parity - handle a single parity AEU source
*
@@ -949,19 +854,15 @@ static void ecore_int_deassertion_parity(struct ecore_hwfn *p_hwfn,
DP_INFO(p_hwfn->p_dev, "%s[%d] parity attention is set\n",
p_aeu->bit_name, bit_index);
- if (block_id != MAX_BLOCK_ID) {
- ecore_int_parity_print(p_hwfn, p_aeu, &attn_blocks[block_id],
- bit_index);
-
- /* In A0, there's a single parity bit for several blocks */
- if (block_id == BLOCK_BTB) {
- ecore_int_parity_print(p_hwfn, p_aeu,
- &attn_blocks[BLOCK_OPTE],
- bit_index);
- ecore_int_parity_print(p_hwfn, p_aeu,
- &attn_blocks[BLOCK_MCP],
- bit_index);
- }
+ if (block_id != MAX_BLOCK_ID)
+ return;
+
+ /* In A0, there's a single parity bit for several blocks */
+ if (block_id == BLOCK_BTB) {
+ DP_NOTICE(p_hwfn->p_dev, false, "[block_id %d type %d]\n",
+ BLOCK_OPTE, ATTN_TYPE_PARITY);
+ DP_NOTICE(p_hwfn->p_dev, false, "[block_id %d type %d]\n",
+ BLOCK_MCP, ATTN_TYPE_PARITY);
}
}
@@ -1778,7 +1679,7 @@ ecore_int_igu_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
enum ecore_int_mode int_mode)
{
enum _ecore_status_t rc = ECORE_SUCCESS;
- u32 tmp, reg_addr;
+ u32 tmp;
/* @@@tmp - Mask General HW attentions 0-31, Enable 32-36 */
tmp = ecore_rd(p_hwfn, p_ptt, MISC_REG_AEU_ENABLE4_IGU_OUT_0);
@@ -1794,16 +1695,6 @@ ecore_int_igu_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
tmp &= ~0x800;
ecore_wr(p_hwfn, p_ptt, MISC_REG_AEU_ENABLE4_IGU_OUT_0, tmp);
- /* @@@tmp - Mask interrupt sources - should move to init tool;
- * Also, correct for A0 [might still change in B0.
- */
- reg_addr =
- attn_blocks[BLOCK_BRB].chip_regs[ECORE_GET_TYPE(p_hwfn->p_dev)].
- int_regs[0]->mask_addr;
- tmp = ecore_rd(p_hwfn, p_ptt, reg_addr);
- tmp |= (1 << 21); /* Was PKT4_LEN_ERROR */
- ecore_wr(p_hwfn, p_ptt, reg_addr, tmp);
-
ecore_int_igu_enable_attn(p_hwfn, p_ptt);
if ((int_mode != ECORE_INT_MODE_INTA) || IS_LEAD_HWFN(p_hwfn)) {
--
1.8.3.1
* [dpdk-dev] [PATCH v4 02/32] net/qede/base: formatting changes
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 01/32] net/qede/base: add new init files and rearrange the code Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 03/32] net/qede: use FW CONFIG defines as needed Rasesh Mody
` (30 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Fixes whitespace and tab usage.
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/base/common_hsi.h | 252 ++---
drivers/net/qede/base/ecore.h | 414 +++----
drivers/net/qede/base/ecore_chain.h | 20 +-
drivers/net/qede/base/ecore_cxt.c | 16 +-
drivers/net/qede/base/ecore_cxt_api.h | 10 +-
drivers/net/qede/base/ecore_dcbx.c | 4 +-
drivers/net/qede/base/ecore_dcbx_api.h | 26 +-
drivers/net/qede/base/ecore_dev.c | 112 +-
drivers/net/qede/base/ecore_dev_api.h | 72 +-
drivers/net/qede/base/ecore_gtt_reg_addr.h | 20 +-
drivers/net/qede/base/ecore_gtt_values.h | 20 +-
drivers/net/qede/base/ecore_hsi_common.h | 6 +-
drivers/net/qede/base/ecore_hsi_eth.h | 8 +-
drivers/net/qede/base/ecore_hw.c | 14 +-
drivers/net/qede/base/ecore_hw.h | 28 +-
drivers/net/qede/base/ecore_hw_defs.h | 6 +-
drivers/net/qede/base/ecore_init_fw_funcs.c | 12 +-
drivers/net/qede/base/ecore_init_fw_funcs.h | 68 +-
drivers/net/qede/base/ecore_init_ops.c | 4 +-
drivers/net/qede/base/ecore_int.c | 14 +-
drivers/net/qede/base/ecore_int.h | 4 +-
drivers/net/qede/base/ecore_iov_api.h | 46 +-
drivers/net/qede/base/ecore_iro.h | 12 +-
drivers/net/qede/base/ecore_iro_values.h | 32 +-
drivers/net/qede/base/ecore_l2.c | 18 +-
drivers/net/qede/base/ecore_l2.h | 4 +-
drivers/net/qede/base/ecore_l2_api.h | 88 +-
drivers/net/qede/base/ecore_mcp.c | 28 +-
drivers/net/qede/base/ecore_mcp.h | 6 +-
drivers/net/qede/base/ecore_mcp_api.h | 16 +-
drivers/net/qede/base/ecore_proto_if.h | 4 +-
drivers/net/qede/base/ecore_rt_defs.h | 230 ++--
drivers/net/qede/base/ecore_sp_api.h | 10 +-
drivers/net/qede/base/ecore_sp_commands.c | 2 +-
drivers/net/qede/base/ecore_sp_commands.h | 8 +-
drivers/net/qede/base/ecore_spq.c | 72 +-
drivers/net/qede/base/ecore_spq.h | 136 +--
drivers/net/qede/base/ecore_sriov.c | 222 ++--
drivers/net/qede/base/ecore_sriov.h | 98 +-
drivers/net/qede/base/ecore_status.h | 18 +-
drivers/net/qede/base/ecore_vf.c | 18 +-
drivers/net/qede/base/ecore_vf.h | 34 +-
drivers/net/qede/base/ecore_vf_api.h | 4 +-
drivers/net/qede/base/ecore_vfpf_if.h | 270 ++---
drivers/net/qede/base/eth_common.h | 52 +-
drivers/net/qede/base/mcp_public.h | 194 ++--
drivers/net/qede/base/nvm_cfg.h | 1562 +++++++++++++--------------
47 files changed, 2157 insertions(+), 2157 deletions(-)
diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 295a41f..4574800 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -9,9 +9,9 @@
#ifndef __COMMON_HSI__
#define __COMMON_HSI__
-#define CORE_SPQE_PAGE_SIZE_BYTES 4096
+#define CORE_SPQE_PAGE_SIZE_BYTES 4096
-#define FW_MAJOR_VERSION 8
+#define FW_MAJOR_VERSION 8
#define FW_MINOR_VERSION 7
#define FW_REVISION_VERSION 7
#define FW_ENGINEERING_VERSION 0
@@ -21,68 +21,68 @@
/***********************/
/* PCI functions */
-#define MAX_NUM_PORTS_K2 (4)
-#define MAX_NUM_PORTS_BB (2)
-#define MAX_NUM_PORTS (MAX_NUM_PORTS_K2)
+#define MAX_NUM_PORTS_K2 (4)
+#define MAX_NUM_PORTS_BB (2)
+#define MAX_NUM_PORTS (MAX_NUM_PORTS_K2)
-#define MAX_NUM_PFS_K2 (16)
-#define MAX_NUM_PFS_BB (8)
-#define MAX_NUM_PFS (MAX_NUM_PFS_K2)
-#define MAX_NUM_OF_PFS_IN_CHIP (16) /* On both engines */
+#define MAX_NUM_PFS_K2 (16)
+#define MAX_NUM_PFS_BB (8)
+#define MAX_NUM_PFS (MAX_NUM_PFS_K2)
+#define MAX_NUM_OF_PFS_IN_CHIP (16) /* On both engines */
-#define MAX_NUM_VFS_K2 (192)
-#define MAX_NUM_VFS_BB (120)
-#define MAX_NUM_VFS (MAX_NUM_VFS_K2)
+#define MAX_NUM_VFS_K2 (192)
+#define MAX_NUM_VFS_BB (120)
+#define MAX_NUM_VFS (MAX_NUM_VFS_K2)
#define MAX_NUM_FUNCTIONS_BB (MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
-#define MAX_NUM_FUNCTIONS (MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_NUM_FUNCTIONS (MAX_NUM_PFS + MAX_NUM_VFS)
#define MAX_FUNCTION_NUMBER_BB (MAX_NUM_PFS + MAX_NUM_VFS_BB)
-#define MAX_FUNCTION_NUMBER (MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_FUNCTION_NUMBER (MAX_NUM_PFS + MAX_NUM_VFS)
-#define MAX_NUM_VPORTS_K2 (208)
-#define MAX_NUM_VPORTS_BB (160)
-#define MAX_NUM_VPORTS (MAX_NUM_VPORTS_K2)
+#define MAX_NUM_VPORTS_K2 (208)
+#define MAX_NUM_VPORTS_BB (160)
+#define MAX_NUM_VPORTS (MAX_NUM_VPORTS_K2)
#define MAX_NUM_L2_QUEUES_K2 (320)
#define MAX_NUM_L2_QUEUES_BB (256)
-#define MAX_NUM_L2_QUEUES (MAX_NUM_L2_QUEUES_K2)
+#define MAX_NUM_L2_QUEUES (MAX_NUM_L2_QUEUES_K2)
/* Traffic classes in network-facing blocks (PBF, BTB, NIG, BRB, PRS and QM) */
#define NUM_PHYS_TCS_4PORT_K2 (4)
-#define NUM_OF_PHYS_TCS (8)
+#define NUM_OF_PHYS_TCS (8)
-#define NUM_TCS_4PORT_K2 (NUM_PHYS_TCS_4PORT_K2 + 1)
-#define NUM_OF_TCS (NUM_OF_PHYS_TCS + 1)
+#define NUM_TCS_4PORT_K2 (NUM_PHYS_TCS_4PORT_K2 + 1)
+#define NUM_OF_TCS (NUM_OF_PHYS_TCS + 1)
-#define LB_TC (NUM_OF_PHYS_TCS)
+#define LB_TC (NUM_OF_PHYS_TCS)
/* Num of possible traffic priority values */
-#define NUM_OF_PRIO (8)
+#define NUM_OF_PRIO (8)
-#define MAX_NUM_VOQS_K2 (NUM_TCS_4PORT_K2 * MAX_NUM_PORTS_K2)
-#define MAX_NUM_VOQS_BB (NUM_OF_TCS * MAX_NUM_PORTS_BB)
-#define MAX_NUM_VOQS (MAX_NUM_VOQS_K2)
-#define MAX_PHYS_VOQS (NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB)
+#define MAX_NUM_VOQS_K2 (NUM_TCS_4PORT_K2 * MAX_NUM_PORTS_K2)
+#define MAX_NUM_VOQS_BB (NUM_OF_TCS * MAX_NUM_PORTS_BB)
+#define MAX_NUM_VOQS (MAX_NUM_VOQS_K2)
+#define MAX_PHYS_VOQS (NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB)
/* CIDs */
-#define NUM_OF_CONNECTION_TYPES (8)
-#define NUM_OF_LCIDS (320)
-#define NUM_OF_LTIDS (320)
+#define NUM_OF_CONNECTION_TYPES (8)
+#define NUM_OF_LCIDS (320)
+#define NUM_OF_LTIDS (320)
/*****************/
/* CDU CONSTANTS */
/*****************/
-#define CDU_SEG_TYPE_OFFSET_REG_TYPE_SHIFT (17)
-#define CDU_SEG_TYPE_OFFSET_REG_OFFSET_MASK (0x1ffff)
+#define CDU_SEG_TYPE_OFFSET_REG_TYPE_SHIFT (17)
+#define CDU_SEG_TYPE_OFFSET_REG_OFFSET_MASK (0x1ffff)
/*****************/
/* DQ CONSTANTS */
/*****************/
/* DEMS */
-#define DQ_DEMS_LEGACY 0
+#define DQ_DEMS_LEGACY 0
/* XCM agg val selection */
#define DQ_XCM_AGG_VAL_SEL_WORD2 0
@@ -107,7 +107,7 @@
DQ_XCM_AGG_VAL_SEL_WORD4
#define DQ_XCM_CORE_SPQ_PROD_CMD \
DQ_XCM_AGG_VAL_SEL_WORD4
-#define DQ_XCM_ETH_GO_TO_BD_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD5
+#define DQ_XCM_ETH_GO_TO_BD_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD5
/* XCM agg counter flag selection */
#define DQ_XCM_AGG_FLG_SHIFT_BIT14 0
@@ -140,22 +140,22 @@
/*****************/
/* number of TX queues in the QM */
-#define MAX_QM_TX_QUEUES_K2 512
-#define MAX_QM_TX_QUEUES_BB 448
-#define MAX_QM_TX_QUEUES MAX_QM_TX_QUEUES_K2
+#define MAX_QM_TX_QUEUES_K2 512
+#define MAX_QM_TX_QUEUES_BB 448
+#define MAX_QM_TX_QUEUES MAX_QM_TX_QUEUES_K2
/* number of Other queues in the QM */
-#define MAX_QM_OTHER_QUEUES_BB 64
-#define MAX_QM_OTHER_QUEUES_K2 128
-#define MAX_QM_OTHER_QUEUES MAX_QM_OTHER_QUEUES_K2
+#define MAX_QM_OTHER_QUEUES_BB 64
+#define MAX_QM_OTHER_QUEUES_K2 128
+#define MAX_QM_OTHER_QUEUES MAX_QM_OTHER_QUEUES_K2
/* number of queues in a PF queue group */
-#define QM_PF_QUEUE_GROUP_SIZE 8
+#define QM_PF_QUEUE_GROUP_SIZE 8
/* base number of Tx PQs in the CM PQ representation.
* should be used when storing PQ IDs in CM PQ registers and context
*/
-#define CM_TX_PQ_BASE 0x200
+#define CM_TX_PQ_BASE 0x200
/* QM registers data */
#define QM_LINE_CRD_REG_WIDTH 16
@@ -164,7 +164,7 @@
#define QM_BYTE_CRD_REG_SIGN_BIT (1 << (QM_BYTE_CRD_REG_WIDTH - 1))
#define QM_WFQ_CRD_REG_WIDTH 32
#define QM_WFQ_CRD_REG_SIGN_BIT (1 << (QM_WFQ_CRD_REG_WIDTH - 1))
-#define QM_RL_CRD_REG_WIDTH 32
+#define QM_RL_CRD_REG_WIDTH 32
#define QM_RL_CRD_REG_SIGN_BIT (1 << (QM_RL_CRD_REG_WIDTH - 1))
/*****************/
@@ -185,100 +185,100 @@
/* IGU CONSTANTS */
/*****************/
-#define MAX_SB_PER_PATH_K2 (368)
-#define MAX_SB_PER_PATH_BB (288)
+#define MAX_SB_PER_PATH_K2 (368)
+#define MAX_SB_PER_PATH_BB (288)
#define MAX_TOT_SB_PER_PATH \
MAX_SB_PER_PATH_K2
-#define MAX_SB_PER_PF_MIMD 129
-#define MAX_SB_PER_PF_SIMD 64
-#define MAX_SB_PER_VF 64
+#define MAX_SB_PER_PF_MIMD 129
+#define MAX_SB_PER_PF_SIMD 64
+#define MAX_SB_PER_VF 64
/* Memory addresses on the BAR for the IGU Sub Block */
-#define IGU_MEM_BASE 0x0000
+#define IGU_MEM_BASE 0x0000
-#define IGU_MEM_MSIX_BASE 0x0000
-#define IGU_MEM_MSIX_UPPER 0x0101
-#define IGU_MEM_MSIX_RESERVED_UPPER 0x01ff
+#define IGU_MEM_MSIX_BASE 0x0000
+#define IGU_MEM_MSIX_UPPER 0x0101
+#define IGU_MEM_MSIX_RESERVED_UPPER 0x01ff
-#define IGU_MEM_PBA_MSIX_BASE 0x0200
-#define IGU_MEM_PBA_MSIX_UPPER 0x0202
-#define IGU_MEM_PBA_MSIX_RESERVED_UPPER 0x03ff
+#define IGU_MEM_PBA_MSIX_BASE 0x0200
+#define IGU_MEM_PBA_MSIX_UPPER 0x0202
+#define IGU_MEM_PBA_MSIX_RESERVED_UPPER 0x03ff
-#define IGU_CMD_INT_ACK_BASE 0x0400
+#define IGU_CMD_INT_ACK_BASE 0x0400
#define IGU_CMD_INT_ACK_UPPER (IGU_CMD_INT_ACK_BASE + \
MAX_TOT_SB_PER_PATH - \
1)
-#define IGU_CMD_INT_ACK_RESERVED_UPPER 0x05ff
+#define IGU_CMD_INT_ACK_RESERVED_UPPER 0x05ff
-#define IGU_CMD_ATTN_BIT_UPD_UPPER 0x05f0
-#define IGU_CMD_ATTN_BIT_SET_UPPER 0x05f1
-#define IGU_CMD_ATTN_BIT_CLR_UPPER 0x05f2
+#define IGU_CMD_ATTN_BIT_UPD_UPPER 0x05f0
+#define IGU_CMD_ATTN_BIT_SET_UPPER 0x05f1
+#define IGU_CMD_ATTN_BIT_CLR_UPPER 0x05f2
-#define IGU_REG_SISR_MDPC_WMASK_UPPER 0x05f3
-#define IGU_REG_SISR_MDPC_WMASK_LSB_UPPER 0x05f4
-#define IGU_REG_SISR_MDPC_WMASK_MSB_UPPER 0x05f5
-#define IGU_REG_SISR_MDPC_WOMASK_UPPER 0x05f6
+#define IGU_REG_SISR_MDPC_WMASK_UPPER 0x05f3
+#define IGU_REG_SISR_MDPC_WMASK_LSB_UPPER 0x05f4
+#define IGU_REG_SISR_MDPC_WMASK_MSB_UPPER 0x05f5
+#define IGU_REG_SISR_MDPC_WOMASK_UPPER 0x05f6
-#define IGU_CMD_PROD_UPD_BASE 0x0600
+#define IGU_CMD_PROD_UPD_BASE 0x0600
#define IGU_CMD_PROD_UPD_UPPER (IGU_CMD_PROD_UPD_BASE +\
MAX_TOT_SB_PER_PATH - \
1)
-#define IGU_CMD_PROD_UPD_RESERVED_UPPER 0x07ff
+#define IGU_CMD_PROD_UPD_RESERVED_UPPER 0x07ff
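A worked example of the IGU map above: with MAX_TOT_SB_PER_PATH = MAX_SB_PER_PATH_K2 = 368 (0x170), IGU_CMD_INT_ACK_UPPER evaluates to 0x0400 + 0x170 - 1 = 0x056f and IGU_CMD_PROD_UPD_UPPER to 0x0600 + 0x170 - 1 = 0x076f, so both command ranges fit inside their reserved windows (0x05ff and 0x07ff respectively), with the attention/SISR slots at 0x05f0-0x05f6 packed just below the int-ack reserved limit.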
/*****************/
/* PXP CONSTANTS */
/*****************/
/* PTT and GTT */
-#define PXP_NUM_PF_WINDOWS 12
-#define PXP_PER_PF_ENTRY_SIZE 8
-#define PXP_NUM_GLOBAL_WINDOWS 243
-#define PXP_GLOBAL_ENTRY_SIZE 4
-#define PXP_ADMIN_WINDOW_ALLOWED_LENGTH 4
-#define PXP_PF_WINDOW_ADMIN_START 0
-#define PXP_PF_WINDOW_ADMIN_LENGTH 0x1000
+#define PXP_NUM_PF_WINDOWS 12
+#define PXP_PER_PF_ENTRY_SIZE 8
+#define PXP_NUM_GLOBAL_WINDOWS 243
+#define PXP_GLOBAL_ENTRY_SIZE 4
+#define PXP_ADMIN_WINDOW_ALLOWED_LENGTH 4
+#define PXP_PF_WINDOW_ADMIN_START 0
+#define PXP_PF_WINDOW_ADMIN_LENGTH 0x1000
#define PXP_PF_WINDOW_ADMIN_END (PXP_PF_WINDOW_ADMIN_START + \
PXP_PF_WINDOW_ADMIN_LENGTH - 1)
-#define PXP_PF_WINDOW_ADMIN_PER_PF_START 0
+#define PXP_PF_WINDOW_ADMIN_PER_PF_START 0
#define PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH (PXP_NUM_PF_WINDOWS * \
PXP_PER_PF_ENTRY_SIZE)
#define PXP_PF_WINDOW_ADMIN_PER_PF_END (PXP_PF_WINDOW_ADMIN_PER_PF_START + \
- PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH - 1)
-#define PXP_PF_WINDOW_ADMIN_GLOBAL_START 0x200
+ PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH - 1)
+#define PXP_PF_WINDOW_ADMIN_GLOBAL_START 0x200
#define PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH (PXP_NUM_GLOBAL_WINDOWS * \
PXP_GLOBAL_ENTRY_SIZE)
-#define PXP_PF_WINDOW_ADMIN_GLOBAL_END \
- (PXP_PF_WINDOW_ADMIN_GLOBAL_START + \
- PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH - 1)
-#define PXP_PF_GLOBAL_PRETEND_ADDR 0x1f0
-#define PXP_PF_ME_OPAQUE_MASK_ADDR 0xf4
-#define PXP_PF_ME_OPAQUE_ADDR 0x1f8
-#define PXP_PF_ME_CONCRETE_ADDR 0x1fc
-
-#define PXP_EXTERNAL_BAR_PF_WINDOW_START 0x1000
-#define PXP_EXTERNAL_BAR_PF_WINDOW_NUM PXP_NUM_PF_WINDOWS
-#define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE 0x1000
-#define PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH \
- (PXP_EXTERNAL_BAR_PF_WINDOW_NUM * \
+#define PXP_PF_WINDOW_ADMIN_GLOBAL_END \
+ (PXP_PF_WINDOW_ADMIN_GLOBAL_START + \
+ PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH - 1)
+#define PXP_PF_GLOBAL_PRETEND_ADDR 0x1f0
+#define PXP_PF_ME_OPAQUE_MASK_ADDR 0xf4
+#define PXP_PF_ME_OPAQUE_ADDR 0x1f8
+#define PXP_PF_ME_CONCRETE_ADDR 0x1fc
+
+#define PXP_EXTERNAL_BAR_PF_WINDOW_START 0x1000
+#define PXP_EXTERNAL_BAR_PF_WINDOW_NUM PXP_NUM_PF_WINDOWS
+#define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE 0x1000
+#define PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH \
+ (PXP_EXTERNAL_BAR_PF_WINDOW_NUM * \
PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE)
-#define PXP_EXTERNAL_BAR_PF_WINDOW_END \
- (PXP_EXTERNAL_BAR_PF_WINDOW_START + \
+#define PXP_EXTERNAL_BAR_PF_WINDOW_END \
+ (PXP_EXTERNAL_BAR_PF_WINDOW_START + \
PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH - 1)
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START \
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START \
(PXP_EXTERNAL_BAR_PF_WINDOW_END + 1)
#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM PXP_NUM_GLOBAL_WINDOWS
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE 0x1000
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH \
- (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM * \
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE 0x1000
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH \
+ (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM * \
PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE)
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_END \
- (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START + \
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_END \
+ (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START + \
PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1)
-#define PXP_ILT_PAGE_SIZE_NUM_BITS_MIN 12
-#define PXP_ILT_BLOCK_FACTOR_MULTIPLIER 1024
+#define PXP_ILT_PAGE_SIZE_NUM_BITS_MIN 12
+#define PXP_ILT_BLOCK_FACTOR_MULTIPLIER 1024
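A worked example of the window layout above: 12 PF windows of 0x1000 bytes each give PXP_EXTERNAL_BAR_PF_WINDOW_END = 0x1000 + 12 * 0x1000 - 1 = 0xcfff, so the 243 global windows start at 0xd000, and PXP_EXTERNAL_BAR_GLOBAL_WINDOW_END = 0xd000 + 243 * 0x1000 - 1 = 0xfffff; the windowed region fills the first megabyte of the external BAR exactly.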
/* ILT Records */
#define PXP_NUM_ILT_RECORDS_BB 7600
@@ -301,10 +301,10 @@
/* Async data KCQ CQE */
struct async_data {
- __le32 cid;
- __le16 itid;
- u8 error_code;
- u8 fw_debug_param;
+ __le32 cid;
+ __le16 itid;
+ u8 error_code;
+ u8 fw_debug_param;
};
struct regpair {
@@ -359,12 +359,12 @@ struct event_ring_entry {
__le16 reserved0;
__le16 echo;
u8 fw_return_code;
- u8 flags;
+ u8 flags;
#define EVENT_RING_ENTRY_ASYNC_MASK 0x1
#define EVENT_RING_ENTRY_ASYNC_SHIFT 0
#define EVENT_RING_ENTRY_RESERVED1_MASK 0x7F
#define EVENT_RING_ENTRY_RESERVED1_SHIFT 1
- union event_ring_data data;
+ union event_ring_data data;
};
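The MASK/SHIFT pairs embedded in these structures follow a single convention throughout the series: a field FOO occupies the bits selected by (FOO_MASK << FOO_SHIFT) within its word and is accessed through the GET_FIELD/SET_FIELD macros (used further down in this patch, e.g. on pxp_pretend_cmd). A sketch of the idiom (p_entry is illustrative; the expansion shown is the usual one, not quoted from the common headers):

/* GET_FIELD(value, name) typically expands to
 *   (((value) >> (name##_SHIFT)) & name##_MASK)
 */
u8 is_async = GET_FIELD(p_entry->flags, EVENT_RING_ENTRY_ASYNC);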
/* Multi function mode */
@@ -444,8 +444,8 @@ struct core_db_data {
#define CORE_DB_DATA_RESERVED_SHIFT 5
#define CORE_DB_DATA_AGG_VAL_SEL_MASK 0x3
#define CORE_DB_DATA_AGG_VAL_SEL_SHIFT 6
- u8 agg_flags;
- __le16 spq_prod;
+ u8 agg_flags;
+ __le16 spq_prod;
};
/* Enum of doorbell aggregative command selection */
@@ -479,10 +479,10 @@ struct db_legacy_addr {
/* Igu interrupt command */
enum igu_int_cmd {
- IGU_INT_ENABLE = 0,
+ IGU_INT_ENABLE = 0,
IGU_INT_DISABLE = 1,
- IGU_INT_NOP = 2,
- IGU_INT_NOP2 = 3,
+ IGU_INT_NOP = 2,
+ IGU_INT_NOP2 = 3,
MAX_IGU_INT_CMD
};
@@ -508,8 +508,8 @@ struct igu_prod_cons_update {
/* Igu segments access for default status block only */
enum igu_seg_access {
- IGU_SEG_ACCESS_REG = 0,
- IGU_SEG_ACCESS_ATTN = 1,
+ IGU_SEG_ACCESS_REG = 0,
+ IGU_SEG_ACCESS_ATTN = 1,
MAX_IGU_SEG_ACCESS
};
@@ -574,13 +574,13 @@ struct pxp_pretend_concrete_fid {
union pxp_pretend_fid {
struct pxp_pretend_concrete_fid concrete_fid;
- __le16 opaque_fid;
+ __le16 opaque_fid;
};
/* Pxp Pretend Command Register. */
struct pxp_pretend_cmd {
- union pxp_pretend_fid fid;
- __le16 control;
+ union pxp_pretend_fid fid;
+ __le16 control;
#define PXP_PRETEND_CMD_PATH_MASK 0x1
#define PXP_PRETEND_CMD_PATH_SHIFT 0
#define PXP_PRETEND_CMD_USE_PORT_MASK 0x1
@@ -603,30 +603,30 @@ struct pxp_pretend_cmd {
/* PTT Record in PXP Admin Window. */
struct pxp_ptt_entry {
- __le32 offset;
+ __le32 offset;
#define PXP_PTT_ENTRY_OFFSET_MASK 0x7FFFFF
#define PXP_PTT_ENTRY_OFFSET_SHIFT 0
#define PXP_PTT_ENTRY_RESERVED0_MASK 0x1FF
#define PXP_PTT_ENTRY_RESERVED0_SHIFT 23
- struct pxp_pretend_cmd pretend;
+ struct pxp_pretend_cmd pretend;
};
/* RSS hash type */
enum rss_hash_type {
- RSS_HASH_TYPE_DEFAULT = 0,
- RSS_HASH_TYPE_IPV4 = 1,
- RSS_HASH_TYPE_TCP_IPV4 = 2,
- RSS_HASH_TYPE_IPV6 = 3,
- RSS_HASH_TYPE_TCP_IPV6 = 4,
- RSS_HASH_TYPE_UDP_IPV4 = 5,
- RSS_HASH_TYPE_UDP_IPV6 = 6,
+ RSS_HASH_TYPE_DEFAULT = 0,
+ RSS_HASH_TYPE_IPV4 = 1,
+ RSS_HASH_TYPE_TCP_IPV4 = 2,
+ RSS_HASH_TYPE_IPV6 = 3,
+ RSS_HASH_TYPE_TCP_IPV6 = 4,
+ RSS_HASH_TYPE_UDP_IPV4 = 5,
+ RSS_HASH_TYPE_UDP_IPV6 = 6,
MAX_RSS_HASH_TYPE
};
/* status block structure */
struct status_block {
- __le16 pi_array[PIS_PER_SB];
- __le32 sb_num;
+ __le16 pi_array[PIS_PER_SB];
+ __le32 sb_num;
#define STATUS_BLOCK_SB_NUM_MASK 0x1FF
#define STATUS_BLOCK_SB_NUM_SHIFT 0
#define STATUS_BLOCK_ZERO_PAD_MASK 0x7F
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index db72f03..c83b22b 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -21,7 +21,7 @@
#define VER_SIZE 16
/* @DPDK ARRAY_DECL */
#define ECORE_WFQ_UNIT 100
-#include "../qede_logs.h" /* @DPDK */
+#include "../qede_logs.h" /* @DPDK */
/* Constants */
#define ECORE_WID_SIZE (1024)
@@ -77,7 +77,7 @@ do { \
static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS)
{
u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) |
- (cid * ECORE_PF_DEMS_SIZE);
+ (cid * ECORE_PF_DEMS_SIZE);
return db_addr;
}
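A brief usage sketch for DB_ADDR above (cid is illustrative): the result is an offset into the hwfn's doorbell BAR, typically formed with the legacy DEMS selector defined earlier:

u32 db_offset = DB_ADDR(cid, DQ_DEMS_LEGACY);
/* the doorbell record is then written at p_hwfn->doorbells + db_offset */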
@@ -105,10 +105,10 @@ static OSAL_INLINE u32 DB_ADDR_VF(u32 cid, u32 DEMS)
#ifndef __EXTRACT__LINUX__
enum DP_LEVEL {
- ECORE_LEVEL_VERBOSE = 0x0,
- ECORE_LEVEL_INFO = 0x1,
- ECORE_LEVEL_NOTICE = 0x2,
- ECORE_LEVEL_ERR = 0x3,
+ ECORE_LEVEL_VERBOSE = 0x0,
+ ECORE_LEVEL_INFO = 0x1,
+ ECORE_LEVEL_NOTICE = 0x2,
+ ECORE_LEVEL_ERR = 0x3,
};
#define ECORE_LOG_LEVEL_SHIFT (30)
@@ -118,31 +118,31 @@ enum DP_LEVEL {
enum DP_MODULE {
#ifndef LINUX_REMOVE
- ECORE_MSG_DRV = 0x0001,
- ECORE_MSG_PROBE = 0x0002,
- ECORE_MSG_LINK = 0x0004,
- ECORE_MSG_TIMER = 0x0008,
- ECORE_MSG_IFDOWN = 0x0010,
- ECORE_MSG_IFUP = 0x0020,
- ECORE_MSG_RX_ERR = 0x0040,
- ECORE_MSG_TX_ERR = 0x0080,
- ECORE_MSG_TX_QUEUED = 0x0100,
- ECORE_MSG_INTR = 0x0200,
- ECORE_MSG_TX_DONE = 0x0400,
- ECORE_MSG_RX_STATUS = 0x0800,
- ECORE_MSG_PKTDATA = 0x1000,
- ECORE_MSG_HW = 0x2000,
- ECORE_MSG_WOL = 0x4000,
+ ECORE_MSG_DRV = 0x0001,
+ ECORE_MSG_PROBE = 0x0002,
+ ECORE_MSG_LINK = 0x0004,
+ ECORE_MSG_TIMER = 0x0008,
+ ECORE_MSG_IFDOWN = 0x0010,
+ ECORE_MSG_IFUP = 0x0020,
+ ECORE_MSG_RX_ERR = 0x0040,
+ ECORE_MSG_TX_ERR = 0x0080,
+ ECORE_MSG_TX_QUEUED = 0x0100,
+ ECORE_MSG_INTR = 0x0200,
+ ECORE_MSG_TX_DONE = 0x0400,
+ ECORE_MSG_RX_STATUS = 0x0800,
+ ECORE_MSG_PKTDATA = 0x1000,
+ ECORE_MSG_HW = 0x2000,
+ ECORE_MSG_WOL = 0x4000,
#endif
- ECORE_MSG_SPQ = 0x10000,
- ECORE_MSG_STATS = 0x20000,
- ECORE_MSG_DCB = 0x40000,
- ECORE_MSG_IOV = 0x80000,
- ECORE_MSG_SP = 0x100000,
- ECORE_MSG_STORAGE = 0x200000,
- ECORE_MSG_CXT = 0x800000,
- ECORE_MSG_ILT = 0x2000000,
- ECORE_MSG_DEBUG = 0x8000000,
+ ECORE_MSG_SPQ = 0x10000,
+ ECORE_MSG_STATS = 0x20000,
+ ECORE_MSG_DCB = 0x40000,
+ ECORE_MSG_IOV = 0x80000,
+ ECORE_MSG_SP = 0x100000,
+ ECORE_MSG_STORAGE = 0x200000,
+ ECORE_MSG_CXT = 0x800000,
+ ECORE_MSG_ILT = 0x2000000,
+ ECORE_MSG_DEBUG = 0x8000000,
/* to be added...up to 0x8000000 */
};
#endif
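A hedged usage note on the two enums above: a debug print fires only when its module bit is set in the hwfn's dp_module mask and dp_level is verbose enough, so e.g. the DP_VERBOSE(p_hwfn, ECORE_MSG_HW, ...) traces later in this patch require something like:

/* illustrative: enable verbose HW and SPQ tracing */
p_hwfn->dp_module = ECORE_MSG_HW | ECORE_MSG_SPQ;
p_hwfn->dp_level = ECORE_LEVEL_VERBOSE;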
@@ -166,8 +166,8 @@ struct ecore_mcp_info;
struct ecore_dcbx_info;
struct ecore_rt_data {
- u32 *init_val;
- bool *b_valid;
+ u32 *init_val;
+ bool *b_valid;
};
enum ecore_tunn_mode {
@@ -188,31 +188,31 @@ enum ecore_tunn_clss {
struct ecore_tunn_start_params {
unsigned long tunn_mode;
- u16 vxlan_udp_port;
- u16 geneve_udp_port;
- u8 update_vxlan_udp_port;
- u8 update_geneve_udp_port;
- u8 tunn_clss_vxlan;
- u8 tunn_clss_l2geneve;
- u8 tunn_clss_ipgeneve;
- u8 tunn_clss_l2gre;
- u8 tunn_clss_ipgre;
+ u16 vxlan_udp_port;
+ u16 geneve_udp_port;
+ u8 update_vxlan_udp_port;
+ u8 update_geneve_udp_port;
+ u8 tunn_clss_vxlan;
+ u8 tunn_clss_l2geneve;
+ u8 tunn_clss_ipgeneve;
+ u8 tunn_clss_l2gre;
+ u8 tunn_clss_ipgre;
};
struct ecore_tunn_update_params {
unsigned long tunn_mode_update_mask;
unsigned long tunn_mode;
- u16 vxlan_udp_port;
- u16 geneve_udp_port;
- u8 update_rx_pf_clss;
- u8 update_tx_pf_clss;
- u8 update_vxlan_udp_port;
- u8 update_geneve_udp_port;
- u8 tunn_clss_vxlan;
- u8 tunn_clss_l2geneve;
- u8 tunn_clss_ipgeneve;
- u8 tunn_clss_l2gre;
- u8 tunn_clss_ipgre;
+ u16 vxlan_udp_port;
+ u16 geneve_udp_port;
+ u8 update_rx_pf_clss;
+ u8 update_tx_pf_clss;
+ u8 update_vxlan_udp_port;
+ u8 update_geneve_udp_port;
+ u8 tunn_clss_vxlan;
+ u8 tunn_clss_l2geneve;
+ u8 tunn_clss_ipgeneve;
+ u8 tunn_clss_l2gre;
+ u8 tunn_clss_ipgre;
};
struct ecore_hw_sriov_info {
@@ -244,7 +244,7 @@ struct ecore_hw_sriov_info {
*/
enum ecore_pci_personality {
ECORE_PCI_ETH,
- ECORE_PCI_DEFAULT /* default in shmem */
+ ECORE_PCI_DEFAULT /* default in shmem */
};
/* All VFs are symmetric, all counters are PF + all VFs */
@@ -322,11 +322,11 @@ struct ecore_hw_info {
u32 resc_num[ECORE_MAX_RESC];
u32 feat_num[ECORE_MAX_FEATURES];
-#define RESC_START(_p_hwfn, resc) ((_p_hwfn)->hw_info.resc_start[resc])
-#define RESC_NUM(_p_hwfn, resc) ((_p_hwfn)->hw_info.resc_num[resc])
-#define RESC_END(_p_hwfn, resc) (RESC_START(_p_hwfn, resc) + \
+ #define RESC_START(_p_hwfn, resc) ((_p_hwfn)->hw_info.resc_start[resc])
+ #define RESC_NUM(_p_hwfn, resc) ((_p_hwfn)->hw_info.resc_num[resc])
+ #define RESC_END(_p_hwfn, resc) (RESC_START(_p_hwfn, resc) + \
RESC_NUM(_p_hwfn, resc))
-#define FEAT_NUM(_p_hwfn, resc) ((_p_hwfn)->hw_info.feat_num[resc])
+ #define FEAT_NUM(_p_hwfn, resc) ((_p_hwfn)->hw_info.feat_num[resc])
u8 num_tc;
u8 ooo_tc;
@@ -346,18 +346,18 @@ struct ecore_hw_info {
u8 max_chains_per_vf;
u32 port_mode;
- u32 hw_mode;
+ u32 hw_mode;
unsigned long device_capabilities;
};
struct ecore_hw_cid_data {
- u32 cid;
- bool b_cid_allocated;
- u8 vfid; /* 1-based; 0 signals this is for a PF */
+ u32 cid;
+ bool b_cid_allocated;
+ u8 vfid; /* 1-based; 0 signals this is for a PF */
/* Additional identifiers */
- u16 opaque_fid;
- u8 vport_id;
+ u16 opaque_fid;
+ u8 vport_id;
};
/* maximum size of read/write commands (HW limit) */
@@ -365,7 +365,7 @@ struct ecore_hw_cid_data {
struct ecore_dmae_info {
/* Mutex for synchronizing access to functions */
- osal_mutex_t mutex;
+ osal_mutex_t mutex;
u8 channel;
@@ -389,33 +389,33 @@ struct ecore_dmae_info {
};
struct ecore_wfq_data {
- u32 default_min_speed; /* When wfq feature is not configured */
- u32 min_speed; /* when feature is configured for any 1 vport */
+ u32 default_min_speed; /* When wfq feature is not configured */
+ u32 min_speed; /* when feature is configured for any 1 vport */
bool configured;
};
struct ecore_qm_info {
- struct init_qm_pq_params *qm_pq_params;
+ struct init_qm_pq_params *qm_pq_params;
struct init_qm_vport_params *qm_vport_params;
- struct init_qm_port_params *qm_port_params;
- u16 start_pq;
- u8 start_vport;
- u8 pure_lb_pq;
- u8 offload_pq;
- u8 pure_ack_pq;
- u8 ooo_pq;
- u8 vf_queues_offset;
- u16 num_pqs;
- u16 num_vf_pqs;
- u8 num_vports;
- u8 max_phys_tcs_per_port;
- bool pf_rl_en;
- bool pf_wfq_en;
- bool vport_rl_en;
- bool vport_wfq_en;
- u8 pf_wfq;
- u32 pf_rl;
- struct ecore_wfq_data *wfq_data;
+ struct init_qm_port_params *qm_port_params;
+ u16 start_pq;
+ u8 start_vport;
+ u8 pure_lb_pq;
+ u8 offload_pq;
+ u8 pure_ack_pq;
+ u8 ooo_pq;
+ u8 vf_queues_offset;
+ u16 num_pqs;
+ u16 num_vf_pqs;
+ u8 num_vports;
+ u8 max_phys_tcs_per_port;
+ bool pf_rl_en;
+ bool pf_wfq_en;
+ bool vport_rl_en;
+ bool vport_wfq_en;
+ u8 pf_wfq;
+ u32 pf_rl;
+ struct ecore_wfq_data *wfq_data;
};
struct storm_stats {
@@ -437,106 +437,106 @@ struct ecore_fw_data {
};
struct ecore_hwfn {
- struct ecore_dev *p_dev;
- u8 my_id; /* ID inside the PF */
+ struct ecore_dev *p_dev;
+ u8 my_id; /* ID inside the PF */
#define IS_LEAD_HWFN(edev) (!((edev)->my_id))
- u8 rel_pf_id; /* Relative to engine */
- u8 abs_pf_id;
-#define ECORE_PATH_ID(_p_hwfn) \
+ u8 rel_pf_id; /* Relative to engine */
+ u8 abs_pf_id;
+ #define ECORE_PATH_ID(_p_hwfn) \
(ECORE_IS_K2((_p_hwfn)->p_dev) ? 0 : ((_p_hwfn)->abs_pf_id & 1))
- u8 port_id;
- bool b_active;
+ u8 port_id;
+ bool b_active;
- u32 dp_module;
- u8 dp_level;
- char name[NAME_SIZE];
- void *dp_ctx;
+ u32 dp_module;
+ u8 dp_level;
+ char name[NAME_SIZE];
+ void *dp_ctx;
- bool first_on_engine;
- bool hw_init_done;
+ bool first_on_engine;
+ bool hw_init_done;
- u8 num_funcs_on_engine;
+ u8 num_funcs_on_engine;
/* BAR access */
- void OSAL_IOMEM *regview;
- void OSAL_IOMEM *doorbells;
- u64 db_phys_addr;
- unsigned long db_size;
+ void OSAL_IOMEM *regview;
+ void OSAL_IOMEM *doorbells;
+ u64 db_phys_addr;
+ unsigned long db_size;
/* PTT pool */
- struct ecore_ptt_pool *p_ptt_pool;
+ struct ecore_ptt_pool *p_ptt_pool;
/* HW info */
- struct ecore_hw_info hw_info;
+ struct ecore_hw_info hw_info;
/* rt_array (for init-tool) */
- struct ecore_rt_data rt_data;
+ struct ecore_rt_data rt_data;
/* SPQ */
- struct ecore_spq *p_spq;
+ struct ecore_spq *p_spq;
/* EQ */
- struct ecore_eq *p_eq;
+ struct ecore_eq *p_eq;
- /* Consolidate Q */
- struct ecore_consq *p_consq;
+ /* Consolidate Q */
+ struct ecore_consq *p_consq;
/* Slow-Path definitions */
- osal_dpc_t sp_dpc;
- bool b_sp_dpc_enabled;
+ osal_dpc_t sp_dpc;
+ bool b_sp_dpc_enabled;
- struct ecore_ptt *p_main_ptt;
- struct ecore_ptt *p_dpc_ptt;
+ struct ecore_ptt *p_main_ptt;
+ struct ecore_ptt *p_dpc_ptt;
- struct ecore_sb_sp_info *p_sp_sb;
- struct ecore_sb_attn_info *p_sb_attn;
+ struct ecore_sb_sp_info *p_sp_sb;
+ struct ecore_sb_attn_info *p_sb_attn;
/* Protocol related */
- struct ecore_ooo_info *p_ooo_info;
- struct ecore_pf_params pf_params;
+ struct ecore_ooo_info *p_ooo_info;
+ struct ecore_pf_params pf_params;
/* Array of sb_info of all status blocks */
- struct ecore_sb_info *sbs_info[MAX_SB_PER_PF_MIMD];
- u16 num_sbs;
+ struct ecore_sb_info *sbs_info[MAX_SB_PER_PF_MIMD];
+ u16 num_sbs;
- struct ecore_cxt_mngr *p_cxt_mngr;
+ struct ecore_cxt_mngr *p_cxt_mngr;
- /* Flag indicating whether interrupts are enabled or not */
- bool b_int_enabled;
- bool b_int_requested;
+ /* Flag indicating whether interrupts are enabled or not */
+ bool b_int_enabled;
+ bool b_int_requested;
/* True if the driver requests for the link */
- bool b_drv_link_init;
+ bool b_drv_link_init;
- struct ecore_vf_iov *vf_iov_info;
- struct ecore_pf_iov *pf_iov_info;
- struct ecore_mcp_info *mcp_info;
- struct ecore_dcbx_info *p_dcbx_info;
+ struct ecore_vf_iov *vf_iov_info;
+ struct ecore_pf_iov *pf_iov_info;
+ struct ecore_mcp_info *mcp_info;
+ struct ecore_dcbx_info *p_dcbx_info;
- struct ecore_hw_cid_data *p_tx_cids;
- struct ecore_hw_cid_data *p_rx_cids;
+ struct ecore_hw_cid_data *p_tx_cids;
+ struct ecore_hw_cid_data *p_rx_cids;
- struct ecore_dmae_info dmae_info;
+ struct ecore_dmae_info dmae_info;
/* QM init */
- struct ecore_qm_info qm_info;
+ struct ecore_qm_info qm_info;
/* Buffer for unzipping firmware data */
#ifdef CONFIG_ECORE_ZIPPED_FW
void *unzip_buf;
#endif
- struct dbg_tools_data dbg_info;
+ struct dbg_tools_data dbg_info;
- struct z_stream_s *stream;
+ struct z_stream_s *stream;
/* PWM region specific data */
- u32 dpi_size;
- u32 dpi_count;
- u32 dpi_start_offset; /* this is used to
- * calculate th
- * doorbell address
- */
+ u32 dpi_size;
+ u32 dpi_count;
+ u32 dpi_start_offset; /* this is used to
+ * calculate the
+ * doorbell address
+ */
};
#ifndef __EXTRACT__LINUX__
@@ -548,12 +548,12 @@ enum ecore_mf_mode {
#endif
struct ecore_dev {
- u32 dp_module;
- u8 dp_level;
- char name[NAME_SIZE];
- void *dp_ctx;
+ u32 dp_module;
+ u8 dp_level;
+ char name[NAME_SIZE];
+ void *dp_ctx;
- u8 type;
+ u8 type;
#define ECORE_DEV_TYPE_BB (0 << 0)
#define ECORE_DEV_TYPE_AH (1 << 0)
/* Translate type/revision combo into the proper conditions */
@@ -571,112 +571,112 @@ struct ecore_dev {
u16 vendor_id;
u16 device_id;
- u16 chip_num;
-#define CHIP_NUM_MASK 0xffff
-#define CHIP_NUM_SHIFT 16
+ u16 chip_num;
+ #define CHIP_NUM_MASK 0xffff
+ #define CHIP_NUM_SHIFT 16
- u16 chip_rev;
-#define CHIP_REV_MASK 0xf
-#define CHIP_REV_SHIFT 12
+ u16 chip_rev;
+ #define CHIP_REV_MASK 0xf
+ #define CHIP_REV_SHIFT 12
#ifndef ASIC_ONLY
-#define CHIP_REV_IS_TEDIBEAR(_p_dev) ((_p_dev)->chip_rev == 0x5)
-#define CHIP_REV_IS_EMUL_A0(_p_dev) ((_p_dev)->chip_rev == 0xe)
-#define CHIP_REV_IS_EMUL_B0(_p_dev) ((_p_dev)->chip_rev == 0xc)
-#define CHIP_REV_IS_EMUL(_p_dev) (CHIP_REV_IS_EMUL_A0(_p_dev) || \
+ #define CHIP_REV_IS_TEDIBEAR(_p_dev) ((_p_dev)->chip_rev == 0x5)
+ #define CHIP_REV_IS_EMUL_A0(_p_dev) ((_p_dev)->chip_rev == 0xe)
+ #define CHIP_REV_IS_EMUL_B0(_p_dev) ((_p_dev)->chip_rev == 0xc)
+ #define CHIP_REV_IS_EMUL(_p_dev) (CHIP_REV_IS_EMUL_A0(_p_dev) || \
CHIP_REV_IS_EMUL_B0(_p_dev))
-#define CHIP_REV_IS_FPGA_A0(_p_dev) ((_p_dev)->chip_rev == 0xf)
-#define CHIP_REV_IS_FPGA_B0(_p_dev) ((_p_dev)->chip_rev == 0xd)
-#define CHIP_REV_IS_FPGA(_p_dev) (CHIP_REV_IS_FPGA_A0(_p_dev) || \
+ #define CHIP_REV_IS_FPGA_A0(_p_dev) ((_p_dev)->chip_rev == 0xf)
+ #define CHIP_REV_IS_FPGA_B0(_p_dev) ((_p_dev)->chip_rev == 0xd)
+ #define CHIP_REV_IS_FPGA(_p_dev) (CHIP_REV_IS_FPGA_A0(_p_dev) || \
CHIP_REV_IS_FPGA_B0(_p_dev))
-#define CHIP_REV_IS_SLOW(_p_dev) \
+ #define CHIP_REV_IS_SLOW(_p_dev) \
(CHIP_REV_IS_EMUL(_p_dev) || CHIP_REV_IS_FPGA(_p_dev))
-#define CHIP_REV_IS_A0(_p_dev) \
+ #define CHIP_REV_IS_A0(_p_dev) \
(CHIP_REV_IS_EMUL_A0(_p_dev) || \
CHIP_REV_IS_FPGA_A0(_p_dev) || \
!(_p_dev)->chip_rev)
-#define CHIP_REV_IS_B0(_p_dev) \
+ #define CHIP_REV_IS_B0(_p_dev) \
(CHIP_REV_IS_EMUL_B0(_p_dev) || \
CHIP_REV_IS_FPGA_B0(_p_dev) || \
(_p_dev)->chip_rev == 1)
#define CHIP_REV_IS_ASIC(_p_dev) (!CHIP_REV_IS_SLOW(_p_dev))
#else
-#define CHIP_REV_IS_A0(_p_dev) (!(_p_dev)->chip_rev)
-#define CHIP_REV_IS_B0(_p_dev) ((_p_dev)->chip_rev == 1)
+ #define CHIP_REV_IS_A0(_p_dev) (!(_p_dev)->chip_rev)
+ #define CHIP_REV_IS_B0(_p_dev) ((_p_dev)->chip_rev == 1)
#endif
- u16 chip_metal;
-#define CHIP_METAL_MASK 0xff
-#define CHIP_METAL_SHIFT 4
+ u16 chip_metal;
+ #define CHIP_METAL_MASK 0xff
+ #define CHIP_METAL_SHIFT 4
- u16 chip_bond_id;
-#define CHIP_BOND_ID_MASK 0xf
-#define CHIP_BOND_ID_SHIFT 0
+ u16 chip_bond_id;
+ #define CHIP_BOND_ID_MASK 0xf
+ #define CHIP_BOND_ID_SHIFT 0
- u8 num_engines;
- u8 num_ports_in_engines;
- u8 num_funcs_in_port;
+ u8 num_engines;
+ u8 num_ports_in_engines;
+ u8 num_funcs_in_port;
- u8 path_id;
- enum ecore_mf_mode mf_mode;
-#define IS_MF_DEFAULT(_p_hwfn) \
- (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_DEFAULT)
+ u8 path_id;
+ enum ecore_mf_mode mf_mode;
+ #define IS_MF_DEFAULT(_p_hwfn) \
+ (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_DEFAULT)
#define IS_MF_SI(_p_hwfn) (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_NPAR)
#define IS_MF_SD(_p_hwfn) (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_OVLAN)
- int pcie_width;
- int pcie_speed;
+ int pcie_width;
+ int pcie_speed;
u8 ver_str[VER_SIZE];
/* Add MF related configuration */
- u8 mcp_rev;
- u8 boot_mode;
+ u8 mcp_rev;
+ u8 boot_mode;
- u8 wol;
+ u8 wol;
- u32 int_mode;
- enum ecore_coalescing_mode int_coalescing_mode;
+ u32 int_mode;
+ enum ecore_coalescing_mode int_coalescing_mode;
u8 rx_coalesce_usecs;
u8 tx_coalesce_usecs;
/* Start Bar offset of first hwfn */
- void OSAL_IOMEM *regview;
- void OSAL_IOMEM *doorbells;
- u64 db_phys_addr;
- unsigned long db_size;
+ void OSAL_IOMEM *regview;
+ void OSAL_IOMEM *doorbells;
+ u64 db_phys_addr;
+ unsigned long db_size;
/* PCI */
- u8 cache_shift;
+ u8 cache_shift;
/* Init */
- const struct iro *iro_arr;
-#define IRO (p_hwfn->p_dev->iro_arr)
+ const struct iro *iro_arr;
+ #define IRO (p_hwfn->p_dev->iro_arr)
/* HW functions */
- u8 num_hwfns;
- struct ecore_hwfn hwfns[MAX_HWFNS_PER_DEVICE];
+ u8 num_hwfns;
+ struct ecore_hwfn hwfns[MAX_HWFNS_PER_DEVICE];
/* SRIOV */
struct ecore_hw_sriov_info sriov_info;
- unsigned long tunn_mode;
+ unsigned long tunn_mode;
#define IS_ECORE_SRIOV(edev) (!!((edev)->sriov_info.total_vfs))
- bool b_is_vf;
+ bool b_is_vf;
- u32 drv_type;
+ u32 drv_type;
- struct ecore_eth_stats *reset_stats;
- struct ecore_fw_data *fw_data;
+ struct ecore_eth_stats *reset_stats;
+ struct ecore_fw_data *fw_data;
- u32 mcp_nvm_resp;
+ u32 mcp_nvm_resp;
/* Recovery */
- bool recov_in_prog;
+ bool recov_in_prog;
#ifndef ASIC_ONLY
- bool b_is_emul_full;
+ bool b_is_emul_full;
#endif
- void *firmware;
+ void *firmware;
- u64 fw_len;
+ u64 fw_len;
};
@@ -707,10 +707,10 @@ struct ecore_dev {
* @return OSAL_INLINE u8
*/
static OSAL_INLINE u8 ecore_concrete_to_sw_fid(struct ecore_dev *p_dev,
- u32 concrete_fid)
+ u32 concrete_fid)
{
- u8 vfid = GET_FIELD(concrete_fid, PXP_CONCRETE_FID_VFID);
- u8 pfid = GET_FIELD(concrete_fid, PXP_CONCRETE_FID_PFID);
+ u8 vfid = GET_FIELD(concrete_fid, PXP_CONCRETE_FID_VFID);
+ u8 pfid = GET_FIELD(concrete_fid, PXP_CONCRETE_FID_PFID);
u8 vf_valid = GET_FIELD(concrete_fid, PXP_CONCRETE_FID_VFVALID);
u8 sw_fid;
diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index c573449..bc18c41 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -129,7 +129,7 @@ struct ecore_chain {
(1 + ((sizeof(struct ecore_chain_next) - 1) / \
(elem_size))) : 0)
-#define USABLE_ELEMS_PER_PAGE(elem_size, mode) \
+#define USABLE_ELEMS_PER_PAGE(elem_size, mode) \
((u32)(ELEMS_PER_PAGE(elem_size) - \
UNUSABLE_ELEMS_PER_PAGE(elem_size, mode)))
@@ -183,7 +183,7 @@ static OSAL_INLINE u16 ecore_chain_get_elem_left(struct ecore_chain *p_chain)
(u32)p_chain->u.chain16.cons_idx);
if (p_chain->mode == ECORE_CHAIN_MODE_NEXT_PTR)
used -= p_chain->u.chain16.prod_idx / p_chain->elem_per_page -
- p_chain->u.chain16.cons_idx / p_chain->elem_per_page;
+ p_chain->u.chain16.cons_idx / p_chain->elem_per_page;
return (u16)(p_chain->capacity - used);
}
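A worked example of the wraparound arithmetic in ecore_chain_get_elem_left (and its u32 variant below): with 16-bit indices, prod_idx = 0x0002 after a wrap and cons_idx = 0xfff0 give used = (0x10000 + 0x0002) - 0xfff0 = 18 elements in flight, so the helper returns capacity - 18; the extra prod/cons page-count subtraction in ECORE_CHAIN_MODE_NEXT_PTR discounts the next-pointer element consumed on every page.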
@@ -196,11 +196,11 @@ ecore_chain_get_elem_left_u32(struct ecore_chain *p_chain)
OSAL_ASSERT(is_chain_u32(p_chain));
used = (u32)(((u64)ECORE_U32_MAX + 1 +
- (u64)(p_chain->u.chain32.prod_idx)) -
- (u64)p_chain->u.chain32.cons_idx);
+ (u64)(p_chain->u.chain32.prod_idx)) -
+ (u64)p_chain->u.chain32.cons_idx);
if (p_chain->mode == ECORE_CHAIN_MODE_NEXT_PTR)
used -= p_chain->u.chain32.prod_idx / p_chain->elem_per_page -
- p_chain->u.chain32.cons_idx / p_chain->elem_per_page;
+ p_chain->u.chain32.cons_idx / p_chain->elem_per_page;
return p_chain->capacity - used;
}
@@ -518,14 +518,14 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
switch (p_chain->intended_use) {
case ECORE_CHAIN_USE_TO_CONSUME_PRODUCE:
case ECORE_CHAIN_USE_TO_PRODUCE:
- /* Do nothing */
- break;
+ /* Do nothing */
+ break;
case ECORE_CHAIN_USE_TO_CONSUME:
- /* produce empty elements */
- for (i = 0; i < p_chain->capacity; i++)
+ /* produce empty elements */
+ for (i = 0; i < p_chain->capacity; i++)
ecore_chain_recycle_consumed(p_chain);
- break;
+ break;
}
}
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 1201c1a..415d1c8 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -807,8 +807,8 @@ static u32 ecore_cxt_ilt_shadow_size(struct ecore_ilt_client_cfg *ilt_clients)
if (!ilt_clients[i].active)
continue;
else
- size += (ilt_clients[i].last.val -
- ilt_clients[i].first.val + 1);
+ size += (ilt_clients[i].last.val -
+ ilt_clients[i].first.val + 1);
return size;
}
@@ -1027,8 +1027,8 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
p_mngr->clients[i].p_size.val = ILT_DEFAULT_HW_P_SIZE;
/* Initialize task sizes */
- p_mngr->task_type_size[0] = 512; /* @DPDK */
- p_mngr->task_type_size[1] = 128; /* @DPDK */
+ p_mngr->task_type_size[0] = 512; /* @DPDK */
+ p_mngr->task_type_size[1] = 128; /* @DPDK */
p_mngr->vf_count = p_hwfn->p_dev->sriov_info.total_vfs;
/* Set the cxt mngr pointer prior to further allocations */
@@ -1383,11 +1383,11 @@ static void ecore_ilt_vf_bounds_init(struct ecore_hwfn *p_hwfn)
u32 blk_factor;
/* For simplicity we set the 'block' to be an ILT page */
- STORE_RT_REG(p_hwfn,
- PSWRQ2_REG_VF_BASE_RT_OFFSET,
+ STORE_RT_REG(p_hwfn,
+ PSWRQ2_REG_VF_BASE_RT_OFFSET,
p_hwfn->hw_info.first_vf_in_pf);
- STORE_RT_REG(p_hwfn,
- PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET,
+ STORE_RT_REG(p_hwfn,
+ PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET,
p_hwfn->hw_info.first_vf_in_pf +
p_hwfn->p_dev->sriov_info.total_vfs);
diff --git a/drivers/net/qede/base/ecore_cxt_api.h b/drivers/net/qede/base/ecore_cxt_api.h
index d98dddb..90aff3e 100644
--- a/drivers/net/qede/base/ecore_cxt_api.h
+++ b/drivers/net/qede/base/ecore_cxt_api.h
@@ -12,9 +12,9 @@
struct ecore_hwfn;
struct ecore_cxt_info {
- void *p_cxt;
- u32 iid;
- enum protocol_type type;
+ void *p_cxt;
+ u32 iid;
+ enum protocol_type type;
};
#define MAX_TID_BLOCKS 512
@@ -22,7 +22,7 @@ struct ecore_tid_mem {
u32 tid_size;
u32 num_tids_per_block;
u32 waste;
- u8 *blocks[MAX_TID_BLOCKS]; /* 4K */
+ u8 *blocks[MAX_TID_BLOCKS]; /* 4K */
};
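A hedged sketch of the lookup this layout implies (not necessarily the verbatim body of get_task_mem below): a TID's context sits at a fixed tid_size stride inside one of up to MAX_TID_BLOCKS 4K blocks, with `waste` bytes left unused per block:

static OSAL_INLINE void *tid_to_task_mem(struct ecore_tid_mem *info, u32 tid)
{
	/* pick the block, then the tid's offset within it */
	return (void *)(info->blocks[tid / info->num_tids_per_block] +
			(tid % info->num_tids_per_block) * info->tid_size);
}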
static OSAL_INLINE void *get_task_mem(struct ecore_tid_mem *info, u32 tid)
@@ -49,7 +49,7 @@ static OSAL_INLINE void *get_task_mem(struct ecore_tid_mem *info, u32 tid)
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
+enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
enum protocol_type type,
u32 *p_cid);
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 6a966cb..18843c4 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -116,8 +116,8 @@ ecore_dcbx_set_pf_tcs(struct ecore_hw_info *p_info,
if (personality == ECORE_PCI_ETH)
p_info->non_offload_tc = tc;
else
- p_info->offload_tc = tc;
- }
+ p_info->offload_tc = tc;
+}
}
void
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index 7767d48..7cd8ee0 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -53,10 +53,10 @@ enum dcbx_protocol_type {
struct ecore_dcbx_lldp_remote {
u32 peer_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
u32 peer_port_id[LLDP_PORT_ID_STAT_LEN];
- bool enable_rx;
- bool enable_tx;
- u32 tx_interval;
- u32 max_credit;
+ bool enable_rx;
+ bool enable_tx;
+ u32 tx_interval;
+ u32 max_credit;
};
struct ecore_dcbx_lldp_local {
@@ -65,17 +65,17 @@ struct ecore_dcbx_lldp_local {
};
struct ecore_dcbx_app_prio {
- u8 eth;
+ u8 eth;
};
struct ecore_dcbx_params {
u32 app_bitmap[DCBX_MAX_APP_PROTOCOL];
- u16 num_app_entries;
- bool app_willing;
- bool app_valid;
- bool ets_willing;
- bool ets_enabled;
- bool valid; /* Indicate validity of params */
+ u16 num_app_entries;
+ bool app_willing;
+ bool app_valid;
+ bool ets_willing;
+ bool ets_enabled;
+ bool valid; /* Indicate validity of params */
u32 ets_pri_tc_tbl[1];
u32 ets_tc_bw_tbl[2];
u32 ets_tc_tsa_tbl[2];
@@ -83,7 +83,7 @@ struct ecore_dcbx_params {
bool pfc_enabled;
u32 pfc_bitmap;
u8 max_pfc_tc;
- u8 max_ets_tc;
+ u8 max_ets_tc;
};
struct ecore_dcbx_admin_params {
@@ -129,7 +129,7 @@ struct ecore_dcbx_results {
struct ecore_dcbx_app_metadata {
enum dcbx_protocol_type id;
- const char *name; /* @DPDK */
+ const char *name; /* @DPDK */
enum ecore_pci_personality personality;
};
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 89faa35..46d3e80 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -64,11 +64,11 @@ static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn, enum BAR_ID bar_id)
return BAR_ID_0 ? 256 * 1024 : 512 * 1024;
}
- DP_NOTICE(p_hwfn, false,
+ DP_NOTICE(p_hwfn, false,
"BAR size not configured. Assuming BAR"
" size of 512kB for GRC and 512kB for DB\n");
- return 512 * 1024;
- }
+ return 512 * 1024;
+ }
return 1 << (val + 15);
}
@@ -305,7 +305,7 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
-alloc_err:
+ alloc_err:
DP_NOTICE(p_hwfn, false, "Failed to allocate memory for QM params\n");
ecore_qm_info_free(p_hwfn);
return ECORE_NOMEM;
@@ -494,9 +494,9 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
return ECORE_SUCCESS;
-alloc_no_mem:
+ alloc_no_mem:
rc = ECORE_NOMEM;
-alloc_err:
+ alloc_err:
ecore_resc_free(p_dev);
return rc;
}
@@ -557,11 +557,11 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn,
command |= id << SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT_SHIFT;
command |= SDM_COMP_TYPE_AGG_INT << SDM_OP_GEN_COMP_TYPE_SHIFT;
- /* Make sure notification is not set before initiating final cleanup */
+/* Make sure notification is not set before initiating final cleanup */
if (REG_RD(p_hwfn, addr)) {
DP_NOTICE(p_hwfn, false,
"Unexpected; Found final cleanup notification "
- "before initiating final cleanup\n");
+ " before initiating final cleanup\n");
REG_WR(p_hwfn, addr, 0);
}
@@ -666,7 +666,7 @@ static void ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
#ifndef ASIC_ONLY
/* MFW-replacement initializations for non-ASIC */
static void ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
+ struct ecore_ptt *p_ptt)
{
u32 pl_hv = 1;
int i;
@@ -907,7 +907,7 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
}
ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1,
- port);
+ port);
ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MAC_CONTROL, 0, 1, port);
ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x40, 0, port);
ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_MODE, 0x40, 0, port);
@@ -935,10 +935,10 @@ static void ecore_link_init(struct ecore_hwfn *p_hwfn,
/* Reset of XMAC */
/* FIXME: move to common start */
ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + 2 * sizeof(u32),
- MISC_REG_RESET_REG_2_XMAC_BIT); /* Clear */
+ MISC_REG_RESET_REG_2_XMAC_BIT); /* Clear */
OSAL_MSLEEP(1);
ecore_wr(p_hwfn, p_ptt, MISC_REG_RESET_PL_PDA_VAUX + sizeof(u32),
- MISC_REG_RESET_REG_2_XMAC_BIT); /* Set */
+ MISC_REG_RESET_REG_2_XMAC_BIT); /* Set */
ecore_wr(p_hwfn, p_ptt, MISC_REG_XMAC_CORE_PORT_MODE, 1);
@@ -1078,7 +1078,7 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
p_hwfn->dpi_start_offset = norm_regsize; /* this is later used to
* calculate the doorbell
* address
- */
+ */
/* Update registers */
/* DEMS size is configured log2 of DWORDs, hence the division by 4 */
@@ -1319,7 +1319,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
ecore_reset_mb_shadow(p_hwfn, p_hwfn->p_main_ptt);
DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
- "Load request was sent.Resp:0x%x, Load code: 0x%x\n",
+ "Load request was sent. Resp:0x%x, Load code: 0x%x\n",
rc, load_code);
/* Only relevant for recovery:
@@ -1411,8 +1411,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
#define ECORE_HW_STOP_RETRY_LIMIT (10)
static OSAL_INLINE void ecore_hw_timers_stop(struct ecore_dev *p_dev,
- struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
+ struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
{
int i;
@@ -1436,9 +1436,9 @@ static OSAL_INLINE void ecore_hw_timers_stop(struct ecore_dev *p_dev,
"Timers linear scans are not over"
" [Connection %02x Tasks %02x]\n",
(u8)ecore_rd(p_hwfn, p_ptt,
- TM_REG_PF_SCAN_ACTIVE_CONN),
+ TM_REG_PF_SCAN_ACTIVE_CONN),
(u8)ecore_rd(p_hwfn, p_ptt,
- TM_REG_PF_SCAN_ACTIVE_TASK));
+ TM_REG_PF_SCAN_ACTIVE_TASK));
}
void ecore_hw_timers_stop_all(struct ecore_dev *p_dev)
@@ -1679,7 +1679,7 @@ static void get_function_id(struct ecore_hwfn *p_hwfn)
{
/* ME Register */
p_hwfn->hw_info.opaque_fid = (u16)REG_RD(p_hwfn,
- PXP_PF_ME_OPAQUE_ADDR);
+ PXP_PF_ME_OPAQUE_ADDR);
p_hwfn->hw_info.concrete_fid = REG_RD(p_hwfn, PXP_PF_ME_CONCRETE_ADDR);
@@ -1725,7 +1725,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn)
struct ecore_sb_cnt_info sb_cnt_info;
bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
- OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
+ OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
#ifdef CONFIG_ECORE_SRIOV
max_vf_vlan_filters = ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS;
@@ -1733,19 +1733,19 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn)
max_vf_vlan_filters = 0;
#endif
- ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
+ ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
resc_num[ECORE_SB] = OSAL_MIN_T(u32,
(MAX_SB_PER_PATH_BB / num_funcs),
sb_cnt_info.sb_cnt);
resc_num[ECORE_L2_QUEUE] = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
- MAX_NUM_L2_QUEUES_BB) / num_funcs;
+ MAX_NUM_L2_QUEUES_BB) / num_funcs;
resc_num[ECORE_VPORT] = (b_ah ? MAX_NUM_VPORTS_K2 :
MAX_NUM_VPORTS_BB) / num_funcs;
resc_num[ECORE_RSS_ENG] = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
- ETH_RSS_ENGINE_NUM_BB) / num_funcs;
+ ETH_RSS_ENGINE_NUM_BB) / num_funcs;
resc_num[ECORE_PQ] = (b_ah ? MAX_QM_TX_QUEUES_K2 :
- MAX_QM_TX_QUEUES_BB) / num_funcs;
+ MAX_QM_TX_QUEUES_BB) / num_funcs;
resc_num[ECORE_RL] = 8;
resc_num[ECORE_MAC] = ETH_NUM_MAC_FILTERS / num_funcs;
resc_num[ECORE_VLAN] = (ETH_NUM_VLAN_FILTERS -
@@ -1754,7 +1754,7 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn)
/* TODO - there will be a problem in AH - there are only 11k lines */
resc_num[ECORE_ILT] = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
- PXP_NUM_ILT_RECORDS_BB) / num_funcs;
+ PXP_NUM_ILT_RECORDS_BB) / num_funcs;
#ifndef ASIC_ONLY
if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
@@ -1840,7 +1840,7 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
return ECORE_INVAL;
}
- /* Read nvm_cfg1 (Notice this is just offset, and not offsize (TBD) */
+/* Read nvm_cfg1 (Notice this is just offset, and not offsize (TBD) */
nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt, nvm_cfg_addr + 4);
addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
@@ -2003,8 +2003,8 @@ static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
if (ECORE_PATH_ID(p_hwfn) && p_hwfn->p_dev->num_hwfns == 1) {
num_funcs = 0;
mask = 0xaaaa;
- } else {
- num_funcs = 1;
+ } else {
+ num_funcs = 1;
mask = 0x5554;
}
@@ -2070,12 +2070,12 @@ static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
p_hwfn->p_dev->num_ports_in_engines = 0;
- for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
- port = ecore_rd(p_hwfn, p_ptt,
- CNIG_REG_NIG_PORT0_CONF_K2 + (i * 4));
- if (port & 1)
- p_hwfn->p_dev->num_ports_in_engines++;
- }
+ for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
+ port = ecore_rd(p_hwfn, p_ptt,
+ CNIG_REG_NIG_PORT0_CONF_K2 + (i * 4));
+ if (port & 1)
+ p_hwfn->p_dev->num_ports_in_engines++;
+ }
}
static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
@@ -2095,8 +2095,8 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t rc;
rc = ecore_iov_hw_info(p_hwfn, p_hwfn->p_main_ptt);
- if (rc)
- return rc;
+ if (rc)
+ return rc;
/* TODO In get_hw_info, amongst others:
* Get MCP FW revision and determine according to it the supported
@@ -2178,7 +2178,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
p_dev->chip_num = (u16)ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
MISCS_REG_CHIP_NUM);
p_dev->chip_rev = (u16)ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
- MISCS_REG_CHIP_REV);
+ MISCS_REG_CHIP_REV);
MASK_FIELD(CHIP_REV, p_dev->chip_rev);
@@ -2214,7 +2214,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
MISCS_REG_CHIP_TEST_REG) >> 4;
MASK_FIELD(CHIP_BOND_ID, p_dev->chip_bond_id);
p_dev->chip_metal = (u16)ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
- MISCS_REG_CHIP_METAL);
+ MISCS_REG_CHIP_METAL);
MASK_FIELD(CHIP_METAL, p_dev->chip_metal);
DP_INFO(p_dev->hwfns,
"Chip details - %s%d, Num: %04x Rev: %04x Bond id: %04x"
@@ -2344,11 +2344,11 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
#endif
return rc;
-err2:
+ err2:
ecore_mcp_free(p_hwfn);
-err1:
+ err1:
ecore_hw_hwfn_free(p_hwfn);
-err0:
+ err0:
return rc;
}
@@ -2361,7 +2361,7 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, int personality)
return ecore_vf_hw_prepare(p_dev);
/* Store the precompiled init data ptrs */
- ecore_init_iro_array(p_dev);
+ ecore_init_iro_array(p_dev);
/* Initialize the first hwfn - will learn number of hwfns */
rc = ecore_hw_prepare_single(p_hwfn,
@@ -2490,7 +2490,7 @@ static void ecore_chain_free_pbl(struct ecore_dev *p_dev,
pbl_size = page_cnt * ECORE_CHAIN_PBL_ENTRY_SIZE;
OSAL_DMA_FREE_COHERENT(p_dev, p_chain->pbl.p_virt_table,
p_chain->pbl.p_phys_table, pbl_size);
-out:
+ out:
OSAL_VFREE(p_dev, p_chain->pbl.pp_virt_addr_tbl);
}
@@ -2692,7 +2692,7 @@ enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev,
return ECORE_SUCCESS;
-nomem:
+ nomem:
ecore_chain_free(p_dev, p_chain);
return rc;
}
@@ -2848,7 +2848,7 @@ void ecore_llh_remove_mac_filter(struct ecore_hwfn *p_hwfn,
}
enum _ecore_status_t ecore_llh_add_ethertype_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
+ struct ecore_ptt *p_ptt,
u16 filter)
{
u32 high, low, en;
@@ -2887,7 +2887,7 @@ enum _ecore_status_t ecore_llh_add_ethertype_filter(struct ecore_hwfn *p_hwfn,
return ECORE_INVAL;
}
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+ DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
"ETH type: %x is added at %d\n", filter, i);
return ECORE_SUCCESS;
@@ -2952,7 +2952,7 @@ void ecore_llh_clear_all_filters(struct ecore_hwfn *p_hwfn,
}
enum _ecore_status_t ecore_test_registers(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
+ struct ecore_ptt *p_ptt)
{
u32 reg_tbl[] = {
BRB_REG_HEADER_SIZE,
@@ -3032,8 +3032,8 @@ enum _ecore_status_t ecore_test_registers(struct ecore_hwfn *p_hwfn,
}
}
}
- return ECORE_SUCCESS;
-}
+ return ECORE_SUCCESS;
+ }
static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
@@ -3089,7 +3089,7 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
goto out;
p_hwfn->p_dev->rx_coalesce_usecs = coalesce;
-out:
+ out:
return rc;
}
@@ -3119,7 +3119,7 @@ enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
goto out;
p_hwfn->p_dev->tx_coalesce_usecs = coalesce;
-out:
+ out:
return rc;
}
@@ -3305,16 +3305,16 @@ static int __ecore_configure_vp_wfq_on_link_change(struct ecore_hwfn *p_hwfn,
if (p_hwfn->qm_info.wfq_data[i].configured) {
u32 rate = p_hwfn->qm_info.wfq_data[i].min_speed;
- use_wfq = true;
- rc = ecore_init_wfq_param(p_hwfn, i, rate, min_pf_rate);
+ use_wfq = true;
+ rc = ecore_init_wfq_param(p_hwfn, i, rate, min_pf_rate);
if (rc == ECORE_INVAL) {
- DP_NOTICE(p_hwfn, false,
+ DP_NOTICE(p_hwfn, false,
"Validation failed while"
" configuring min rate\n");
- break;
- }
+ break;
}
}
+ }
if (rc == ECORE_SUCCESS && use_wfq)
ecore_configure_wfq_for_all_vports(p_hwfn, p_ptt, min_pf_rate);
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 535b82b..1b78c32 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -270,22 +270,22 @@ enum ecore_dmae_address_type_t {
#define ECORE_DMAE_FLAG_COMPLETION_DST 0x00000008
struct ecore_dmae_params {
- u32 flags; /* consists of ECORE_DMAE_FLAG_* values */
+ u32 flags; /* consists of ECORE_DMAE_FLAG_* values */
u8 src_vfid;
u8 dst_vfid;
};
/**
-* @brief ecore_dmae_host2grc - copy data from source addr to
-* dmae registers using the given ptt
-*
-* @param p_hwfn
-* @param p_ptt
-* @param source_addr
-* @param grc_addr (dmae_data_offset)
-* @param size_in_dwords
-* @param flags (one of the flags defined above)
-*/
+ * @brief ecore_dmae_host2grc - copy data from source addr to
+ * dmae registers using the given ptt
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param source_addr
+ * @param grc_addr (dmae_data_offset)
+ * @param size_in_dwords
+ * @param flags (one of the flags defined above)
+ */
enum _ecore_status_t
ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
@@ -293,15 +293,15 @@ ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
u32 grc_addr, u32 size_in_dwords, u32 flags);
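A hedged usage sketch for the call documented above (address and length values are illustrative; source_addr must be the physical address of a DMA-able buffer):

/* push size_in_dwords dwords from a coherent host buffer into GRC */
rc = ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)host_phys_addr,
			 grc_addr, size_in_dwords, 0);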
/**
-* @brief ecore_dmae_grc2host - Read data from dmae data offset
-* to source address using the given ptt
-*
-* @param p_ptt
-* @param grc_addr (dmae_data_offset)
-* @param dest_addr
-* @param size_in_dwords
-* @param flags - one of the flags defined above
-*/
+ * @brief ecore_dmae_grc2host - Read data from dmae data offset
+ * to source address using the given ptt
+ *
+ * @param p_ptt
+ * @param grc_addr (dmae_data_offset)
+ * @param dest_addr
+ * @param size_in_dwords
+ * @param flags - one of the flags defined above
+ */
enum _ecore_status_t
ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
@@ -309,16 +309,16 @@ ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
dma_addr_t dest_addr, u32 size_in_dwords, u32 flags);
/**
-* @brief ecore_dmae_host2host - copy data from to source address
-* to a destination address (for SRIOV) using the given ptt
-*
-* @param p_hwfn
-* @param p_ptt
-* @param source_addr
-* @param dest_addr
-* @param size_in_dwords
-* @param params
-*/
+ * @brief ecore_dmae_host2host - copy data from a source address
+ * to a destination address (for SRIOV) using the given ptt
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param source_addr
+ * @param dest_addr
+ * @param size_in_dwords
+ * @param params
+ */
enum _ecore_status_t
ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
@@ -398,8 +398,8 @@ enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
* @param p_filter - MAC to add
*/
enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u8 *p_filter);
+ struct ecore_ptt *p_ptt,
+ u8 *p_filter);
/**
* @brief ecore_llh_remove_mac_filter - removes a MAC filter from llh
@@ -419,7 +419,7 @@ void ecore_llh_remove_mac_filter(struct ecore_hwfn *p_hwfn,
* @param filter - ethertype to add
*/
enum _ecore_status_t ecore_llh_add_ethertype_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
+ struct ecore_ptt *p_ptt,
u16 filter);
/**
@@ -439,9 +439,9 @@ void ecore_llh_remove_ethertype_filter(struct ecore_hwfn *p_hwfn,
* @param p_ptt
*/
void ecore_llh_clear_all_filters(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
+ struct ecore_ptt *p_ptt);
- /**
+/**
* @brief Cleanup of previous driver remains prior to load
*
* @param p_hwfn
@@ -461,7 +461,7 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn,
* @param p_hwfn
* @param p_ptt
*
- * @return enum _ecore_status_t
+ * @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_test_registers(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt);
diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h
index cc49fc7..0eba1aa 100644
--- a/drivers/net/qede/base/ecore_gtt_reg_addr.h
+++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h
@@ -10,33 +10,33 @@
#define GTT_REG_ADDR_H
/* Win 2 */
-#define GTT_BAR0_MAP_REG_IGU_CMD 0x00f000UL
+#define GTT_BAR0_MAP_REG_IGU_CMD 0x00f000UL
/* Win 3 */
-#define GTT_BAR0_MAP_REG_TSDM_RAM 0x010000UL
+#define GTT_BAR0_MAP_REG_TSDM_RAM 0x010000UL
/* Win 4 */
-#define GTT_BAR0_MAP_REG_MSDM_RAM 0x011000UL
+#define GTT_BAR0_MAP_REG_MSDM_RAM 0x011000UL
/* Win 5 */
-#define GTT_BAR0_MAP_REG_MSDM_RAM_1024 0x012000UL
+#define GTT_BAR0_MAP_REG_MSDM_RAM_1024 0x012000UL
/* Win 6 */
-#define GTT_BAR0_MAP_REG_USDM_RAM 0x013000UL
+#define GTT_BAR0_MAP_REG_USDM_RAM 0x013000UL
/* Win 7 */
-#define GTT_BAR0_MAP_REG_USDM_RAM_1024 0x014000UL
+#define GTT_BAR0_MAP_REG_USDM_RAM_1024 0x014000UL
/* Win 8 */
-#define GTT_BAR0_MAP_REG_USDM_RAM_2048 0x015000UL
+#define GTT_BAR0_MAP_REG_USDM_RAM_2048 0x015000UL
/* Win 9 */
-#define GTT_BAR0_MAP_REG_XSDM_RAM 0x016000UL
+#define GTT_BAR0_MAP_REG_XSDM_RAM 0x016000UL
/* Win 10 */
-#define GTT_BAR0_MAP_REG_YSDM_RAM 0x017000UL
+#define GTT_BAR0_MAP_REG_YSDM_RAM 0x017000UL
/* Win 11 */
-#define GTT_BAR0_MAP_REG_PSDM_RAM 0x018000UL
+#define GTT_BAR0_MAP_REG_PSDM_RAM 0x018000UL
#endif
diff --git a/drivers/net/qede/base/ecore_gtt_values.h b/drivers/net/qede/base/ecore_gtt_values.h
index f2efe24..2ddc5f1 100644
--- a/drivers/net/qede/base/ecore_gtt_values.h
+++ b/drivers/net/qede/base/ecore_gtt_values.h
@@ -11,16 +11,16 @@
static u32 pxp_global_win[] = {
0,
0,
- 0x1c02, /* win 2: addr=0x1c02000, size=4096 bytes */
- 0x1c80, /* win 3: addr=0x1c80000, size=4096 bytes */
- 0x1d00, /* win 4: addr=0x1d00000, size=4096 bytes */
- 0x1d01, /* win 5: addr=0x1d01000, size=4096 bytes */
- 0x1d80, /* win 6: addr=0x1d80000, size=4096 bytes */
- 0x1d81, /* win 7: addr=0x1d81000, size=4096 bytes */
- 0x1d82, /* win 8: addr=0x1d82000, size=4096 bytes */
- 0x1e00, /* win 9: addr=0x1e00000, size=4096 bytes */
- 0x1e80, /* win 10: addr=0x1e80000, size=4096 bytes */
- 0x1f00, /* win 11: addr=0x1f00000, size=4096 bytes */
+ 0x1c02, /* win 2: addr=0x1c02000, size=4096 bytes */
+ 0x1c80, /* win 3: addr=0x1c80000, size=4096 bytes */
+ 0x1d00, /* win 4: addr=0x1d00000, size=4096 bytes */
+ 0x1d01, /* win 5: addr=0x1d01000, size=4096 bytes */
+ 0x1d80, /* win 6: addr=0x1d80000, size=4096 bytes */
+ 0x1d81, /* win 7: addr=0x1d81000, size=4096 bytes */
+ 0x1d82, /* win 8: addr=0x1d82000, size=4096 bytes */
+ 0x1e00, /* win 9: addr=0x1e00000, size=4096 bytes */
+ 0x1e80, /* win 10: addr=0x1e80000, size=4096 bytes */
+ 0x1f00, /* win 11: addr=0x1f00000, size=4096 bytes */
0,
0,
0,
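A worked note on the table above: each non-zero entry is the GRC address of a 4 KB global window expressed in 4 KB units, so entry 2 (0x1c02) maps GRC address 0x1c02 * 0x1000 = 0x1c02000, matching its inline comment; the BAR-side offsets of these same windows are the GTT_BAR0_MAP_* constants in ecore_gtt_reg_addr.h above (win 2 at BAR offset 0x00f000).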
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 9cd55c4..877de8b 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -953,7 +953,7 @@ enum malicious_vf_error_id {
VF_ZONE_MSG_NOT_VALID /* VF channel message is not valid */,
VF_ZONE_FUNC_NOT_ENABLED /* Parent PF of VF channel is not active */,
ETH_PACKET_TOO_SMALL
- /* TX packet is shorter then reported on BDs or from minimal size */
+/* TX packet is shorter than reported on BDs or than the minimal size */
,
ETH_ILLEGAL_VLAN_MODE
/* Tx packet with marked as insert VLAN when its illegal */,
@@ -1060,7 +1060,7 @@ struct pf_start_ramrod_data {
u8 allow_npar_tx_switching;
u8 inner_to_outer_pri_map[8];
u8 pri_map_valid
- /* If inner_to_outer_pri_map is initialize then set pri_map_valid */
+/* If inner_to_outer_pri_map is initialized then set pri_map_valid */
;
__le32 outer_tag;
u8 reserved0[4];
@@ -1244,7 +1244,7 @@ enum tunnel_clss {
TUNNEL_CLSS_MAC_VNI
,
TUNNEL_CLSS_INNER_MAC_VLAN
- /* Use MAC and VLAN from last L2 header for vport classification */
+/* Use MAC and VLAN from last L2 header for vport classification */
,
TUNNEL_CLSS_INNER_MAC_VNI
,
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 80f4165..78cc55d 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -872,7 +872,7 @@ struct eth_vport_tpa_param {
u8 tpa_ipv6_tunn_en_flg /* Enable TPA for IPv6 over tunnel */;
u8 tpa_pkt_split_flg;
u8 tpa_hdr_data_split_flg
- /* If set, put header of first TPA segment on bd and data on SGE */
+/* If set, put header of first TPA segment on bd and data on SGE */
;
u8 tpa_gro_consistent_flg
/* If set, GRO data consistency will be checked for TPA continue */;
@@ -882,10 +882,10 @@ struct eth_vport_tpa_param {
__le16 tpa_min_size_to_start
/* minimum TCP payload size for a packet to start aggregation */;
__le16 tpa_min_size_to_cont
- /* minimum TCP payload size for a packet to continue aggregation */
+/* minimum TCP payload size for a packet to continue aggregation */
;
u8 max_buff_num
- /* maximal number of buffers that can be used for one aggregation */
+/* maximal number of buffers that can be used for one aggregation */
;
u8 reserved;
};
@@ -1124,7 +1124,7 @@ struct vport_start_ramrod_data {
u8 handle_ptp_pkts /* If set, the vport handles PTP Timesync Packets */
;
u8 silent_vlan_removal_en;
- /* If enable then innerVlan will be striped and not written to cqe */
+/* If enabled then innerVlan will be stripped and not written to cqe */
u8 untagged;
struct eth_tx_err_vals tx_err_behav
/* Desired behavior per TX error type */;
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 5403b94..e9b96d5 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -108,15 +108,15 @@ struct ecore_ptt *ecore_ptt_acquire(struct ecore_hwfn *p_hwfn)
}
p_ptt = OSAL_LIST_FIRST_ENTRY(&p_hwfn->p_ptt_pool->free_list,
- struct ecore_ptt, list_entry);
- OSAL_LIST_REMOVE_ENTRY(&p_ptt->list_entry,
- &p_hwfn->p_ptt_pool->free_list);
- OSAL_SPIN_UNLOCK(&p_hwfn->p_ptt_pool->lock);
+ struct ecore_ptt, list_entry);
+ OSAL_LIST_REMOVE_ENTRY(&p_ptt->list_entry,
+ &p_hwfn->p_ptt_pool->free_list);
+ OSAL_SPIN_UNLOCK(&p_hwfn->p_ptt_pool->lock);
DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "allocated ptt %d\n", p_ptt->idx);
- return p_ptt;
-}
+ return p_ptt;
+ }
void ecore_ptt_release(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
{
@@ -298,7 +298,7 @@ void ecore_fid_pretend(struct ecore_hwfn *p_hwfn,
SET_FIELD(control, PXP_PRETEND_CMD_IS_CONCRETE, 1);
SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_FUNCTION, 1);
- /* Every pretend undos prev pretends, including previous port pretend */
+/* Every pretend undoes prev pretends, including previous port pretend */
SET_FIELD(control, PXP_PRETEND_CMD_PORT, 0);
SET_FIELD(control, PXP_PRETEND_CMD_USE_PORT, 0);
SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_PORT, 1);
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 8949944..9603c99 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -115,7 +115,7 @@ u32 ecore_ptt_get_hw_addr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
*
* @return u32
*/
-u32 ecore_ptt_get_bar_addr(struct ecore_ptt *p_ptt);
+u32 ecore_ptt_get_bar_addr(struct ecore_ptt *p_ptt);
/**
* @brief ecore_ptt_set_win - Set PTT Window's GRC BAR address
@@ -124,7 +124,7 @@ u32 ecore_ptt_get_bar_addr(struct ecore_ptt *p_ptt);
* @param new_hw_addr
* @param p_ptt
*/
-void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn,
+void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u32 new_hw_addr);
/**
@@ -135,8 +135,8 @@ void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn,
*
* @return struct ecore_ptt *
*/
-struct ecore_ptt *ecore_get_reserved_ptt(struct ecore_hwfn *p_hwfn,
- enum reserved_ptts ptt_idx);
+struct ecore_ptt *ecore_get_reserved_ptt(struct ecore_hwfn *p_hwfn,
+ enum reserved_ptts ptt_idx);
/**
* @brief ecore_wr - Write value to BAR using the given ptt
@@ -146,7 +146,7 @@ struct ecore_ptt *ecore_get_reserved_ptt(struct ecore_hwfn *p_hwfn,
* @param val
* @param hw_addr
*/
-void ecore_wr(struct ecore_hwfn *p_hwfn,
+void ecore_wr(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u32 hw_addr, u32 val);
/**
@@ -169,8 +169,8 @@ u32 ecore_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr);
* @param hw_addr
* @param n
*/
-void ecore_memcpy_from(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
+void ecore_memcpy_from(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
void *dest, u32 hw_addr, osal_size_t n);
/**
@@ -183,8 +183,8 @@ void ecore_memcpy_from(struct ecore_hwfn *p_hwfn,
* @param src
* @param n
*/
-void ecore_memcpy_to(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
+void ecore_memcpy_to(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
u32 hw_addr, void *src, osal_size_t n);
/**
* @brief ecore_fid_pretend - pretend to another function when
@@ -197,7 +197,7 @@ void ecore_memcpy_to(struct ecore_hwfn *p_hwfn,
* @param fid - fid field of pxp_pretend structure. Can contain
* either pf / vf, port/path fields are don't care.
*/
-void ecore_fid_pretend(struct ecore_hwfn *p_hwfn,
+void ecore_fid_pretend(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u16 fid);
/**
@@ -208,7 +208,7 @@ void ecore_fid_pretend(struct ecore_hwfn *p_hwfn,
* @param p_ptt
* @param port_id - the port to pretend to
*/
-void ecore_port_pretend(struct ecore_hwfn *p_hwfn,
+void ecore_port_pretend(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u8 port_id);
/**
@@ -235,7 +235,7 @@ u32 ecore_vfid_to_concrete(struct ecore_hwfn *p_hwfn, u8 vfid);
* which is part of p_hwfn.
* @param p_hwfn
*/
-enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn);
+enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn);
/**
* @brief ecore_dmae_info_free - Free the dmae_info structure
@@ -243,7 +243,7 @@ enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn);
*
* @param p_hwfn
*/
-void ecore_dmae_info_free(struct ecore_hwfn *p_hwfn);
+void ecore_dmae_info_free(struct ecore_hwfn *p_hwfn);
union ecore_qm_pq_params {
struct {
@@ -257,7 +257,7 @@ union ecore_qm_pq_params {
} eth;
};
-u16 ecore_get_qm_pq(struct ecore_hwfn *p_hwfn,
+u16 ecore_get_qm_pq(struct ecore_hwfn *p_hwfn,
enum protocol_type proto, union ecore_qm_pq_params *params);
enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
diff --git a/drivers/net/qede/base/ecore_hw_defs.h b/drivers/net/qede/base/ecore_hw_defs.h
index fa518ce..19816ff 100644
--- a/drivers/net/qede/base/ecore_hw_defs.h
+++ b/drivers/net/qede/base/ecore_hw_defs.h
@@ -36,13 +36,13 @@ enum igu_ctrl_cmd {
*/
struct igu_ctrl_reg {
u32 ctrl_data;
-#define IGU_CTRL_REG_FID_MASK 0xFFFF /* Opaque_FID */
+#define IGU_CTRL_REG_FID_MASK 0xFFFF /* Opaque_FID */
#define IGU_CTRL_REG_FID_SHIFT 0
-#define IGU_CTRL_REG_PXP_ADDR_MASK 0xFFF /* Command address */
+#define IGU_CTRL_REG_PXP_ADDR_MASK 0xFFF /* Command address */
#define IGU_CTRL_REG_PXP_ADDR_SHIFT 16
#define IGU_CTRL_REG_RESERVED_MASK 0x1
#define IGU_CTRL_REG_RESERVED_SHIFT 28
-#define IGU_CTRL_REG_TYPE_MASK 0x1 /* use enum igu_ctrl_cmd */
+#define IGU_CTRL_REG_TYPE_MASK 0x1 /* use enum igu_ctrl_cmd */
#define IGU_CTRL_REG_TYPE_SHIFT 31
};
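A hedged sketch of composing such a command word with the SET_FIELD convention used elsewhere in this series (variable names are illustrative, and the write selector is assumed to be the IGU_CTRL_CMD_TYPE_WR member of the enum igu_ctrl_cmd referenced in the comment):

u32 ctl = 0;

SET_FIELD(ctl, IGU_CTRL_REG_FID, opaque_fid); /* bits 0..15 */
SET_FIELD(ctl, IGU_CTRL_REG_PXP_ADDR, igu_addr); /* bits 16..27 */
SET_FIELD(ctl, IGU_CTRL_REG_TYPE, IGU_CTRL_CMD_TYPE_WR); /* bit 31 */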
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 5440731..0844194 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -206,7 +206,7 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
if (((port_params[port_id].active_phys_tcs >>
tc) & 0x1) == 1)
- num_tcs_in_port++;
+ num_tcs_in_port++;
}
phys_lines_per_tc = phys_lines / num_tcs_in_port;
/* init registers per active TC */
@@ -293,9 +293,9 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
tc < NUM_OF_PHYS_TCS;
tc++) {
if (((port_params[port_id].active_phys_tcs >>
- tc) & 0x1) == 1) {
+ tc) & 0x1) == 1) {
voq = PHYS_VOQ(port_id, tc,
- max_phys_tcs_per_port);
+ max_phys_tcs_per_port);
STORE_RT_REG(p_hwfn,
PBF_BTB_GUARANTEED_RT_OFFSET(voq),
phys_blocks);
@@ -412,7 +412,7 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
u32 curr_mask =
is_first_pf ? 0 : ecore_rd(p_hwfn, p_ptt,
QM_REG_MAXPQSIZETXSEL_0
- + i * 4);
+ + i * 4);
STORE_RT_REG(p_hwfn,
QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET +
i, curr_mask | tx_pq_vf_mask[i]);
@@ -518,8 +518,8 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
vport_params[i].first_tx_pq_id[tc];
if (vport_pq_id != QM_INVALID_PQ_ID) {
STORE_RT_REG(p_hwfn,
- QM_REG_WFQVPCRD_RT_OFFSET +
- vport_pq_id,
+ QM_REG_WFQVPCRD_RT_OFFSET +
+ vport_pq_id,
QM_WFQ_CRD_REG_SIGN_BIT);
STORE_RT_REG(p_hwfn,
QM_REG_WFQVPWEIGHT_RT_OFFSET
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 5280cd7..0c8d1fb 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -26,8 +26,8 @@ struct init_qm_pq_params;
* @return The required host memory size in 4KB units.
*/
u32 ecore_qm_pf_mem_size(u8 pf_id,
- u32 num_pf_cids,
- u32 num_vf_cids,
+ u32 num_pf_cids,
+ u32 num_vf_cids,
u32 num_tids, u16 num_pf_pqs, u16 num_vf_pqs);
/**
* @brief ecore_qm_common_rt_init -
@@ -45,33 +45,33 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
* @return 0 on success, -1 on error.
*/
int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
- u8 max_ports_per_engine,
- u8 max_phys_tcs_per_port,
- bool pf_rl_en,
- bool pf_wfq_en,
- bool vport_rl_en,
- bool vport_wfq_en,
+ u8 max_ports_per_engine,
+ u8 max_phys_tcs_per_port,
+ bool pf_rl_en,
+ bool pf_wfq_en,
+ bool vport_rl_en,
+ bool vport_wfq_en,
struct init_qm_port_params
port_params[MAX_NUM_PORTS]);
int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u8 port_id,
- u8 pf_id,
- u8 max_phys_tcs_per_port,
- bool is_first_pf,
- u32 num_pf_cids,
- u32 num_vf_cids,
- u32 num_tids,
- u16 start_pq,
- u16 num_pf_pqs,
- u16 num_vf_pqs,
- u8 start_vport,
- u8 num_vports,
- u16 pf_wfq,
- u32 pf_rl,
- struct init_qm_pq_params *pq_params,
- struct init_qm_vport_params *vport_params);
+ struct ecore_ptt *p_ptt,
+ u8 port_id,
+ u8 pf_id,
+ u8 max_phys_tcs_per_port,
+ bool is_first_pf,
+ u32 num_pf_cids,
+ u32 num_vf_cids,
+ u32 num_tids,
+ u16 start_pq,
+ u16 num_pf_pqs,
+ u16 num_vf_pqs,
+ u8 start_vport,
+ u8 num_vports,
+ u16 pf_wfq,
+ u32 pf_rl,
+ struct init_qm_pq_params *pq_params,
+ struct init_qm_vport_params *vport_params);
/**
* @brief ecore_init_pf_wfq Initializes the WFQ weight of the specified PF
*
@@ -109,7 +109,7 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
* @return 0 on success, -1 on error.
*/
int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
+ struct ecore_ptt *p_ptt,
u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq);
/**
* @brief ecore_init_vport_rl Initializes the rate limit of the specified VPORT
@@ -137,8 +137,8 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
* waiting for QM command done.
*/
bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- bool is_release_cmd,
+ struct ecore_ptt *p_ptt,
+ bool is_release_cmd,
bool is_tx_pq, u16 start_pq, u16 num_pqs);
/**
* @brief ecore_init_nig_ets - initializes the NIG ETS arbiter
@@ -152,7 +152,7 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
* requirements are ignored when is_lb is cleared.
*/
void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
+ struct ecore_ptt *p_ptt,
struct init_ets_req *req, bool is_lb);
/**
* @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs
@@ -163,8 +163,8 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
* @param req - the NIG LB RLs initialization requirements.
*/
void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct init_nig_lb_rl_req *req);
+ struct ecore_ptt *p_ptt,
+ struct init_nig_lb_rl_req *req);
/**
* @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map.
*
@@ -174,8 +174,8 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
 * @param req - required mapping from priorities to TCs.
*/
void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct init_nig_pri_tc_map_req *req);
+ struct ecore_ptt *p_ptt,
+ struct init_nig_pri_tc_map_req *req);
/**
* @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter
*
@@ -227,7 +227,7 @@ void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
/**
* @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW
*
- * @param p_ptt - ptt window used for writing the registers.
+ * @param p_ptt - ptt window used for writing the registers.
* @param vxlan_enable - vxlan enable flag.
*/
void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index e6e4c36..71bad30 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -251,9 +251,9 @@ static enum _ecore_status_t ecore_init_cmd_array(struct ecore_hwfn *p_hwfn,
b_can_dmae);
if (rc)
break;
- }
- break;
}
+ break;
+ }
case INIT_ARR_STANDARD:
size = GET_FIELD(data, INIT_ARRAY_STANDARD_HDR_SIZE);
rc = ecore_init_array_dmae(p_hwfn, p_ptt, addr,
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index bed9ea3..e4c002a 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -100,7 +100,7 @@ static enum _ecore_status_t ecore_mcp_attn_cb(struct ecore_hwfn *p_hwfn)
#define ECORE_PSWHST_ATTNETION_DISABLED_WRITE_SHIFT (0)
#define ECORE_PSWHST_ATTENTION_VF_DISABLED (0x1)
#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS (0x1)
-#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_MASK (0x1)
+#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_MASK (0x1)
#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_WR_SHIFT (0)
#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_MASK (0x1e)
#define ECORE_PSWHST_ATTENTION_INCORRECT_ACCESS_CLIENT_SHIFT (1)
@@ -1138,7 +1138,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
return;
}
- /* Check the validity of the DPC ptt. If not ack interrupts and fail */
+/* Check the validity of the DPC ptt. If not ack interrupts and fail */
if (!p_hwfn->p_dpc_ptt) {
DP_NOTICE(p_hwfn->p_dev, true, "Failed to allocate PTT\n");
ecore_sb_ack(sb_info, IGU_INT_ENABLE, 1);
@@ -1676,7 +1676,7 @@ static void ecore_int_igu_enable_attn(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t
ecore_int_igu_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
- enum ecore_int_mode int_mode)
+ enum ecore_int_mode int_mode)
{
enum _ecore_status_t rc = ECORE_SUCCESS;
u32 tmp;
@@ -2102,10 +2102,10 @@ u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
return sb_id - p_info->igu_base_sb_iov + p_info->igu_sb_cnt;
}
- DP_NOTICE(p_hwfn, true, "SB %d not in range for function\n",
- sb_id);
- return 0;
-}
+ DP_NOTICE(p_hwfn, true, "SB %d not in range for function\n",
+ sb_id);
+ return 0;
+ }
void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
{
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index 17c9521..eeec8ca 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -169,8 +169,8 @@ void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_int_alloc(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
+enum _ecore_status_t ecore_int_alloc(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
/**
* @brief ecore_int_free
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index b34a9c6..5ad4ec6 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -21,22 +21,22 @@
#define IS_PF_SRIOV(p_hwfn) (0)
#endif
#define IS_PF_SRIOV_ALLOC(p_hwfn) (!!((p_hwfn)->pf_iov_info))
-#define IS_PF_PDA(p_hwfn) 0 /* @@TBD Michalk */
+#define IS_PF_PDA(p_hwfn) 0 /* @@TBD Michalk */
/* @@@ TBD MichalK - what should this number be */
#define ECORE_MAX_VF_CHAINS_PER_PF 16
/* vport update extended feature tlvs flags */
enum ecore_iov_vport_update_flag {
- ECORE_IOV_VP_UPDATE_ACTIVATE = 0,
- ECORE_IOV_VP_UPDATE_VLAN_STRIP = 1,
- ECORE_IOV_VP_UPDATE_TX_SWITCH = 2,
- ECORE_IOV_VP_UPDATE_MCAST = 3,
- ECORE_IOV_VP_UPDATE_ACCEPT_PARAM = 4,
- ECORE_IOV_VP_UPDATE_RSS = 5,
- ECORE_IOV_VP_UPDATE_ACCEPT_ANY_VLAN = 6,
- ECORE_IOV_VP_UPDATE_SGE_TPA = 7,
- ECORE_IOV_VP_UPDATE_MAX = 8,
+ ECORE_IOV_VP_UPDATE_ACTIVATE = 0,
+ ECORE_IOV_VP_UPDATE_VLAN_STRIP = 1,
+ ECORE_IOV_VP_UPDATE_TX_SWITCH = 2,
+ ECORE_IOV_VP_UPDATE_MCAST = 3,
+ ECORE_IOV_VP_UPDATE_ACCEPT_PARAM = 4,
+ ECORE_IOV_VP_UPDATE_RSS = 5,
+ ECORE_IOV_VP_UPDATE_ACCEPT_ANY_VLAN = 6,
+ ECORE_IOV_VP_UPDATE_SGE_TPA = 7,
+ ECORE_IOV_VP_UPDATE_MAX = 8,
};
struct ecore_mcp_link_params;
@@ -67,21 +67,21 @@ struct ecore_public_vf_info {
#ifdef CONFIG_ECORE_SW_CHANNEL
/* This is SW channel related only... */
enum mbx_state {
- VF_PF_UNKNOWN_STATE = 0,
- VF_PF_WAIT_FOR_START_REQUEST = 1,
- VF_PF_WAIT_FOR_NEXT_CHUNK_OF_REQUEST = 2,
- VF_PF_REQUEST_IN_PROCESSING = 3,
- VF_PF_RESPONSE_READY = 4,
+ VF_PF_UNKNOWN_STATE = 0,
+ VF_PF_WAIT_FOR_START_REQUEST = 1,
+ VF_PF_WAIT_FOR_NEXT_CHUNK_OF_REQUEST = 2,
+ VF_PF_REQUEST_IN_PROCESSING = 3,
+ VF_PF_RESPONSE_READY = 4,
};
struct ecore_iov_sw_mbx {
- enum mbx_state mbx_state;
+ enum mbx_state mbx_state;
- u32 request_size;
- u32 request_offset;
+ u32 request_size;
+ u32 request_offset;
- u32 response_size;
- u32 response_offset;
+ u32 response_size;
+ u32 response_offset;
};
/**
@@ -93,7 +93,7 @@ struct ecore_iov_sw_mbx {
* @return struct ecore_iov_sw_mbx*
*/
struct ecore_iov_sw_mbx *ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id);
+ u16 rel_vf_id);
#endif
#ifdef CONFIG_ECORE_SRIOV
@@ -457,9 +457,9 @@ void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn *p_hwfn,
* @param p_reply_virt_size
*/
void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id,
+ u16 rel_vf_id,
void **pp_reply_virt_addr,
- u16 *p_reply_virt_size);
+ u16 *p_reply_virt_size);
/**
* @brief Validate if the given length is a valid vfpf message
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index dd53ea9..7cabdf7 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -10,24 +10,24 @@
#define __IRO_H__
/* Ystorm flow control mode. Use enum fw_flow_ctrl_mode */
-#define YSTORM_FLOW_CONTROL_MODE_OFFSET (IRO[0].base)
-#define YSTORM_FLOW_CONTROL_MODE_SIZE (IRO[0].size)
+#define YSTORM_FLOW_CONTROL_MODE_OFFSET (IRO[0].base)
+#define YSTORM_FLOW_CONTROL_MODE_SIZE (IRO[0].size)
/* Tstorm port statistics */
#define TSTORM_PORT_STAT_OFFSET(port_id) \
(IRO[1].base + ((port_id) * IRO[1].m1))
-#define TSTORM_PORT_STAT_SIZE (IRO[1].size)
+#define TSTORM_PORT_STAT_SIZE (IRO[1].size)
/* Ustorm VF-PF Channel ready flag */
#define USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) \
(IRO[3].base + ((vf_id) * IRO[3].m1))
-#define USTORM_VF_PF_CHANNEL_READY_SIZE (IRO[3].size)
+#define USTORM_VF_PF_CHANNEL_READY_SIZE (IRO[3].size)
/* Ustorm Final flr cleanup ack */
#define USTORM_FLR_FINAL_ACK_OFFSET(pf_id) \
(IRO[4].base + ((pf_id) * IRO[4].m1))
-#define USTORM_FLR_FINAL_ACK_SIZE (IRO[4].size)
+#define USTORM_FLR_FINAL_ACK_SIZE (IRO[4].size)
/* Ustorm Event ring consumer */
#define USTORM_EQE_CONS_OFFSET(pf_id) \
(IRO[5].base + ((pf_id) * IRO[5].m1))
-#define USTORM_EQE_CONS_SIZE (IRO[5].size)
+#define USTORM_EQE_CONS_SIZE (IRO[5].size)
/* Ustorm Common Queue ring consumer */
#define USTORM_COMMON_QUEUE_CONS_OFFSET(global_queue_id) \
(IRO[6].base + ((global_queue_id) * IRO[6].m1))
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index c818b58..548ad14 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -10,49 +10,49 @@
#define __IRO_VALUES_H__
static const struct iro iro_arr[44] = {
- {0x0, 0x0, 0x0, 0x0, 0x8},
+ { 0x0, 0x0, 0x0, 0x0, 0x8},
{0x4db0, 0x60, 0x0, 0x0, 0x60},
{0x6418, 0x20, 0x0, 0x0, 0x20},
{0x500, 0x8, 0x0, 0x0, 0x4},
{0x480, 0x8, 0x0, 0x0, 0x4},
- {0x0, 0x8, 0x0, 0x0, 0x2},
+ { 0x0, 0x8, 0x0, 0x0, 0x2},
{0x80, 0x8, 0x0, 0x0, 0x2},
{0x4938, 0x0, 0x0, 0x0, 0x78},
- {0x3df0, 0x0, 0x0, 0x0, 0x78},
- {0x29b0, 0x0, 0x0, 0x0, 0x78},
+ { 0x3df0, 0x0, 0x0, 0x0, 0x78},
+ { 0x29b0, 0x0, 0x0, 0x0, 0x78},
{0x4d38, 0x0, 0x0, 0x0, 0x78},
{0x56c8, 0x0, 0x0, 0x0, 0x78},
- {0x7e48, 0x0, 0x0, 0x0, 0x78},
- {0xa28, 0x8, 0x0, 0x0, 0x8},
+ { 0x7e48, 0x0, 0x0, 0x0, 0x78},
+ { 0xa28, 0x8, 0x0, 0x0, 0x8},
{0x61f8, 0x10, 0x0, 0x0, 0x10},
{0xb500, 0x30, 0x0, 0x0, 0x30},
- {0x95b8, 0x30, 0x0, 0x0, 0x30},
+ { 0x95b8, 0x30, 0x0, 0x0, 0x30},
{0x5898, 0x40, 0x0, 0x0, 0x40},
{0x1f8, 0x10, 0x0, 0x0, 0x8},
{0xa228, 0x0, 0x0, 0x0, 0x4},
- {0x8050, 0x40, 0x0, 0x0, 0x30},
+ { 0x8050, 0x40, 0x0, 0x0, 0x30},
{0xcf8, 0x8, 0x0, 0x0, 0x8},
- {0x2b48, 0x80, 0x0, 0x0, 0x38},
+ { 0x2b48, 0x80, 0x0, 0x0, 0x38},
{0xadf0, 0x0, 0x0, 0x0, 0xf0},
{0xaee0, 0x8, 0x0, 0x0, 0x8},
{0x80, 0x8, 0x0, 0x0, 0x8},
- {0xac0, 0x8, 0x0, 0x0, 0x8},
- {0x2578, 0x8, 0x0, 0x0, 0x8},
- {0x24f8, 0x8, 0x0, 0x0, 0x8},
- {0x0, 0x8, 0x0, 0x0, 0x8},
- {0x200, 0x10, 0x8, 0x0, 0x8},
+ { 0xac0, 0x8, 0x0, 0x0, 0x8},
+ { 0x2578, 0x8, 0x0, 0x0, 0x8},
+ { 0x24f8, 0x8, 0x0, 0x0, 0x8},
+ { 0x0, 0x8, 0x0, 0x0, 0x8},
+ { 0x200, 0x10, 0x8, 0x0, 0x8},
{0x17f8, 0x8, 0x0, 0x0, 0x2},
{0x19f8, 0x10, 0x8, 0x0, 0x2},
{0xd988, 0x38, 0x0, 0x0, 0x24},
{0x11040, 0x10, 0x0, 0x0, 0x8},
{0x11670, 0x38, 0x0, 0x0, 0x18},
{0xaeb8, 0x30, 0x0, 0x0, 0x10},
- {0x86f8, 0x28, 0x0, 0x0, 0x18},
+ { 0x86f8, 0x28, 0x0, 0x0, 0x18},
{0xebf8, 0x10, 0x0, 0x0, 0x10},
{0xde08, 0x40, 0x0, 0x0, 0x30},
{0x121a0, 0x38, 0x0, 0x0, 0x8},
{0xf060, 0x20, 0x0, 0x0, 0x20},
- {0x2b80, 0x80, 0x0, 0x0, 0x10},
+ { 0x2b80, 0x80, 0x0, 0x0, 0x10},
{0x50a0, 0x10, 0x0, 0x0, 0x10},
};
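
For context, this table backs the offset macros in ecore_iro.h, e.g.
TSTORM_PORT_STAT_OFFSET(port_id) expands to IRO[1].base + port_id *
IRO[1].m1, i.e. 0x4db0 + port_id * 0x60 with the values above. A hedged
sketch of reading the per-port Tstorm stats block (p_hwfn, p_ptt and
port_id assumed in scope; BAR0_MAP_REG_TSDM_RAM is the Tstorm RAM base):

/* Sketch only: copy one port's Tstorm statistics out of storm RAM. */
struct tstorm_per_port_stat tstats;
u32 addr = BAR0_MAP_REG_TSDM_RAM + TSTORM_PORT_STAT_OFFSET(port_id);

OSAL_MEMSET(&tstats, 0, sizeof(tstats));
ecore_memcpy_from(p_hwfn, p_ptt, &tstats, addr, TSTORM_PORT_STAT_SIZE);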
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 9e6ef5a..b31523b 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -234,7 +234,7 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
SET_FIELD(*state, ETH_VPORT_RX_MODE_UCAST_DROP_ALL,
!(!!(accept_filter & ECORE_ACCEPT_UCAST_MATCHED) ||
- !!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED)));
+ !!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED)));
SET_FIELD(*state, ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED,
!!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED));
@@ -429,7 +429,7 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
rc = ecore_sp_vport_update_rss(p_hwfn, p_ramrod, p_rss_params);
if (rc != ECORE_SUCCESS) {
- /* Return spq entry which is taken in ecore_sp_init_request() */
+ /* Return spq entry which is taken in ecore_sp_init_request()*/
ecore_spq_return_entry(p_hwfn, p_ent);
return rc;
}
@@ -632,7 +632,7 @@ enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
dma_addr_t bd_chain_phys_addr,
dma_addr_t cqe_pbl_addr,
u16 cqe_pbl_size,
- void OSAL_IOMEM * *pp_prod)
+ void OSAL_IOMEM **pp_prod)
{
struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
u8 abs_stats_id = 0;
@@ -788,7 +788,7 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
* In addition, VFs require the answer to come as eqe to PF.
*/
p_ramrod->complete_cqe_flg = (!!(p_rx_cid->opaque_fid ==
- p_hwfn->hw_info.opaque_fid) &&
+ p_hwfn->hw_info.opaque_fid) &&
!eq_completion_only) || cqe_completion;
p_ramrod->complete_event_flg = !(p_rx_cid->opaque_fid ==
p_hwfn->hw_info.opaque_fid) ||
@@ -876,7 +876,7 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
u8 sb_index,
dma_addr_t pbl_addr,
u16 pbl_size,
- void OSAL_IOMEM * *pp_doorbell)
+ void OSAL_IOMEM **pp_doorbell)
{
struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
union ecore_qm_pq_params pq_params;
@@ -1274,7 +1274,7 @@ ecore_sp_eth_filter_mcast(struct ecore_hwfn *p_hwfn,
u8 abs_vport_id = 0;
int i;
- rc = ecore_fw_vport(p_hwfn,
+ rc = ecore_fw_vport(p_hwfn,
(p_filter_cmd->opcode == ECORE_FILTER_ADD) ?
p_filter_cmd->vport_to_add_to :
p_filter_cmd->vport_to_remove_from, &abs_vport_id);
@@ -1306,9 +1306,9 @@ ecore_sp_eth_filter_mcast(struct ecore_hwfn *p_hwfn,
ETH_MULTICAST_MAC_BINS_IN_REGS);
if (p_filter_cmd->opcode == ECORE_FILTER_ADD) {
- /* filter ADD op is explicit set op and it removes
- * any existing filters for the vport.
- */
+ /* filter ADD op is explicit set op and it removes
+ * any existing filters for the vport.
+ */
for (i = 0; i < p_filter_cmd->num_mc_addrs; i++) {
u32 bit;
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index b0850ca..5594a08 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -103,7 +103,7 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
* @return enum _ecore_status_t
*/
enum _ecore_status_t
-ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
+ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
u16 opaque_fid,
u32 cid,
u16 rx_queue_id,
@@ -134,7 +134,7 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
* @return enum _ecore_status_t
*/
enum _ecore_status_t
-ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
+ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
u16 opaque_fid,
u16 tx_queue_id,
u32 cid,
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index b41dd7f..ab9aca0 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -14,17 +14,17 @@
#ifndef __EXTRACT__LINUX__
enum ecore_rss_caps {
- ECORE_RSS_IPV4 = 0x1,
- ECORE_RSS_IPV6 = 0x2,
- ECORE_RSS_IPV4_TCP = 0x4,
- ECORE_RSS_IPV6_TCP = 0x8,
- ECORE_RSS_IPV4_UDP = 0x10,
- ECORE_RSS_IPV6_UDP = 0x20,
+ ECORE_RSS_IPV4 = 0x1,
+ ECORE_RSS_IPV6 = 0x2,
+ ECORE_RSS_IPV4_TCP = 0x4,
+ ECORE_RSS_IPV6_TCP = 0x8,
+ ECORE_RSS_IPV4_UDP = 0x10,
+ ECORE_RSS_IPV6_UDP = 0x20,
};
/* Should be the same as ETH_RSS_IND_TABLE_ENTRIES_NUM */
#define ECORE_RSS_IND_TABLE_SIZE 128
-#define ECORE_RSS_KEY_SIZE 10 /* size in 32b chunks */
+#define ECORE_RSS_KEY_SIZE 10 /* size in 32b chunks */
#endif
struct ecore_rss_params {
@@ -35,7 +35,7 @@ struct ecore_rss_params {
u8 update_rss_ind_table;
u8 update_rss_key;
u8 rss_caps;
- u8 rss_table_size_log; /* The table size is 2 ^ rss_table_size_log */
+ u8 rss_table_size_log; /* The table size is 2 ^ rss_table_size_log */
u16 rss_ind_table[ECORE_RSS_IND_TABLE_SIZE];
u32 rss_key[ECORE_RSS_KEY_SIZE];
};
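
For context, rss_table_size_log of 7 selects the full 2 ^ 7 = 128 entry
table, matching ECORE_RSS_IND_TABLE_SIZE. A minimal sketch of filling
the structure for a vport update (num_rx_queues is an assumed local;
update_rss_config/rss_enable are the companion flags of this struct):

/* Sketch only: spread a 128-entry indirection table over Rx queues. */
struct ecore_rss_params rss = { 0 };
int i;

rss.update_rss_config = 1;
rss.rss_enable = 1;
rss.update_rss_ind_table = 1;
rss.update_rss_key = 1;
rss.rss_caps = ECORE_RSS_IPV4 | ECORE_RSS_IPV4_TCP;
rss.rss_table_size_log = 7;	/* table size = 2 ^ 7 = 128 */
for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
	rss.rss_ind_table[i] = i % num_rx_queues;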
@@ -63,8 +63,8 @@ enum ecore_filter_opcode {
ECORE_FILTER_ADD,
ECORE_FILTER_REMOVE,
ECORE_FILTER_MOVE,
- ECORE_FILTER_REPLACE, /* Delete all MACs and add new one instead */
- ECORE_FILTER_FLUSH, /* Removes all filters */
+ ECORE_FILTER_REPLACE, /* Delete all MACs and add new one instead */
+ ECORE_FILTER_FLUSH, /* Removes all filters */
};
enum ecore_filter_ucast_type {
@@ -97,7 +97,7 @@ struct ecore_filter_mcast {
enum ecore_filter_opcode opcode;
u8 vport_to_add_to;
u8 vport_to_remove_from;
- u8 num_mc_addrs;
+ u8 num_mc_addrs;
#define ECORE_MAX_MC_ADDRS 64
unsigned char mac[ECORE_MAX_MC_ADDRS][ETH_ALEN];
};
@@ -138,12 +138,12 @@ ecore_filter_mcast_cmd(struct ecore_dev *p_dev,
/* Set "accept" filters */
enum _ecore_status_t
ecore_filter_accept_cmd(struct ecore_dev *p_dev,
- u8 vport,
- struct ecore_filter_accept_flags accept_flags,
- u8 update_accept_any_vlan,
- u8 accept_any_vlan,
- enum spq_mode comp_mode,
- struct ecore_spq_comp_cb *p_comp_data);
+ u8 vport,
+ struct ecore_filter_accept_flags accept_flags,
+ u8 update_accept_any_vlan,
+ u8 accept_any_vlan,
+ enum spq_mode comp_mode,
+ struct ecore_spq_comp_cb *p_comp_data);
/**
* @brief ecore_sp_eth_rx_queue_start - RX Queue Start Ramrod
@@ -156,11 +156,11 @@ ecore_filter_accept_cmd(struct ecore_dev *p_dev,
* @param rx_queue_id RX Queue ID: Zero based, per VPort, allocated
* by assignment (=rssId)
* @param vport_id VPort ID
- * @param u8 stats_id VPort ID which the queue stats
+ * @param u8 stats_id VPort ID which the queue stats
* will be added to
* @param sb Status Block of the Function Event Ring
* @param sb_index Index into the status block of the
- * Function Event Ring
+ * Function Event Ring
* @param bd_max_bytes Maximum bytes that can be placed on a BD
* @param bd_chain_phys_addr Physical address of BDs for receive.
* @param cqe_pbl_addr Physical address of the CQE PBL Table.
@@ -182,7 +182,7 @@ enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
dma_addr_t bd_chain_phys_addr,
dma_addr_t cqe_pbl_addr,
u16 cqe_pbl_size,
- void OSAL_IOMEM * *pp_prod);
+ void OSAL_IOMEM **pp_prod);
/**
* @brief ecore_sp_eth_rx_queue_stop -
@@ -224,7 +224,7 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
* @param pbl_addr address of the pbl array
* @param pbl_size number of entries in pbl
* @param pp_doorbell Pointer to place doorbell pointer (May be NULL).
- * This address should be used with the
+ * This address should be used with the
* DIRECT_REG_WR macro.
*
* @return enum _ecore_status_t
@@ -255,7 +255,7 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
u16 tx_queue_id);
-enum ecore_tpa_mode {
+enum ecore_tpa_mode {
ECORE_TPA_MODE_NONE,
ECORE_TPA_MODE_RSC,
ECORE_TPA_MODE_GRO,
@@ -293,28 +293,28 @@ ecore_sp_vport_start(struct ecore_hwfn *p_hwfn,
struct ecore_sp_vport_start_params *p_params);
struct ecore_sp_vport_update_params {
- u16 opaque_fid;
- u8 vport_id;
- u8 update_vport_active_rx_flg;
- u8 vport_active_rx_flg;
- u8 update_vport_active_tx_flg;
- u8 vport_active_tx_flg;
- u8 update_inner_vlan_removal_flg;
- u8 inner_vlan_removal_flg;
- u8 silent_vlan_removal_flg;
- u8 update_default_vlan_enable_flg;
- u8 default_vlan_enable_flg;
- u8 update_default_vlan_flg;
- u16 default_vlan;
- u8 update_tx_switching_flg;
- u8 tx_switching_flg;
- u8 update_approx_mcast_flg;
- u8 update_anti_spoofing_en_flg;
- u8 anti_spoofing_en;
- u8 update_accept_any_vlan_flg;
- u8 accept_any_vlan;
- unsigned long bins[8];
- struct ecore_rss_params *rss_params;
+ u16 opaque_fid;
+ u8 vport_id;
+ u8 update_vport_active_rx_flg;
+ u8 vport_active_rx_flg;
+ u8 update_vport_active_tx_flg;
+ u8 vport_active_tx_flg;
+ u8 update_inner_vlan_removal_flg;
+ u8 inner_vlan_removal_flg;
+ u8 silent_vlan_removal_flg;
+ u8 update_default_vlan_enable_flg;
+ u8 default_vlan_enable_flg;
+ u8 update_default_vlan_flg;
+ u16 default_vlan;
+ u8 update_tx_switching_flg;
+ u8 tx_switching_flg;
+ u8 update_approx_mcast_flg;
+ u8 update_anti_spoofing_en_flg;
+ u8 anti_spoofing_en;
+ u8 update_accept_any_vlan_flg;
+ u8 accept_any_vlan;
+ unsigned long bins[8];
+ struct ecore_rss_params *rss_params;
struct ecore_filter_accept_flags accept_flags;
struct ecore_sge_tpa_params *sge_tpa_params;
};
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 9dd2eed..2823113 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -333,7 +333,7 @@ enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
}
enum _ecore_status_t ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
+ struct ecore_ptt *p_ptt,
u32 cmd, u32 param,
union drv_union_data *p_union_data,
u32 *o_mcp_resp,
@@ -354,18 +354,18 @@ enum _ecore_status_t ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
OSAL_SPIN_LOCK(&p_hwfn->mcp_info->lock);
if (p_union_data != OSAL_NULL) {
- union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
- OFFSETOF(struct public_drv_mb, union_data);
+ union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
+ OFFSETOF(struct public_drv_mb, union_data);
ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr, p_union_data,
sizeof(*p_union_data));
- }
+}
rc = ecore_do_mcp_cmd(p_hwfn, p_ptt, cmd, param, o_mcp_resp,
o_mcp_param);
OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
- return rc;
+ return rc;
}
enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
@@ -577,7 +577,7 @@ static void ecore_mcp_handle_transceiver_change(struct ecore_hwfn *p_hwfn,
DP_VERBOSE(p_hwfn, (ECORE_MSG_HW | ECORE_MSG_SP),
"Received transceiver state update [0x%08x] from mfw"
- "[Addr 0x%x]\n",
+ " [Addr 0x%x]\n",
transceiver_state, (u32)(p_hwfn->mcp_info->port_addr +
OFFSETOF(struct public_port,
transceiver_data)));
@@ -661,18 +661,18 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
if (p_hwfn->mcp_info->func_info.bandwidth_max && p_link->speed) {
u8 max_bw = p_hwfn->mcp_info->func_info.bandwidth_max;
- __ecore_configure_pf_max_bandwidth(p_hwfn, p_ptt,
- p_link, max_bw);
+ __ecore_configure_pf_max_bandwidth(p_hwfn, p_ptt,
+ p_link, max_bw);
}
if (p_hwfn->mcp_info->func_info.bandwidth_min && p_link->speed) {
u8 min_bw = p_hwfn->mcp_info->func_info.bandwidth_min;
- __ecore_configure_pf_min_bandwidth(p_hwfn, p_ptt,
- p_link, min_bw);
+ __ecore_configure_pf_min_bandwidth(p_hwfn, p_ptt,
+ p_link, min_bw);
- ecore_configure_vp_wfq_on_link_change(p_hwfn->p_dev,
- p_link->min_pf_rate);
+ ecore_configure_vp_wfq_on_link_change(p_hwfn->p_dev,
+ p_link->min_pf_rate);
}
p_link->an = !!(status & LINK_STATUS_AUTO_NEGOTIATE_ENABLED);
@@ -1090,8 +1090,8 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_dev *p_dev,
DP_VERBOSE(p_dev, ECORE_MSG_IOV,
"VF requested MFW vers prior to ACQUIRE\n");
- return ECORE_INVAL;
- }
+ return ECORE_INVAL;
+ }
global_offsize = ecore_rd(p_hwfn, p_ptt,
SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 448c30b..7af4349 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -38,10 +38,10 @@ struct ecore_mcp_info {
u32 port_addr; /* Address of the port configuration (link) */
u16 drv_mb_seq; /* Current driver mailbox sequence */
u16 drv_pulse_seq; /* Current driver pulse sequence */
- struct ecore_mcp_link_params link_input;
- struct ecore_mcp_link_state link_output;
+ struct ecore_mcp_link_params link_input;
+ struct ecore_mcp_link_state link_output;
struct ecore_mcp_link_capabilities link_capabilities;
- struct ecore_mcp_function_info func_info;
+ struct ecore_mcp_function_info func_info;
u8 *mfw_mb_cur;
u8 *mfw_mb_shadow;
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 7360b35..530c0ec 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -13,8 +13,8 @@
struct ecore_mcp_link_speed_params {
bool autoneg;
- u32 advertised_speeds; /* bitmask of DRV_SPEED_CAPABILITY */
- u32 forced_speed; /* In Mb/s */
+ u32 advertised_speeds; /* bitmask of DRV_SPEED_CAPABILITY */
+ u32 forced_speed; /* In Mb/s */
};
struct ecore_mcp_link_pause_params {
@@ -26,7 +26,7 @@ struct ecore_mcp_link_pause_params {
struct ecore_mcp_link_params {
struct ecore_mcp_link_speed_params speed;
struct ecore_mcp_link_pause_params pause;
- u32 loopback_mode; /* in PMM_LOOPBACK values */
+ u32 loopback_mode; /* in PMM_LOOPBACK values */
};
struct ecore_mcp_link_capabilities {
@@ -36,9 +36,9 @@ struct ecore_mcp_link_capabilities {
struct ecore_mcp_link_state {
bool link_up;
- u32 line_speed; /* In Mb/s */
- u32 min_pf_rate; /* In Mb/s */
- u32 speed; /* In Mb/s */
+ u32 line_speed; /* In Mb/s */
+ u32 min_pf_rate; /* In Mb/s */
+ u32 speed; /* In Mb/s */
bool full_duplex;
bool an;
@@ -237,7 +237,7 @@ enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_dev *p_dev,
* ECORE_BUSY - Operation failed
*/
enum _ecore_status_t ecore_mcp_get_media_type(struct ecore_dev *p_dev,
- u32 *media_type);
+ u32 *media_type);
/**
* @brief - Sends a command to the MCP mailbox.
@@ -542,7 +542,7 @@ enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
* @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
*/
enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
- u8 *p_buf, u32 len);
+ u8 *p_buf, u32 len);
/**
* @brief Read from sfp
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index 2fecbc8..bcbd9f0 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -18,11 +18,11 @@ struct ecore_eth_pf_params {
* and these parameters need to be passed as arguments
* to update_pf_params routine invoked before slowpath start
*/
- u16 num_cons;
+ u16 num_cons;
};
struct ecore_pf_params {
- struct ecore_eth_pf_params eth_pf_params;
+ struct ecore_eth_pf_params eth_pf_params;
};
#endif
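
For context, a minimal sketch of how an L2 driver would size its
connections before slowpath start (the 64 is an arbitrary example
count; the struct is then handed to the update-pf-params hook mentioned
in the comment above):

/* Sketch only: request 64 L2 connections for this PF. */
struct ecore_pf_params pf_params;

OSAL_MEMSET(&pf_params, 0, sizeof(pf_params));
pf_params.eth_pf_params.num_cons = 64;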
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index 1f5139e..cc8a8ed 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -10,93 +10,93 @@
#define __RT_DEFS_H__
/* Runtime array offsets */
-#define DORQ_REG_PF_MAX_ICID_0_RT_OFFSET 0
-#define DORQ_REG_PF_MAX_ICID_1_RT_OFFSET 1
-#define DORQ_REG_PF_MAX_ICID_2_RT_OFFSET 2
-#define DORQ_REG_PF_MAX_ICID_3_RT_OFFSET 3
-#define DORQ_REG_PF_MAX_ICID_4_RT_OFFSET 4
-#define DORQ_REG_PF_MAX_ICID_5_RT_OFFSET 5
-#define DORQ_REG_PF_MAX_ICID_6_RT_OFFSET 6
-#define DORQ_REG_PF_MAX_ICID_7_RT_OFFSET 7
-#define DORQ_REG_VF_MAX_ICID_0_RT_OFFSET 8
-#define DORQ_REG_VF_MAX_ICID_1_RT_OFFSET 9
-#define DORQ_REG_VF_MAX_ICID_2_RT_OFFSET 10
-#define DORQ_REG_VF_MAX_ICID_3_RT_OFFSET 11
-#define DORQ_REG_VF_MAX_ICID_4_RT_OFFSET 12
-#define DORQ_REG_VF_MAX_ICID_5_RT_OFFSET 13
-#define DORQ_REG_VF_MAX_ICID_6_RT_OFFSET 14
-#define DORQ_REG_VF_MAX_ICID_7_RT_OFFSET 15
-#define DORQ_REG_PF_WAKE_ALL_RT_OFFSET 16
-#define DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET 17
-#define IGU_REG_PF_CONFIGURATION_RT_OFFSET 18
-#define IGU_REG_VF_CONFIGURATION_RT_OFFSET 19
-#define IGU_REG_ATTN_MSG_ADDR_L_RT_OFFSET 20
-#define IGU_REG_ATTN_MSG_ADDR_H_RT_OFFSET 21
-#define IGU_REG_LEADING_EDGE_LATCH_RT_OFFSET 22
-#define IGU_REG_TRAILING_EDGE_LATCH_RT_OFFSET 23
-#define CAU_REG_CQE_AGG_UNIT_SIZE_RT_OFFSET 24
-#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET 761
-#define CAU_REG_SB_VAR_MEMORY_RT_SIZE 736
-#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET 761
-#define CAU_REG_SB_VAR_MEMORY_RT_SIZE 736
-#define CAU_REG_SB_ADDR_MEMORY_RT_OFFSET 1497
-#define CAU_REG_SB_ADDR_MEMORY_RT_SIZE 736
-#define CAU_REG_PI_MEMORY_RT_OFFSET 2233
-#define CAU_REG_PI_MEMORY_RT_SIZE 4416
-#define PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET 6649
-#define PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET 6650
-#define PRS_REG_TASK_ID_MAX_INITIATOR_VF_RT_OFFSET 6651
-#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET 6652
-#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET 6653
-#define PRS_REG_SEARCH_TCP_RT_OFFSET 6654
-#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET 6659
-#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET 6660
-#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET 6661
-#define PRS_REG_OPENFLOW_SEARCH_KEY_MASK_RT_OFFSET 6662
-#define PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET 6663
-#define PRS_REG_LIGHT_L2_ETHERTYPE_EN_RT_OFFSET 6664
-#define SRC_REG_FIRSTFREE_RT_OFFSET 6665
-#define SRC_REG_FIRSTFREE_RT_SIZE 2
-#define SRC_REG_LASTFREE_RT_OFFSET 6667
-#define SRC_REG_LASTFREE_RT_SIZE 2
-#define SRC_REG_COUNTFREE_RT_OFFSET 6669
-#define SRC_REG_NUMBER_HASH_BITS_RT_OFFSET 6670
-#define PSWRQ2_REG_CDUT_P_SIZE_RT_OFFSET 6671
-#define PSWRQ2_REG_CDUC_P_SIZE_RT_OFFSET 6672
-#define PSWRQ2_REG_TM_P_SIZE_RT_OFFSET 6673
-#define PSWRQ2_REG_QM_P_SIZE_RT_OFFSET 6674
-#define PSWRQ2_REG_SRC_P_SIZE_RT_OFFSET 6675
-#define PSWRQ2_REG_TSDM_P_SIZE_RT_OFFSET 6676
-#define PSWRQ2_REG_TM_FIRST_ILT_RT_OFFSET 6677
-#define PSWRQ2_REG_TM_LAST_ILT_RT_OFFSET 6678
-#define PSWRQ2_REG_QM_FIRST_ILT_RT_OFFSET 6679
-#define PSWRQ2_REG_QM_LAST_ILT_RT_OFFSET 6680
-#define PSWRQ2_REG_SRC_FIRST_ILT_RT_OFFSET 6681
-#define PSWRQ2_REG_SRC_LAST_ILT_RT_OFFSET 6682
-#define PSWRQ2_REG_CDUC_FIRST_ILT_RT_OFFSET 6683
-#define PSWRQ2_REG_CDUC_LAST_ILT_RT_OFFSET 6684
-#define PSWRQ2_REG_CDUT_FIRST_ILT_RT_OFFSET 6685
-#define PSWRQ2_REG_CDUT_LAST_ILT_RT_OFFSET 6686
-#define PSWRQ2_REG_TSDM_FIRST_ILT_RT_OFFSET 6687
-#define PSWRQ2_REG_TSDM_LAST_ILT_RT_OFFSET 6688
-#define PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET 6689
-#define PSWRQ2_REG_CDUT_NUMBER_OF_PF_BLOCKS_RT_OFFSET 6690
-#define PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET 6691
-#define PSWRQ2_REG_TM_VF_BLOCKS_RT_OFFSET 6692
-#define PSWRQ2_REG_CDUT_VF_BLOCKS_RT_OFFSET 6693
-#define PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET 6694
-#define PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET 6695
-#define PSWRQ2_REG_CDUT_BLOCKS_FACTOR_RT_OFFSET 6696
-#define PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET 6697
-#define PSWRQ2_REG_VF_BASE_RT_OFFSET 6698
-#define PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET 6699
-#define PSWRQ2_REG_WR_MBS0_RT_OFFSET 6700
-#define PSWRQ2_REG_RD_MBS0_RT_OFFSET 6701
-#define PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET 6702
-#define PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET 6703
-#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET 6704
-#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE 22000
-#define PGLUE_REG_B_VF_BASE_RT_OFFSET 28704
+#define DORQ_REG_PF_MAX_ICID_0_RT_OFFSET 0
+#define DORQ_REG_PF_MAX_ICID_1_RT_OFFSET 1
+#define DORQ_REG_PF_MAX_ICID_2_RT_OFFSET 2
+#define DORQ_REG_PF_MAX_ICID_3_RT_OFFSET 3
+#define DORQ_REG_PF_MAX_ICID_4_RT_OFFSET 4
+#define DORQ_REG_PF_MAX_ICID_5_RT_OFFSET 5
+#define DORQ_REG_PF_MAX_ICID_6_RT_OFFSET 6
+#define DORQ_REG_PF_MAX_ICID_7_RT_OFFSET 7
+#define DORQ_REG_VF_MAX_ICID_0_RT_OFFSET 8
+#define DORQ_REG_VF_MAX_ICID_1_RT_OFFSET 9
+#define DORQ_REG_VF_MAX_ICID_2_RT_OFFSET 10
+#define DORQ_REG_VF_MAX_ICID_3_RT_OFFSET 11
+#define DORQ_REG_VF_MAX_ICID_4_RT_OFFSET 12
+#define DORQ_REG_VF_MAX_ICID_5_RT_OFFSET 13
+#define DORQ_REG_VF_MAX_ICID_6_RT_OFFSET 14
+#define DORQ_REG_VF_MAX_ICID_7_RT_OFFSET 15
+#define DORQ_REG_PF_WAKE_ALL_RT_OFFSET 16
+#define DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET 17
+#define IGU_REG_PF_CONFIGURATION_RT_OFFSET 18
+#define IGU_REG_VF_CONFIGURATION_RT_OFFSET 19
+#define IGU_REG_ATTN_MSG_ADDR_L_RT_OFFSET 20
+#define IGU_REG_ATTN_MSG_ADDR_H_RT_OFFSET 21
+#define IGU_REG_LEADING_EDGE_LATCH_RT_OFFSET 22
+#define IGU_REG_TRAILING_EDGE_LATCH_RT_OFFSET 23
+#define CAU_REG_CQE_AGG_UNIT_SIZE_RT_OFFSET 24
+#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET 761
+#define CAU_REG_SB_VAR_MEMORY_RT_SIZE 736
+#define CAU_REG_SB_VAR_MEMORY_RT_OFFSET 761
+#define CAU_REG_SB_VAR_MEMORY_RT_SIZE 736
+#define CAU_REG_SB_ADDR_MEMORY_RT_OFFSET 1497
+#define CAU_REG_SB_ADDR_MEMORY_RT_SIZE 736
+#define CAU_REG_PI_MEMORY_RT_OFFSET 2233
+#define CAU_REG_PI_MEMORY_RT_SIZE 4416
+#define PRS_REG_SEARCH_RESP_INITIATOR_TYPE_RT_OFFSET 6649
+#define PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET 6650
+#define PRS_REG_TASK_ID_MAX_INITIATOR_VF_RT_OFFSET 6651
+#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET 6652
+#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET 6653
+#define PRS_REG_SEARCH_TCP_RT_OFFSET 6654
+#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET 6659
+#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET 6660
+#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET 6661
+#define PRS_REG_OPENFLOW_SEARCH_KEY_MASK_RT_OFFSET 6662
+#define PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET 6663
+#define PRS_REG_LIGHT_L2_ETHERTYPE_EN_RT_OFFSET 6664
+#define SRC_REG_FIRSTFREE_RT_OFFSET 6665
+#define SRC_REG_FIRSTFREE_RT_SIZE 2
+#define SRC_REG_LASTFREE_RT_OFFSET 6667
+#define SRC_REG_LASTFREE_RT_SIZE 2
+#define SRC_REG_COUNTFREE_RT_OFFSET 6669
+#define SRC_REG_NUMBER_HASH_BITS_RT_OFFSET 6670
+#define PSWRQ2_REG_CDUT_P_SIZE_RT_OFFSET 6671
+#define PSWRQ2_REG_CDUC_P_SIZE_RT_OFFSET 6672
+#define PSWRQ2_REG_TM_P_SIZE_RT_OFFSET 6673
+#define PSWRQ2_REG_QM_P_SIZE_RT_OFFSET 6674
+#define PSWRQ2_REG_SRC_P_SIZE_RT_OFFSET 6675
+#define PSWRQ2_REG_TSDM_P_SIZE_RT_OFFSET 6676
+#define PSWRQ2_REG_TM_FIRST_ILT_RT_OFFSET 6677
+#define PSWRQ2_REG_TM_LAST_ILT_RT_OFFSET 6678
+#define PSWRQ2_REG_QM_FIRST_ILT_RT_OFFSET 6679
+#define PSWRQ2_REG_QM_LAST_ILT_RT_OFFSET 6680
+#define PSWRQ2_REG_SRC_FIRST_ILT_RT_OFFSET 6681
+#define PSWRQ2_REG_SRC_LAST_ILT_RT_OFFSET 6682
+#define PSWRQ2_REG_CDUC_FIRST_ILT_RT_OFFSET 6683
+#define PSWRQ2_REG_CDUC_LAST_ILT_RT_OFFSET 6684
+#define PSWRQ2_REG_CDUT_FIRST_ILT_RT_OFFSET 6685
+#define PSWRQ2_REG_CDUT_LAST_ILT_RT_OFFSET 6686
+#define PSWRQ2_REG_TSDM_FIRST_ILT_RT_OFFSET 6687
+#define PSWRQ2_REG_TSDM_LAST_ILT_RT_OFFSET 6688
+#define PSWRQ2_REG_TM_NUMBER_OF_PF_BLOCKS_RT_OFFSET 6689
+#define PSWRQ2_REG_CDUT_NUMBER_OF_PF_BLOCKS_RT_OFFSET 6690
+#define PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET 6691
+#define PSWRQ2_REG_TM_VF_BLOCKS_RT_OFFSET 6692
+#define PSWRQ2_REG_CDUT_VF_BLOCKS_RT_OFFSET 6693
+#define PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET 6694
+#define PSWRQ2_REG_TM_BLOCKS_FACTOR_RT_OFFSET 6695
+#define PSWRQ2_REG_CDUT_BLOCKS_FACTOR_RT_OFFSET 6696
+#define PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET 6697
+#define PSWRQ2_REG_VF_BASE_RT_OFFSET 6698
+#define PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET 6699
+#define PSWRQ2_REG_WR_MBS0_RT_OFFSET 6700
+#define PSWRQ2_REG_RD_MBS0_RT_OFFSET 6701
+#define PSWRQ2_REG_DRAM_ALIGN_WR_RT_OFFSET 6702
+#define PSWRQ2_REG_DRAM_ALIGN_RD_RT_OFFSET 6703
+#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET 6704
+#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE 22000
+#define PGLUE_REG_B_VF_BASE_RT_OFFSET 28704
#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET 28705
#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET 28706
#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET 28707
@@ -107,9 +107,9 @@
#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET 28712
#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET 28713
#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET 28714
-#define TM_REG_CONFIG_CONN_MEM_RT_SIZE 416
+#define TM_REG_CONFIG_CONN_MEM_RT_SIZE 416
#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET 29130
-#define TM_REG_CONFIG_TASK_MEM_RT_SIZE 512
+#define TM_REG_CONFIG_TASK_MEM_RT_SIZE 512
#define QM_REG_MAXPQSIZE_0_RT_OFFSET 29642
#define QM_REG_MAXPQSIZE_1_RT_OFFSET 29643
#define QM_REG_MAXPQSIZE_2_RT_OFFSET 29644
@@ -178,11 +178,11 @@
#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET 29707
#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET 29708
#define QM_REG_BASEADDROTHERPQ_RT_OFFSET 29709
-#define QM_REG_BASEADDROTHERPQ_RT_SIZE 128
+#define QM_REG_BASEADDROTHERPQ_RT_SIZE 128
#define QM_REG_VOQCRDLINE_RT_OFFSET 29837
-#define QM_REG_VOQCRDLINE_RT_SIZE 20
+#define QM_REG_VOQCRDLINE_RT_SIZE 20
#define QM_REG_VOQINITCRDLINE_RT_OFFSET 29857
-#define QM_REG_VOQINITCRDLINE_RT_SIZE 20
+#define QM_REG_VOQINITCRDLINE_RT_SIZE 20
#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET 29877
#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET 29878
#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET 29879
@@ -303,42 +303,42 @@
#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET 29994
#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET 29995
#define QM_REG_RLGLBLINCVAL_RT_OFFSET 29996
-#define QM_REG_RLGLBLINCVAL_RT_SIZE 256
+#define QM_REG_RLGLBLINCVAL_RT_SIZE 256
#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET 30252
-#define QM_REG_RLGLBLUPPERBOUND_RT_SIZE 256
+#define QM_REG_RLGLBLUPPERBOUND_RT_SIZE 256
#define QM_REG_RLGLBLCRD_RT_OFFSET 30508
-#define QM_REG_RLGLBLCRD_RT_SIZE 256
+#define QM_REG_RLGLBLCRD_RT_SIZE 256
#define QM_REG_RLGLBLENABLE_RT_OFFSET 30764
#define QM_REG_RLPFPERIOD_RT_OFFSET 30765
#define QM_REG_RLPFPERIODTIMER_RT_OFFSET 30766
#define QM_REG_RLPFINCVAL_RT_OFFSET 30767
-#define QM_REG_RLPFINCVAL_RT_SIZE 16
+#define QM_REG_RLPFINCVAL_RT_SIZE 16
#define QM_REG_RLPFUPPERBOUND_RT_OFFSET 30783
-#define QM_REG_RLPFUPPERBOUND_RT_SIZE 16
+#define QM_REG_RLPFUPPERBOUND_RT_SIZE 16
#define QM_REG_RLPFCRD_RT_OFFSET 30799
-#define QM_REG_RLPFCRD_RT_SIZE 16
+#define QM_REG_RLPFCRD_RT_SIZE 16
#define QM_REG_RLPFENABLE_RT_OFFSET 30815
#define QM_REG_RLPFVOQENABLE_RT_OFFSET 30816
#define QM_REG_WFQPFWEIGHT_RT_OFFSET 30817
-#define QM_REG_WFQPFWEIGHT_RT_SIZE 16
+#define QM_REG_WFQPFWEIGHT_RT_SIZE 16
#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET 30833
-#define QM_REG_WFQPFUPPERBOUND_RT_SIZE 16
+#define QM_REG_WFQPFUPPERBOUND_RT_SIZE 16
#define QM_REG_WFQPFCRD_RT_OFFSET 30849
-#define QM_REG_WFQPFCRD_RT_SIZE 160
+#define QM_REG_WFQPFCRD_RT_SIZE 160
#define QM_REG_WFQPFENABLE_RT_OFFSET 31009
#define QM_REG_WFQVPENABLE_RT_OFFSET 31010
#define QM_REG_BASEADDRTXPQ_RT_OFFSET 31011
-#define QM_REG_BASEADDRTXPQ_RT_SIZE 512
+#define QM_REG_BASEADDRTXPQ_RT_SIZE 512
#define QM_REG_TXPQMAP_RT_OFFSET 31523
-#define QM_REG_TXPQMAP_RT_SIZE 512
+#define QM_REG_TXPQMAP_RT_SIZE 512
#define QM_REG_WFQVPWEIGHT_RT_OFFSET 32035
-#define QM_REG_WFQVPWEIGHT_RT_SIZE 512
+#define QM_REG_WFQVPWEIGHT_RT_SIZE 512
#define QM_REG_WFQVPCRD_RT_OFFSET 32547
-#define QM_REG_WFQVPCRD_RT_SIZE 512
+#define QM_REG_WFQVPCRD_RT_SIZE 512
#define QM_REG_WFQVPMAP_RT_OFFSET 33059
-#define QM_REG_WFQVPMAP_RT_SIZE 512
+#define QM_REG_WFQVPMAP_RT_SIZE 512
#define QM_REG_WFQPFCRD_MSB_RT_OFFSET 33571
-#define QM_REG_WFQPFCRD_MSB_RT_SIZE 160
+#define QM_REG_WFQPFCRD_MSB_RT_SIZE 160
#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET 33731
#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET 33732
#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET 33733
@@ -347,22 +347,22 @@
#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET 33736
#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET 33737
#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET 33738
-#define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE 4
+#define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE 4
#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET 33742
-#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_SIZE 4
+#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_SIZE 4
#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET 33746
-#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE 4
+#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE 4
#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET 33750
#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET 33751
-#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE 32
+#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE 32
#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET 33783
-#define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE 16
+#define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE 16
#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET 33799
-#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE 16
+#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE 16
#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET 33815
-#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE 16
+#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE 16
#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET 33831
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE 16
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE 16
#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET 33847
#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET 33848
#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET 33849
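
For context, these *_RT_OFFSET values index the runtime init array
rather than raw BAR addresses; values staged there are flushed to the
chip during the init sequence. A one-line sketch (p_hwfn assumed in
scope):

/* Sketch only: stage a runtime value at its array offset. */
STORE_RT_REG(p_hwfn, DORQ_REG_PF_WAKE_ALL_RT_OFFSET, 0x1);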
diff --git a/drivers/net/qede/base/ecore_sp_api.h b/drivers/net/qede/base/ecore_sp_api.h
index e80f5ef..71e2359 100644
--- a/drivers/net/qede/base/ecore_sp_api.h
+++ b/drivers/net/qede/base/ecore_sp_api.h
@@ -12,9 +12,9 @@
#include "ecore_status.h"
enum spq_mode {
- ECORE_SPQ_MODE_BLOCK, /* Client will poll a designated mem. address */
- ECORE_SPQ_MODE_CB, /* Client supplies a callback */
- ECORE_SPQ_MODE_EBLOCK, /* ECORE should block until completion */
+ ECORE_SPQ_MODE_BLOCK, /* Client will poll a designated mem. address */
+ ECORE_SPQ_MODE_CB, /* Client supplies a callback */
+ ECORE_SPQ_MODE_EBLOCK, /* ECORE should block until completion */
};
struct ecore_hwfn;
@@ -22,9 +22,9 @@ union event_ring_data;
struct eth_slow_path_rx_cqe;
struct ecore_spq_comp_cb {
- void (*function)(struct ecore_hwfn *,
+ void (*function)(struct ecore_hwfn *,
void *, union event_ring_data *, u8 fw_return_code);
- void *cookie;
+ void *cookie;
};
/**
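
For context, ECORE_SPQ_MODE_CB requests carry the callback structure
above. A minimal sketch (example_done() is a made-up handler):

/* Sketch only: a callback-mode completion for a slowpath request. */
static void example_done(struct ecore_hwfn *p_hwfn, void *cookie,
			 union event_ring_data *data, u8 fw_return_code)
{
	/* consume the completion; cookie identifies the request */
}

struct ecore_spq_comp_cb cb = {
	.function = example_done,
	.cookie = OSAL_NULL,
};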
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index e9ac898..e150415 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -391,7 +391,7 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
break;
default:
DP_NOTICE(p_hwfn, true, "Unknown personality %d\n",
- p_hwfn->hw_info.personality);
+ p_hwfn->hw_info.personality);
p_ramrod->personality = PERSONALITY_ETH;
}
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index e281ab0..22c7462 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -21,12 +21,12 @@ struct ecore_sp_init_data {
 * e.g., in IOV scenarios. CID might differ between SPQ and
* other elements.
*/
- u32 cid;
- u16 opaque_fid;
+ u32 cid;
+ u16 opaque_fid;
/* Information regarding operation upon sending & completion */
- enum spq_mode comp_mode;
- struct ecore_spq_comp_cb *p_comp_data;
+ enum spq_mode comp_mode;
+ struct ecore_spq_comp_cb *p_comp_data;
};
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index b263693..440f5b3 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -49,7 +49,7 @@ static void ecore_spq_blocking_cb(struct ecore_hwfn *p_hwfn,
}
static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn,
- struct ecore_spq_entry *p_ent,
+ struct ecore_spq_entry *p_ent,
u8 *p_fw_ret)
{
int sleep_count = SPQ_BLOCK_SLEEP_LENGTH;
@@ -83,7 +83,7 @@ static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn,
if (comp_done->done == 1) {
if (p_fw_ret)
*p_fw_ret = comp_done->fw_return_code;
- return ECORE_SUCCESS;
+ return ECORE_SUCCESS;
}
OSAL_MSLEEP(5);
sleep_count--;
@@ -310,9 +310,9 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
p_eqe->protocol_id, /* Event Protocol ID */
p_eqe->reserved0, /* Reserved */
OSAL_LE16_TO_CPU(p_eqe->echo),
- p_eqe->fw_return_code, /* FW return code for SP
- * ramrods
- */
+ p_eqe->fw_return_code, /* FW return code for SP
+ * ramrods
+ */
p_eqe->flags);
if (GET_FIELD(p_eqe->flags, EVENT_RING_ENTRY_ASYNC)) {
@@ -345,7 +345,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
return OSAL_NULL;
}
- /* Allocate and initialize EQ chain */
+ /* Allocate and initialize EQ chain*/
if (ecore_chain_alloc(p_hwfn->p_dev,
ECORE_CHAIN_USE_TO_PRODUCE,
ECORE_CHAIN_MODE_PBL,
@@ -607,32 +607,32 @@ ecore_spq_add_entry(struct ecore_hwfn *p_hwfn,
p_spq->unlimited_pending_count++;
return ECORE_SUCCESS;
- }
+ } else {
+ struct ecore_spq_entry *p_en2;
- struct ecore_spq_entry *p_en2;
+ p_en2 = OSAL_LIST_FIRST_ENTRY(&p_spq->free_pool,
+ struct ecore_spq_entry,
+ list);
+ OSAL_LIST_REMOVE_ENTRY(&p_en2->list, &p_spq->free_pool);
- p_en2 = OSAL_LIST_FIRST_ENTRY(&p_spq->free_pool,
- struct ecore_spq_entry,
- list);
- OSAL_LIST_REMOVE_ENTRY(&p_en2->list, &p_spq->free_pool);
-
- /* Copy the ring element physical pointer to the new
- * entry, since we are about to override the entire ring
- * entry and don't want to lose the pointer.
- */
- p_ent->elem.data_ptr = p_en2->elem.data_ptr;
+ /* Copy the ring element physical pointer to the new
+ * entry, since we are about to override the entire ring
+ * entry and don't want to lose the pointer.
+ */
+ p_ent->elem.data_ptr = p_en2->elem.data_ptr;
- /* Setting the cookie to the comp_done of the
- * new element.
- */
- if (p_ent->comp_cb.cookie == &p_ent->comp_done)
- p_ent->comp_cb.cookie = &p_en2->comp_done;
+ /* Setting the cookie to the comp_done of the
+ * new element.
+ */
+ if (p_ent->comp_cb.cookie == &p_ent->comp_done)
+ p_ent->comp_cb.cookie = &p_en2->comp_done;
- *p_en2 = *p_ent;
+ *p_en2 = *p_ent;
- OSAL_FREE(p_hwfn->p_dev, p_ent);
+ OSAL_FREE(p_hwfn->p_dev, p_ent);
- p_ent = p_en2;
+ p_ent = p_en2;
+ }
}
/* entry is to be placed in 'pending' queue */
@@ -682,18 +682,18 @@ static enum _ecore_status_t ecore_spq_post_list(struct ecore_hwfn *p_hwfn,
!OSAL_LIST_IS_EMPTY(head)) {
struct ecore_spq_entry *p_ent =
OSAL_LIST_FIRST_ENTRY(head, struct ecore_spq_entry, list);
- OSAL_LIST_REMOVE_ENTRY(&p_ent->list, head);
+ OSAL_LIST_REMOVE_ENTRY(&p_ent->list, head);
OSAL_LIST_PUSH_TAIL(&p_ent->list, &p_spq->completion_pending);
- p_spq->comp_sent_count++;
-
- rc = ecore_spq_hw_post(p_hwfn, p_spq, p_ent);
- if (rc) {
- OSAL_LIST_REMOVE_ENTRY(&p_ent->list,
- &p_spq->completion_pending);
- __ecore_spq_return_entry(p_hwfn, p_ent);
- return rc;
+ p_spq->comp_sent_count++;
+
+ rc = ecore_spq_hw_post(p_hwfn, p_spq, p_ent);
+ if (rc) {
+ OSAL_LIST_REMOVE_ENTRY(&p_ent->list,
+ &p_spq->completion_pending);
+ __ecore_spq_return_entry(p_hwfn, p_ent);
+ return rc;
+ }
}
- }
return ECORE_SUCCESS;
}
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 5c16865..74484ab 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -16,24 +16,24 @@
#include "ecore_sp_api.h"
union ramrod_data {
- struct pf_start_ramrod_data pf_start;
- struct pf_update_ramrod_data pf_update;
- struct rx_queue_start_ramrod_data rx_queue_start;
- struct rx_queue_update_ramrod_data rx_queue_update;
- struct rx_queue_stop_ramrod_data rx_queue_stop;
- struct tx_queue_start_ramrod_data tx_queue_start;
- struct tx_queue_stop_ramrod_data tx_queue_stop;
- struct vport_start_ramrod_data vport_start;
- struct vport_stop_ramrod_data vport_stop;
- struct vport_update_ramrod_data vport_update;
- struct core_rx_start_ramrod_data core_rx_queue_start;
- struct core_rx_stop_ramrod_data core_rx_queue_stop;
- struct core_tx_start_ramrod_data core_tx_queue_start;
- struct core_tx_stop_ramrod_data core_tx_queue_stop;
- struct vport_filter_update_ramrod_data vport_filter_update;
-
- struct vf_start_ramrod_data vf_start;
- struct vf_stop_ramrod_data vf_stop;
+ struct pf_start_ramrod_data pf_start;
+ struct pf_update_ramrod_data pf_update;
+ struct rx_queue_start_ramrod_data rx_queue_start;
+ struct rx_queue_update_ramrod_data rx_queue_update;
+ struct rx_queue_stop_ramrod_data rx_queue_stop;
+ struct tx_queue_start_ramrod_data tx_queue_start;
+ struct tx_queue_stop_ramrod_data tx_queue_stop;
+ struct vport_start_ramrod_data vport_start;
+ struct vport_stop_ramrod_data vport_stop;
+ struct vport_update_ramrod_data vport_update;
+ struct core_rx_start_ramrod_data core_rx_queue_start;
+ struct core_rx_stop_ramrod_data core_rx_queue_stop;
+ struct core_tx_start_ramrod_data core_tx_queue_start;
+ struct core_tx_stop_ramrod_data core_tx_queue_stop;
+ struct vport_filter_update_ramrod_data vport_filter_update;
+
+ struct vf_start_ramrod_data vf_start;
+ struct vf_stop_ramrod_data vf_stop;
};
#define EQ_MAX_CREDIT 0xffffffff
@@ -45,83 +45,83 @@ enum spq_priority {
union ecore_spq_req_comp {
struct ecore_spq_comp_cb cb;
- u64 *done_addr;
+ u64 *done_addr;
};
/* SPQ_MODE_EBLOCK */
struct ecore_spq_comp_done {
u64 done;
- u8 fw_return_code;
+ u8 fw_return_code;
};
struct ecore_spq_entry {
- osal_list_entry_t list;
+ osal_list_entry_t list;
- u8 flags;
+ u8 flags;
/* HSI slow path element */
- struct slow_path_element elem;
+ struct slow_path_element elem;
- union ramrod_data ramrod;
+ union ramrod_data ramrod;
- enum spq_priority priority;
+ enum spq_priority priority;
/* pending queue for this entry */
- osal_list_t *queue;
+ osal_list_t *queue;
- enum spq_mode comp_mode;
- struct ecore_spq_comp_cb comp_cb;
- struct ecore_spq_comp_done comp_done; /* SPQ_MODE_EBLOCK */
+ enum spq_mode comp_mode;
+ struct ecore_spq_comp_cb comp_cb;
+ struct ecore_spq_comp_done comp_done; /* SPQ_MODE_EBLOCK */
};
struct ecore_eq {
- struct ecore_chain chain;
- u8 eq_sb_index; /* index within the SB */
- __le16 *p_fw_cons; /* ptr to index value */
+ struct ecore_chain chain;
+ u8 eq_sb_index; /* index within the SB */
+ __le16 *p_fw_cons; /* ptr to index value */
};
struct ecore_consq {
- struct ecore_chain chain;
+ struct ecore_chain chain;
};
struct ecore_spq {
- osal_spinlock_t lock;
+ osal_spinlock_t lock;
- osal_list_t unlimited_pending;
- osal_list_t pending;
- osal_list_t completion_pending;
- osal_list_t free_pool;
+ osal_list_t unlimited_pending;
+ osal_list_t pending;
+ osal_list_t completion_pending;
+ osal_list_t free_pool;
- struct ecore_chain chain;
+ struct ecore_chain chain;
/* allocated dma-able memory for spq entries (+ramrod data) */
- dma_addr_t p_phys;
- struct ecore_spq_entry *p_virt;
+ dma_addr_t p_phys;
+ struct ecore_spq_entry *p_virt;
/* Bitmap for handling out-of-order completions */
-#define SPQ_RING_SIZE \
+#define SPQ_RING_SIZE \
(CORE_SPQE_PAGE_SIZE_BYTES / sizeof(struct slow_path_element))
#define SPQ_COMP_BMAP_SIZE \
(SPQ_RING_SIZE / (sizeof(unsigned long) * 8 /* BITS_PER_LONG */))
- unsigned long p_comp_bitmap[SPQ_COMP_BMAP_SIZE];
- u8 comp_bitmap_idx;
-#define SPQ_COMP_BMAP_SET_BIT(p_spq, idx) \
-(OSAL_SET_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap))
+ unsigned long p_comp_bitmap[SPQ_COMP_BMAP_SIZE];
+ u8 comp_bitmap_idx;
+#define SPQ_COMP_BMAP_SET_BIT(p_spq, idx) \
+ (OSAL_SET_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap))
-#define SPQ_COMP_BMAP_CLEAR_BIT(p_spq, idx) \
-(OSAL_CLEAR_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap))
+#define SPQ_COMP_BMAP_CLEAR_BIT(p_spq, idx) \
+ (OSAL_CLEAR_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap))
-#define SPQ_COMP_BMAP_TEST_BIT(p_spq, idx) \
-(OSAL_TEST_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap))
+#define SPQ_COMP_BMAP_TEST_BIT(p_spq, idx) \
+ (OSAL_TEST_BIT(((idx) % SPQ_RING_SIZE), (p_spq)->p_comp_bitmap))
/* Statistics */
- u32 unlimited_pending_count;
- u32 normal_count;
- u32 high_count;
- u32 comp_sent_count;
- u32 comp_count;
+ u32 unlimited_pending_count;
+ u32 normal_count;
+ u32 high_count;
+ u32 comp_sent_count;
+ u32 comp_count;
- u32 cid;
+ u32 cid;
};
struct ecore_port;
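
For context, the completion bitmap above lets the SPQ retire
completions that arrive out of order: an arriving echo sets its bit,
then comp_bitmap_idx sweeps forward over any contiguous completed
entries. A minimal sketch (p_spq and idx assumed in scope):

/* Sketch only: record completion 'idx', then advance past all
 * contiguously completed ring entries.
 */
SPQ_COMP_BMAP_SET_BIT(p_spq, idx);
while (SPQ_COMP_BMAP_TEST_BIT(p_spq, p_spq->comp_bitmap_idx)) {
	SPQ_COMP_BMAP_CLEAR_BIT(p_spq, p_spq->comp_bitmap_idx);
	p_spq->comp_bitmap_idx++;
}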
@@ -136,9 +136,9 @@ struct ecore_hwfn;
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
+enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
struct ecore_spq_entry *p_ent,
- u8 *fw_return_code);
+ u8 *fw_return_code);
/**
 * @brief ecore_spq_alloc - Allocates & initializes the SPQ and EQ.
@@ -147,7 +147,7 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn);
+enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn);
/**
* @brief ecore_spq_setup - Reset the SPQ to its start state.
@@ -184,8 +184,8 @@ ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent);
* @param p_hwfn
* @param p_ent
*/
-void ecore_spq_return_entry(struct ecore_hwfn *p_hwfn,
- struct ecore_spq_entry *p_ent);
+void ecore_spq_return_entry(struct ecore_hwfn *p_hwfn,
+ struct ecore_spq_entry *p_ent);
/**
* @brief ecore_eq_allocate - Allocates & initializes an EQ struct
*
@@ -228,8 +228,8 @@ void ecore_eq_prod_update(struct ecore_hwfn *p_hwfn, u16 prod);
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
- void *cookie);
+enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
+ void *cookie);
/**
* @brief ecore_spq_completion - Completes a single event
@@ -240,10 +240,10 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
- __le16 echo,
- u8 fw_return_code,
- union event_ring_data *p_data);
+enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
+ __le16 echo,
+ u8 fw_return_code,
+ union event_ring_data *p_data);
/**
* @brief ecore_spq_get_cid - Given p_hwfn, return cid for the hwfn's SPQ
@@ -262,7 +262,7 @@ u32 ecore_spq_get_cid(struct ecore_hwfn *p_hwfn);
*
 * @return struct ecore_consq* - a newly allocated structure; NULL upon error.
*/
-struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn);
+struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn);
/**
* @brief ecore_consq_setup - Reset the ConsQ to its start
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 1b3119d..d8d1aac 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -393,16 +393,16 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn,
int *pos = &p_hwfn->p_dev->sriov_info.pos;
*pos = OSAL_PCI_FIND_EXT_CAPABILITY(p_hwfn->p_dev,
- PCI_EXT_CAP_ID_SRIOV);
+ PCI_EXT_CAP_ID_SRIOV);
if (!*pos) {
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
"No PCIe IOV support\n");
- return ECORE_SUCCESS;
- }
+ return ECORE_SUCCESS;
+ }
- rc = ecore_iov_pci_cfg_info(p_dev);
- if (rc)
- return rc;
+ rc = ecore_iov_pci_cfg_info(p_dev);
+ if (rc)
+ return rc;
} else if (!p_hwfn->p_dev->sriov_info.pos) {
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "No PCIe IOV support\n");
return ECORE_SUCCESS;
@@ -413,7 +413,7 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn,
* after the first engine's VFs.
*/
p_hwfn->hw_info.first_vf_in_pf = p_hwfn->p_dev->sriov_info.offset +
- p_hwfn->abs_pf_id - 16;
+ p_hwfn->abs_pf_id - 16;
if (ECORE_PATH_ID(p_hwfn))
p_hwfn->hw_info.first_vf_in_pf -= MAX_NUM_VFS_BB;
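To make the arithmetic above concrete, here is a hypothetical standalone
rendering of the mapping (the offset of 16 and the second-engine adjustment
come straight from the code; the MAX_NUM_VFS_BB value is assumed for
illustration only):

    #define MAX_NUM_VFS_BB 240 /* assumed value, for illustration only */

    /* first_vf_in_pf = sriov offset + absolute PF id - 16, shifted back by
     * one engine's worth of VFs when the PF sits on the second engine.
     */
    static unsigned int first_vf_in_pf(unsigned int sriov_offset,
                                       unsigned int abs_pf_id,
                                       unsigned int path_id)
    {
        unsigned int first = sriov_offset + abs_pf_id - 16;

        if (path_id)
            first -= MAX_NUM_VFS_BB;

        return first;
    }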
@@ -448,12 +448,12 @@ void ecore_iov_set_vf_to_disable(struct ecore_hwfn *p_hwfn,
{
struct ecore_vf_info *vf;
- vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
- if (!vf)
+ vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+ if (!vf)
return;
- vf->to_disable = to_disable;
-}
+ vf->to_disable = to_disable;
+ }
void ecore_iov_set_vfs_to_disable(struct ecore_hwfn *p_hwfn, u8 to_disable)
{
@@ -504,7 +504,7 @@ static OSAL_INLINE void ecore_iov_vf_semi_clear_err(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt, PSEM_REG_VF_ERROR, 1);
}
-static void ecore_iov_vf_pglue_clear_err(struct ecore_hwfn *p_hwfn,
+static void ecore_iov_vf_pglue_clear_err(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u8 abs_vfid)
{
ecore_wr(p_hwfn, p_ptt,
@@ -754,7 +754,7 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
{
enum _ecore_status_t rc = ECORE_SUCCESS;
struct ecore_vf_info *vf = OSAL_NULL;
- u8 num_of_vf_available_chains = 0;
+ u8 num_of_vf_available_chains = 0;
u32 cids;
u8 i;
@@ -896,9 +896,9 @@ static void ecore_iov_lock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
/* vf->op_current = tlv; @@@TBD MichalK */
/* log the lock */
- DP_VERBOSE(p_hwfn,
- ECORE_MSG_IOV,
- "VF[%d]: vf pf channel locked by %s\n",
+ DP_VERBOSE(p_hwfn,
+ ECORE_MSG_IOV,
+ "VF[%d]: vf pf channel locked by %s\n",
vf->abs_vf_id, ecore_channel_tlvs_string[tlv]);
}
@@ -921,9 +921,9 @@ static void ecore_iov_unlock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
/* mutex_unlock(&vf->op_mutex); @@@TBD MichalK add the lock */
/* log the unlock */
- DP_VERBOSE(p_hwfn,
- ECORE_MSG_IOV,
- "VF[%d]: vf pf channel unlocked by %s\n",
+ DP_VERBOSE(p_hwfn,
+ ECORE_MSG_IOV,
+ "VF[%d]: vf pf channel unlocked by %s\n",
vf->abs_vf_id, ecore_channel_tlvs_string[expected_tlv]);
/* record the locking op */
@@ -1131,9 +1131,9 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
OSAL_IOV_VF_CLEANUP(p_hwfn, p_vf->relative_vf_id);
}
-static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_vf_info *vf)
+static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_vf_info *vf)
{
struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
struct vfpf_acquire_tlv *req = &mbx->req_virt->acquire;
@@ -1148,7 +1148,7 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn,
req->vfdev_info.fw_minor != FW_MINOR_VERSION ||
req->vfdev_info.fw_revision != FW_REVISION_VERSION ||
req->vfdev_info.fw_engineering != FW_ENGINEERING_VERSION) {
- DP_INFO(p_hwfn,
+ DP_INFO(p_hwfn,
"VF[%d] is running an incompatible driver [VF needs"
" FW %02x:%02x:%02x:%02x but Hypervisor is"
" using %02x:%02x:%02x:%02x]\n",
@@ -1323,25 +1323,25 @@ ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn,
/* Reconfigure vlans */
for (i = 0; i < ECORE_ETH_VF_NUM_VLAN_FILTERS + 1; i++) {
if (p_vf->shadow_config.vlans[i].used) {
- filter.type = ECORE_FILTER_VLAN;
- filter.vlan = p_vf->shadow_config.vlans[i].vid;
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ filter.type = ECORE_FILTER_VLAN;
+ filter.vlan = p_vf->shadow_config.vlans[i].vid;
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
"Reconfig VLAN [0x%04x] for VF [%04x]\n",
- filter.vlan, p_vf->relative_vf_id);
- rc = ecore_sp_eth_filter_ucast(p_hwfn,
- p_vf->opaque_fid,
- &filter,
- ECORE_SPQ_MODE_CB,
+ filter.vlan, p_vf->relative_vf_id);
+ rc = ecore_sp_eth_filter_ucast(p_hwfn,
+ p_vf->opaque_fid,
+ &filter,
+ ECORE_SPQ_MODE_CB,
OSAL_NULL);
- if (rc) {
- DP_NOTICE(p_hwfn, true,
- "Failed to configure VLAN [%04x]"
- " to VF [%04x]\n",
- filter.vlan, p_vf->relative_vf_id);
- break;
- }
+ if (rc) {
+ DP_NOTICE(p_hwfn, true,
+ "Failed to configure VLAN [%04x]"
+ " to VF [%04x]\n",
+ filter.vlan, p_vf->relative_vf_id);
+ break;
}
}
+ }
return rc;
}
@@ -1646,14 +1646,14 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
pq_params.eth.vf_id = vf->relative_vf_id;
rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
- vf->opaque_fid,
- vf->vf_queues[req->tx_qid].fw_tx_qid,
- vf->vf_queues[req->tx_qid].fw_cid,
- vf->vport_id,
- vf->abs_vf_id + 0x10,
- req->hw_sb,
- req->sb_index,
- req->pbl_addr,
+ vf->opaque_fid,
+ vf->vf_queues[req->tx_qid].fw_tx_qid,
+ vf->vf_queues[req->tx_qid].fw_cid,
+ vf->vport_id,
+ vf->abs_vf_id + 0x10,
+ req->hw_sb,
+ req->sb_index,
+ req->pbl_addr,
req->pbl_size, &pq_params);
if (rc)
@@ -1852,12 +1852,12 @@ ecore_iov_vp_update_act_param(struct ecore_hwfn *p_hwfn,
p_act_tlv = (struct vfpf_vport_update_activate_tlv *)
ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
if (p_act_tlv) {
- p_data->update_vport_active_rx_flg = p_act_tlv->update_rx;
- p_data->vport_active_rx_flg = p_act_tlv->active_rx;
- p_data->update_vport_active_tx_flg = p_act_tlv->update_tx;
- p_data->vport_active_tx_flg = p_act_tlv->active_tx;
- *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACTIVATE;
- }
+ p_data->update_vport_active_rx_flg = p_act_tlv->update_rx;
+ p_data->vport_active_rx_flg = p_act_tlv->active_rx;
+ p_data->update_vport_active_tx_flg = p_act_tlv->update_tx;
+ p_data->vport_active_tx_flg = p_act_tlv->active_tx;
+ *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACTIVATE;
+}
}
static void
@@ -1905,10 +1905,10 @@ ecore_iov_vp_update_tx_switch(struct ecore_hwfn *p_hwfn,
#endif
if (p_tx_switch_tlv) {
- p_data->update_tx_switching_flg = 1;
- p_data->tx_switching_flg = p_tx_switch_tlv->tx_switching;
- *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_TX_SWITCH;
- }
+ p_data->update_tx_switching_flg = 1;
+ p_data->tx_switching_flg = p_tx_switch_tlv->tx_switching;
+ *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_TX_SWITCH;
+}
}
static void
@@ -1924,12 +1924,12 @@ ecore_iov_vp_update_mcast_bin_param(struct ecore_hwfn *p_hwfn,
ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
if (p_mcast_tlv) {
- p_data->update_approx_mcast_flg = 1;
- OSAL_MEMCPY(p_data->bins, p_mcast_tlv->bins,
- sizeof(unsigned long) *
- ETH_MULTICAST_MAC_BINS_IN_REGS);
- *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_MCAST;
- }
+ p_data->update_approx_mcast_flg = 1;
+ OSAL_MEMCPY(p_data->bins, p_mcast_tlv->bins,
+ sizeof(unsigned long) *
+ ETH_MULTICAST_MAC_BINS_IN_REGS);
+ *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_MCAST;
+}
}
static void
@@ -1952,8 +1952,8 @@ ecore_iov_vp_update_accept_flag(struct ecore_hwfn *p_hwfn,
p_accept_tlv->update_tx_mode;
p_data->accept_flags.tx_accept_filter =
p_accept_tlv->tx_accept_filter;
- *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACCEPT_PARAM;
- }
+ *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACCEPT_PARAM;
+}
}
static void
@@ -1969,11 +1969,11 @@ ecore_iov_vp_update_accept_any_vlan(struct ecore_hwfn *p_hwfn,
ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
if (p_accept_any_vlan) {
- p_data->accept_any_vlan = p_accept_any_vlan->accept_any_vlan;
- p_data->update_accept_any_vlan_flg =
- p_accept_any_vlan->update_accept_any_vlan_flg;
- *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACCEPT_ANY_VLAN;
- }
+ p_data->accept_any_vlan = p_accept_any_vlan->accept_any_vlan;
+ p_data->update_accept_any_vlan_flg =
+ p_accept_any_vlan->update_accept_any_vlan_flg;
+ *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACCEPT_ANY_VLAN;
+}
}
static void
@@ -1991,48 +1991,48 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
p_rss_tlv = (struct vfpf_vport_update_rss_tlv *)
ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
if (p_rss_tlv) {
- OSAL_MEMSET(p_rss, 0, sizeof(struct ecore_rss_params));
-
- p_rss->update_rss_config =
- !!(p_rss_tlv->update_rss_flags &
- VFPF_UPDATE_RSS_CONFIG_FLAG);
- p_rss->update_rss_capabilities =
- !!(p_rss_tlv->update_rss_flags &
- VFPF_UPDATE_RSS_CAPS_FLAG);
- p_rss->update_rss_ind_table =
- !!(p_rss_tlv->update_rss_flags &
- VFPF_UPDATE_RSS_IND_TABLE_FLAG);
- p_rss->update_rss_key =
+ OSAL_MEMSET(p_rss, 0, sizeof(struct ecore_rss_params));
+
+ p_rss->update_rss_config =
+ !!(p_rss_tlv->update_rss_flags &
+ VFPF_UPDATE_RSS_CONFIG_FLAG);
+ p_rss->update_rss_capabilities =
+ !!(p_rss_tlv->update_rss_flags &
+ VFPF_UPDATE_RSS_CAPS_FLAG);
+ p_rss->update_rss_ind_table =
+ !!(p_rss_tlv->update_rss_flags &
+ VFPF_UPDATE_RSS_IND_TABLE_FLAG);
+ p_rss->update_rss_key =
!!(p_rss_tlv->update_rss_flags & VFPF_UPDATE_RSS_KEY_FLAG);
- p_rss->rss_enable = p_rss_tlv->rss_enable;
- p_rss->rss_eng_id = vf->relative_vf_id + 1;
- p_rss->rss_caps = p_rss_tlv->rss_caps;
- p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
- OSAL_MEMCPY(p_rss->rss_ind_table, p_rss_tlv->rss_ind_table,
- sizeof(p_rss->rss_ind_table));
- OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
- sizeof(p_rss->rss_key));
+ p_rss->rss_enable = p_rss_tlv->rss_enable;
+ p_rss->rss_eng_id = vf->relative_vf_id + 1;
+ p_rss->rss_caps = p_rss_tlv->rss_caps;
+ p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
+ OSAL_MEMCPY(p_rss->rss_ind_table, p_rss_tlv->rss_ind_table,
+ sizeof(p_rss->rss_ind_table));
+ OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
+ sizeof(p_rss->rss_key));
table_size = OSAL_MIN_T(u16,
OSAL_ARRAY_SIZE(p_rss->rss_ind_table),
- (1 << p_rss_tlv->rss_table_size_log));
+ (1 << p_rss_tlv->rss_table_size_log));
- max_q_idx = OSAL_ARRAY_SIZE(vf->vf_queues);
+ max_q_idx = OSAL_ARRAY_SIZE(vf->vf_queues);
- for (i = 0; i < table_size; i++) {
- q_idx = p_rss->rss_ind_table[i];
+ for (i = 0; i < table_size; i++) {
+ q_idx = p_rss->rss_ind_table[i];
if (q_idx >= max_q_idx) {
- DP_NOTICE(p_hwfn, true,
+ DP_NOTICE(p_hwfn, true,
"rss_ind_table[%d] = %d, rxq is out of range\n",
- i, q_idx);
+ i, q_idx);
/* TBD: fail the request and mark VF as malicious */
p_rss->rss_ind_table[i] =
vf->vf_queues[0].fw_rx_qid;
} else if (!vf->vf_queues[q_idx].rxq_active) {
- DP_NOTICE(p_hwfn, true,
- "rss_ind_table[%d] = %d, rxq is not active\n",
- i, q_idx);
+ DP_NOTICE(p_hwfn, true,
+ "rss_ind_table[%d] = %d, rxq is not active\n",
+ i, q_idx);
/* TBD: fail the request and mark VF as malicious */
p_rss->rss_ind_table[i] =
vf->vf_queues[0].fw_rx_qid;
@@ -2040,10 +2040,10 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
p_rss->rss_ind_table[i] =
vf->vf_queues[q_idx].fw_rx_qid;
}
- }
+ }
- p_data->rss_params = p_rss;
- *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_RSS;
+ p_data->rss_params = p_rss;
+ *tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_RSS;
} else {
p_data->rss_params = OSAL_NULL;
}
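The sanitization loop above is the important part of this hunk: the PF never
trusts a VF-supplied indirection table. Reduced to its essentials (the array
bound and fallback queue id are assumptions made for the sketch; the driver
falls back to the VF's first rx queue id):

    #include <stdint.h>
    #include <stddef.h>

    /* Clamp the walk to 2^table_size_log entries and replace any queue index
     * the VF does not own with a known-good fallback.
     */
    static void sanitize_ind_table(uint16_t *tbl, uint8_t table_size_log,
                                   uint16_t max_q, uint16_t fallback_qid)
    {
        size_t size, i;

        if (table_size_log > 7) /* 2^7 == 128-entry table, assumed bound */
            table_size_log = 7;
        size = (size_t)1 << table_size_log;

        for (i = 0; i < size; i++)
            if (tbl[i] >= max_q)
                tbl[i] = fallback_qid;
    }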
@@ -2172,8 +2172,8 @@ out:
static enum _ecore_status_t
ecore_iov_vf_update_unicast_shadow(struct ecore_hwfn *p_hwfn,
- struct ecore_vf_info *p_vf,
- struct ecore_filter_ucast *p_params)
+ struct ecore_vf_info *p_vf,
+ struct ecore_filter_ucast *p_params)
{
int i;
@@ -2212,11 +2212,11 @@ ecore_iov_vf_update_unicast_shadow(struct ecore_hwfn *p_hwfn,
p_params->opcode == ECORE_FILTER_REPLACE) {
for (i = 0; i < ECORE_ETH_VF_NUM_VLAN_FILTERS + 1; i++)
if (!p_vf->shadow_config.vlans[i].used) {
- p_vf->shadow_config.vlans[i].used = true;
+ p_vf->shadow_config.vlans[i].used = true;
p_vf->shadow_config.vlans[i].vid =
p_params->vlan;
- break;
- }
+ break;
+ }
if (i == ECORE_ETH_VF_NUM_VLAN_FILTERS + 1) {
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
"VF [%d] - Tries to configure more than %d vlan filters\n",
@@ -2737,11 +2737,11 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
/* check if tlv type is known */
if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
- /* Lock the per vf op mutex and note the locker's identity.
- * The unlock will take place in mbx response.
- */
- ecore_iov_lock_vf_pf_channel(p_hwfn,
- p_vf, mbx->first_tlv.tl.type);
+ /* Lock the per vf op mutex and note the locker's identity.
+ * The unlock will take place in mbx response.
+ */
+ ecore_iov_lock_vf_pf_channel(p_hwfn,
+ p_vf, mbx->first_tlv.tl.type);
/* switch on the opcode */
switch (mbx->first_tlv.tl.type) {
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 3471e5c..7eca169 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -57,13 +57,13 @@ struct ecore_vf_iov {
* a message
*/
struct ecore_iov_vf_mbx {
- union vfpf_tlvs *req_virt;
- dma_addr_t req_phys;
- union pfvf_tlvs *reply_virt;
- dma_addr_t reply_phys;
+ union vfpf_tlvs *req_virt;
+ dma_addr_t req_phys;
+ union pfvf_tlvs *reply_virt;
+ dma_addr_t reply_phys;
/* Address in VF where a pending message is located */
- dma_addr_t pending_req;
+ dma_addr_t pending_req;
u8 *offset;
@@ -72,12 +72,12 @@ struct ecore_iov_vf_mbx {
#endif
/* VF GPA address */
- u32 vf_addr_lo;
- u32 vf_addr_hi;
+ u32 vf_addr_lo;
+ u32 vf_addr_hi;
- struct vfpf_first_tlv first_tlv; /* saved VF request header */
+ struct vfpf_first_tlv first_tlv; /* saved VF request header */
- u8 flags;
+ u8 flags;
#define VF_MSG_INPROCESS 0x1 /* failsafe - the FW should prevent
* more than one pending msg
*/
@@ -101,11 +101,11 @@ enum int_mod {
};
enum vf_state {
- VF_FREE = 0, /* VF ready to be acquired holds no resc */
- VF_ACQUIRED = 1, /* VF, acquired, but not initalized */
- VF_ENABLED = 2, /* VF, Enabled */
- VF_RESET = 3, /* VF, FLR'd, pending cleanup */
- VF_STOPPED = 4 /* VF, Stopped */
+ VF_FREE = 0, /* VF ready to be acquired, holds no resc */
+ VF_ACQUIRED = 1, /* VF, acquired, but not initialized */
+ VF_ENABLED = 2, /* VF, Enabled */
+ VF_RESET = 3, /* VF, FLR'd, pending cleanup */
+ VF_STOPPED = 4 /* VF, Stopped */
};
struct ecore_vf_vlan_shadow {
@@ -124,34 +124,34 @@ struct ecore_vf_shadow_config {
struct ecore_vf_info {
struct ecore_iov_vf_mbx vf_mbx;
enum vf_state state;
- u8 to_disable;
+ u8 to_disable;
- struct ecore_bulletin bulletin;
- dma_addr_t vf_bulletin;
+ struct ecore_bulletin bulletin;
+ dma_addr_t vf_bulletin;
- u32 concrete_fid;
- u16 opaque_fid;
- u16 mtu;
+ u32 concrete_fid;
+ u16 opaque_fid;
+ u16 mtu;
- u8 vport_id;
- u8 relative_vf_id;
- u8 abs_vf_id;
+ u8 vport_id;
+ u8 relative_vf_id;
+ u8 abs_vf_id;
#define ECORE_VF_ABS_ID(p_hwfn, p_vf) (ECORE_PATH_ID(p_hwfn) ? \
(p_vf)->abs_vf_id + MAX_NUM_VFS_BB : \
(p_vf)->abs_vf_id)
- u8 vport_instance; /* Number of active vports */
- u8 num_rxqs;
- u8 num_txqs;
+ u8 vport_instance; /* Number of active vports */
+ u8 num_rxqs;
+ u8 num_txqs;
- u8 num_sbs;
+ u8 num_sbs;
- u8 num_mac_filters;
- u8 num_vlan_filters;
+ u8 num_mac_filters;
+ u8 num_vlan_filters;
u8 num_mc_filters;
- struct ecore_vf_q_info vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
- u16 igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
+ struct ecore_vf_q_info vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
+ u16 igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
/* TODO - Only windows is using it - should be removed */
u8 was_malicious;
@@ -159,7 +159,7 @@ struct ecore_vf_info {
void *ctx;
struct ecore_public_vf_info p_vf_info;
bool spoof_chk; /* Current configured on HW */
- bool req_spoofchk_val; /* Requested value */
+ bool req_spoofchk_val; /* Requested value */
/* Stores the configuration requested by VF */
struct ecore_vf_shadow_config shadow_config;
@@ -176,21 +176,21 @@ struct ecore_vf_info {
* capability enabled.
*/
struct ecore_pf_iov {
- struct ecore_vf_info vfs_array[MAX_NUM_VFS];
- u64 pending_events[ECORE_VF_ARRAY_LENGTH];
- u64 pending_flr[ECORE_VF_ARRAY_LENGTH];
- u16 base_vport_id;
+ struct ecore_vf_info vfs_array[MAX_NUM_VFS];
+ u64 pending_events[ECORE_VF_ARRAY_LENGTH];
+ u64 pending_flr[ECORE_VF_ARRAY_LENGTH];
+ u16 base_vport_id;
/* Allocate message address continuously and split to each VF */
- void *mbx_msg_virt_addr;
- dma_addr_t mbx_msg_phys_addr;
- u32 mbx_msg_size;
- void *mbx_reply_virt_addr;
- dma_addr_t mbx_reply_phys_addr;
- u32 mbx_reply_size;
- void *p_bulletins;
- dma_addr_t bulletins_phys;
- u32 bulletins_size;
+ void *mbx_msg_virt_addr;
+ dma_addr_t mbx_msg_phys_addr;
+ u32 mbx_msg_size;
+ void *mbx_reply_virt_addr;
+ dma_addr_t mbx_reply_phys_addr;
+ u32 mbx_reply_size;
+ void *p_bulletins;
+ dma_addr_t bulletins_phys;
+ u32 bulletins_size;
};
#ifdef CONFIG_ECORE_SRIOV
@@ -217,7 +217,7 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn,
*
* @return pointer to the newly placed tlv
*/
-void *ecore_add_tlv(struct ecore_hwfn *p_hwfn,
+void *ecore_add_tlv(struct ecore_hwfn *p_hwfn,
u8 **offset, u16 type, u16 length);
/**
@@ -260,9 +260,9 @@ void ecore_iov_free(struct ecore_hwfn *p_hwfn);
* @param echo
* @param data
*/
-enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
- u8 opcode,
- __le16 echo,
+enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
+ u8 opcode,
+ __le16 echo,
union event_ring_data *data);
/**
diff --git a/drivers/net/qede/base/ecore_status.h b/drivers/net/qede/base/ecore_status.h
index 98d40bb..6277bc8 100644
--- a/drivers/net/qede/base/ecore_status.h
+++ b/drivers/net/qede/base/ecore_status.h
@@ -10,18 +10,18 @@
#define __ECORE_STATUS_H__
enum _ecore_status_t {
- ECORE_UNKNOWN_ERROR = -12,
- ECORE_NORESOURCES = -11,
- ECORE_NODEV = -10,
+ ECORE_UNKNOWN_ERROR = -12,
+ ECORE_NORESOURCES = -11,
+ ECORE_NODEV = -10,
ECORE_ABORTED = -9,
- ECORE_AGAIN = -8,
+ ECORE_AGAIN = -8,
ECORE_NOTIMPL = -7,
- ECORE_EXISTS = -6,
- ECORE_IO = -5,
+ ECORE_EXISTS = -6,
+ ECORE_IO = -5,
ECORE_TIMEOUT = -4,
- ECORE_INVAL = -3,
- ECORE_BUSY = -2,
- ECORE_NOMEM = -1,
+ ECORE_INVAL = -3,
+ ECORE_BUSY = -2,
+ ECORE_NOMEM = -1,
ECORE_SUCCESS = 0,
/* PENDING is not an error and should be positive */
ECORE_PENDING = 1,
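The numbering here encodes the calling convention for the whole base driver:
negative values are errors, zero is success, and positive values are non-error
in-progress states. A reduced sketch of the typical call-site check (enum
values abbreviated; not the driver's API):

    #include <stdio.h>

    enum status { ST_NOMEM = -1, ST_SUCCESS = 0, ST_PENDING = 1 };

    /* Anything negative propagates as a failure; ST_PENDING is not an error. */
    static const char *status_str(enum status rc)
    {
        if (rc == ST_PENDING)
            return "pending (not an error)";

        return rc == ST_SUCCESS ? "success" : "error";
    }

    int main(void)
    {
        printf("%s\n", status_str(ST_PENDING));
        return 0;
    }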
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index d32fb35..a03a2ce 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -264,7 +264,7 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
p_hwfn->p_dev->chip_num = pfdev_info->chip_num & 0xffff;
return 0;
-}
+ }
enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_dev *p_dev)
{
@@ -280,7 +280,7 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_dev *p_dev)
"regview should be initialized before"
" ecore_vf_hw_prepare is called\n");
return ECORE_INVAL;
- }
+}
/* Set the doorbell bar. Assumption: regview is set */
p_hwfn->doorbells = (u8 OSAL_IOMEM *)p_hwfn->regview +
@@ -310,7 +310,7 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_dev *p_dev)
vfpf_tlvs));
if (!p_sriov->vf2pf_request) {
DP_NOTICE(p_hwfn, true,
- "Failed to allocate `vf2pf_request' DMA memory\n");
+ "Failed to allocate `vf2pf_request' DMA memory\n");
goto free_p_sriov;
}
@@ -388,7 +388,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
dma_addr_t bd_chain_phys_addr,
dma_addr_t cqe_pbl_addr,
u16 cqe_pbl_size,
- void OSAL_IOMEM * *pp_prod)
+ void OSAL_IOMEM **pp_prod)
{
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
struct vfpf_start_rxq_tlv *req;
@@ -421,7 +421,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
- MSTORM_QZONE_START(p_hwfn->p_dev) +
+ MSTORM_QZONE_START(p_hwfn->p_dev) +
(hw_qid) * MSTORM_QZONE_SIZE +
OFFSETOF(struct mstorm_eth_queue_zone, rx_producers);
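For readers new to the queue-zone layout: each hardware queue owns a
fixed-size zone inside the MSTORM region, so the producer address above is
plain linear arithmetic. A sketch with placeholder constants (the real
start/stride values come from the FW HSI headers):

    #include <stdint.h>
    #include <stddef.h>

    #define QZONE_START 0x0  /* placeholder; MSTORM_QZONE_START in the HSI */
    #define QZONE_SIZE  0x80 /* placeholder; MSTORM_QZONE_SIZE in the HSI */

    /* regview base + zone start + hw_qid * zone stride + field offset */
    static volatile uint8_t *rx_prod_addr(volatile uint8_t *regview,
                                          unsigned int hw_qid,
                                          size_t rx_producers_offset)
    {
        return regview + QZONE_START +
               (size_t)hw_qid * QZONE_SIZE + rx_producers_offset;
    }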
@@ -481,7 +481,7 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
u8 sb_index,
dma_addr_t pbl_addr,
u16 pbl_size,
- void OSAL_IOMEM * *pp_doorbell)
+ void OSAL_IOMEM **pp_doorbell)
{
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
struct vfpf_start_txq_tlv *req;
@@ -519,8 +519,8 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];
*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
- DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
- }
+ DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
+ }
return rc;
}
@@ -1117,7 +1117,7 @@ enum _ecore_status_t ecore_vf_pf_int_cleanup(struct ecore_hwfn *p_hwfn)
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
- return rc;
+ return rc;
if (resp->hdr.status != PFVF_STATUS_SUCCESS)
return ECORE_INVAL;
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 334b588..7600710 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -18,7 +18,7 @@
/**
*
* @brief hw preparation for VF
- * sends ACQUIRE message
+ * sends ACQUIRE message
*
* @param p_dev
*
@@ -63,7 +63,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
dma_addr_t bd_chain_phys_addr,
dma_addr_t cqe_pbl_addr,
u16 cqe_pbl_size,
- void OSAL_IOMEM * *pp_prod);
+ void OSAL_IOMEM **pp_prod);
/**
*
@@ -76,7 +76,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
* @param sb_index - index within the status block
* @param bd_chain_phys_addr - physical address of tx chain
* @param pp_doorbell - pointer to address to which to
- * write the doorbell too..
+ * write the doorbell to.
*
* @return enum _ecore_status_t
*/
@@ -86,7 +86,7 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
u8 sb_index,
dma_addr_t pbl_addr,
u16 pbl_size,
- void OSAL_IOMEM * *pp_doorbell);
+ void OSAL_IOMEM **pp_doorbell);
/**
*
@@ -98,7 +98,7 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
+enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
u16 rx_qid, bool cqe_completion);
/**
@@ -110,8 +110,8 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
- u16 tx_qid);
+enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
+ u16 tx_qid);
/**
* @brief VF - update the RX queue by sending a message to the
@@ -127,10 +127,10 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
- u16 rx_queue_id,
- u8 num_rxqs,
- u8 comp_cqe_flg,
- u8 comp_event_flg);
+ u16 rx_queue_id,
+ u8 num_rxqs,
+ u8 comp_cqe_flg,
+ u8 comp_event_flg);
/**
*
@@ -191,12 +191,12 @@ u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
* @return enum _ecore_status
*/
enum _ecore_status_t ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn,
- u8 vport_id,
- u16 mtu,
- u8 inner_vlan_removal,
- enum ecore_tpa_mode tpa_mode,
- u8 max_buffers_per_cqe,
- u8 only_untagged);
+ u8 vport_id,
+ u16 mtu,
+ u8 inner_vlan_removal,
+ enum ecore_tpa_mode tpa_mode,
+ u8 max_buffers_per_cqe,
+ u8 only_untagged);
/**
* @brief ecore_vf_pf_vport_stop - stop the VF's vport
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index f28b686..a2b4ba5 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -79,7 +79,7 @@ void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn,
/**
* @brief Get number of MAC filters allocated for VF by ecore
*
- * @param p_hwfn
+ * @param p_hwfn
* @param num_mac - allocated MAC filters
*/
void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn,
@@ -101,7 +101,7 @@ bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac);
* @param hwfn
* @param dst_mac
* @param p_is_forced - out param which indicates, in case the mac
- * exist if it forced or not.
+ * exists, whether it is forced or not.
*
* @return bool - return true if mac exists and false if
* not.
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index 2fa4d15..e98a2a7 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -21,18 +21,18 @@
*
**/
struct vf_pf_resc_request {
- u8 num_rxqs;
- u8 num_txqs;
- u8 num_sbs;
- u8 num_mac_filters;
- u8 num_vlan_filters;
- u8 num_mc_filters; /* No limit so superfluous */
+ u8 num_rxqs;
+ u8 num_txqs;
+ u8 num_sbs;
+ u8 num_mac_filters;
+ u8 num_vlan_filters;
+ u8 num_mc_filters; /* No limit so superfluous */
u16 padding;
};
struct hw_sb_info {
- u16 hw_sb_id; /* aka absolute igu id, used to ack the sb */
- u8 sb_qid; /* used to update DHC for sb */
+ u16 hw_sb_id; /* aka absolute igu id, used to ack the sb */
+ u8 sb_qid; /* used to update DHC for sb */
u8 padding[5];
};
@@ -114,8 +114,8 @@ struct vfpf_acquire_tlv {
u8 fw_revision;
u8 fw_engineering;
u32 driver_version;
- u16 opaque_fid; /* ME register value */
- u8 os_type; /* VFPF_ACQUIRE_OS_* value */
+ u16 opaque_fid; /* ME register value */
+ u8 os_type; /* VFPF_ACQUIRE_OS_* value */
u8 padding[5];
} vfdev_info;
@@ -128,17 +128,17 @@ struct vfpf_acquire_tlv {
/* receive side scaling tlv */
struct vfpf_vport_update_rss_tlv {
- struct channel_tlv tl;
+ struct channel_tlv tl;
u8 update_rss_flags;
-#define VFPF_UPDATE_RSS_CONFIG_FLAG (1 << 0)
-#define VFPF_UPDATE_RSS_CAPS_FLAG (1 << 1)
-#define VFPF_UPDATE_RSS_IND_TABLE_FLAG (1 << 2)
-#define VFPF_UPDATE_RSS_KEY_FLAG (1 << 3)
+ #define VFPF_UPDATE_RSS_CONFIG_FLAG (1 << 0)
+ #define VFPF_UPDATE_RSS_CAPS_FLAG (1 << 1)
+ #define VFPF_UPDATE_RSS_IND_TABLE_FLAG (1 << 2)
+ #define VFPF_UPDATE_RSS_KEY_FLAG (1 << 3)
u8 rss_enable;
u8 rss_caps;
- u8 rss_table_size_log; /* The table size is 2 ^ rss_table_size_log */
+ u8 rss_table_size_log; /* The table size is 2 ^ rss_table_size_log */
u16 rss_ind_table[T_ETH_INDIRECTION_TABLE_SIZE];
u32 rss_key[T_ETH_RSS_KEY_SIZE];
};
@@ -172,7 +172,7 @@ struct pfvf_acquire_resp_tlv {
#define PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED (1 << 0)
u16 db_size;
- u8 indices_per_sb;
+ u8 indices_per_sb;
u8 os_type;
/* These should match the PF's ecore_dev values */
@@ -192,19 +192,19 @@ struct pfvf_acquire_resp_tlv {
* this struct with suggested amount of resources for next
* acquire request
*/
-#define PFVF_MAX_QUEUES_PER_VF 16
-#define PFVF_MAX_SBS_PER_VF 16
+ #define PFVF_MAX_QUEUES_PER_VF 16
+ #define PFVF_MAX_SBS_PER_VF 16
struct hw_sb_info hw_sbs[PFVF_MAX_SBS_PER_VF];
- u8 hw_qid[PFVF_MAX_QUEUES_PER_VF];
- u8 cid[PFVF_MAX_QUEUES_PER_VF];
-
- u8 num_rxqs;
- u8 num_txqs;
- u8 num_sbs;
- u8 num_mac_filters;
- u8 num_vlan_filters;
- u8 num_mc_filters;
- u8 padding[2];
+ u8 hw_qid[PFVF_MAX_QUEUES_PER_VF];
+ u8 cid[PFVF_MAX_QUEUES_PER_VF];
+
+ u8 num_rxqs;
+ u8 num_txqs;
+ u8 num_sbs;
+ u8 num_mac_filters;
+ u8 num_vlan_filters;
+ u8 num_mc_filters;
+ u8 padding[2];
} resc;
u32 bulletin_size;
@@ -225,141 +225,141 @@ struct vfpf_init_tlv {
/* Setup Queue */
struct vfpf_start_rxq_tlv {
- struct vfpf_first_tlv first_tlv;
+ struct vfpf_first_tlv first_tlv;
/* physical addresses */
aligned_u64 rxq_addr;
aligned_u64 deprecated_sge_addr;
aligned_u64 cqe_pbl_addr;
- u16 cqe_pbl_size;
- u16 hw_sb;
- u16 rx_qid;
- u16 hc_rate; /* desired interrupts per sec. */
+ u16 cqe_pbl_size;
+ u16 hw_sb;
+ u16 rx_qid;
+ u16 hc_rate; /* desired interrupts per sec. */
- u16 bd_max_bytes;
- u16 stat_id;
- u8 sb_index;
- u8 padding[3];
+ u16 bd_max_bytes;
+ u16 stat_id;
+ u8 sb_index;
+ u8 padding[3];
};
struct vfpf_start_txq_tlv {
- struct vfpf_first_tlv first_tlv;
+ struct vfpf_first_tlv first_tlv;
/* physical addresses */
aligned_u64 pbl_addr;
- u16 pbl_size;
- u16 stat_id;
- u16 tx_qid;
- u16 hw_sb;
-
- u32 flags; /* VFPF_QUEUE_FLG_X flags */
- u16 hc_rate; /* desired interrupts per sec. */
- u8 sb_index;
- u8 padding[3];
+ u16 pbl_size;
+ u16 stat_id;
+ u16 tx_qid;
+ u16 hw_sb;
+
+ u32 flags; /* VFPF_QUEUE_FLG_X flags */
+ u16 hc_rate; /* desired interrupts per sec. */
+ u8 sb_index;
+ u8 padding[3];
};
/* Stop RX Queue */
struct vfpf_stop_rxqs_tlv {
- struct vfpf_first_tlv first_tlv;
+ struct vfpf_first_tlv first_tlv;
- u16 rx_qid;
- u8 num_rxqs;
- u8 cqe_completion;
- u8 padding[4];
+ u16 rx_qid;
+ u8 num_rxqs;
+ u8 cqe_completion;
+ u8 padding[4];
};
/* Stop TX Queues */
struct vfpf_stop_txqs_tlv {
- struct vfpf_first_tlv first_tlv;
+ struct vfpf_first_tlv first_tlv;
- u16 tx_qid;
- u8 num_txqs;
- u8 padding[5];
+ u16 tx_qid;
+ u8 num_txqs;
+ u8 padding[5];
};
struct vfpf_update_rxq_tlv {
- struct vfpf_first_tlv first_tlv;
+ struct vfpf_first_tlv first_tlv;
aligned_u64 deprecated_sge_addr[PFVF_MAX_QUEUES_PER_VF];
- u16 rx_qid;
- u8 num_rxqs;
- u8 flags;
-#define VFPF_RXQ_UPD_INIT_SGE_DEPRECATE_FLAG (1 << 0)
-#define VFPF_RXQ_UPD_COMPLETE_CQE_FLAG (1 << 1)
-#define VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG (1 << 2)
+ u16 rx_qid;
+ u8 num_rxqs;
+ u8 flags;
+ #define VFPF_RXQ_UPD_INIT_SGE_DEPRECATE_FLAG (1 << 0)
+ #define VFPF_RXQ_UPD_COMPLETE_CQE_FLAG (1 << 1)
+ #define VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG (1 << 2)
- u8 padding[4];
+ u8 padding[4];
};
/* Set Queue Filters */
struct vfpf_q_mac_vlan_filter {
u32 flags;
-#define VFPF_Q_FILTER_DEST_MAC_VALID 0x01
-#define VFPF_Q_FILTER_VLAN_TAG_VALID 0x02
-#define VFPF_Q_FILTER_SET_MAC 0x100 /* set/clear */
+ #define VFPF_Q_FILTER_DEST_MAC_VALID 0x01
+ #define VFPF_Q_FILTER_VLAN_TAG_VALID 0x02
+ #define VFPF_Q_FILTER_SET_MAC 0x100 /* set/clear */
- u8 mac[ETH_ALEN];
+ u8 mac[ETH_ALEN];
u16 vlan_tag;
- u8 padding[4];
+ u8 padding[4];
};
/* Start a vport */
struct vfpf_vport_start_tlv {
- struct vfpf_first_tlv first_tlv;
+ struct vfpf_first_tlv first_tlv;
aligned_u64 sb_addr[PFVF_MAX_SBS_PER_VF];
- u32 tpa_mode;
- u16 dep1;
- u16 mtu;
+ u32 tpa_mode;
+ u16 dep1;
+ u16 mtu;
- u8 vport_id;
- u8 inner_vlan_removal;
+ u8 vport_id;
+ u8 inner_vlan_removal;
- u8 only_untagged;
- u8 max_buffers_per_cqe;
+ u8 only_untagged;
+ u8 max_buffers_per_cqe;
- u8 padding[4];
+ u8 padding[4];
};
/* Extended tlvs - need to add rss, mcast, accept mode tlvs */
struct vfpf_vport_update_activate_tlv {
- struct channel_tlv tl;
- u8 update_rx;
- u8 update_tx;
- u8 active_rx;
- u8 active_tx;
+ struct channel_tlv tl;
+ u8 update_rx;
+ u8 update_tx;
+ u8 active_rx;
+ u8 active_tx;
};
struct vfpf_vport_update_tx_switch_tlv {
- struct channel_tlv tl;
- u8 tx_switching;
- u8 padding[3];
+ struct channel_tlv tl;
+ u8 tx_switching;
+ u8 padding[3];
};
struct vfpf_vport_update_vlan_strip_tlv {
- struct channel_tlv tl;
- u8 remove_vlan;
- u8 padding[3];
+ struct channel_tlv tl;
+ u8 remove_vlan;
+ u8 padding[3];
};
struct vfpf_vport_update_mcast_bin_tlv {
- struct channel_tlv tl;
- u8 padding[4];
+ struct channel_tlv tl;
+ u8 padding[4];
aligned_u64 bins[8];
};
struct vfpf_vport_update_accept_param_tlv {
struct channel_tlv tl;
- u8 update_rx_mode;
- u8 update_tx_mode;
- u8 rx_accept_filter;
- u8 tx_accept_filter;
+ u8 update_rx_mode;
+ u8 update_tx_mode;
+ u8 rx_accept_filter;
+ u8 tx_accept_filter;
};
struct vfpf_vport_update_accept_any_vlan_tlv {
@@ -371,29 +371,29 @@ struct vfpf_vport_update_accept_any_vlan_tlv {
};
struct vfpf_vport_update_sge_tpa_tlv {
- struct channel_tlv tl;
+ struct channel_tlv tl;
- u16 sge_tpa_flags;
-#define VFPF_TPA_IPV4_EN_FLAG (1 << 0)
-#define VFPF_TPA_IPV6_EN_FLAG (1 << 1)
-#define VFPF_TPA_PKT_SPLIT_FLAG (1 << 2)
-#define VFPF_TPA_HDR_DATA_SPLIT_FLAG (1 << 3)
-#define VFPF_TPA_GRO_CONSIST_FLAG (1 << 4)
+ u16 sge_tpa_flags;
+ #define VFPF_TPA_IPV4_EN_FLAG (1 << 0)
+ #define VFPF_TPA_IPV6_EN_FLAG (1 << 1)
+ #define VFPF_TPA_PKT_SPLIT_FLAG (1 << 2)
+ #define VFPF_TPA_HDR_DATA_SPLIT_FLAG (1 << 3)
+ #define VFPF_TPA_GRO_CONSIST_FLAG (1 << 4)
- u8 update_sge_tpa_flags;
-#define VFPF_UPDATE_SGE_DEPRECATED_FLAG (1 << 0)
-#define VFPF_UPDATE_TPA_EN_FLAG (1 << 1)
-#define VFPF_UPDATE_TPA_PARAM_FLAG (1 << 2)
+ u8 update_sge_tpa_flags;
+ #define VFPF_UPDATE_SGE_DEPRECATED_FLAG (1 << 0)
+ #define VFPF_UPDATE_TPA_EN_FLAG (1 << 1)
+ #define VFPF_UPDATE_TPA_PARAM_FLAG (1 << 2)
- u8 max_buffers_per_cqe;
+ u8 max_buffers_per_cqe;
- u16 deprecated_sge_buff_size;
- u16 tpa_max_size;
- u16 tpa_min_size_to_start;
- u16 tpa_min_size_to_cont;
+ u16 deprecated_sge_buff_size;
+ u16 tpa_max_size;
+ u16 tpa_min_size_to_start;
+ u16 tpa_min_size_to_cont;
- u8 tpa_max_aggs_num;
- u8 padding[7];
+ u8 tpa_max_aggs_num;
+ u8 padding[7];
};
@@ -405,15 +405,15 @@ struct vfpf_vport_update_tlv {
};
struct vfpf_ucast_filter_tlv {
- struct vfpf_first_tlv first_tlv;
+ struct vfpf_first_tlv first_tlv;
- u8 opcode;
- u8 type;
+ u8 opcode;
+ u8 type;
- u8 mac[ETH_ALEN];
+ u8 mac[ETH_ALEN];
- u16 vlan;
- u16 padding[3];
+ u16 vlan;
+ u16 padding[3];
};
struct tlv_buffer_size {
@@ -421,26 +421,26 @@ struct tlv_buffer_size {
};
union vfpf_tlvs {
- struct vfpf_first_tlv first_tlv;
- struct vfpf_acquire_tlv acquire;
+ struct vfpf_first_tlv first_tlv;
+ struct vfpf_acquire_tlv acquire;
struct vfpf_init_tlv init;
- struct vfpf_start_rxq_tlv start_rxq;
- struct vfpf_start_txq_tlv start_txq;
- struct vfpf_stop_rxqs_tlv stop_rxqs;
- struct vfpf_stop_txqs_tlv stop_txqs;
- struct vfpf_update_rxq_tlv update_rxq;
- struct vfpf_vport_start_tlv start_vport;
- struct vfpf_vport_update_tlv vport_update;
- struct vfpf_ucast_filter_tlv ucast_filter;
+ struct vfpf_start_rxq_tlv start_rxq;
+ struct vfpf_start_txq_tlv start_txq;
+ struct vfpf_stop_rxqs_tlv stop_rxqs;
+ struct vfpf_stop_txqs_tlv stop_txqs;
+ struct vfpf_update_rxq_tlv update_rxq;
+ struct vfpf_vport_start_tlv start_vport;
+ struct vfpf_vport_update_tlv vport_update;
+ struct vfpf_ucast_filter_tlv ucast_filter;
struct channel_list_end_tlv list_end;
- struct tlv_buffer_size tlv_buf_size;
+ struct tlv_buffer_size tlv_buf_size;
};
union pfvf_tlvs {
- struct pfvf_def_resp_tlv default_resp;
- struct pfvf_acquire_resp_tlv acquire_resp;
+ struct pfvf_def_resp_tlv default_resp;
+ struct pfvf_acquire_resp_tlv acquire_resp;
struct channel_list_end_tlv list_end;
- struct tlv_buffer_size tlv_buf_size;
+ struct tlv_buffer_size tlv_buf_size;
};
/* This is a structure which is allocated in the VF, which the PF may update
@@ -533,7 +533,7 @@ struct ecore_bulletin {
enum {
/*!!!!! Make sure to update STRINGS structure accordingly !!!!!*/
- CHANNEL_TLV_NONE, /* ends tlv sequence */
+ CHANNEL_TLV_NONE, /* ends tlv sequence */
CHANNEL_TLV_ACQUIRE,
CHANNEL_TLV_VPORT_START,
CHANNEL_TLV_VPORT_UPDATE,
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index 046bbb2..71ef615 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -12,43 +12,43 @@
/* ETH FW CONSTANTS */
/********************/
#define ETH_CACHE_LINE_SIZE 64
-#define ETH_RX_CQE_GAP 32
-#define ETH_MAX_RAMROD_PER_CON 8
-#define ETH_TX_BD_PAGE_SIZE_BYTES 4096
-#define ETH_RX_BD_PAGE_SIZE_BYTES 4096
-#define ETH_RX_CQE_PAGE_SIZE_BYTES 4096
-#define ETH_RX_NUM_NEXT_PAGE_BDS 2
-
-#define ETH_TX_MIN_BDS_PER_NON_LSO_PKT 1
-#define ETH_TX_MAX_BDS_PER_NON_LSO_PACKET 18
-#define ETH_TX_MAX_LSO_HDR_NBD 4
-#define ETH_TX_MIN_BDS_PER_LSO_PKT 3
-#define ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT 3
-#define ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT 2
-#define ETH_TX_MIN_BDS_PER_PKT_W_LOOPBACK_MODE 2
+#define ETH_RX_CQE_GAP 32
+#define ETH_MAX_RAMROD_PER_CON 8
+#define ETH_TX_BD_PAGE_SIZE_BYTES 4096
+#define ETH_RX_BD_PAGE_SIZE_BYTES 4096
+#define ETH_RX_CQE_PAGE_SIZE_BYTES 4096
+#define ETH_RX_NUM_NEXT_PAGE_BDS 2
+
+#define ETH_TX_MIN_BDS_PER_NON_LSO_PKT 1
+#define ETH_TX_MAX_BDS_PER_NON_LSO_PACKET 18
+#define ETH_TX_MAX_LSO_HDR_NBD 4
+#define ETH_TX_MIN_BDS_PER_LSO_PKT 3
+#define ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT 3
+#define ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT 2
+#define ETH_TX_MIN_BDS_PER_PKT_W_LOOPBACK_MODE 2
#define ETH_TX_MAX_NON_LSO_PKT_LEN (9700 - (4 + 12 + 8))
#define ETH_TX_MAX_LSO_HDR_BYTES 510
#define ETH_TX_LSO_WINDOW_BDS_NUM 18
#define ETH_TX_LSO_WINDOW_MIN_LEN 9700
#define ETH_TX_MAX_LSO_PAYLOAD_LEN 0xFFFF
-#define ETH_NUM_STATISTIC_COUNTERS MAX_NUM_VPORTS
+#define ETH_NUM_STATISTIC_COUNTERS MAX_NUM_VPORTS
#define ETH_RX_MAX_BUFF_PER_PKT 5
/* num of MAC/VLAN filters */
-#define ETH_NUM_MAC_FILTERS 512
-#define ETH_NUM_VLAN_FILTERS 512
+#define ETH_NUM_MAC_FILTERS 512
+#define ETH_NUM_VLAN_FILTERS 512
/* approx. multicast constants */
-#define ETH_MULTICAST_BIN_FROM_MAC_SEED 0
-#define ETH_MULTICAST_MAC_BINS 256
-#define ETH_MULTICAST_MAC_BINS_IN_REGS (ETH_MULTICAST_MAC_BINS / 32)
+#define ETH_MULTICAST_BIN_FROM_MAC_SEED 0
+#define ETH_MULTICAST_MAC_BINS 256
+#define ETH_MULTICAST_MAC_BINS_IN_REGS (ETH_MULTICAST_MAC_BINS / 32)
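As the constants show, the 256 approximate-multicast bins are packed 32 to a
register (256 / 32 = 8 regs). Once a MAC has been hashed to a bin (the seeded
hash itself is device-defined and out of scope here), setting the bin is a
register/bit split:

    #include <stdint.h>

    #define MCAST_BINS 256
    #define MCAST_BINS_IN_REGS (MCAST_BINS / 32)

    /* Set the bit for bin 0..255: the high bits pick the register, the low
     * five bits pick the position inside it.
     */
    static void set_mcast_bin(uint32_t regs[MCAST_BINS_IN_REGS],
                              unsigned int bin)
    {
        regs[bin / 32] |= UINT32_C(1) << (bin % 32);
    }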
/* ethernet vport update constants */
-#define ETH_FILTER_RULES_COUNT 10
-#define ETH_RSS_IND_TABLE_ENTRIES_NUM 128
-#define ETH_RSS_KEY_SIZE_REGS 10
+#define ETH_FILTER_RULES_COUNT 10
+#define ETH_RSS_IND_TABLE_ENTRIES_NUM 128
+#define ETH_RSS_KEY_SIZE_REGS 10
#define ETH_RSS_ENGINE_NUM_K2 207
#define ETH_RSS_ENGINE_NUM_BB 127
@@ -115,14 +115,14 @@ struct eth_tx_data_1st_bd {
__le16 vlan /* VLAN to insert to packet (if needed). */;
/* Number of BDs in packet. Should be at least 2 in non-LSO
* packet and at least 3 in LSO (or Tunnel with IPv6+ext) packet.
- */
+ */
u8 nbds;
struct eth_tx_1st_bd_flags bd_flags;
__le16 bitfields;
#define ETH_TX_DATA_1ST_BD_TUNN_CFG_OVERRIDE_MASK 0x1
#define ETH_TX_DATA_1ST_BD_TUNN_CFG_OVERRIDE_SHIFT 0
-#define ETH_TX_DATA_1ST_BD_RESERVED0_MASK 0x1
-#define ETH_TX_DATA_1ST_BD_RESERVED0_SHIFT 1
+#define ETH_TX_DATA_1ST_BD_RESERVED0_MASK 0x1
+#define ETH_TX_DATA_1ST_BD_RESERVED0_SHIFT 1
#define ETH_TX_DATA_1ST_BD_FW_USE_ONLY_MASK 0x3FFF
#define ETH_TX_DATA_1ST_BD_FW_USE_ONLY_SHIFT 2
};
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 7192265..6f4b4f8 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -26,7 +26,7 @@
#define MCP_GLOB_PORT_MAX 4 /* Global */
#define MCP_GLOB_FUNC_MAX 16 /* Global */
-typedef u32 offsize_t; /* In DWORDS !!! */
+typedef u32 offsize_t; /* In DWORDS !!! */
/* Offset from the beginning of the MCP scratchpad */
#define OFFSIZE_OFFSET_SHIFT 0
#define OFFSIZE_OFFSET_MASK 0x0000ffff
@@ -35,18 +35,18 @@ typedef u32 offsize_t; /* In DWORDS !!! */
#define OFFSIZE_SIZE_MASK 0xffff0000
/* SECTION_OFFSET is calculating the offset in bytes out of offsize */
-#define SECTION_OFFSET(_offsize) \
-((((_offsize & OFFSIZE_OFFSET_MASK) >> OFFSIZE_OFFSET_SHIFT) << 2))
+#define SECTION_OFFSET(_offsize) \
+ ((((_offsize & OFFSIZE_OFFSET_MASK) >> OFFSIZE_OFFSET_SHIFT) << 2))
/* SECTION_SIZE is calculating the size in bytes out of offsize */
-#define SECTION_SIZE(_offsize) \
-(((_offsize & OFFSIZE_SIZE_MASK) >> OFFSIZE_SIZE_SHIFT) << 2)
+#define SECTION_SIZE(_offsize) \
+ (((_offsize & OFFSIZE_SIZE_MASK) >> OFFSIZE_SIZE_SHIFT) << 2)
-#define SECTION_ADDR(_offsize, idx) \
+#define SECTION_ADDR(_offsize, idx) \
(MCP_REG_SCRATCH + SECTION_OFFSET(_offsize) + (SECTION_SIZE(_offsize) * idx))
#define SECTION_OFFSIZE_ADDR(_pub_base, _section) \
-(_pub_base + offsetof(struct mcp_public_data, sections[_section]))
+ (_pub_base + offsetof(struct mcp_public_data, sections[_section]))
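Since offsize_t packs both fields in dwords, the macros above shift left by 2
to return byte quantities. The same unpacking as plain functions (masks and
shifts copied from the definitions above, names abbreviated for the sketch):

    #include <stdint.h>

    #define OFFSET_MASK  0x0000ffff
    #define OFFSET_SHIFT 0
    #define SIZE_MASK    0xffff0000
    #define SIZE_SHIFT   16

    /* Both fields count dwords; << 2 converts to bytes. */
    static uint32_t section_offset_bytes(uint32_t offsize)
    {
        return ((offsize & OFFSET_MASK) >> OFFSET_SHIFT) << 2;
    }

    static uint32_t section_size_bytes(uint32_t offsize)
    {
        return ((offsize & SIZE_MASK) >> SIZE_SHIFT) << 2;
    }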
/* PHY configuration */
struct pmm_phy_cfg {
@@ -54,13 +54,13 @@ struct pmm_phy_cfg {
#define PMM_SPEED_AUTONEG 0
#define PMM_SPEED_SMARTLINQ 0x8
- u32 pause; /* bitmask */
+ u32 pause; /* bitmask */
#define PMM_PAUSE_NONE 0x0
#define PMM_PAUSE_AUTONEG 0x1
#define PMM_PAUSE_RX 0x2
#define PMM_PAUSE_TX 0x4
- u32 adv_speed; /* Default should be the speed_cap_mask */
+ u32 adv_speed; /* Default should be the speed_cap_mask */
u32 loopback_mode;
#define PMM_LOOPBACK_NONE 0
#define PMM_LOOPBACK_INT_PHY 1
@@ -76,7 +76,7 @@ struct pmm_phy_cfg {
};
struct port_mf_cfg {
- u32 dynamic_cfg; /* device control channel */
+ u32 dynamic_cfg; /* device control channel */
#define PORT_MF_CFG_OV_TAG_MASK 0x0000ffff
#define PORT_MF_CFG_OV_TAG_SHIFT 0
#define PORT_MF_CFG_OV_TAG_DEFAULT PORT_MF_CFG_OV_TAG_MASK
@@ -88,51 +88,51 @@ struct port_mf_cfg {
* MUST be synced with struct pmm_stats_map
*/
struct pmm_stats {
- u64 r64; /* 0x00 (Offset 0x00 ) RX 64-byte frame counter */
- u64 r127; /* 0x01 (Offset 0x08 ) RX 65 to 127 byte frame counter */
- u64 r255; /* 0x02 (Offset 0x10 ) RX 128 to 255 byte frame counter */
- u64 r511; /* 0x03 (Offset 0x18 ) RX 256 to 511 byte frame counter */
- u64 r1023; /* 0x04 (Offset 0x20 ) RX 512 to 1023 byte frame counter */
+ u64 r64; /* 0x00 (Offset 0x00 ) RX 64-byte frame counter*/
+ u64 r127; /* 0x01 (Offset 0x08 ) RX 65 to 127 byte frame counter*/
+ u64 r255; /* 0x02 (Offset 0x10 ) RX 128 to 255 byte frame counter*/
+ u64 r511; /* 0x03 (Offset 0x18 ) RX 256 to 511 byte frame counter*/
+ u64 r1023; /* 0x04 (Offset 0x20 ) RX 512 to 1023 byte frame counter*/
u64 r1518; /* 0x05 (Offset 0x28 ) RX 1024 to 1518 byte frame counter */
u64 r1522; /* 0x06 (Offset 0x30 ) RX 1519 to 1522 byte VLAN-tagged */
- u64 r2047; /* 0x07 (Offset 0x38 ) RX 1519 to 2047 byte frame counter */
- u64 r4095; /* 0x08 (Offset 0x40 ) RX 2048 to 4095 byte frame counter */
- u64 r9216; /* 0x09 (Offset 0x48 ) RX 4096 to 9216 byte frame counter */
+ u64 r2047; /* 0x07 (Offset 0x38 ) RX 1519 to 2047 byte frame counter*/
+ u64 r4095; /* 0x08 (Offset 0x40 ) RX 2048 to 4095 byte frame counter*/
+ u64 r9216; /* 0x09 (Offset 0x48 ) RX 4096 to 9216 byte frame counter*/
u64 r16383; /* 0x0A (Offset 0x50 ) RX 9217 to 16383 byte frame ctr */
- u64 rfcs; /* 0x0F (Offset 0x58 ) RX FCS error frame counter */
- u64 rxcf; /* 0x10 (Offset 0x60 ) RX control frame counter */
- u64 rxpf; /* 0x11 (Offset 0x68 ) RX pause frame counter */
- u64 rxpp; /* 0x12 (Offset 0x70 ) RX PFC frame counter */
- u64 raln; /* 0x16 (Offset 0x78 ) RX alignment error counter */
- u64 rfcr; /* 0x19 (Offset 0x80 ) RX false carrier counter */
- u64 rovr; /* 0x1A (Offset 0x88 ) RX oversized frame counter */
- u64 rjbr; /* 0x1B (Offset 0x90 ) RX jabber frame counter */
- u64 rund; /* 0x34 (Offset 0x98 ) RX undersized frame counter */
- u64 rfrg; /* 0x35 (Offset 0xa0 ) RX fragment counter */
- u64 t64; /* 0x40 (Offset 0xa8 ) TX 64-byte frame counter */
+ u64 rfcs; /* 0x0F (Offset 0x58 ) RX FCS error frame counter*/
+ u64 rxcf; /* 0x10 (Offset 0x60 ) RX control frame counter*/
+ u64 rxpf; /* 0x11 (Offset 0x68 ) RX pause frame counter*/
+ u64 rxpp; /* 0x12 (Offset 0x70 ) RX PFC frame counter*/
+ u64 raln; /* 0x16 (Offset 0x78 ) RX alignment error counter*/
+ u64 rfcr; /* 0x19 (Offset 0x80 ) RX false carrier counter */
+ u64 rovr; /* 0x1A (Offset 0x88 ) RX oversized frame counter*/
+ u64 rjbr; /* 0x1B (Offset 0x90 ) RX jabber frame counter */
+ u64 rund; /* 0x34 (Offset 0x98 ) RX undersized frame counter */
+ u64 rfrg; /* 0x35 (Offset 0xa0 ) RX fragment counter */
+ u64 t64; /* 0x40 (Offset 0xa8 ) TX 64-byte frame counter */
u64 t127; /* 0x41 (Offset 0xb0 ) TX 65 to 127 byte frame counter */
- u64 t255; /* 0x42 (Offset 0xb8 ) TX 128 to 255 byte frame counter */
- u64 t511; /* 0x43 (Offset 0xc0 ) TX 256 to 511 byte frame counter */
- u64 t1023; /* 0x44 (Offset 0xc8 ) TX 512 to 1023 byte frame counter */
+ u64 t255; /* 0x42 (Offset 0xb8 ) TX 128 to 255 byte frame counter*/
+ u64 t511; /* 0x43 (Offset 0xc0 ) TX 256 to 511 byte frame counter*/
+ u64 t1023; /* 0x44 (Offset 0xc8 ) TX 512 to 1023 byte frame counter*/
u64 t1518; /* 0x45 (Offset 0xd0 ) TX 1024 to 1518 byte frame counter */
u64 t2047; /* 0x47 (Offset 0xd8 ) TX 1519 to 2047 byte frame counter */
u64 t4095; /* 0x48 (Offset 0xe0 ) TX 2048 to 4095 byte frame counter */
u64 t9216; /* 0x49 (Offset 0xe8 ) TX 4096 to 9216 byte frame counter */
u64 t16383; /* 0x4A (Offset 0xf0 ) TX 9217 to 16383 byte frame ctr */
- u64 txpf; /* 0x50 (Offset 0xf8 ) TX pause frame counter */
- u64 txpp; /* 0x51 (Offset 0x100) TX PFC frame counter */
+ u64 txpf; /* 0x50 (Offset 0xf8 ) TX pause frame counter */
+ u64 txpp; /* 0x51 (Offset 0x100) TX PFC frame counter */
u64 tlpiec; /* 0x6C (Offset 0x108) Transmit Logical Type LLFC */
u64 tncl; /* 0x6E (Offset 0x110) Transmit Total Collision Counter */
- u64 rbyte; /* 0x3d (Offset 0x118) RX byte counter */
- u64 rxuca; /* 0x0c (Offset 0x120) RX UC frame counter */
- u64 rxmca; /* 0x0d (Offset 0x128) RX MC frame counter */
- u64 rxbca; /* 0x0e (Offset 0x130) RX BC frame counter */
+ u64 rbyte; /* 0x3d (Offset 0x118) RX byte counter */
+ u64 rxuca; /* 0x0c (Offset 0x120) RX UC frame counter */
+ u64 rxmca; /* 0x0d (Offset 0x128) RX MC frame counter */
+ u64 rxbca; /* 0x0e (Offset 0x130) RX BC frame counter */
u64 rxpok; /* 0x22 (Offset 0x138) RX good frame */
- u64 tbyte; /* 0x6f (Offset 0x140) TX byte counter */
- u64 txuca; /* 0x4d (Offset 0x148) TX UC frame counter */
- u64 txmca; /* 0x4e (Offset 0x150) TX MC frame counter */
- u64 txbca; /* 0x4f (Offset 0x158) TX BC frame counter */
- u64 txcf; /* 0x54 (Offset 0x160) TX control frame counter */
+ u64 tbyte; /* 0x6f (Offset 0x140) TX byte counter */
+ u64 txuca; /* 0x4d (Offset 0x148) TX UC frame counter */
+ u64 txmca; /* 0x4e (Offset 0x150) TX MC frame counter */
+ u64 txbca; /* 0x4f (Offset 0x158) TX BC frame counter */
+ u64 txcf; /* 0x54 (Offset 0x160) TX control frame counter */
};
struct brb_stats {
@@ -151,18 +151,18 @@ struct port_stats {
* | ports | | | | |
*======+==================+=========+=========+========+======================
* BB | 1x100G | This is a special mode, where there are 2 HW func
- * BB | 2x10/20Gbps | 0,1 | NA | No | 1 | 1
- * BB | 2x40 Gbps | 0,1 | NA | Yes | 1 | 1
- * BB | 2x50Gbps | 0,1 | NA | No | 1 | 1
+ * BB | 2x10/20Gbps| 0,1 | NA | No | 1 | 1
+ * BB | 2x40 Gbps | 0,1 | NA | Yes | 1 | 1
+ * BB | 2x50Gbps | 0,1 | NA | No | 1 | 1
* BB | 4x10Gbps | 0,2 | 1,3 | No | 1/2 | 1,2
* BB | 4x10Gbps | 0,1 | 2,3 | No | 1/2 | 1,2
* BB | 4x10Gbps | 0,3 | 1,2 | No | 1/2 | 1,2
- * BB | 4x10Gbps | 0,1,2,3 | NA | No | 1 | 1
- * AH | 2x10/20Gbps | 0,1 | NA | NA | 1 | NA
- * AH | 4x10Gbps | 0,1 | 2,3 | NA | 2 | NA
- * AH | 4x10Gbps | 0,2 | 1,3 | NA | 2 | NA
- * AH | 4x10Gbps | 0,3 | 1,2 | NA | 2 | NA
- * AH | 4x10Gbps | 0,1,2,3 | NA | NA | 1 | NA
+ * BB | 4x10Gbps | 0,1,2,3 | NA | No | 1 | 1
+ * AH | 2x10/20Gbps| 0,1 | NA | NA | 1 | NA
+ * AH | 4x10Gbps | 0,1 | 2,3 | NA | 2 | NA
+ * AH | 4x10Gbps | 0,2 | 1,3 | NA | 2 | NA
+ * AH | 4x10Gbps | 0,3 | 1,2 | NA | 2 | NA
+ * AH | 4x10Gbps | 0,1,2,3 | NA | NA | 1 | NA
*======+==================+=========+=========+========+=======================
*/
@@ -216,13 +216,13 @@ struct lldp_config_params_s {
u32 local_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
/* Holds local Port ID TLV header, subtype and 9B of payload.
* If first byte is 0, then we will use default port ID
- */
+ */
u32 local_port_id[LLDP_PORT_ID_STAT_LEN];
};
struct lldp_status_params_s {
u32 prefix_seq_num;
- u32 status; /* TBD */
+ u32 status; /* TBD */
/* Holds remote Chassis ID TLV header, subtype and 9B of payload.
*/
u32 local_port_id[LLDP_PORT_ID_STAT_LEN];
@@ -245,11 +245,11 @@ struct dcbx_ets_feature {
#define DCBX_ETS_CBS_SHIFT 3
#define DCBX_ETS_MAX_TCS_MASK 0x000000f0
#define DCBX_ETS_MAX_TCS_SHIFT 4
- u32 pri_tc_tbl[1];
+ u32 pri_tc_tbl[1];
#define DCBX_CEE_STRICT_PRIORITY 0xf
#define DCBX_CEE_STRICT_PRIORITY_TC 0x7
- u32 tc_bw_tbl[2];
- u32 tc_tsa_tbl[2];
+ u32 tc_bw_tbl[2];
+ u32 tc_tsa_tbl[2];
#define DCBX_ETS_TSA_STRICT 0
#define DCBX_ETS_TSA_CBS 1
#define DCBX_ETS_TSA_ETS 2
@@ -287,12 +287,12 @@ struct dcbx_app_priority_feature {
/* Not in use
* #define DCBX_APP_DEFAULT_PRI_MASK 0x00000f00
* #define DCBX_APP_DEFAULT_PRI_SHIFT 8
- */
+ */
#define DCBX_APP_MAX_TCS_MASK 0x0000f000
#define DCBX_APP_MAX_TCS_SHIFT 12
#define DCBX_APP_NUM_ENTRIES_MASK 0x00ff0000
#define DCBX_APP_NUM_ENTRIES_SHIFT 16
- struct dcbx_app_priority_entry app_pri_tbl[DCBX_MAX_APP_PROTOCOL];
+ struct dcbx_app_priority_entry app_pri_tbl[DCBX_MAX_APP_PROTOCOL];
};
/* FW structure in BE */
@@ -350,7 +350,7 @@ struct dcbx_mib {
* #define DCBX_CONFIG_VERSION_DISABLED 0
* #define DCBX_CONFIG_VERSION_IEEE 1
* #define DCBX_CONFIG_VERSION_CEE 2
- */
+ */
struct dcbx_features features;
u32 suffix_seq_num;
};
@@ -367,9 +367,9 @@ struct lldp_system_tlvs_buffer_s {
/* */
/**************************************/
struct public_global {
- u32 max_path; /* 32bit is wasty, but this will be used often */
+ u32 max_path; /* 32bit is wasty, but this will be used often */
u32 max_ports; /* (Global) 32bit is wasty, this will be used often */
-#define MODE_1P 1 /* TBD - NEED TO THINK OF A BETTER NAME */
+#define MODE_1P 1 /* TBD - NEED TO THINK OF A BETTER NAME */
#define MODE_2P 2
#define MODE_3P 3
#define MODE_4P 4
@@ -406,7 +406,7 @@ struct public_global {
struct fw_flr_mb {
u32 aggint;
u32 opgen_addr;
- u32 accum_ack; /* 0..15:PF, 16..207:VF, 256..271:IOV_DIS */
+ u32 accum_ack; /* 0..15:PF, 16..207:VF, 256..271:IOV_DIS */
#define ACCUM_ACK_PF_BASE 0
#define ACCUM_ACK_PF_SHIFT 0
@@ -424,10 +424,10 @@ struct public_path {
* mcp_vf_disabled is set by the MCP to indicate the driver about VFs
* which were disabled/flred
*/
- u32 mcp_vf_disabled[VF_MAX_STATIC / 32]; /* 0x003c */
+ u32 mcp_vf_disabled[VF_MAX_STATIC / 32]; /* 0x003c */
u32 process_kill;
- /* Reset on mcp reset, and incremented for eveny process kill event. */
+/* Reset on mcp reset, and incremented for every process kill event. */
#define PROCESS_KILL_COUNTER_MASK 0x0000ffff
#define PROCESS_KILL_COUNTER_SHIFT 0
#define PROCESS_KILL_GLOB_AEU_BIT_MASK 0xffff0000
@@ -464,7 +464,7 @@ struct dci_fc_npiv_tbl {
****************************************************************************/
struct public_port {
- u32 validity_map; /* 0x0 (4*2 = 0x8) */
+ u32 validity_map; /* 0x0 (4*2 = 0x8) */
/* validity bits */
#define MCP_VALIDITY_PCI_CFG 0x00100000
@@ -485,7 +485,7 @@ struct public_port {
#define MCP_VALIDITY_ACTIVE_MFW_NONE 0x000001c0
u32 link_status;
-#define LINK_STATUS_LINK_UP 0x00000001
+#define LINK_STATUS_LINK_UP 0x00000001
#define LINK_STATUS_SPEED_AND_DUPLEX_MASK 0x0000001e
#define LINK_STATUS_SPEED_AND_DUPLEX_1000THD (1 << 1)
#define LINK_STATUS_SPEED_AND_DUPLEX_1000TFD (2 << 1)
@@ -501,7 +501,7 @@ struct public_port {
#define LINK_STATUS_AUTO_NEGOTIATE_COMPLETE 0x00000040
#define LINK_STATUS_PARALLEL_DETECTION_USED 0x00000080
-#define LINK_STATUS_PFC_ENABLED 0x00000100
+#define LINK_STATUS_PFC_ENABLED 0x00000100
#define LINK_STATUS_LINK_PARTNER_1000TFD_CAPABLE 0x00000200
#define LINK_STATUS_LINK_PARTNER_1000THD_CAPABLE 0x00000400
#define LINK_STATUS_LINK_PARTNER_10G_CAPABLE 0x00000800
@@ -537,15 +537,15 @@ struct public_port {
struct port_stats stats;
u32 media_type;
-#define MEDIA_UNSPECIFIED 0x0
+#define MEDIA_UNSPECIFIED 0x0
#define MEDIA_SFPP_10G_FIBER 0x1
#define MEDIA_XFP_FIBER 0x2
-#define MEDIA_DA_TWINAX 0x3
-#define MEDIA_BASE_T 0x4
+#define MEDIA_DA_TWINAX 0x3
+#define MEDIA_BASE_T 0x4
#define MEDIA_SFP_1G_FIBER 0x5
-#define MEDIA_MODULE_FIBER 0x6
-#define MEDIA_KR 0xf0
-#define MEDIA_NOT_PRESENT 0xff
+#define MEDIA_MODULE_FIBER 0x6
+#define MEDIA_KR 0xf0
+#define MEDIA_NOT_PRESENT 0xff
u32 lfa_status;
#define LFA_LINK_FLAP_REASON_OFFSET 0
@@ -574,7 +574,7 @@ struct public_port {
struct dcbx_mib remote_dcbx_mib;
struct dcbx_mib operational_dcbx_mib;
- /* FC_NPIV table offset & size in NVRAM value of 0 means not present */
+/* FC_NPIV table offset & size in NVRAM value of 0 means not present */
u32 fc_npiv_nvram_tbl_addr;
u32 fc_npiv_nvram_tbl_size;
u32 transceiver_data;
@@ -641,7 +641,7 @@ struct public_func {
/* MTU size per function is needed for the OV feature */
u32 mtu_size;
- /* 9 entires for the C2S PCP map for each inner VLAN PCP + 1 default */
+/* 9 entries for the C2S PCP map for each inner VLAN PCP + 1 default */
/* For PCP values 0-3 use the map lower */
/* 0xFF000000 - PCP 0, 0x00FF0000 - PCP 1,
* 0x0000FF00 - PCP 2, 0x000000FF PCP 3
@@ -650,7 +650,7 @@ struct public_func {
/* For PCP values 4-7 use the map upper */
/* 0xFF000000 - PCP 4, 0x00FF0000 - PCP 5,
* 0x0000FF00 - PCP 6, 0x000000FF PCP 7
- */
+ */
u32 c2s_pcp_map_upper;
/* For PCP default value get the MSB byte of the map default */
@@ -683,7 +683,7 @@ struct public_func {
u32 status;
#define FUNC_STATUS_VLINK_DOWN 0x00000001
- u32 mac_upper; /* MAC */
+ u32 mac_upper; /* MAC */
#define FUNC_MF_CFG_UPPERMAC_MASK 0x0000ffff
#define FUNC_MF_CFG_UPPERMAC_SHIFT 0
#define FUNC_MF_CFG_UPPERMAC_DEFAULT FUNC_MF_CFG_UPPERMAC_MASK
@@ -692,14 +692,14 @@ struct public_func {
u32 dpdk_rsvd2[4];
- u32 ovlan_stag; /* tags */
+ u32 ovlan_stag; /* tags */
#define FUNC_MF_CFG_OV_STAG_MASK 0x0000ffff
#define FUNC_MF_CFG_OV_STAG_SHIFT 0
#define FUNC_MF_CFG_OV_STAG_DEFAULT FUNC_MF_CFG_OV_STAG_MASK
- u32 pf_allocation; /* vf per pf */
+ u32 pf_allocation; /* vf per pf */
- u32 preserve_data; /* Will be used bt CCM */
+ u32 preserve_data; /* Will be used by CCM */
u32 driver_last_activity_ts;
@@ -707,7 +707,7 @@ struct public_func {
* drv_ack_vf_disabled is set by the PF driver to ack handled disabled
* VFs
*/
- u32 drv_ack_vf_disabled[VF_MAX_STATIC / 32]; /* 0x0044 */
+ u32 drv_ack_vf_disabled[VF_MAX_STATIC / 32]; /* 0x0044 */
u32 drv_id;
#define DRV_ID_PDA_COMP_VER_MASK 0x0000ffff
@@ -747,7 +747,7 @@ struct public_func {
*/
struct mcp_mac {
- u32 mac_upper; /* Upper 16 bits are always zeroes */
+ u32 mac_upper; /* Upper 16 bits are always zeroes */
u32 mac_lower;
};
@@ -784,12 +784,12 @@ struct ocbb_data_stc {
};
union drv_union_data {
- u32 ver_str[MCP_DRV_VER_STR_SIZE_DWORD]; /* LOAD_REQ */
- struct mcp_mac wol_mac; /* UNLOAD_DONE */
+ u32 ver_str[MCP_DRV_VER_STR_SIZE_DWORD]; /* LOAD_REQ */
+ struct mcp_mac wol_mac; /* UNLOAD_DONE */
struct pmm_phy_cfg drv_phy_cfg;
- struct mcp_val64 val64; /* For PHY / AVS commands */
+ struct mcp_val64 val64; /* For PHY / AVS commands */
u8 raw_data[MCP_DRV_NVM_BUF_LEN];
@@ -822,7 +822,7 @@ struct public_drv_mb {
/* Vitaly: LLDP commands */
#define DRV_MSG_CODE_SET_LLDP 0x24000000
#define DRV_MSG_CODE_SET_DCBX 0x25000000
- /* OneView feature driver HSI */
+ /* OneView feature driver HSI*/
#define DRV_MSG_CODE_OV_UPDATE_CURR_CFG 0x26000000
#define DRV_MSG_CODE_OV_UPDATE_BUS_NUM 0x27000000
#define DRV_MSG_CODE_OV_UPDATE_BOOT_PROGRESS 0x28000000
@@ -893,7 +893,7 @@ struct public_drv_mb {
#define DRV_MB_PARAM_INIT_PHY_FORCE 0x00000001
#define DRV_MB_PARAM_INIT_PHY_DONT_CARE 0x00000002
- /* LLDP / DCBX params */
+ /* LLDP / DCBX params*/
#define DRV_MB_PARAM_LLDP_SEND_MASK 0x00000001
#define DRV_MB_PARAM_LLDP_SEND_SHIFT 0
#define DRV_MB_PARAM_LLDP_AGENT_MASK 0x00000006
@@ -925,7 +925,7 @@ struct public_drv_mb {
#define DRV_MB_PARAM_PHYMOD_LANE_MASK 0x000000FF
#define DRV_MB_PARAM_PHYMOD_SIZE_SHIFT 8
#define DRV_MB_PARAM_PHYMOD_SIZE_MASK 0x000FFF00
- /* configure vf MSIX params */
+ /* configure vf MSIX params*/
#define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_SHIFT 0
#define DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK 0x000000FF
#define DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_SHIFT 8
@@ -943,16 +943,16 @@ struct public_drv_mb {
#define DRV_MB_PARAM_OV_CURR_CFG_DCI 6
#define DRV_MB_PARAM_OV_CURR_CFG_HII 7
-#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_SHIFT 0
+#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_SHIFT 0
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_MASK 0x000000FF
-#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_NONE (1 << 0)
+#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_NONE (1 << 0)
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_TRARGET_FOUND (1 << 2)
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_LOGGED_INTO_TGT (1 << 4)
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_IMG_DOWNLOADED (1 << 5)
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_OS_HANDOFF (1 << 6)
#define DRV_MB_PARAM_OV_UPDATE_BOOT_COMPLETED 0
-#define DRV_MB_PARAM_OV_PCI_BUS_NUM_SHIFT 0
+#define DRV_MB_PARAM_OV_PCI_BUS_NUM_SHIFT 0
#define DRV_MB_PARAM_OV_PCI_BUS_NUM_MASK 0x000000FF
#define DRV_MB_PARAM_OV_STORM_FW_VER_SHIFT 0
@@ -1063,7 +1063,7 @@ struct public_drv_mb {
#define FW_MSG_CODE_TRANSCEIVER_DIAG_OK 0x00160000
#define FW_MSG_CODE_TRANSCEIVER_DIAG_ERROR 0x00170000
#define FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT 0x00020000
-#define FW_MSG_CODE_TRANSCEIVER_BAD_BUFFER_SIZE 0x000f0000
+#define FW_MSG_CODE_TRANSCEIVER_BAD_BUFFER_SIZE 0x000f0000
#define FW_MSG_CODE_GPIO_OK 0x00160000
#define FW_MSG_CODE_GPIO_DIRECTION_ERR 0x00170000
#define FW_MSG_CODE_GPIO_CTRL_ERR 0x00020000
@@ -1152,7 +1152,7 @@ enum MFW_DRV_MSG_TYPE {
((u8)((u8 *)(MFW_MB_P(shmem_func)->msg))[msg_id]++;)
struct public_mfw_mb {
- u32 sup_msgs; /* Assigend with MFW_DRV_MSG_MAX */
+ u32 sup_msgs; /* Assigned with MFW_DRV_MSG_MAX */
u32 msg[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)];
u32 ack[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)];
};
@@ -1163,8 +1163,8 @@ struct public_mfw_mb {
/* */
/**************************************/
enum public_sections {
- PUBLIC_DRV_MB, /* Points to the first drv_mb of path0 */
- PUBLIC_MFW_MB, /* Points to the first mfw_mb of path0 */
+ PUBLIC_DRV_MB, /* Points to the first drv_mb of path0 */
+ PUBLIC_MFW_MB, /* Points to the first mfw_mb of path0 */
PUBLIC_GLOBAL,
PUBLIC_PATH,
PUBLIC_PORT,
@@ -1202,4 +1202,4 @@ struct mcp_public_data {
#define MAX_I2C_TRANSACTION_SIZE 16
#define MAX_I2C_TRANSCEIVER_PAGE_SIZE 256
-#endif /* MCP_PUBLIC_H */
+#endif /* MCP_PUBLIC_H */
diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 7f1a60d..8d99880 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -22,8 +22,8 @@
struct nvm_cfg_mac_address {
u32 mac_addr_hi;
-#define NVM_CFG_MAC_ADDRESS_HI_MASK 0x0000FFFF
-#define NVM_CFG_MAC_ADDRESS_HI_OFFSET 0
+ #define NVM_CFG_MAC_ADDRESS_HI_MASK 0x0000FFFF
+ #define NVM_CFG_MAC_ADDRESS_HI_OFFSET 0
u32 mac_addr_lo;
};
@@ -31,107 +31,107 @@ struct nvm_cfg_mac_address {
* nvm_cfg1 structs
******************************************/
struct nvm_cfg1_glob {
- u32 generic_cont0; /* 0x0 */
-#define NVM_CFG1_GLOB_BOARD_SWAP_MASK 0x0000000F
-#define NVM_CFG1_GLOB_BOARD_SWAP_OFFSET 0
-#define NVM_CFG1_GLOB_BOARD_SWAP_NONE 0x0
-#define NVM_CFG1_GLOB_BOARD_SWAP_PATH 0x1
-#define NVM_CFG1_GLOB_BOARD_SWAP_PORT 0x2
-#define NVM_CFG1_GLOB_BOARD_SWAP_BOTH 0x3
-#define NVM_CFG1_GLOB_MF_MODE_MASK 0x00000FF0
-#define NVM_CFG1_GLOB_MF_MODE_OFFSET 4
-#define NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED 0x0
-#define NVM_CFG1_GLOB_MF_MODE_DEFAULT 0x1
-#define NVM_CFG1_GLOB_MF_MODE_SPIO4 0x2
-#define NVM_CFG1_GLOB_MF_MODE_NPAR1_0 0x3
-#define NVM_CFG1_GLOB_MF_MODE_NPAR1_5 0x4
-#define NVM_CFG1_GLOB_MF_MODE_NPAR2_0 0x5
-#define NVM_CFG1_GLOB_MF_MODE_BD 0x6
-#define NVM_CFG1_GLOB_MF_MODE_UFP 0x7
-#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_MASK 0x00001000
-#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_OFFSET 12
-#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_DISABLED 0x0
-#define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_ENABLED 0x1
-#define NVM_CFG1_GLOB_AVS_MARGIN_LOW_MASK 0x001FE000
-#define NVM_CFG1_GLOB_AVS_MARGIN_LOW_OFFSET 13
-#define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_MASK 0x1FE00000
-#define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_OFFSET 21
-#define NVM_CFG1_GLOB_ENABLE_SRIOV_MASK 0x20000000
-#define NVM_CFG1_GLOB_ENABLE_SRIOV_OFFSET 29
-#define NVM_CFG1_GLOB_ENABLE_SRIOV_DISABLED 0x0
-#define NVM_CFG1_GLOB_ENABLE_SRIOV_ENABLED 0x1
-#define NVM_CFG1_GLOB_ENABLE_ATC_MASK 0x40000000
-#define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET 30
-#define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED 0x0
-#define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED 0x1
-#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_MASK 0x80000000
-#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_OFFSET 31
-#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_DISABLED 0x0
-#define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_ENABLED 0x1
- u32 engineering_change[3]; /* 0x4 */
- u32 manufacturing_id; /* 0x10 */
- u32 serial_number[4]; /* 0x14 */
- u32 pcie_cfg; /* 0x24 */
-#define NVM_CFG1_GLOB_PCI_GEN_MASK 0x00000003
-#define NVM_CFG1_GLOB_PCI_GEN_OFFSET 0
-#define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN1 0x0
-#define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN2 0x1
-#define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN3 0x2
-#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_MASK 0x00000004
-#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_OFFSET 2
-#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_DISABLED 0x0
-#define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_ENABLED 0x1
-#define NVM_CFG1_GLOB_ASPM_SUPPORT_MASK 0x00000018
-#define NVM_CFG1_GLOB_ASPM_SUPPORT_OFFSET 3
-#define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_ENABLED 0x0
-#define NVM_CFG1_GLOB_ASPM_SUPPORT_L1_DISABLED 0x2
+ u32 generic_cont0; /* 0x0 */
+ #define NVM_CFG1_GLOB_BOARD_SWAP_MASK 0x0000000F
+ #define NVM_CFG1_GLOB_BOARD_SWAP_OFFSET 0
+ #define NVM_CFG1_GLOB_BOARD_SWAP_NONE 0x0
+ #define NVM_CFG1_GLOB_BOARD_SWAP_PATH 0x1
+ #define NVM_CFG1_GLOB_BOARD_SWAP_PORT 0x2
+ #define NVM_CFG1_GLOB_BOARD_SWAP_BOTH 0x3
+ #define NVM_CFG1_GLOB_MF_MODE_MASK 0x00000FF0
+ #define NVM_CFG1_GLOB_MF_MODE_OFFSET 4
+ #define NVM_CFG1_GLOB_MF_MODE_MF_ALLOWED 0x0
+ #define NVM_CFG1_GLOB_MF_MODE_DEFAULT 0x1
+ #define NVM_CFG1_GLOB_MF_MODE_SPIO4 0x2
+ #define NVM_CFG1_GLOB_MF_MODE_NPAR1_0 0x3
+ #define NVM_CFG1_GLOB_MF_MODE_NPAR1_5 0x4
+ #define NVM_CFG1_GLOB_MF_MODE_NPAR2_0 0x5
+ #define NVM_CFG1_GLOB_MF_MODE_BD 0x6
+ #define NVM_CFG1_GLOB_MF_MODE_UFP 0x7
+ #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_MASK 0x00001000
+ #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_OFFSET 12
+ #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_DISABLED 0x0
+ #define NVM_CFG1_GLOB_FAN_FAILURE_ENFORCEMENT_ENABLED 0x1
+ #define NVM_CFG1_GLOB_AVS_MARGIN_LOW_MASK 0x001FE000
+ #define NVM_CFG1_GLOB_AVS_MARGIN_LOW_OFFSET 13
+ #define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_MASK 0x1FE00000
+ #define NVM_CFG1_GLOB_AVS_MARGIN_HIGH_OFFSET 21
+ #define NVM_CFG1_GLOB_ENABLE_SRIOV_MASK 0x20000000
+ #define NVM_CFG1_GLOB_ENABLE_SRIOV_OFFSET 29
+ #define NVM_CFG1_GLOB_ENABLE_SRIOV_DISABLED 0x0
+ #define NVM_CFG1_GLOB_ENABLE_SRIOV_ENABLED 0x1
+ #define NVM_CFG1_GLOB_ENABLE_ATC_MASK 0x40000000
+ #define NVM_CFG1_GLOB_ENABLE_ATC_OFFSET 30
+ #define NVM_CFG1_GLOB_ENABLE_ATC_DISABLED 0x0
+ #define NVM_CFG1_GLOB_ENABLE_ATC_ENABLED 0x1
+ #define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_MASK 0x80000000
+ #define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_OFFSET 31
+ #define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_DISABLED 0x0
+ #define NVM_CFG1_GLOB_CLOCK_SLOWDOWN_ENABLED 0x1
+ u32 engineering_change[3]; /* 0x4 */
+ u32 manufacturing_id; /* 0x10 */
+ u32 serial_number[4]; /* 0x14 */
+ u32 pcie_cfg; /* 0x24 */
+ #define NVM_CFG1_GLOB_PCI_GEN_MASK 0x00000003
+ #define NVM_CFG1_GLOB_PCI_GEN_OFFSET 0
+ #define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN1 0x0
+ #define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN2 0x1
+ #define NVM_CFG1_GLOB_PCI_GEN_PCI_GEN3 0x2
+ #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_MASK 0x00000004
+ #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_OFFSET 2
+ #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_DISABLED 0x0
+ #define NVM_CFG1_GLOB_BEACON_WOL_ENABLED_ENABLED 0x1
+ #define NVM_CFG1_GLOB_ASPM_SUPPORT_MASK 0x00000018
+ #define NVM_CFG1_GLOB_ASPM_SUPPORT_OFFSET 3
+ #define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_ENABLED 0x0
+ #define NVM_CFG1_GLOB_ASPM_SUPPORT_L1_DISABLED 0x2
#define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_MASK 0x00000020
-#define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_OFFSET 5
-#define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_MASK 0x000003C0
-#define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_OFFSET 6
-#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_MASK 0x00001C00
-#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_OFFSET 10
-#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_HW 0x0
-#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_0DB 0x1
-#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_3_5DB 0x2
-#define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_6_0DB 0x3
-#define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_MASK 0x001FE000
-#define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_OFFSET 13
-#define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_MASK 0x1FE00000
-#define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_OFFSET 21
-#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_MASK 0x60000000
-#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_OFFSET 29
+ #define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_OFFSET 5
+ #define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_MASK 0x000003C0
+ #define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_OFFSET 6
+ #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_MASK 0x00001C00
+ #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_OFFSET 10
+ #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_HW 0x0
+ #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_0DB 0x1
+ #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_3_5DB 0x2
+ #define NVM_CFG1_GLOB_PCIE_PREEMPHASIS_6_0DB 0x3
+ #define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_MASK 0x001FE000
+ #define NVM_CFG1_GLOB_WWN_NODE_PREFIX0_OFFSET 13
+ #define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_MASK 0x1FE00000
+ #define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_OFFSET 21
+ #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_MASK 0x60000000
+ #define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_OFFSET 29
/* Set the duration, in seconds, fan failure signal should be
* sampled
*/
#define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_MASK 0x80000000
-#define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_OFFSET 31
- u32 mgmt_traffic; /* 0x28 */
-#define NVM_CFG1_GLOB_RESERVED60_MASK 0x00000001
-#define NVM_CFG1_GLOB_RESERVED60_OFFSET 0
-#define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_MASK 0x000001FE
-#define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_OFFSET 1
-#define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_MASK 0x0001FE00
-#define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_OFFSET 9
-#define NVM_CFG1_GLOB_SMBUS_ADDRESS_MASK 0x01FE0000
-#define NVM_CFG1_GLOB_SMBUS_ADDRESS_OFFSET 17
-#define NVM_CFG1_GLOB_SIDEBAND_MODE_MASK 0x06000000
-#define NVM_CFG1_GLOB_SIDEBAND_MODE_OFFSET 25
-#define NVM_CFG1_GLOB_SIDEBAND_MODE_DISABLED 0x0
-#define NVM_CFG1_GLOB_SIDEBAND_MODE_RMII 0x1
-#define NVM_CFG1_GLOB_SIDEBAND_MODE_SGMII 0x2
-#define NVM_CFG1_GLOB_AUX_MODE_MASK 0x78000000
-#define NVM_CFG1_GLOB_AUX_MODE_OFFSET 27
-#define NVM_CFG1_GLOB_AUX_MODE_DEFAULT 0x0
-#define NVM_CFG1_GLOB_AUX_MODE_SMBUS_ONLY 0x1
+ #define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_OFFSET 31
+ u32 mgmt_traffic; /* 0x28 */
+ #define NVM_CFG1_GLOB_RESERVED60_MASK 0x00000001
+ #define NVM_CFG1_GLOB_RESERVED60_OFFSET 0
+ #define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_MASK 0x000001FE
+ #define NVM_CFG1_GLOB_WWN_PORT_PREFIX0_OFFSET 1
+ #define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_MASK 0x0001FE00
+ #define NVM_CFG1_GLOB_WWN_PORT_PREFIX1_OFFSET 9
+ #define NVM_CFG1_GLOB_SMBUS_ADDRESS_MASK 0x01FE0000
+ #define NVM_CFG1_GLOB_SMBUS_ADDRESS_OFFSET 17
+ #define NVM_CFG1_GLOB_SIDEBAND_MODE_MASK 0x06000000
+ #define NVM_CFG1_GLOB_SIDEBAND_MODE_OFFSET 25
+ #define NVM_CFG1_GLOB_SIDEBAND_MODE_DISABLED 0x0
+ #define NVM_CFG1_GLOB_SIDEBAND_MODE_RMII 0x1
+ #define NVM_CFG1_GLOB_SIDEBAND_MODE_SGMII 0x2
+ #define NVM_CFG1_GLOB_AUX_MODE_MASK 0x78000000
+ #define NVM_CFG1_GLOB_AUX_MODE_OFFSET 27
+ #define NVM_CFG1_GLOB_AUX_MODE_DEFAULT 0x0
+ #define NVM_CFG1_GLOB_AUX_MODE_SMBUS_ONLY 0x1
/* Indicates whether external thermal sonsor is available */
-#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_MASK 0x80000000
-#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_OFFSET 31
-#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_DISABLED 0x0
-#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ENABLED 0x1
- u32 core_cfg; /* 0x2C */
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_MASK 0x000000FF
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_OFFSET 0
+ #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_MASK 0x80000000
+ #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_OFFSET 31
+ #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_DISABLED 0x0
+ #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ENABLED 0x1
+ u32 core_cfg; /* 0x2C */
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_OFFSET 0
#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_2X40G 0x0
#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_2X50G 0x1
#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_1X100G 0x2
@@ -153,753 +153,753 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_EAGLE_CORE_ADDR_OFFSET 10
#define NVM_CFG1_GLOB_FALCON_CORE_ADDR_MASK 0x03FC0000
#define NVM_CFG1_GLOB_FALCON_CORE_ADDR_OFFSET 18
-#define NVM_CFG1_GLOB_AVS_MODE_MASK 0x1C000000
-#define NVM_CFG1_GLOB_AVS_MODE_OFFSET 26
-#define NVM_CFG1_GLOB_AVS_MODE_CLOSE_LOOP 0x0
-#define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_CFG 0x1
-#define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_OTP 0x2
-#define NVM_CFG1_GLOB_AVS_MODE_DISABLED 0x3
-#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_MASK 0x60000000
-#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_OFFSET 29
-#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_DISABLED 0x0
-#define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_ENABLED 0x1
- u32 e_lane_cfg1; /* 0x30 */
-#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
-#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
-#define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK 0x000000F0
-#define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET 4
-#define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK 0x00000F00
-#define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET 8
-#define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK 0x0000F000
-#define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET 12
-#define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK 0x000F0000
-#define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET 16
-#define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK 0x00F00000
-#define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET 20
-#define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK 0x0F000000
-#define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET 24
-#define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK 0xF0000000
-#define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET 28
- u32 e_lane_cfg2; /* 0x34 */
-#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK 0x00000001
-#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET 0
-#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK 0x00000002
-#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET 1
-#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK 0x00000004
-#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET 2
-#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK 0x00000008
-#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET 3
-#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK 0x00000010
-#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET 4
-#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK 0x00000020
-#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET 5
-#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK 0x00000040
-#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET 6
-#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080
-#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7
-#define NVM_CFG1_GLOB_SMBUS_MODE_MASK 0x00000F00
-#define NVM_CFG1_GLOB_SMBUS_MODE_OFFSET 8
-#define NVM_CFG1_GLOB_SMBUS_MODE_DISABLED 0x0
-#define NVM_CFG1_GLOB_SMBUS_MODE_100KHZ 0x1
-#define NVM_CFG1_GLOB_SMBUS_MODE_400KHZ 0x2
-#define NVM_CFG1_GLOB_NCSI_MASK 0x0000F000
-#define NVM_CFG1_GLOB_NCSI_OFFSET 12
-#define NVM_CFG1_GLOB_NCSI_DISABLED 0x0
-#define NVM_CFG1_GLOB_NCSI_ENABLED 0x1
+ #define NVM_CFG1_GLOB_AVS_MODE_MASK 0x1C000000
+ #define NVM_CFG1_GLOB_AVS_MODE_OFFSET 26
+ #define NVM_CFG1_GLOB_AVS_MODE_CLOSE_LOOP 0x0
+ #define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_CFG 0x1
+ #define NVM_CFG1_GLOB_AVS_MODE_OPEN_LOOP_OTP 0x2
+ #define NVM_CFG1_GLOB_AVS_MODE_DISABLED 0x3
+ #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_MASK 0x60000000
+ #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_OFFSET 29
+ #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_DISABLED 0x0
+ #define NVM_CFG1_GLOB_OVERRIDE_SECURE_MODE_ENABLED 0x1
+ u32 e_lane_cfg1; /* 0x30 */
+ #define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
+ #define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
+ #define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK 0x000000F0
+ #define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET 4
+ #define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK 0x00000F00
+ #define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET 8
+ #define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK 0x0000F000
+ #define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET 12
+ #define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK 0x000F0000
+ #define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET 16
+ #define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK 0x00F00000
+ #define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET 20
+ #define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK 0x0F000000
+ #define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET 24
+ #define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK 0xF0000000
+ #define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET 28
+ u32 e_lane_cfg2; /* 0x34 */
+ #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK 0x00000001
+ #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET 0
+ #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK 0x00000002
+ #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET 1
+ #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK 0x00000004
+ #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET 2
+ #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK 0x00000008
+ #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET 3
+ #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK 0x00000010
+ #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET 4
+ #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK 0x00000020
+ #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET 5
+ #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK 0x00000040
+ #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET 6
+ #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080
+ #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7
+ #define NVM_CFG1_GLOB_SMBUS_MODE_MASK 0x00000F00
+ #define NVM_CFG1_GLOB_SMBUS_MODE_OFFSET 8
+ #define NVM_CFG1_GLOB_SMBUS_MODE_DISABLED 0x0
+ #define NVM_CFG1_GLOB_SMBUS_MODE_100KHZ 0x1
+ #define NVM_CFG1_GLOB_SMBUS_MODE_400KHZ 0x2
+ #define NVM_CFG1_GLOB_NCSI_MASK 0x0000F000
+ #define NVM_CFG1_GLOB_NCSI_OFFSET 12
+ #define NVM_CFG1_GLOB_NCSI_DISABLED 0x0
+ #define NVM_CFG1_GLOB_NCSI_ENABLED 0x1
/* Maximum advertised pcie link width */
-#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_MASK 0x000F0000
-#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_OFFSET 16
+ #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_MASK 0x000F0000
+ #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_OFFSET 16
#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_16_LANES 0x0
-#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_1_LANE 0x1
-#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_2_LANES 0x2
-#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_4_LANES 0x3
-#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_8_LANES 0x4
+ #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_1_LANE 0x1
+ #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_2_LANES 0x2
+ #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_4_LANES 0x3
+ #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_8_LANES 0x4
/* ASPM L1 mode */
-#define NVM_CFG1_GLOB_ASPM_L1_MODE_MASK 0x00300000
-#define NVM_CFG1_GLOB_ASPM_L1_MODE_OFFSET 20
-#define NVM_CFG1_GLOB_ASPM_L1_MODE_FORCED 0x0
-#define NVM_CFG1_GLOB_ASPM_L1_MODE_DYNAMIC_LOW_LATENCY 0x1
-#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_MASK 0x01C00000
-#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_OFFSET 22
-#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_DISABLED 0x0
-#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_I2C 0x1
-#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_ONLY 0x2
-#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_SMBUS 0x3
+ #define NVM_CFG1_GLOB_ASPM_L1_MODE_MASK 0x00300000
+ #define NVM_CFG1_GLOB_ASPM_L1_MODE_OFFSET 20
+ #define NVM_CFG1_GLOB_ASPM_L1_MODE_FORCED 0x0
+ #define NVM_CFG1_GLOB_ASPM_L1_MODE_DYNAMIC_LOW_LATENCY 0x1
+ #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_MASK 0x01C00000
+ #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_OFFSET 22
+ #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_DISABLED 0x0
+ #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_I2C 0x1
+ #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_ONLY 0x2
+ #define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_SMBUS 0x3
#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_MASK 0x06000000
-#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_OFFSET 25
-#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_DISABLE 0x0
-#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_INTERNAL 0x1
-#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_EXTERNAL 0x2
-#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_BOTH 0x3
+ #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_OFFSET 25
+ #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_DISABLE 0x0
+ #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_INTERNAL 0x1
+ #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_EXTERNAL 0x2
+ #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_BOTH 0x3
/* Set the PLDM sensor modes */
-#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_MASK 0x38000000
-#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_OFFSET 27
-#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL 0x0
-#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL 0x1
-#define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH 0x2
- u32 f_lane_cfg1; /* 0x38 */
-#define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
-#define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
-#define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK 0x000000F0
-#define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET 4
-#define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK 0x00000F00
-#define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET 8
-#define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK 0x0000F000
-#define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET 12
-#define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK 0x000F0000
-#define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET 16
-#define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK 0x00F00000
-#define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET 20
-#define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK 0x0F000000
-#define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET 24
-#define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK 0xF0000000
-#define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET 28
- u32 f_lane_cfg2; /* 0x3C */
-#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK 0x00000001
-#define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET 0
-#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK 0x00000002
-#define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET 1
-#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK 0x00000004
-#define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET 2
-#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK 0x00000008
-#define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET 3
-#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK 0x00000010
-#define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET 4
-#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK 0x00000020
-#define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET 5
-#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK 0x00000040
-#define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET 6
-#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080
-#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7
+ #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_MASK 0x38000000
+ #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_OFFSET 27
+ #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_INTERNAL 0x0
+ #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_EXTERNAL 0x1
+ #define NVM_CFG1_GLOB_PLDM_SENSOR_MODE_BOTH 0x2
+ u32 f_lane_cfg1; /* 0x38 */
+ #define NVM_CFG1_GLOB_RX_LANE0_SWAP_MASK 0x0000000F
+ #define NVM_CFG1_GLOB_RX_LANE0_SWAP_OFFSET 0
+ #define NVM_CFG1_GLOB_RX_LANE1_SWAP_MASK 0x000000F0
+ #define NVM_CFG1_GLOB_RX_LANE1_SWAP_OFFSET 4
+ #define NVM_CFG1_GLOB_RX_LANE2_SWAP_MASK 0x00000F00
+ #define NVM_CFG1_GLOB_RX_LANE2_SWAP_OFFSET 8
+ #define NVM_CFG1_GLOB_RX_LANE3_SWAP_MASK 0x0000F000
+ #define NVM_CFG1_GLOB_RX_LANE3_SWAP_OFFSET 12
+ #define NVM_CFG1_GLOB_TX_LANE0_SWAP_MASK 0x000F0000
+ #define NVM_CFG1_GLOB_TX_LANE0_SWAP_OFFSET 16
+ #define NVM_CFG1_GLOB_TX_LANE1_SWAP_MASK 0x00F00000
+ #define NVM_CFG1_GLOB_TX_LANE1_SWAP_OFFSET 20
+ #define NVM_CFG1_GLOB_TX_LANE2_SWAP_MASK 0x0F000000
+ #define NVM_CFG1_GLOB_TX_LANE2_SWAP_OFFSET 24
+ #define NVM_CFG1_GLOB_TX_LANE3_SWAP_MASK 0xF0000000
+ #define NVM_CFG1_GLOB_TX_LANE3_SWAP_OFFSET 28
+ u32 f_lane_cfg2; /* 0x3C */
+ #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_MASK 0x00000001
+ #define NVM_CFG1_GLOB_RX_LANE0_POL_FLIP_OFFSET 0
+ #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_MASK 0x00000002
+ #define NVM_CFG1_GLOB_RX_LANE1_POL_FLIP_OFFSET 1
+ #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_MASK 0x00000004
+ #define NVM_CFG1_GLOB_RX_LANE2_POL_FLIP_OFFSET 2
+ #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_MASK 0x00000008
+ #define NVM_CFG1_GLOB_RX_LANE3_POL_FLIP_OFFSET 3
+ #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_MASK 0x00000010
+ #define NVM_CFG1_GLOB_TX_LANE0_POL_FLIP_OFFSET 4
+ #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_MASK 0x00000020
+ #define NVM_CFG1_GLOB_TX_LANE1_POL_FLIP_OFFSET 5
+ #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_MASK 0x00000040
+ #define NVM_CFG1_GLOB_TX_LANE2_POL_FLIP_OFFSET 6
+ #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080
+ #define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7
/* Control the period between two successive checks */
#define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_OFFSET 8
+ #define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_OFFSET 8
/* Set shutdown temperature */
#define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_OFFSET 16
+ #define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_OFFSET 16
/* Set max. count for over operational temperature */
-#define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_MASK 0xFF000000
-#define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_OFFSET 24
+ #define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_OFFSET 24
u32 eagle_preemphasis; /* 0x40 */
-#define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF
-#define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0
-#define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET 8
-#define NVM_CFG1_GLOB_LANE2_PREEMP_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16
-#define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000
-#define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24
+ #define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET 8
+ #define NVM_CFG1_GLOB_LANE2_PREEMP_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24
u32 eagle_driver_current; /* 0x44 */
-#define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF
-#define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0
-#define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_LANE1_AMP_OFFSET 8
-#define NVM_CFG1_GLOB_LANE2_AMP_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16
-#define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000
-#define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24
+ #define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LANE1_AMP_OFFSET 8
+ #define NVM_CFG1_GLOB_LANE2_AMP_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24
u32 falcon_preemphasis; /* 0x48 */
-#define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF
-#define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0
-#define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET 8
-#define NVM_CFG1_GLOB_LANE2_PREEMP_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16
-#define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000
-#define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24
+ #define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LANE1_PREEMP_OFFSET 8
+ #define NVM_CFG1_GLOB_LANE2_PREEMP_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24
u32 falcon_driver_current; /* 0x4C */
-#define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF
-#define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0
-#define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_LANE1_AMP_OFFSET 8
-#define NVM_CFG1_GLOB_LANE2_AMP_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16
-#define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000
-#define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24
- u32 pci_id; /* 0x50 */
-#define NVM_CFG1_GLOB_VENDOR_ID_MASK 0x0000FFFF
-#define NVM_CFG1_GLOB_VENDOR_ID_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LANE1_AMP_OFFSET 8
+ #define NVM_CFG1_GLOB_LANE2_AMP_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24
+ u32 pci_id; /* 0x50 */
+ #define NVM_CFG1_GLOB_VENDOR_ID_MASK 0x0000FFFF
+ #define NVM_CFG1_GLOB_VENDOR_ID_OFFSET 0
/* Set caution temperature */
#define NVM_CFG1_GLOB_CAUTION_THRESHOLD_TEMPERATURE_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_CAUTION_THRESHOLD_TEMPERATURE_OFFSET 16
+ #define NVM_CFG1_GLOB_CAUTION_THRESHOLD_TEMPERATURE_OFFSET 16
/* Set external thermal sensor I2C address */
#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_MASK 0xFF000000
-#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_OFFSET 24
- u32 pci_subsys_id; /* 0x54 */
-#define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_MASK 0x0000FFFF
-#define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_OFFSET 0
-#define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_MASK 0xFFFF0000
-#define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_OFFSET 16
- u32 bar; /* 0x58 */
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_MASK 0x0000000F
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_OFFSET 0
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_DISABLED 0x0
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2K 0x1
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4K 0x2
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8K 0x3
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16K 0x4
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32K 0x5
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_64K 0x6
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_128K 0x7
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_256K 0x8
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_512K 0x9
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_1M 0xA
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2M 0xB
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4M 0xC
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8M 0xD
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16M 0xE
-#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32M 0xF
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_MASK 0x000000F0
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_OFFSET 4
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_DISABLED 0x0
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4K 0x1
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8K 0x2
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16K 0x3
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32K 0x4
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64K 0x5
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_128K 0x6
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_256K 0x7
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_512K 0x8
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_1M 0x9
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_2M 0xA
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4M 0xB
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8M 0xC
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16M 0xD
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32M 0xE
-#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64M 0xF
-#define NVM_CFG1_GLOB_BAR2_SIZE_MASK 0x00000F00
-#define NVM_CFG1_GLOB_BAR2_SIZE_OFFSET 8
-#define NVM_CFG1_GLOB_BAR2_SIZE_DISABLED 0x0
-#define NVM_CFG1_GLOB_BAR2_SIZE_64K 0x1
-#define NVM_CFG1_GLOB_BAR2_SIZE_128K 0x2
-#define NVM_CFG1_GLOB_BAR2_SIZE_256K 0x3
-#define NVM_CFG1_GLOB_BAR2_SIZE_512K 0x4
-#define NVM_CFG1_GLOB_BAR2_SIZE_1M 0x5
-#define NVM_CFG1_GLOB_BAR2_SIZE_2M 0x6
-#define NVM_CFG1_GLOB_BAR2_SIZE_4M 0x7
-#define NVM_CFG1_GLOB_BAR2_SIZE_8M 0x8
-#define NVM_CFG1_GLOB_BAR2_SIZE_16M 0x9
-#define NVM_CFG1_GLOB_BAR2_SIZE_32M 0xA
-#define NVM_CFG1_GLOB_BAR2_SIZE_64M 0xB
-#define NVM_CFG1_GLOB_BAR2_SIZE_128M 0xC
-#define NVM_CFG1_GLOB_BAR2_SIZE_256M 0xD
-#define NVM_CFG1_GLOB_BAR2_SIZE_512M 0xE
-#define NVM_CFG1_GLOB_BAR2_SIZE_1G 0xF
+ #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_OFFSET 24
+ u32 pci_subsys_id; /* 0x54 */
+ #define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_MASK 0x0000FFFF
+ #define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_OFFSET 0
+ #define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_MASK 0xFFFF0000
+ #define NVM_CFG1_GLOB_SUBSYSTEM_DEVICE_ID_OFFSET 16
+ u32 bar; /* 0x58 */
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_MASK 0x0000000F
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_OFFSET 0
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_DISABLED 0x0
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2K 0x1
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4K 0x2
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8K 0x3
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16K 0x4
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32K 0x5
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_64K 0x6
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_128K 0x7
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_256K 0x8
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_512K 0x9
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_1M 0xA
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_2M 0xB
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_4M 0xC
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8M 0xD
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16M 0xE
+ #define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32M 0xF
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_MASK 0x000000F0
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_OFFSET 4
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_DISABLED 0x0
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4K 0x1
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8K 0x2
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16K 0x3
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32K 0x4
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64K 0x5
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_128K 0x6
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_256K 0x7
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_512K 0x8
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_1M 0x9
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_2M 0xA
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_4M 0xB
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_8M 0xC
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16M 0xD
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32M 0xE
+ #define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64M 0xF
+ #define NVM_CFG1_GLOB_BAR2_SIZE_MASK 0x00000F00
+ #define NVM_CFG1_GLOB_BAR2_SIZE_OFFSET 8
+ #define NVM_CFG1_GLOB_BAR2_SIZE_DISABLED 0x0
+ #define NVM_CFG1_GLOB_BAR2_SIZE_64K 0x1
+ #define NVM_CFG1_GLOB_BAR2_SIZE_128K 0x2
+ #define NVM_CFG1_GLOB_BAR2_SIZE_256K 0x3
+ #define NVM_CFG1_GLOB_BAR2_SIZE_512K 0x4
+ #define NVM_CFG1_GLOB_BAR2_SIZE_1M 0x5
+ #define NVM_CFG1_GLOB_BAR2_SIZE_2M 0x6
+ #define NVM_CFG1_GLOB_BAR2_SIZE_4M 0x7
+ #define NVM_CFG1_GLOB_BAR2_SIZE_8M 0x8
+ #define NVM_CFG1_GLOB_BAR2_SIZE_16M 0x9
+ #define NVM_CFG1_GLOB_BAR2_SIZE_32M 0xA
+ #define NVM_CFG1_GLOB_BAR2_SIZE_64M 0xB
+ #define NVM_CFG1_GLOB_BAR2_SIZE_128M 0xC
+ #define NVM_CFG1_GLOB_BAR2_SIZE_256M 0xD
+ #define NVM_CFG1_GLOB_BAR2_SIZE_512M 0xE
+ #define NVM_CFG1_GLOB_BAR2_SIZE_1G 0xF
/* Set the duration, in seconds, fan failure signal should be
* sampled
*/
-#define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_MASK 0x0000F000
-#define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_OFFSET 12
+ #define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_MASK 0x0000F000
+ #define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_OFFSET 12
u32 eagle_txfir_main; /* 0x5C */
-#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF
-#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0
-#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET 8
-#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16
-#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000
-#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24
+ #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET 8
+ #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24
u32 eagle_txfir_post; /* 0x60 */
-#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF
-#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0
-#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET 8
-#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16
-#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000
-#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24
+ #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET 8
+ #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24
u32 falcon_txfir_main; /* 0x64 */
-#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF
-#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0
-#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET 8
-#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16
-#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000
-#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24
+ #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_OFFSET 8
+ #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24
u32 falcon_txfir_post; /* 0x68 */
-#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF
-#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0
-#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET 8
-#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16
-#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000
-#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24
- u32 manufacture_ver; /* 0x6C */
-#define NVM_CFG1_GLOB_MANUF0_VER_MASK 0x0000003F
-#define NVM_CFG1_GLOB_MANUF0_VER_OFFSET 0
-#define NVM_CFG1_GLOB_MANUF1_VER_MASK 0x00000FC0
-#define NVM_CFG1_GLOB_MANUF1_VER_OFFSET 6
-#define NVM_CFG1_GLOB_MANUF2_VER_MASK 0x0003F000
-#define NVM_CFG1_GLOB_MANUF2_VER_OFFSET 12
-#define NVM_CFG1_GLOB_MANUF3_VER_MASK 0x00FC0000
-#define NVM_CFG1_GLOB_MANUF3_VER_OFFSET 18
-#define NVM_CFG1_GLOB_MANUF4_VER_MASK 0x3F000000
-#define NVM_CFG1_GLOB_MANUF4_VER_OFFSET 24
- u32 manufacture_time; /* 0x70 */
-#define NVM_CFG1_GLOB_MANUF0_TIME_MASK 0x0000003F
-#define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET 0
-#define NVM_CFG1_GLOB_MANUF1_TIME_MASK 0x00000FC0
-#define NVM_CFG1_GLOB_MANUF1_TIME_OFFSET 6
-#define NVM_CFG1_GLOB_MANUF2_TIME_MASK 0x0003F000
-#define NVM_CFG1_GLOB_MANUF2_TIME_OFFSET 12
- u32 led_global_settings; /* 0x74 */
-#define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
-#define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
-#define NVM_CFG1_GLOB_LED_SWAP_1_MASK 0x000000F0
-#define NVM_CFG1_GLOB_LED_SWAP_1_OFFSET 4
-#define NVM_CFG1_GLOB_LED_SWAP_2_MASK 0x00000F00
-#define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET 8
-#define NVM_CFG1_GLOB_LED_SWAP_3_MASK 0x0000F000
-#define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET 12
- u32 generic_cont1; /* 0x78 */
-#define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF
-#define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0
- u32 mbi_version; /* 0x7C */
-#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
-#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
-#define NVM_CFG1_GLOB_MBI_VERSION_1_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET 8
-#define NVM_CFG1_GLOB_MBI_VERSION_2_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET 16
- u32 mbi_date; /* 0x80 */
- u32 misc_sig; /* 0x84 */
+ #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_LANE1_TXFIR_POST_OFFSET 8
+ #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16
+ #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24
+ u32 manufacture_ver; /* 0x6C */
+ #define NVM_CFG1_GLOB_MANUF0_VER_MASK 0x0000003F
+ #define NVM_CFG1_GLOB_MANUF0_VER_OFFSET 0
+ #define NVM_CFG1_GLOB_MANUF1_VER_MASK 0x00000FC0
+ #define NVM_CFG1_GLOB_MANUF1_VER_OFFSET 6
+ #define NVM_CFG1_GLOB_MANUF2_VER_MASK 0x0003F000
+ #define NVM_CFG1_GLOB_MANUF2_VER_OFFSET 12
+ #define NVM_CFG1_GLOB_MANUF3_VER_MASK 0x00FC0000
+ #define NVM_CFG1_GLOB_MANUF3_VER_OFFSET 18
+ #define NVM_CFG1_GLOB_MANUF4_VER_MASK 0x3F000000
+ #define NVM_CFG1_GLOB_MANUF4_VER_OFFSET 24
+ u32 manufacture_time; /* 0x70 */
+ #define NVM_CFG1_GLOB_MANUF0_TIME_MASK 0x0000003F
+ #define NVM_CFG1_GLOB_MANUF0_TIME_OFFSET 0
+ #define NVM_CFG1_GLOB_MANUF1_TIME_MASK 0x00000FC0
+ #define NVM_CFG1_GLOB_MANUF1_TIME_OFFSET 6
+ #define NVM_CFG1_GLOB_MANUF2_TIME_MASK 0x0003F000
+ #define NVM_CFG1_GLOB_MANUF2_TIME_OFFSET 12
+ u32 led_global_settings; /* 0x74 */
+ #define NVM_CFG1_GLOB_LED_SWAP_0_MASK 0x0000000F
+ #define NVM_CFG1_GLOB_LED_SWAP_0_OFFSET 0
+ #define NVM_CFG1_GLOB_LED_SWAP_1_MASK 0x000000F0
+ #define NVM_CFG1_GLOB_LED_SWAP_1_OFFSET 4
+ #define NVM_CFG1_GLOB_LED_SWAP_2_MASK 0x00000F00
+ #define NVM_CFG1_GLOB_LED_SWAP_2_OFFSET 8
+ #define NVM_CFG1_GLOB_LED_SWAP_3_MASK 0x0000F000
+ #define NVM_CFG1_GLOB_LED_SWAP_3_OFFSET 12
+ u32 generic_cont1; /* 0x78 */
+ #define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF
+ #define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0
+ u32 mbi_version; /* 0x7C */
+ #define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
+ #define NVM_CFG1_GLOB_MBI_VERSION_1_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_MBI_VERSION_1_OFFSET 8
+ #define NVM_CFG1_GLOB_MBI_VERSION_2_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_MBI_VERSION_2_OFFSET 16
+ u32 mbi_date; /* 0x80 */
+ u32 misc_sig; /* 0x84 */
/* Define the GPIO mapping to switch i2c mux */
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_MASK 0x000000FF
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_OFFSET 0
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_OFFSET 8
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__NA 0x0
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO0 0x1
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO1 0x2
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO2 0x3
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO3 0x4
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO4 0x5
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO5 0x6
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO6 0x7
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO7 0x8
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO8 0x9
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO9 0xA
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO10 0xB
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO11 0xC
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO12 0xD
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO13 0xE
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO14 0xF
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO15 0x10
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO16 0x11
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO17 0x12
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO18 0x13
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO19 0x14
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO20 0x15
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO21 0x16
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO22 0x17
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO23 0x18
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO24 0x19
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO25 0x1A
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO26 0x1B
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO27 0x1C
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO28 0x1D
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29 0x1E
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30 0x1F
-#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20
- u32 device_capabilities; /* 0x88 */
-#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1
- u32 power_dissipated; /* 0x8C */
-#define NVM_CFG1_GLOB_POWER_DIS_D0_MASK 0x000000FF
-#define NVM_CFG1_GLOB_POWER_DIS_D0_OFFSET 0
-#define NVM_CFG1_GLOB_POWER_DIS_D1_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_POWER_DIS_D1_OFFSET 8
-#define NVM_CFG1_GLOB_POWER_DIS_D2_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_POWER_DIS_D2_OFFSET 16
-#define NVM_CFG1_GLOB_POWER_DIS_D3_MASK 0xFF000000
-#define NVM_CFG1_GLOB_POWER_DIS_D3_OFFSET 24
- u32 power_consumed; /* 0x90 */
-#define NVM_CFG1_GLOB_POWER_CONS_D0_MASK 0x000000FF
-#define NVM_CFG1_GLOB_POWER_CONS_D0_OFFSET 0
-#define NVM_CFG1_GLOB_POWER_CONS_D1_MASK 0x0000FF00
-#define NVM_CFG1_GLOB_POWER_CONS_D1_OFFSET 8
-#define NVM_CFG1_GLOB_POWER_CONS_D2_MASK 0x00FF0000
-#define NVM_CFG1_GLOB_POWER_CONS_D2_OFFSET 16
-#define NVM_CFG1_GLOB_POWER_CONS_D3_MASK 0xFF000000
-#define NVM_CFG1_GLOB_POWER_CONS_D3_OFFSET 24
- u32 efi_version; /* 0x94 */
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_0_OFFSET 0
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO_1_OFFSET 8
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__NA 0x0
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO0 0x1
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO1 0x2
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO2 0x3
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO3 0x4
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO4 0x5
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO5 0x6
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO6 0x7
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO7 0x8
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO8 0x9
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO9 0xA
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO10 0xB
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO11 0xC
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO12 0xD
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO13 0xE
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO14 0xF
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO15 0x10
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO16 0x11
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO17 0x12
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO18 0x13
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO19 0x14
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO20 0x15
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO21 0x16
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO22 0x17
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO23 0x18
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO24 0x19
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO25 0x1A
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO26 0x1B
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO27 0x1C
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO28 0x1D
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO29 0x1E
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO30 0x1F
+ #define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20
+ u32 device_capabilities; /* 0x88 */
+ #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1
+ u32 power_dissipated; /* 0x8C */
+ #define NVM_CFG1_GLOB_POWER_DIS_D0_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_POWER_DIS_D0_OFFSET 0
+ #define NVM_CFG1_GLOB_POWER_DIS_D1_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_POWER_DIS_D1_OFFSET 8
+ #define NVM_CFG1_GLOB_POWER_DIS_D2_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_POWER_DIS_D2_OFFSET 16
+ #define NVM_CFG1_GLOB_POWER_DIS_D3_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_POWER_DIS_D3_OFFSET 24
+ u32 power_consumed; /* 0x90 */
+ #define NVM_CFG1_GLOB_POWER_CONS_D0_MASK 0x000000FF
+ #define NVM_CFG1_GLOB_POWER_CONS_D0_OFFSET 0
+ #define NVM_CFG1_GLOB_POWER_CONS_D1_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_POWER_CONS_D1_OFFSET 8
+ #define NVM_CFG1_GLOB_POWER_CONS_D2_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_POWER_CONS_D2_OFFSET 16
+ #define NVM_CFG1_GLOB_POWER_CONS_D3_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_POWER_CONS_D3_OFFSET 24
+ u32 efi_version; /* 0x94 */
u32 reserved[42]; /* 0x98 */
};
struct nvm_cfg1_path {
- u32 reserved[30]; /* 0x0 */
+ u32 reserved[30]; /* 0x0 */
};
struct nvm_cfg1_port {
- u32 reserved__m_relocated_to_option_123; /* 0x0 */
- u32 reserved__m_relocated_to_option_124; /* 0x4 */
- u32 generic_cont0; /* 0x8 */
-#define NVM_CFG1_PORT_LED_MODE_MASK 0x000000FF
-#define NVM_CFG1_PORT_LED_MODE_OFFSET 0
-#define NVM_CFG1_PORT_LED_MODE_MAC1 0x0
-#define NVM_CFG1_PORT_LED_MODE_PHY1 0x1
-#define NVM_CFG1_PORT_LED_MODE_PHY2 0x2
-#define NVM_CFG1_PORT_LED_MODE_PHY3 0x3
-#define NVM_CFG1_PORT_LED_MODE_MAC2 0x4
-#define NVM_CFG1_PORT_LED_MODE_PHY4 0x5
-#define NVM_CFG1_PORT_LED_MODE_PHY5 0x6
-#define NVM_CFG1_PORT_LED_MODE_PHY6 0x7
-#define NVM_CFG1_PORT_LED_MODE_MAC3 0x8
-#define NVM_CFG1_PORT_LED_MODE_PHY7 0x9
-#define NVM_CFG1_PORT_LED_MODE_PHY8 0xA
-#define NVM_CFG1_PORT_LED_MODE_PHY9 0xB
-#define NVM_CFG1_PORT_LED_MODE_MAC4 0xC
-#define NVM_CFG1_PORT_LED_MODE_PHY10 0xD
-#define NVM_CFG1_PORT_LED_MODE_PHY11 0xE
-#define NVM_CFG1_PORT_LED_MODE_PHY12 0xF
-#define NVM_CFG1_PORT_DCBX_MODE_MASK 0x000F0000
-#define NVM_CFG1_PORT_DCBX_MODE_OFFSET 16
-#define NVM_CFG1_PORT_DCBX_MODE_DISABLED 0x0
-#define NVM_CFG1_PORT_DCBX_MODE_IEEE 0x1
-#define NVM_CFG1_PORT_DCBX_MODE_CEE 0x2
-#define NVM_CFG1_PORT_DCBX_MODE_DYNAMIC 0x3
-#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_MASK 0x00F00000
-#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_OFFSET 20
-#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ETHERNET 0x1
- u32 pcie_cfg; /* 0xC */
-#define NVM_CFG1_PORT_RESERVED15_MASK 0x00000007
-#define NVM_CFG1_PORT_RESERVED15_OFFSET 0
- u32 features; /* 0x10 */
-#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_MASK 0x00000001
-#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_OFFSET 0
-#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_DISABLED 0x0
-#define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_ENABLED 0x1
-#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_MASK 0x00000002
-#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_OFFSET 1
-#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_DISABLED 0x0
-#define NVM_CFG1_PORT_MAGIC_PACKET_WOL_ENABLED 0x1
- u32 speed_cap_mask; /* 0x14 */
-#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_MASK 0x0000FFFF
-#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
-#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G 0x1
-#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G 0x2
-#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G 0x8
-#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G 0x10
-#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G 0x20
+ u32 reserved__m_relocated_to_option_123; /* 0x0 */
+ u32 reserved__m_relocated_to_option_124; /* 0x4 */
+ u32 generic_cont0; /* 0x8 */
+ #define NVM_CFG1_PORT_LED_MODE_MASK 0x000000FF
+ #define NVM_CFG1_PORT_LED_MODE_OFFSET 0
+ #define NVM_CFG1_PORT_LED_MODE_MAC1 0x0
+ #define NVM_CFG1_PORT_LED_MODE_PHY1 0x1
+ #define NVM_CFG1_PORT_LED_MODE_PHY2 0x2
+ #define NVM_CFG1_PORT_LED_MODE_PHY3 0x3
+ #define NVM_CFG1_PORT_LED_MODE_MAC2 0x4
+ #define NVM_CFG1_PORT_LED_MODE_PHY4 0x5
+ #define NVM_CFG1_PORT_LED_MODE_PHY5 0x6
+ #define NVM_CFG1_PORT_LED_MODE_PHY6 0x7
+ #define NVM_CFG1_PORT_LED_MODE_MAC3 0x8
+ #define NVM_CFG1_PORT_LED_MODE_PHY7 0x9
+ #define NVM_CFG1_PORT_LED_MODE_PHY8 0xA
+ #define NVM_CFG1_PORT_LED_MODE_PHY9 0xB
+ #define NVM_CFG1_PORT_LED_MODE_MAC4 0xC
+ #define NVM_CFG1_PORT_LED_MODE_PHY10 0xD
+ #define NVM_CFG1_PORT_LED_MODE_PHY11 0xE
+ #define NVM_CFG1_PORT_LED_MODE_PHY12 0xF
+ #define NVM_CFG1_PORT_DCBX_MODE_MASK 0x000F0000
+ #define NVM_CFG1_PORT_DCBX_MODE_OFFSET 16
+ #define NVM_CFG1_PORT_DCBX_MODE_DISABLED 0x0
+ #define NVM_CFG1_PORT_DCBX_MODE_IEEE 0x1
+ #define NVM_CFG1_PORT_DCBX_MODE_CEE 0x2
+ #define NVM_CFG1_PORT_DCBX_MODE_DYNAMIC 0x3
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_MASK 0x00F00000
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_OFFSET 20
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ETHERNET 0x1
+ u32 pcie_cfg; /* 0xC */
+ #define NVM_CFG1_PORT_RESERVED15_MASK 0x00000007
+ #define NVM_CFG1_PORT_RESERVED15_OFFSET 0
+ u32 features; /* 0x10 */
+ #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_MASK 0x00000001
+ #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_OFFSET 0
+ #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_DISABLED 0x0
+ #define NVM_CFG1_PORT_ENABLE_WOL_ON_ACPI_PATTERN_ENABLED 0x1
+ #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_MASK 0x00000002
+ #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_OFFSET 1
+ #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_DISABLED 0x0
+ #define NVM_CFG1_PORT_MAGIC_PACKET_WOL_ENABLED 0x1
+ u32 speed_cap_mask; /* 0x14 */
+ #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_MASK 0x0000FFFF
+ #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
+ #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G 0x1
+ #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G 0x2
+ #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G 0x8
+ #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G 0x10
+ #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G 0x20
#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_100G 0x40
-#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_MASK 0xFFFF0000
-#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_OFFSET 16
-#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_1G 0x1
-#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_10G 0x2
-#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_25G 0x8
-#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_40G 0x10
-#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_50G 0x20
+ #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_MASK 0xFFFF0000
+ #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_OFFSET 16
+ #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_50G 0x20
#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_100G 0x40
- u32 link_settings; /* 0x18 */
-#define NVM_CFG1_PORT_DRV_LINK_SPEED_MASK 0x0000000F
-#define NVM_CFG1_PORT_DRV_LINK_SPEED_OFFSET 0
-#define NVM_CFG1_PORT_DRV_LINK_SPEED_AUTONEG 0x0
-#define NVM_CFG1_PORT_DRV_LINK_SPEED_1G 0x1
-#define NVM_CFG1_PORT_DRV_LINK_SPEED_10G 0x2
-#define NVM_CFG1_PORT_DRV_LINK_SPEED_25G 0x4
-#define NVM_CFG1_PORT_DRV_LINK_SPEED_40G 0x5
-#define NVM_CFG1_PORT_DRV_LINK_SPEED_50G 0x6
+ u32 link_settings; /* 0x18 */
+ #define NVM_CFG1_PORT_DRV_LINK_SPEED_MASK 0x0000000F
+ #define NVM_CFG1_PORT_DRV_LINK_SPEED_OFFSET 0
+ #define NVM_CFG1_PORT_DRV_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_DRV_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_DRV_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_DRV_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_DRV_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_DRV_LINK_SPEED_50G 0x6
#define NVM_CFG1_PORT_DRV_LINK_SPEED_100G 0x7
-#define NVM_CFG1_PORT_DRV_LINK_SPEED_SMARTLINQ 0x8
-#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK 0x00000070
-#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET 4
-#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG 0x1
-#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_RX 0x2
-#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX 0x4
-#define NVM_CFG1_PORT_MFW_LINK_SPEED_MASK 0x00000780
-#define NVM_CFG1_PORT_MFW_LINK_SPEED_OFFSET 7
-#define NVM_CFG1_PORT_MFW_LINK_SPEED_AUTONEG 0x0
-#define NVM_CFG1_PORT_MFW_LINK_SPEED_1G 0x1
-#define NVM_CFG1_PORT_MFW_LINK_SPEED_10G 0x2
-#define NVM_CFG1_PORT_MFW_LINK_SPEED_25G 0x4
-#define NVM_CFG1_PORT_MFW_LINK_SPEED_40G 0x5
-#define NVM_CFG1_PORT_MFW_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_DRV_LINK_SPEED_SMARTLINQ 0x8
+ #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK 0x00000070
+ #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET 4
+ #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_AUTONEG 0x1
+ #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_RX 0x2
+ #define NVM_CFG1_PORT_DRV_FLOW_CONTROL_TX 0x4
+ #define NVM_CFG1_PORT_MFW_LINK_SPEED_MASK 0x00000780
+ #define NVM_CFG1_PORT_MFW_LINK_SPEED_OFFSET 7
+ #define NVM_CFG1_PORT_MFW_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MFW_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MFW_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MFW_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MFW_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MFW_LINK_SPEED_50G 0x6
#define NVM_CFG1_PORT_MFW_LINK_SPEED_100G 0x7
-#define NVM_CFG1_PORT_MFW_LINK_SPEED_SMARTLINQ 0x8
-#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_MASK 0x00003800
-#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_OFFSET 11
-#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_AUTONEG 0x1
-#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_RX 0x2
-#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_TX 0x4
+ #define NVM_CFG1_PORT_MFW_LINK_SPEED_SMARTLINQ 0x8
+ #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_MASK 0x00003800
+ #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_OFFSET 11
+ #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_AUTONEG 0x1
+ #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_RX 0x2
+ #define NVM_CFG1_PORT_MFW_FLOW_CONTROL_TX 0x4
#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_MASK 0x00004000
-#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_OFFSET 14
+ #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_OFFSET 14
#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_DISABLED 0x0
#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_ENABLED 0x1
-#define NVM_CFG1_PORT_AN_25G_50G_OUI_MASK 0x00018000
-#define NVM_CFG1_PORT_AN_25G_50G_OUI_OFFSET 15
-#define NVM_CFG1_PORT_AN_25G_50G_OUI_CONSORTIUM 0x0
-#define NVM_CFG1_PORT_AN_25G_50G_OUI_BAM 0x1
-#define NVM_CFG1_PORT_FEC_FORCE_MODE_MASK 0x000E0000
-#define NVM_CFG1_PORT_FEC_FORCE_MODE_OFFSET 17
+ #define NVM_CFG1_PORT_AN_25G_50G_OUI_MASK 0x00018000
+ #define NVM_CFG1_PORT_AN_25G_50G_OUI_OFFSET 15
+ #define NVM_CFG1_PORT_AN_25G_50G_OUI_CONSORTIUM 0x0
+ #define NVM_CFG1_PORT_AN_25G_50G_OUI_BAM 0x1
+ #define NVM_CFG1_PORT_FEC_FORCE_MODE_MASK 0x000E0000
+ #define NVM_CFG1_PORT_FEC_FORCE_MODE_OFFSET 17
#define NVM_CFG1_PORT_FEC_FORCE_MODE_FEC_FORCE_NONE 0x0
#define NVM_CFG1_PORT_FEC_FORCE_MODE_FEC_FORCE_FIRECODE 0x1
#define NVM_CFG1_PORT_FEC_FORCE_MODE_FEC_FORCE_RS 0x2
- u32 phy_cfg; /* 0x1C */
-#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
-#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
-#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_HIGIG 0x1
-#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_SCRAMBLER 0x2
-#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_FIBER 0x4
-#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_CL72_AN 0x8
-#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_FEC_AN 0x10
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_MASK 0x00FF0000
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_OFFSET 16
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_BYPASS 0x0
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR 0x2
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR2 0x3
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR4 0x4
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XFI 0x8
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SFI 0x9
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_1000X 0xB
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SGMII 0xC
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLAUI 0x11
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLPPI 0x12
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CAUI 0x21
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CPPI 0x22
-#define NVM_CFG1_PORT_SERDES_NET_INTERFACE_25GAUI 0x31
-#define NVM_CFG1_PORT_AN_MODE_MASK 0xFF000000
-#define NVM_CFG1_PORT_AN_MODE_OFFSET 24
-#define NVM_CFG1_PORT_AN_MODE_NONE 0x0
-#define NVM_CFG1_PORT_AN_MODE_CL73 0x1
-#define NVM_CFG1_PORT_AN_MODE_CL37 0x2
-#define NVM_CFG1_PORT_AN_MODE_CL73_BAM 0x3
+ u32 phy_cfg; /* 0x1C */
+ #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
+ #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
+ #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_HIGIG 0x1
+ #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_SCRAMBLER 0x2
+ #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_FIBER 0x4
+ #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_CL72_AN 0x8
+ #define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_DISABLE_FEC_AN 0x10
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_MASK 0x00FF0000
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_OFFSET 16
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_BYPASS 0x0
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR 0x2
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR2 0x3
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_KR4 0x4
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XFI 0x8
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SFI 0x9
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_1000X 0xB
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_SGMII 0xC
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLAUI 0x11
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_XLPPI 0x12
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CAUI 0x21
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_CPPI 0x22
+ #define NVM_CFG1_PORT_SERDES_NET_INTERFACE_25GAUI 0x31
+ #define NVM_CFG1_PORT_AN_MODE_MASK 0xFF000000
+ #define NVM_CFG1_PORT_AN_MODE_OFFSET 24
+ #define NVM_CFG1_PORT_AN_MODE_NONE 0x0
+ #define NVM_CFG1_PORT_AN_MODE_CL73 0x1
+ #define NVM_CFG1_PORT_AN_MODE_CL37 0x2
+ #define NVM_CFG1_PORT_AN_MODE_CL73_BAM 0x3
#define NVM_CFG1_PORT_AN_MODE_CL37_BAM 0x4
#define NVM_CFG1_PORT_AN_MODE_HPAM 0x5
#define NVM_CFG1_PORT_AN_MODE_SGMII 0x6
- u32 mgmt_traffic; /* 0x20 */
-#define NVM_CFG1_PORT_RESERVED61_MASK 0x0000000F
-#define NVM_CFG1_PORT_RESERVED61_OFFSET 0
- u32 ext_phy; /* 0x24 */
-#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_MASK 0x000000FF
-#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_OFFSET 0
-#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_NONE 0x0
-#define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM84844 0x1
-#define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_MASK 0x0000FF00
-#define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_OFFSET 8
- u32 mba_cfg1; /* 0x28 */
-#define NVM_CFG1_PORT_PREBOOT_OPROM_MASK 0x00000001
-#define NVM_CFG1_PORT_PREBOOT_OPROM_OFFSET 0
-#define NVM_CFG1_PORT_PREBOOT_OPROM_DISABLED 0x0
-#define NVM_CFG1_PORT_PREBOOT_OPROM_ENABLED 0x1
-#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_MASK 0x00000006
-#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_OFFSET 1
-#define NVM_CFG1_PORT_MBA_DELAY_TIME_MASK 0x00000078
-#define NVM_CFG1_PORT_MBA_DELAY_TIME_OFFSET 3
-#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_MASK 0x00000080
-#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_OFFSET 7
-#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_S 0x0
-#define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_B 0x1
-#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_MASK 0x00000100
-#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_OFFSET 8
-#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_DISABLED 0x0
-#define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_ENABLED 0x1
-#define NVM_CFG1_PORT_RESERVED5_MASK 0x0001FE00
-#define NVM_CFG1_PORT_RESERVED5_OFFSET 9
-#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_MASK 0x001E0000
-#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_OFFSET 17
-#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_AUTONEG 0x0
-#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_1G 0x1
-#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_10G 0x2
-#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_25G 0x4
-#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_40G 0x5
-#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_50G 0x6
+ u32 mgmt_traffic; /* 0x20 */
+ #define NVM_CFG1_PORT_RESERVED61_MASK 0x0000000F
+ #define NVM_CFG1_PORT_RESERVED61_OFFSET 0
+ u32 ext_phy; /* 0x24 */
+ #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_MASK 0x000000FF
+ #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_OFFSET 0
+ #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_NONE 0x0
+ #define NVM_CFG1_PORT_EXTERNAL_PHY_TYPE_BCM84844 0x1
+ #define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_EXTERNAL_PHY_ADDRESS_OFFSET 8
+ u32 mba_cfg1; /* 0x28 */
+ #define NVM_CFG1_PORT_PREBOOT_OPROM_MASK 0x00000001
+ #define NVM_CFG1_PORT_PREBOOT_OPROM_OFFSET 0
+ #define NVM_CFG1_PORT_PREBOOT_OPROM_DISABLED 0x0
+ #define NVM_CFG1_PORT_PREBOOT_OPROM_ENABLED 0x1
+ #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_MASK 0x00000006
+ #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_TYPE_OFFSET 1
+ #define NVM_CFG1_PORT_MBA_DELAY_TIME_MASK 0x00000078
+ #define NVM_CFG1_PORT_MBA_DELAY_TIME_OFFSET 3
+ #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_MASK 0x00000080
+ #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_OFFSET 7
+ #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_S 0x0
+ #define NVM_CFG1_PORT_MBA_SETUP_HOT_KEY_CTRL_B 0x1
+ #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_MASK 0x00000100
+ #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_OFFSET 8
+ #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_DISABLED 0x0
+ #define NVM_CFG1_PORT_MBA_HIDE_SETUP_PROMPT_ENABLED 0x1
+ #define NVM_CFG1_PORT_RESERVED5_MASK 0x0001FE00
+ #define NVM_CFG1_PORT_RESERVED5_OFFSET 9
+ #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_MASK 0x001E0000
+ #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_OFFSET 17
+ #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_50G 0x6
#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_100G 0x7
-#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_SMARTLINQ 0x8
+ #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_SMARTLINQ 0x8
#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_MASK 0x00E00000
-#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_OFFSET 21
- u32 mba_cfg2; /* 0x2C */
-#define NVM_CFG1_PORT_RESERVED65_MASK 0x0000FFFF
-#define NVM_CFG1_PORT_RESERVED65_OFFSET 0
-#define NVM_CFG1_PORT_RESERVED66_MASK 0x00010000
-#define NVM_CFG1_PORT_RESERVED66_OFFSET 16
- u32 vf_cfg; /* 0x30 */
-#define NVM_CFG1_PORT_RESERVED8_MASK 0x0000FFFF
-#define NVM_CFG1_PORT_RESERVED8_OFFSET 0
-#define NVM_CFG1_PORT_RESERVED6_MASK 0x000F0000
-#define NVM_CFG1_PORT_RESERVED6_OFFSET 16
- struct nvm_cfg_mac_address lldp_mac_address; /* 0x34 */
- u32 led_port_settings; /* 0x3C */
-#define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_MASK 0x000000FF
-#define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_OFFSET 0
-#define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_MASK 0x0000FF00
-#define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_OFFSET 8
-#define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_MASK 0x00FF0000
-#define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_OFFSET 16
-#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_1G 0x1
-#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_10G 0x2
-#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_25G 0x8
-#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_40G 0x10
-#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_50G 0x20
+ #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_OFFSET 21
+ u32 mba_cfg2; /* 0x2C */
+ #define NVM_CFG1_PORT_RESERVED65_MASK 0x0000FFFF
+ #define NVM_CFG1_PORT_RESERVED65_OFFSET 0
+ #define NVM_CFG1_PORT_RESERVED66_MASK 0x00010000
+ #define NVM_CFG1_PORT_RESERVED66_OFFSET 16
+ u32 vf_cfg; /* 0x30 */
+ #define NVM_CFG1_PORT_RESERVED8_MASK 0x0000FFFF
+ #define NVM_CFG1_PORT_RESERVED8_OFFSET 0
+ #define NVM_CFG1_PORT_RESERVED6_MASK 0x000F0000
+ #define NVM_CFG1_PORT_RESERVED6_OFFSET 16
+ struct nvm_cfg_mac_address lldp_mac_address; /* 0x34 */
+ u32 led_port_settings; /* 0x3C */
+ #define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_MASK 0x000000FF
+ #define NVM_CFG1_PORT_LANE_LED_SPD_0_SEL_OFFSET 0
+ #define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_LANE_LED_SPD_1_SEL_OFFSET 8
+ #define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_MASK 0x00FF0000
+ #define NVM_CFG1_PORT_LANE_LED_SPD_2_SEL_OFFSET 16
+ #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_1G 0x1
+ #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_10G 0x2
+ #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_25G 0x8
+ #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_40G 0x10
+ #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_50G 0x20
#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_100G 0x40
- u32 transceiver_00; /* 0x40 */
+ u32 transceiver_00; /* 0x40 */
/* Define for mapping of transceiver signal module absent */
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_MASK 0x000000FF
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_OFFSET 0
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_NA 0x0
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO0 0x1
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO1 0x2
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO2 0x3
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO3 0x4
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO4 0x5
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO5 0x6
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO6 0x7
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO7 0x8
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO8 0x9
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO9 0xA
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO10 0xB
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO11 0xC
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO12 0xD
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO13 0xE
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO14 0xF
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO15 0x10
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO16 0x11
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO17 0x12
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO18 0x13
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO19 0x14
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO20 0x15
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO21 0x16
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO22 0x17
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO23 0x18
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO24 0x19
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO25 0x1A
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO26 0x1B
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO27 0x1C
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO28 0x1D
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO29 0x1E
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO30 0x1F
-#define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO31 0x20
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_MASK 0x000000FF
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_OFFSET 0
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_NA 0x0
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO0 0x1
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO1 0x2
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO2 0x3
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO3 0x4
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO4 0x5
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO5 0x6
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO6 0x7
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO7 0x8
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO8 0x9
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO9 0xA
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO10 0xB
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO11 0xC
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO12 0xD
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO13 0xE
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO14 0xF
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO15 0x10
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO16 0x11
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO17 0x12
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO18 0x13
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO19 0x14
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO20 0x15
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO21 0x16
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO22 0x17
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO23 0x18
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO24 0x19
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO25 0x1A
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO26 0x1B
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO27 0x1C
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO28 0x1D
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO29 0x1E
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO30 0x1F
+ #define NVM_CFG1_PORT_TRANS_MODULE_ABS_GPIO31 0x20
/* Define the GPIO mux settings to switch i2c mux to this port */
-#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_MASK 0x00000F00
-#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_OFFSET 8
-#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_MASK 0x0000F000
-#define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_OFFSET 12
- u32 device_ids; /* 0x44 */
-#define NVM_CFG1_PORT_ETH_DID_SUFFIX_MASK 0x000000FF
-#define NVM_CFG1_PORT_ETH_DID_SUFFIX_OFFSET 0
-#define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_MASK 0xFF000000
-#define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_OFFSET 24
- u32 board_cfg; /* 0x48 */
- /* This field defines the board technology
+ #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_MASK 0x00000F00
+ #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_0_OFFSET 8
+ #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_MASK 0x0000F000
+ #define NVM_CFG1_PORT_I2C_MUX_SEL_VALUE_1_OFFSET 12
+ u32 device_ids; /* 0x44 */
+ #define NVM_CFG1_PORT_ETH_DID_SUFFIX_MASK 0x000000FF
+ #define NVM_CFG1_PORT_ETH_DID_SUFFIX_OFFSET 0
+ #define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_MASK 0xFF000000
+ #define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_OFFSET 24
+ u32 board_cfg; /* 0x48 */
+ /* This field defines the board technology
* (backplane, transceiver, external PHY)
*/
-#define NVM_CFG1_PORT_PORT_TYPE_MASK 0x000000FF
-#define NVM_CFG1_PORT_PORT_TYPE_OFFSET 0
-#define NVM_CFG1_PORT_PORT_TYPE_UNDEFINED 0x0
-#define NVM_CFG1_PORT_PORT_TYPE_MODULE 0x1
-#define NVM_CFG1_PORT_PORT_TYPE_BACKPLANE 0x2
-#define NVM_CFG1_PORT_PORT_TYPE_EXT_PHY 0x3
-#define NVM_CFG1_PORT_PORT_TYPE_MODULE_SLAVE 0x4
+ #define NVM_CFG1_PORT_PORT_TYPE_MASK 0x000000FF
+ #define NVM_CFG1_PORT_PORT_TYPE_OFFSET 0
+ #define NVM_CFG1_PORT_PORT_TYPE_UNDEFINED 0x0
+ #define NVM_CFG1_PORT_PORT_TYPE_MODULE 0x1
+ #define NVM_CFG1_PORT_PORT_TYPE_BACKPLANE 0x2
+ #define NVM_CFG1_PORT_PORT_TYPE_EXT_PHY 0x3
+ #define NVM_CFG1_PORT_PORT_TYPE_MODULE_SLAVE 0x4
/* This field defines the GPIO mapped to tx_disable signal in SFP */
-#define NVM_CFG1_PORT_TX_DISABLE_MASK 0x0000FF00
-#define NVM_CFG1_PORT_TX_DISABLE_OFFSET 8
-#define NVM_CFG1_PORT_TX_DISABLE_NA 0x0
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO0 0x1
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO1 0x2
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO2 0x3
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO3 0x4
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO4 0x5
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO5 0x6
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO6 0x7
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO7 0x8
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO8 0x9
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO9 0xA
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO10 0xB
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO11 0xC
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO12 0xD
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO13 0xE
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO14 0xF
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO15 0x10
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO16 0x11
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO17 0x12
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO18 0x13
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO19 0x14
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO20 0x15
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO21 0x16
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO22 0x17
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO23 0x18
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO24 0x19
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO25 0x1A
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO26 0x1B
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO27 0x1C
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO28 0x1D
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO29 0x1E
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO30 0x1F
-#define NVM_CFG1_PORT_TX_DISABLE_GPIO31 0x20
+ #define NVM_CFG1_PORT_TX_DISABLE_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_TX_DISABLE_OFFSET 8
+ #define NVM_CFG1_PORT_TX_DISABLE_NA 0x0
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO0 0x1
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO1 0x2
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO2 0x3
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO3 0x4
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO4 0x5
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO5 0x6
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO6 0x7
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO7 0x8
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO8 0x9
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO9 0xA
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO10 0xB
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO11 0xC
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO12 0xD
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO13 0xE
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO14 0xF
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO15 0x10
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO16 0x11
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO17 0x12
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO18 0x13
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO19 0x14
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO20 0x15
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO21 0x16
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO22 0x17
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO23 0x18
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO24 0x19
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO25 0x1A
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO26 0x1B
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO27 0x1C
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO28 0x1D
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO29 0x1E
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO30 0x1F
+ #define NVM_CFG1_PORT_TX_DISABLE_GPIO31 0x20
u32 reserved[131]; /* 0x4C */
};
struct nvm_cfg1_func {
- struct nvm_cfg_mac_address mac_address; /* 0x0 */
- u32 rsrv1; /* 0x8 */
-#define NVM_CFG1_FUNC_RESERVED1_MASK 0x0000FFFF
-#define NVM_CFG1_FUNC_RESERVED1_OFFSET 0
-#define NVM_CFG1_FUNC_RESERVED2_MASK 0xFFFF0000
-#define NVM_CFG1_FUNC_RESERVED2_OFFSET 16
- u32 rsrv2; /* 0xC */
-#define NVM_CFG1_FUNC_RESERVED3_MASK 0x0000FFFF
-#define NVM_CFG1_FUNC_RESERVED3_OFFSET 0
-#define NVM_CFG1_FUNC_RESERVED4_MASK 0xFFFF0000
-#define NVM_CFG1_FUNC_RESERVED4_OFFSET 16
- u32 device_id; /* 0x10 */
-#define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_MASK 0x0000FFFF
-#define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_OFFSET 0
-#define NVM_CFG1_FUNC_RESERVED77_MASK 0xFFFF0000
-#define NVM_CFG1_FUNC_RESERVED77_OFFSET 16
- u32 cmn_cfg; /* 0x14 */
-#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_MASK 0x00000007
-#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_OFFSET 0
-#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_PXE 0x0
-#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_NONE 0x7
-#define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_MASK 0x0007FFF8
-#define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_OFFSET 3
-#define NVM_CFG1_FUNC_PERSONALITY_MASK 0x00780000
-#define NVM_CFG1_FUNC_PERSONALITY_OFFSET 19
-#define NVM_CFG1_FUNC_PERSONALITY_ETHERNET 0x0
-#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_MASK 0x7F800000
-#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_OFFSET 23
-#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_MASK 0x80000000
-#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_OFFSET 31
-#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_DISABLED 0x0
-#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_ENABLED 0x1
- u32 pci_cfg; /* 0x18 */
-#define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_MASK 0x0000007F
-#define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_OFFSET 0
+ struct nvm_cfg_mac_address mac_address; /* 0x0 */
+ u32 rsrv1; /* 0x8 */
+ #define NVM_CFG1_FUNC_RESERVED1_MASK 0x0000FFFF
+ #define NVM_CFG1_FUNC_RESERVED1_OFFSET 0
+ #define NVM_CFG1_FUNC_RESERVED2_MASK 0xFFFF0000
+ #define NVM_CFG1_FUNC_RESERVED2_OFFSET 16
+ u32 rsrv2; /* 0xC */
+ #define NVM_CFG1_FUNC_RESERVED3_MASK 0x0000FFFF
+ #define NVM_CFG1_FUNC_RESERVED3_OFFSET 0
+ #define NVM_CFG1_FUNC_RESERVED4_MASK 0xFFFF0000
+ #define NVM_CFG1_FUNC_RESERVED4_OFFSET 16
+ u32 device_id; /* 0x10 */
+ #define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_MASK 0x0000FFFF
+ #define NVM_CFG1_FUNC_MF_VENDOR_DEVICE_ID_OFFSET 0
+ #define NVM_CFG1_FUNC_RESERVED77_MASK 0xFFFF0000
+ #define NVM_CFG1_FUNC_RESERVED77_OFFSET 16
+ u32 cmn_cfg; /* 0x14 */
+ #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_MASK 0x00000007
+ #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_OFFSET 0
+ #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_PXE 0x0
+ #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_NONE 0x7
+ #define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_MASK 0x0007FFF8
+ #define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_OFFSET 3
+ #define NVM_CFG1_FUNC_PERSONALITY_MASK 0x00780000
+ #define NVM_CFG1_FUNC_PERSONALITY_OFFSET 19
+ #define NVM_CFG1_FUNC_PERSONALITY_ETHERNET 0x0
+ #define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_MASK 0x7F800000
+ #define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_OFFSET 23
+ #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_MASK 0x80000000
+ #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_OFFSET 31
+ #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_DISABLED 0x0
+ #define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_ENABLED 0x1
+ u32 pci_cfg; /* 0x18 */
+ #define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_MASK 0x0000007F
+ #define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_OFFSET 0
#define NVM_CFG1_FUNC_RESERVESD12_MASK 0x00003F80
#define NVM_CFG1_FUNC_RESERVESD12_OFFSET 7
-#define NVM_CFG1_FUNC_BAR1_SIZE_MASK 0x0003C000
-#define NVM_CFG1_FUNC_BAR1_SIZE_OFFSET 14
-#define NVM_CFG1_FUNC_BAR1_SIZE_DISABLED 0x0
-#define NVM_CFG1_FUNC_BAR1_SIZE_64K 0x1
-#define NVM_CFG1_FUNC_BAR1_SIZE_128K 0x2
-#define NVM_CFG1_FUNC_BAR1_SIZE_256K 0x3
-#define NVM_CFG1_FUNC_BAR1_SIZE_512K 0x4
-#define NVM_CFG1_FUNC_BAR1_SIZE_1M 0x5
-#define NVM_CFG1_FUNC_BAR1_SIZE_2M 0x6
-#define NVM_CFG1_FUNC_BAR1_SIZE_4M 0x7
-#define NVM_CFG1_FUNC_BAR1_SIZE_8M 0x8
-#define NVM_CFG1_FUNC_BAR1_SIZE_16M 0x9
-#define NVM_CFG1_FUNC_BAR1_SIZE_32M 0xA
-#define NVM_CFG1_FUNC_BAR1_SIZE_64M 0xB
-#define NVM_CFG1_FUNC_BAR1_SIZE_128M 0xC
-#define NVM_CFG1_FUNC_BAR1_SIZE_256M 0xD
-#define NVM_CFG1_FUNC_BAR1_SIZE_512M 0xE
-#define NVM_CFG1_FUNC_BAR1_SIZE_1G 0xF
-#define NVM_CFG1_FUNC_MAX_BANDWIDTH_MASK 0x03FC0000
-#define NVM_CFG1_FUNC_MAX_BANDWIDTH_OFFSET 18
- u32 preboot_generic_cfg; /* 0x2C */
-#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_MASK 0x0000FFFF
-#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0
-#define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK 0x00010000
-#define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET 16
- u32 reserved[8]; /* 0x30 */
+ #define NVM_CFG1_FUNC_BAR1_SIZE_MASK 0x0003C000
+ #define NVM_CFG1_FUNC_BAR1_SIZE_OFFSET 14
+ #define NVM_CFG1_FUNC_BAR1_SIZE_DISABLED 0x0
+ #define NVM_CFG1_FUNC_BAR1_SIZE_64K 0x1
+ #define NVM_CFG1_FUNC_BAR1_SIZE_128K 0x2
+ #define NVM_CFG1_FUNC_BAR1_SIZE_256K 0x3
+ #define NVM_CFG1_FUNC_BAR1_SIZE_512K 0x4
+ #define NVM_CFG1_FUNC_BAR1_SIZE_1M 0x5
+ #define NVM_CFG1_FUNC_BAR1_SIZE_2M 0x6
+ #define NVM_CFG1_FUNC_BAR1_SIZE_4M 0x7
+ #define NVM_CFG1_FUNC_BAR1_SIZE_8M 0x8
+ #define NVM_CFG1_FUNC_BAR1_SIZE_16M 0x9
+ #define NVM_CFG1_FUNC_BAR1_SIZE_32M 0xA
+ #define NVM_CFG1_FUNC_BAR1_SIZE_64M 0xB
+ #define NVM_CFG1_FUNC_BAR1_SIZE_128M 0xC
+ #define NVM_CFG1_FUNC_BAR1_SIZE_256M 0xD
+ #define NVM_CFG1_FUNC_BAR1_SIZE_512M 0xE
+ #define NVM_CFG1_FUNC_BAR1_SIZE_1G 0xF
+ #define NVM_CFG1_FUNC_MAX_BANDWIDTH_MASK 0x03FC0000
+ #define NVM_CFG1_FUNC_MAX_BANDWIDTH_OFFSET 18
+ u32 preboot_generic_cfg; /* 0x2C */
+ #define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_MASK 0x0000FFFF
+ #define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0
+ #define NVM_CFG1_FUNC_PREBOOT_VLAN_MASK 0x00010000
+ #define NVM_CFG1_FUNC_PREBOOT_VLAN_OFFSET 16
+ u32 reserved[8]; /* 0x30 */
};
struct nvm_cfg1 {
- struct nvm_cfg1_glob glob; /* 0x0 */
- struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x140 */
- struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX]; /* 0x230 */
- struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX]; /* 0xB90 */
+ struct nvm_cfg1_glob glob; /* 0x0 */
+ struct nvm_cfg1_path path[MCP_GLOB_PATH_MAX]; /* 0x140 */
+ struct nvm_cfg1_port port[MCP_GLOB_PORT_MAX]; /* 0x230 */
+ struct nvm_cfg1_func func[MCP_GLOB_FUNC_MAX]; /* 0xB90 */
};
/******************************************
--
1.8.3.1
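Every field in the nvm_cfg structures above follows one convention: a
_MASK/_OFFSET define pair says where the field sits in its 32-bit word, and
the remaining defines enumerate its legal values. A minimal sketch of the
decode side, assuming a config word already read from NVM (the NVM_GET_FIELD
helper, nvm_wants_rx_pause and cfg_word are hypothetical names; the struct
member that holds the flow-control field is outside the quoted hunk):

/* Hypothetical helper: extract a field via its _MASK/_OFFSET define pair. */
#define NVM_GET_FIELD(val, name) \
	(((val) & name##_MASK) >> name##_OFFSET)

static bool nvm_wants_rx_pause(u32 cfg_word)
{
	/* Field occupies bits 11..13 (mask 0x00003800, offset 11) */
	u32 fc = NVM_GET_FIELD(cfg_word, NVM_CFG1_PORT_MFW_FLOW_CONTROL);

	/* AUTONEG/RX/TX are flag bits within the extracted field */
	return !!(fc & NVM_CFG1_PORT_MFW_FLOW_CONTROL_RX);
}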
* [dpdk-dev] [PATCH v4 03/32] net/qede: use FW CONFIG defines as needed
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 01/32] net/qede/base: add new init files and rearrange the code Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 02/32] net/qede/base: formatting changes Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 04/32] net/qede/base: add HSI changes and register defines Rasesh Mody
` (29 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Replace CONFIG_QED_BINARY_FW with CONFIG_ECORE_BINARY_FW.
Use the CONFIG_ECORE_BINARY_FW and CONFIG_ECORE_ZIPPED_FW defines to guard
the binary-firmware and zipped-firmware code paths where needed.
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/base/bcm_osal.c | 2 ++
drivers/net/qede/base/ecore.h | 20 +++++++++++++++-----
drivers/net/qede/qede_main.c | 20 +++++++++++---------
3 files changed, 28 insertions(+), 14 deletions(-)
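The mechanics here are plain conditional compilation: the zlib include, the
unzip helper and the stream allocation exist only under
CONFIG_ECORE_ZIPPED_FW, and the firmware-file load path only under
CONFIG_ECORE_BINARY_FW, so a build with either knob off simply omits those
symbols and their callers. A condensed sketch of the provider-side guard (the
declaration matches the patch; presenting it as a standalone header fragment
is illustrative only):

#ifdef CONFIG_ECORE_ZIPPED_FW
#include <zlib.h>

/* Compiled only when zipped-FW support is configured; without the knob,
 * the zlib dependency disappears along with this symbol.
 */
u32 qede_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
		    u8 *input_buf, u32 max_size, u8 *unzip_buf);
#endif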
diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 16029b5..67270fd 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -152,6 +152,7 @@ void *osal_dma_alloc_coherent_aligned(struct ecore_dev *p_dev,
return mz->addr;
}
+#ifdef CONFIG_ECORE_ZIPPED_FW
u32 qede_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
u8 *input_buf, u32 max_size, u8 *unzip_buf)
{
@@ -182,6 +183,7 @@ u32 qede_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
return p_hwfn->stream->total_out / 4;
}
+#endif
void
qede_get_mcp_proto_stats(struct ecore_dev *edev,
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index c83b22b..b9127de 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -9,6 +9,18 @@
#ifndef __ECORE_H
#define __ECORE_H
+/* @DPDK */
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+#define CONFIG_ECORE_BINARY_FW
+#define CONFIG_ECORE_ZIPPED_FW
+
+#ifdef CONFIG_ECORE_ZIPPED_FW
+#include <zlib.h>
+#endif
+
#include "ecore_hsi_common.h"
#include "ecore_hsi_debug_tools.h"
#include "ecore_hsi_init_func.h"
@@ -423,9 +435,6 @@ struct storm_stats {
u32 len;
};
-#define CONFIG_ECORE_BINARY_FW
-#define CONFIG_ECORE_ZIPPED_FW
-
struct ecore_fw_data {
#ifdef CONFIG_ECORE_BINARY_FW
struct fw_ver_info *fw_ver_info;
@@ -521,8 +530,8 @@ struct ecore_hwfn {
/* QM init */
struct ecore_qm_info qm_info;
- /* Buffer for unzipping firmware data */
#ifdef CONFIG_ECORE_ZIPPED_FW
+ /* Buffer for unzipping firmware data */
void *unzip_buf;
#endif
@@ -674,9 +683,10 @@ struct ecore_dev {
bool b_is_emul_full;
#endif
+#ifdef CONFIG_ECORE_BINARY_FW /* @DPDK */
void *firmware;
-
u64 fw_len;
+#endif
};
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 73608c6..2e62371 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -6,10 +6,6 @@
* See LICENSE.qede_pmd for copyright and licensing details.
*/
-#include <sys/stat.h>
-#include <fcntl.h>
-#include <unistd.h>
-#include <zlib.h>
#include <limits.h>
#include <rte_alarm.h>
@@ -20,7 +16,6 @@ static uint8_t npar_tx_switching = 1;
/* Alarm timeout. */
#define QEDE_ALARM_TIMEOUT_US 100000
-#define CONFIG_QED_BINARY_FW
/* Global variable to hold absolute path of fw file */
char fw_file[PATH_MAX];
@@ -83,6 +78,7 @@ static int qed_nic_setup(struct ecore_dev *edev)
return rc;
}
+#ifdef CONFIG_ECORE_ZIPPED_FW
static int qed_alloc_stream_mem(struct ecore_dev *edev)
{
int i;
@@ -112,7 +108,9 @@ static void qed_free_stream_mem(struct ecore_dev *edev)
OSAL_FREE(p_hwfn->p_dev, p_hwfn->stream);
}
}
+#endif
+#ifdef CONFIG_ECORE_BINARY_FW
static int qed_load_firmware_data(struct ecore_dev *edev)
{
int fd;
@@ -158,6 +156,7 @@ static int qed_load_firmware_data(struct ecore_dev *edev)
return 0;
}
+#endif
static void qed_handle_bulletin_change(struct ecore_hwfn *hwfn)
{
@@ -222,7 +221,7 @@ static int qed_slowpath_start(struct ecore_dev *edev,
struct ecore_tunn_start_params tunn_info;
#endif
-#ifdef CONFIG_QED_BINARY_FW
+#ifdef CONFIG_ECORE_BINARY_FW
if (IS_PF(edev)) {
rc = qed_load_firmware_data(edev);
if (rc) {
@@ -240,7 +239,7 @@ static int qed_slowpath_start(struct ecore_dev *edev,
/* set int_coalescing_mode */
edev->int_coalescing_mode = ECORE_COAL_MODE_ENABLE;
- /* Should go with CONFIG_QED_BINARY_FW */
+#ifdef CONFIG_ECORE_ZIPPED_FW
if (IS_PF(edev)) {
/* Allocate stream for unzipping */
rc = qed_alloc_stream_mem(edev);
@@ -252,9 +251,10 @@ static int qed_slowpath_start(struct ecore_dev *edev,
}
qed_start_iov_task(edev);
+#endif
/* Start the slowpath */
-#ifdef CONFIG_QED_BINARY_FW
+#ifdef CONFIG_ECORE_BINARY_FW
if (IS_PF(edev))
data = edev->firmware;
#endif
@@ -307,7 +307,7 @@ static int qed_slowpath_start(struct ecore_dev *edev,
err2:
ecore_resc_free(edev);
err:
-#ifdef CONFIG_QED_BINARY_FW
+#ifdef CONFIG_ECORE_BINARY_FW
if (IS_PF(edev)) {
if (edev->firmware)
rte_free(edev->firmware);
@@ -625,7 +625,9 @@ static int qed_slowpath_stop(struct ecore_dev *edev)
return -ENODEV;
if (IS_PF(edev)) {
+#ifdef CONFIG_ECORE_ZIPPED_FW
qed_free_stream_mem(edev);
+#endif
#ifdef CONFIG_QED_SRIOV
if (IS_QED_ETH_IF(edev))
--
1.8.3.1
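On the consumer side, qed_slowpath_start() keeps the same shape: the firmware
file is read only when the binary-FW knob is on, and only for a PF (VFs never
load the file themselves). A condensed sketch of that guard, lifted from the
hunk above (the early return stands in for the patch's goto-based error path):

#ifdef CONFIG_ECORE_BINARY_FW
	if (IS_PF(edev)) {
		/* Only the PF reads the FW file from disk */
		rc = qed_load_firmware_data(edev);
		if (rc)
			return rc; /* the patch jumps to an err label instead */
	}
#endif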
* [dpdk-dev] [PATCH v4 04/32] net/qede/base: add HSI changes and register defines
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (2 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 03/32] net/qede: use FW CONFIG defines as needed Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 12:37 ` Ferruh Yigit
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 05/32] net/qede/base: add attention formatting string Rasesh Mody
` (28 subsequent siblings)
32 siblings, 1 reply; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
- add the hardware software interface (HSI) changes
- add register definitions
These are required for the 8.10.9.0 FW upgrade.
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/base/common_hsi.h | 1202 +++++++++++++++++++++++++-----
drivers/net/qede/base/ecore_dev.c | 2 -
drivers/net/qede/base/ecore_hsi_common.h | 295 +++++++-
drivers/net/qede/base/ecore_hw.c | 4 +-
drivers/net/qede/base/eth_common.h | 27 -
drivers/net/qede/base/reg_addr.h | 36 +
6 files changed, 1330 insertions(+), 236 deletions(-)
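Most of what follows is constants plus MASK/SHIFT-annotated structures, but
some of the new comments carry formulas worth spelling out. For instance, the
coalescing_timeset comments below document
timeout_ticks = TimeSet shl (TimerRes + 1): a TimeSet of 0x10 at timer
resolution 1 yields 0x10 << 2 = 64 ticks. A minimal compilable sketch of that
computation (the helper name is hypothetical; the 0x7F mask and the formula
come from the patch):

#include <stdint.h>

/* Hypothetical helper: expand a 7-bit coalescing TimeSet into timer ticks
 * per the formula documented in the patch:
 * timeout_ticks = TimeSet << (TimerRes + 1)
 */
static uint32_t coalescing_timeout_ticks(uint8_t timeset, uint8_t timer_res)
{
	uint32_t ts = timeset & 0x7F; /* COALESCING_TIMESET_TIMESET_MASK */

	return ts << (timer_res + 1);
}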
diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index 4574800..b431c78 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -8,12 +8,89 @@
#ifndef __COMMON_HSI__
#define __COMMON_HSI__
+/********************************/
+/* PROTOCOL COMMON FW CONSTANTS */
+/********************************/
+
+/* Temporarily here should be added to HSI automatically by resource allocation
+ * tool.
+ */
+#define T_TEST_AGG_INT_TEMP 6
+#define M_TEST_AGG_INT_TEMP 8
+#define U_TEST_AGG_INT_TEMP 6
+#define X_TEST_AGG_INT_TEMP 14
+#define Y_TEST_AGG_INT_TEMP 4
+#define P_TEST_AGG_INT_TEMP 4
+
+#define X_FINAL_CLEANUP_AGG_INT 1
+
+#define EVENT_RING_PAGE_SIZE_BYTES 4096
+
+#define NUM_OF_GLOBAL_QUEUES 128
+#define COMMON_QUEUE_ENTRY_MAX_BYTE_SIZE 64
+
+#define ISCSI_CDU_TASK_SEG_TYPE 0
+#define FCOE_CDU_TASK_SEG_TYPE 0
+#define RDMA_CDU_TASK_SEG_TYPE 1
+
+#define FW_ASSERT_GENERAL_ATTN_IDX 32
+
+#define MAX_PINNED_CCFC 32
+
+#define EAGLE_ENG1_WORKAROUND_NIG_FLOWCTRL_MODE 3
+
+/* Queue Zone sizes in bytes */
+#define TSTORM_QZONE_SIZE 8 /*tstorm_scsi_queue_zone*/
+#define MSTORM_QZONE_SIZE 16 /*mstorm_eth_queue_zone. Used only for RX
+ *producer of VFs in backward compatibility
+ *mode.
+ */
+#define USTORM_QZONE_SIZE 8 /*ustorm_eth_queue_zone*/
+#define XSTORM_QZONE_SIZE 8 /*xstorm_eth_queue_zone*/
+#define YSTORM_QZONE_SIZE 0
+#define PSTORM_QZONE_SIZE 0
+
+/*Log of mstorm default VF zone size.*/
+#define MSTORM_VF_ZONE_DEFAULT_SIZE_LOG 7
+/*Maximum number of RX queues that can be allocated to VF by default*/
+#define ETH_MAX_NUM_RX_QUEUES_PER_VF_DEFAULT 16
+/*Maximum number of RX queues that can be allocated to VF with doubled VF zone
+ * size. Up to 96 VF supported in this mode
+ */
+#define ETH_MAX_NUM_RX_QUEUES_PER_VF_DOUBLE 48
+/*Maximum number of RX queues that can be allocated to VF with 4 VF zone size.
+ * Up to 48 VF supported in this mode
+ */
+#define ETH_MAX_NUM_RX_QUEUES_PER_VF_QUAD 112
+
+
+/********************************/
+/* CORE (LIGHT L2) FW CONSTANTS */
+/********************************/
+
+#define CORE_LL2_MAX_RAMROD_PER_CON 8
+#define CORE_LL2_TX_BD_PAGE_SIZE_BYTES 4096
+#define CORE_LL2_RX_BD_PAGE_SIZE_BYTES 4096
+#define CORE_LL2_RX_CQE_PAGE_SIZE_BYTES 4096
+#define CORE_LL2_RX_NUM_NEXT_PAGE_BDS 1
+
+#define CORE_LL2_TX_MAX_BDS_PER_PACKET 12
+
+#define CORE_SPQE_PAGE_SIZE_BYTES 4096
+
+#define MAX_NUM_LL2_RX_QUEUES 32
+#define MAX_NUM_LL2_TX_STATS_COUNTERS 32
+
+
+/****************************************************************************/
+/* Include firmware version number only- do not add constants here to avoid */
+/* redundant compilations */
+/****************************************************************************/
-#define CORE_SPQE_PAGE_SIZE_BYTES 4096
#define FW_MAJOR_VERSION 8
-#define FW_MINOR_VERSION 7
-#define FW_REVISION_VERSION 7
+#define FW_MINOR_VERSION 10
+#define FW_REVISION_VERSION 9
#define FW_ENGINEERING_VERSION 0
/***********************/
@@ -21,70 +98,96 @@
/***********************/
/* PCI functions */
-#define MAX_NUM_PORTS_K2 (4)
-#define MAX_NUM_PORTS_BB (2)
-#define MAX_NUM_PORTS (MAX_NUM_PORTS_K2)
+#define MAX_NUM_PORTS_K2 (4)
+#define MAX_NUM_PORTS_BB (2)
+#define MAX_NUM_PORTS (MAX_NUM_PORTS_K2)
-#define MAX_NUM_PFS_K2 (16)
-#define MAX_NUM_PFS_BB (8)
-#define MAX_NUM_PFS (MAX_NUM_PFS_K2)
-#define MAX_NUM_OF_PFS_IN_CHIP (16) /* On both engines */
+#define MAX_NUM_PFS_K2 (16)
+#define MAX_NUM_PFS_BB (8)
+#define MAX_NUM_PFS (MAX_NUM_PFS_K2)
+#define MAX_NUM_OF_PFS_IN_CHIP (16) /* On both engines */
-#define MAX_NUM_VFS_K2 (192)
-#define MAX_NUM_VFS_BB (120)
-#define MAX_NUM_VFS (MAX_NUM_VFS_K2)
+#define MAX_NUM_VFS_K2 (192)
+#define MAX_NUM_VFS_BB (120)
+#define MAX_NUM_VFS (MAX_NUM_VFS_K2)
#define MAX_NUM_FUNCTIONS_BB (MAX_NUM_PFS_BB + MAX_NUM_VFS_BB)
-#define MAX_NUM_FUNCTIONS (MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_NUM_FUNCTIONS_K2 (MAX_NUM_PFS_K2 + MAX_NUM_VFS_K2)
+#define MAX_NUM_FUNCTIONS (MAX_NUM_PFS + MAX_NUM_VFS)
+/* in both BB and K2, the VF number starts from 16. so for arrays containing all
+ * possible PFs and VFs - we need a constant for this size
+ */
#define MAX_FUNCTION_NUMBER_BB (MAX_NUM_PFS + MAX_NUM_VFS_BB)
-#define MAX_FUNCTION_NUMBER (MAX_NUM_PFS + MAX_NUM_VFS)
+#define MAX_FUNCTION_NUMBER_K2 (MAX_NUM_PFS + MAX_NUM_VFS_K2)
+#define MAX_FUNCTION_NUMBER (MAX_NUM_PFS + MAX_NUM_VFS)
-#define MAX_NUM_VPORTS_K2 (208)
-#define MAX_NUM_VPORTS_BB (160)
-#define MAX_NUM_VPORTS (MAX_NUM_VPORTS_K2)
+#define MAX_NUM_VPORTS_K2 (208)
+#define MAX_NUM_VPORTS_BB (160)
+#define MAX_NUM_VPORTS (MAX_NUM_VPORTS_K2)
#define MAX_NUM_L2_QUEUES_K2 (320)
#define MAX_NUM_L2_QUEUES_BB (256)
-#define MAX_NUM_L2_QUEUES (MAX_NUM_L2_QUEUES_K2)
+#define MAX_NUM_L2_QUEUES (MAX_NUM_L2_QUEUES_K2)
/* Traffic classes in network-facing blocks (PBF, BTB, NIG, BRB, PRS and QM) */
+/* 4-Port K2. */
#define NUM_PHYS_TCS_4PORT_K2 (4)
-#define NUM_OF_PHYS_TCS (8)
+#define NUM_OF_PHYS_TCS (8)
-#define NUM_TCS_4PORT_K2 (NUM_PHYS_TCS_4PORT_K2 + 1)
-#define NUM_OF_TCS (NUM_OF_PHYS_TCS + 1)
+#define NUM_TCS_4PORT_K2 (NUM_PHYS_TCS_4PORT_K2 + 1)
+#define NUM_OF_TCS (NUM_OF_PHYS_TCS + 1)
-#define LB_TC (NUM_OF_PHYS_TCS)
+#define LB_TC (NUM_OF_PHYS_TCS)
/* Num of possible traffic priority values */
-#define NUM_OF_PRIO (8)
+#define NUM_OF_PRIO (8)
-#define MAX_NUM_VOQS_K2 (NUM_TCS_4PORT_K2 * MAX_NUM_PORTS_K2)
-#define MAX_NUM_VOQS_BB (NUM_OF_TCS * MAX_NUM_PORTS_BB)
-#define MAX_NUM_VOQS (MAX_NUM_VOQS_K2)
-#define MAX_PHYS_VOQS (NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB)
+#define MAX_NUM_VOQS_K2 (NUM_TCS_4PORT_K2 * MAX_NUM_PORTS_K2)
+#define MAX_NUM_VOQS_BB (NUM_OF_TCS * MAX_NUM_PORTS_BB)
+#define MAX_NUM_VOQS (MAX_NUM_VOQS_K2)
+#define MAX_PHYS_VOQS (NUM_OF_PHYS_TCS * MAX_NUM_PORTS_BB)
/* CIDs */
-#define NUM_OF_CONNECTION_TYPES (8)
-#define NUM_OF_LCIDS (320)
-#define NUM_OF_LTIDS (320)
-
+#define NUM_OF_CONNECTION_TYPES (8)
+#define NUM_OF_LCIDS (320)
+#define NUM_OF_LTIDS (320)
+
+/* Clock values */
+#define MASTER_CLK_FREQ_E4 (375e6)
+#define STORM_CLK_FREQ_E4 (1000e6)
+#define CLK25M_CLK_FREQ_E4 (25e6)
+
+/* Global PXP windows (GTT) */
+#define NUM_OF_GTT 19
+#define GTT_DWORD_SIZE_BITS 10
+#define GTT_BYTE_SIZE_BITS (GTT_DWORD_SIZE_BITS + 2)
+#define GTT_DWORD_SIZE (1 << GTT_DWORD_SIZE_BITS)
+
+/* Tools Version */
+#define TOOLS_VERSION 10
/*****************/
/* CDU CONSTANTS */
/*****************/
-#define CDU_SEG_TYPE_OFFSET_REG_TYPE_SHIFT (17)
-#define CDU_SEG_TYPE_OFFSET_REG_OFFSET_MASK (0x1ffff)
+#define CDU_SEG_TYPE_OFFSET_REG_TYPE_SHIFT (17)
+#define CDU_SEG_TYPE_OFFSET_REG_OFFSET_MASK (0x1ffff)
+
+#define CDU_VF_FL_SEG_TYPE_OFFSET_REG_TYPE_SHIFT (12)
+#define CDU_VF_FL_SEG_TYPE_OFFSET_REG_OFFSET_MASK (0xfff)
+
/*****************/
/* DQ CONSTANTS */
/*****************/
/* DEMS */
-#define DQ_DEMS_LEGACY 0
+#define DQ_DEMS_LEGACY 0
+#define DQ_DEMS_TOE_MORE_TO_SEND 3
+#define DQ_DEMS_TOE_LOCAL_ADV_WND 4
+#define DQ_DEMS_ROCE_CQ_CONS 7
-/* XCM agg val selection */
+/* XCM agg val selection (HW) */
#define DQ_XCM_AGG_VAL_SEL_WORD2 0
#define DQ_XCM_AGG_VAL_SEL_WORD3 1
#define DQ_XCM_AGG_VAL_SEL_WORD4 2
@@ -94,7 +197,7 @@
#define DQ_XCM_AGG_VAL_SEL_REG5 6
#define DQ_XCM_AGG_VAL_SEL_REG6 7
-/* XCM agg val selection */
+/* XCM agg val selection (FW) */
#define DQ_XCM_ETH_EDPM_NUM_BDS_CMD \
DQ_XCM_AGG_VAL_SEL_WORD2
#define DQ_XCM_ETH_TX_BD_CONS_CMD \
@@ -107,9 +210,50 @@
DQ_XCM_AGG_VAL_SEL_WORD4
#define DQ_XCM_CORE_SPQ_PROD_CMD \
DQ_XCM_AGG_VAL_SEL_WORD4
-#define DQ_XCM_ETH_GO_TO_BD_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD5
-
-/* XCM agg counter flag selection */
+#define DQ_XCM_ETH_GO_TO_BD_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD5
+#define DQ_XCM_FCOE_SQ_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD3
+#define DQ_XCM_FCOE_SQ_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_FCOE_X_FERQ_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD5
+#define DQ_XCM_ISCSI_SQ_CONS_CMD DQ_XCM_AGG_VAL_SEL_WORD3
+#define DQ_XCM_ISCSI_SQ_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_ISCSI_MORE_TO_SEND_SEQ_CMD DQ_XCM_AGG_VAL_SEL_REG3
+#define DQ_XCM_ISCSI_EXP_STAT_SN_CMD DQ_XCM_AGG_VAL_SEL_REG6
+#define DQ_XCM_ROCE_SQ_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_TOE_TX_BD_PROD_CMD DQ_XCM_AGG_VAL_SEL_WORD4
+#define DQ_XCM_TOE_MORE_TO_SEND_SEQ_CMD DQ_XCM_AGG_VAL_SEL_REG3
+#define DQ_XCM_TOE_LOCAL_ADV_WND_SEQ_CMD DQ_XCM_AGG_VAL_SEL_REG4
+
+/* UCM agg val selection (HW) */
+#define DQ_UCM_AGG_VAL_SEL_WORD0 0
+#define DQ_UCM_AGG_VAL_SEL_WORD1 1
+#define DQ_UCM_AGG_VAL_SEL_WORD2 2
+#define DQ_UCM_AGG_VAL_SEL_WORD3 3
+#define DQ_UCM_AGG_VAL_SEL_REG0 4
+#define DQ_UCM_AGG_VAL_SEL_REG1 5
+#define DQ_UCM_AGG_VAL_SEL_REG2 6
+#define DQ_UCM_AGG_VAL_SEL_REG3 7
+
+/* UCM agg val selection (FW) */
+#define DQ_UCM_ETH_PMD_TX_CONS_CMD DQ_UCM_AGG_VAL_SEL_WORD2
+#define DQ_UCM_ETH_PMD_RX_CONS_CMD DQ_UCM_AGG_VAL_SEL_WORD3
+#define DQ_UCM_ROCE_CQ_CONS_CMD DQ_UCM_AGG_VAL_SEL_REG0
+#define DQ_UCM_ROCE_CQ_PROD_CMD DQ_UCM_AGG_VAL_SEL_REG2
+
+/* TCM agg val selection (HW) */
+#define DQ_TCM_AGG_VAL_SEL_WORD0 0
+#define DQ_TCM_AGG_VAL_SEL_WORD1 1
+#define DQ_TCM_AGG_VAL_SEL_WORD2 2
+#define DQ_TCM_AGG_VAL_SEL_WORD3 3
+#define DQ_TCM_AGG_VAL_SEL_REG1 4
+#define DQ_TCM_AGG_VAL_SEL_REG2 5
+#define DQ_TCM_AGG_VAL_SEL_REG6 6
+#define DQ_TCM_AGG_VAL_SEL_REG9 7
+
+/* TCM agg val selection (FW) */
+#define DQ_TCM_L2B_BD_PROD_CMD DQ_TCM_AGG_VAL_SEL_WORD1
+#define DQ_TCM_ROCE_RQ_PROD_CMD DQ_TCM_AGG_VAL_SEL_WORD0
+
+/* XCM agg counter flag selection (HW) */
#define DQ_XCM_AGG_FLG_SHIFT_BIT14 0
#define DQ_XCM_AGG_FLG_SHIFT_BIT15 1
#define DQ_XCM_AGG_FLG_SHIFT_CF12 2
@@ -119,7 +263,7 @@
#define DQ_XCM_AGG_FLG_SHIFT_CF22 6
#define DQ_XCM_AGG_FLG_SHIFT_CF23 7
-/* XCM agg counter flag selection */
+/* XCM agg counter flag selection (FW) */
#define DQ_XCM_ETH_DQ_CF_CMD (1 << \
DQ_XCM_AGG_FLG_SHIFT_CF18)
#define DQ_XCM_CORE_DQ_CF_CMD (1 << \
@@ -134,28 +278,109 @@
DQ_XCM_AGG_FLG_SHIFT_CF22)
#define DQ_XCM_ETH_TPH_EN_CMD (1 << \
DQ_XCM_AGG_FLG_SHIFT_CF23)
+#define DQ_XCM_FCOE_SLOW_PATH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
+#define DQ_XCM_ISCSI_DQ_FLUSH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF19)
+#define DQ_XCM_ISCSI_SLOW_PATH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
+#define DQ_XCM_ISCSI_PROC_ONLY_CLEANUP_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF23)
+#define DQ_XCM_TOE_DQ_FLUSH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF19)
+#define DQ_XCM_TOE_SLOW_PATH_CMD (1 << DQ_XCM_AGG_FLG_SHIFT_CF22)
+
+/* UCM agg counter flag selection (HW) */
+#define DQ_UCM_AGG_FLG_SHIFT_CF0 0
+#define DQ_UCM_AGG_FLG_SHIFT_CF1 1
+#define DQ_UCM_AGG_FLG_SHIFT_CF3 2
+#define DQ_UCM_AGG_FLG_SHIFT_CF4 3
+#define DQ_UCM_AGG_FLG_SHIFT_CF5 4
+#define DQ_UCM_AGG_FLG_SHIFT_CF6 5
+#define DQ_UCM_AGG_FLG_SHIFT_RULE0EN 6
+#define DQ_UCM_AGG_FLG_SHIFT_RULE1EN 7
+
+/* UCM agg counter flag selection (FW) */
+#define DQ_UCM_ETH_PMD_TX_ARM_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF4)
+#define DQ_UCM_ETH_PMD_RX_ARM_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF5)
+#define DQ_UCM_ROCE_CQ_ARM_SE_CF_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF4)
+#define DQ_UCM_ROCE_CQ_ARM_CF_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF5)
+#define DQ_UCM_TOE_TIMER_STOP_ALL_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF3)
+#define DQ_UCM_TOE_SLOW_PATH_CF_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF4)
+#define DQ_UCM_TOE_DQ_CF_CMD (1 << DQ_UCM_AGG_FLG_SHIFT_CF5)
+
+/* TCM agg counter flag selection (HW) */
+#define DQ_TCM_AGG_FLG_SHIFT_CF0 0
+#define DQ_TCM_AGG_FLG_SHIFT_CF1 1
+#define DQ_TCM_AGG_FLG_SHIFT_CF2 2
+#define DQ_TCM_AGG_FLG_SHIFT_CF3 3
+#define DQ_TCM_AGG_FLG_SHIFT_CF4 4
+#define DQ_TCM_AGG_FLG_SHIFT_CF5 5
+#define DQ_TCM_AGG_FLG_SHIFT_CF6 6
+#define DQ_TCM_AGG_FLG_SHIFT_CF7 7
+
+/* TCM agg counter flag selection (FW) */
+#define DQ_TCM_FCOE_FLUSH_Q0_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF1)
+#define DQ_TCM_FCOE_DUMMY_TIMER_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF2)
+#define DQ_TCM_FCOE_TIMER_STOP_ALL_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF3)
+#define DQ_TCM_ISCSI_FLUSH_Q0_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF1)
+#define DQ_TCM_ISCSI_TIMER_STOP_ALL_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF3)
+#define DQ_TCM_TOE_FLUSH_Q0_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF1)
+#define DQ_TCM_TOE_TIMER_STOP_ALL_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF3)
+#define DQ_TCM_IWARP_POST_RQ_CF_CMD (1 << DQ_TCM_AGG_FLG_SHIFT_CF1)
+
+/* PWM address mapping */
+#define DQ_PWM_OFFSET_DPM_BASE 0x0
+#define DQ_PWM_OFFSET_DPM_END 0x27
+#define DQ_PWM_OFFSET_XCM16_BASE 0x40
+#define DQ_PWM_OFFSET_XCM32_BASE 0x44
+#define DQ_PWM_OFFSET_UCM16_BASE 0x48
+#define DQ_PWM_OFFSET_UCM32_BASE 0x4C
+#define DQ_PWM_OFFSET_UCM16_4 0x50
+#define DQ_PWM_OFFSET_TCM16_BASE 0x58
+#define DQ_PWM_OFFSET_TCM32_BASE 0x5C
+#define DQ_PWM_OFFSET_XCM_FLAGS 0x68
+#define DQ_PWM_OFFSET_UCM_FLAGS 0x69
+#define DQ_PWM_OFFSET_TCM_FLAGS 0x6B
+
+#define DQ_PWM_OFFSET_XCM_RDMA_SQ_PROD (DQ_PWM_OFFSET_XCM16_BASE + 2)
+#define DQ_PWM_OFFSET_UCM_RDMA_CQ_CONS_32BIT (DQ_PWM_OFFSET_UCM32_BASE)
+#define DQ_PWM_OFFSET_UCM_RDMA_CQ_CONS_16BIT (DQ_PWM_OFFSET_UCM16_4)
+#define DQ_PWM_OFFSET_UCM_RDMA_INT_TIMEOUT (DQ_PWM_OFFSET_UCM16_BASE + 2)
+#define DQ_PWM_OFFSET_UCM_RDMA_ARM_FLAGS (DQ_PWM_OFFSET_UCM_FLAGS)
+#define DQ_PWM_OFFSET_TCM_ROCE_RQ_PROD (DQ_PWM_OFFSET_TCM16_BASE + 1)
+#define DQ_PWM_OFFSET_TCM_IWARP_RQ_PROD (DQ_PWM_OFFSET_TCM16_BASE + 3)
+
+#define DQ_REGION_SHIFT (12)
+
+/* DPM */
+#define DQ_DPM_WQE_BUFF_SIZE (320)
+
+/* Conn type ranges */
+#define DQ_CONN_TYPE_RANGE_SHIFT (4)
/*****************/
/* QM CONSTANTS */
/*****************/
/* number of TX queues in the QM */
-#define MAX_QM_TX_QUEUES_K2 512
-#define MAX_QM_TX_QUEUES_BB 448
-#define MAX_QM_TX_QUEUES MAX_QM_TX_QUEUES_K2
+#define MAX_QM_TX_QUEUES_K2 512
+#define MAX_QM_TX_QUEUES_BB 448
+#define MAX_QM_TX_QUEUES MAX_QM_TX_QUEUES_K2
/* number of Other queues in the QM */
-#define MAX_QM_OTHER_QUEUES_BB 64
-#define MAX_QM_OTHER_QUEUES_K2 128
-#define MAX_QM_OTHER_QUEUES MAX_QM_OTHER_QUEUES_K2
+#define MAX_QM_OTHER_QUEUES_BB 64
+#define MAX_QM_OTHER_QUEUES_K2 128
+#define MAX_QM_OTHER_QUEUES MAX_QM_OTHER_QUEUES_K2
/* number of queues in a PF queue group */
-#define QM_PF_QUEUE_GROUP_SIZE 8
+#define QM_PF_QUEUE_GROUP_SIZE 8
+
+/* the size of a single queue element in bytes */
+#define QM_PQ_ELEMENT_SIZE 4
/* base number of Tx PQs in the CM PQ representation.
* should be used when storing PQ IDs in CM PQ registers and context
*/
-#define CM_TX_PQ_BASE 0x200
+#define CM_TX_PQ_BASE 0x200
+
+/* number of global Vport/QCN rate limiters */
+#define MAX_QM_GLOBAL_RLS 256
/* QM registers data */
#define QM_LINE_CRD_REG_WIDTH 16
@@ -164,7 +389,7 @@
#define QM_BYTE_CRD_REG_SIGN_BIT (1 << (QM_BYTE_CRD_REG_WIDTH - 1))
#define QM_WFQ_CRD_REG_WIDTH 32
#define QM_WFQ_CRD_REG_SIGN_BIT (1 << (QM_WFQ_CRD_REG_WIDTH - 1))
-#define QM_RL_CRD_REG_WIDTH 32
+#define QM_RL_CRD_REG_WIDTH 32
#define QM_RL_CRD_REG_SIGN_BIT (1 << (QM_RL_CRD_REG_WIDTH - 1))
/*****************/
@@ -177,114 +402,217 @@
/* Number of Protocol Indices per Status Block */
#define PIS_PER_SB 12
+/* fsm is stopped or not valid for this sb */
#define CAU_HC_STOPPED_STATE 3
+/* fsm is working without interrupt coalescing for this sb*/
#define CAU_HC_DISABLE_STATE 4
+/* fsm is working with interrupt coalescing for this sb*/
#define CAU_HC_ENABLE_STATE 0
+
/*****************/
/* IGU CONSTANTS */
/*****************/
-#define MAX_SB_PER_PATH_K2 (368)
-#define MAX_SB_PER_PATH_BB (288)
+#define MAX_SB_PER_PATH_K2 (368)
+#define MAX_SB_PER_PATH_BB (288)
#define MAX_TOT_SB_PER_PATH \
MAX_SB_PER_PATH_K2
-#define MAX_SB_PER_PF_MIMD 129
-#define MAX_SB_PER_PF_SIMD 64
-#define MAX_SB_PER_VF 64
+#define MAX_SB_PER_PF_MIMD 129
+#define MAX_SB_PER_PF_SIMD 64
+#define MAX_SB_PER_VF 64
/* Memory addresses on the BAR for the IGU Sub Block */
-#define IGU_MEM_BASE 0x0000
+#define IGU_MEM_BASE 0x0000
-#define IGU_MEM_MSIX_BASE 0x0000
-#define IGU_MEM_MSIX_UPPER 0x0101
-#define IGU_MEM_MSIX_RESERVED_UPPER 0x01ff
+#define IGU_MEM_MSIX_BASE 0x0000
+#define IGU_MEM_MSIX_UPPER 0x0101
+#define IGU_MEM_MSIX_RESERVED_UPPER 0x01ff
-#define IGU_MEM_PBA_MSIX_BASE 0x0200
-#define IGU_MEM_PBA_MSIX_UPPER 0x0202
-#define IGU_MEM_PBA_MSIX_RESERVED_UPPER 0x03ff
+#define IGU_MEM_PBA_MSIX_BASE 0x0200
+#define IGU_MEM_PBA_MSIX_UPPER 0x0202
+#define IGU_MEM_PBA_MSIX_RESERVED_UPPER 0x03ff
-#define IGU_CMD_INT_ACK_BASE 0x0400
+#define IGU_CMD_INT_ACK_BASE 0x0400
#define IGU_CMD_INT_ACK_UPPER (IGU_CMD_INT_ACK_BASE + \
MAX_TOT_SB_PER_PATH - \
1)
-#define IGU_CMD_INT_ACK_RESERVED_UPPER 0x05ff
+#define IGU_CMD_INT_ACK_RESERVED_UPPER 0x05ff
-#define IGU_CMD_ATTN_BIT_UPD_UPPER 0x05f0
-#define IGU_CMD_ATTN_BIT_SET_UPPER 0x05f1
-#define IGU_CMD_ATTN_BIT_CLR_UPPER 0x05f2
+#define IGU_CMD_ATTN_BIT_UPD_UPPER 0x05f0
+#define IGU_CMD_ATTN_BIT_SET_UPPER 0x05f1
+#define IGU_CMD_ATTN_BIT_CLR_UPPER 0x05f2
-#define IGU_REG_SISR_MDPC_WMASK_UPPER 0x05f3
-#define IGU_REG_SISR_MDPC_WMASK_LSB_UPPER 0x05f4
-#define IGU_REG_SISR_MDPC_WMASK_MSB_UPPER 0x05f5
-#define IGU_REG_SISR_MDPC_WOMASK_UPPER 0x05f6
+#define IGU_REG_SISR_MDPC_WMASK_UPPER 0x05f3
+#define IGU_REG_SISR_MDPC_WMASK_LSB_UPPER 0x05f4
+#define IGU_REG_SISR_MDPC_WMASK_MSB_UPPER 0x05f5
+#define IGU_REG_SISR_MDPC_WOMASK_UPPER 0x05f6
-#define IGU_CMD_PROD_UPD_BASE 0x0600
+#define IGU_CMD_PROD_UPD_BASE 0x0600
#define IGU_CMD_PROD_UPD_UPPER (IGU_CMD_PROD_UPD_BASE +\
MAX_TOT_SB_PER_PATH - \
1)
-#define IGU_CMD_PROD_UPD_RESERVED_UPPER 0x07ff
+#define IGU_CMD_PROD_UPD_RESERVED_UPPER 0x07ff
/*****************/
/* PXP CONSTANTS */
/*****************/
+/* Bars for Blocks */
+#define PXP_BAR_GRC 0
+#define PXP_BAR_TSDM 0
+#define PXP_BAR_USDM 0
+#define PXP_BAR_XSDM 0
+#define PXP_BAR_MSDM 0
+#define PXP_BAR_YSDM 0
+#define PXP_BAR_PSDM 0
+#define PXP_BAR_IGU 0
+#define PXP_BAR_DQ 1
+
/* PTT and GTT */
-#define PXP_NUM_PF_WINDOWS 12
-#define PXP_PER_PF_ENTRY_SIZE 8
-#define PXP_NUM_GLOBAL_WINDOWS 243
-#define PXP_GLOBAL_ENTRY_SIZE 4
-#define PXP_ADMIN_WINDOW_ALLOWED_LENGTH 4
-#define PXP_PF_WINDOW_ADMIN_START 0
-#define PXP_PF_WINDOW_ADMIN_LENGTH 0x1000
+#define PXP_NUM_PF_WINDOWS 12
+#define PXP_PER_PF_ENTRY_SIZE 8
+#define PXP_NUM_GLOBAL_WINDOWS 243
+#define PXP_GLOBAL_ENTRY_SIZE 4
+#define PXP_ADMIN_WINDOW_ALLOWED_LENGTH 4
+#define PXP_PF_WINDOW_ADMIN_START 0
+#define PXP_PF_WINDOW_ADMIN_LENGTH 0x1000
#define PXP_PF_WINDOW_ADMIN_END (PXP_PF_WINDOW_ADMIN_START + \
PXP_PF_WINDOW_ADMIN_LENGTH - 1)
-#define PXP_PF_WINDOW_ADMIN_PER_PF_START 0
+#define PXP_PF_WINDOW_ADMIN_PER_PF_START 0
#define PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH (PXP_NUM_PF_WINDOWS * \
PXP_PER_PF_ENTRY_SIZE)
#define PXP_PF_WINDOW_ADMIN_PER_PF_END (PXP_PF_WINDOW_ADMIN_PER_PF_START + \
- PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH - 1)
-#define PXP_PF_WINDOW_ADMIN_GLOBAL_START 0x200
+ PXP_PF_WINDOW_ADMIN_PER_PF_LENGTH - 1)
+#define PXP_PF_WINDOW_ADMIN_GLOBAL_START 0x200
#define PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH (PXP_NUM_GLOBAL_WINDOWS * \
PXP_GLOBAL_ENTRY_SIZE)
-#define PXP_PF_WINDOW_ADMIN_GLOBAL_END \
- (PXP_PF_WINDOW_ADMIN_GLOBAL_START + \
- PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH - 1)
-#define PXP_PF_GLOBAL_PRETEND_ADDR 0x1f0
-#define PXP_PF_ME_OPAQUE_MASK_ADDR 0xf4
-#define PXP_PF_ME_OPAQUE_ADDR 0x1f8
-#define PXP_PF_ME_CONCRETE_ADDR 0x1fc
-
-#define PXP_EXTERNAL_BAR_PF_WINDOW_START 0x1000
-#define PXP_EXTERNAL_BAR_PF_WINDOW_NUM PXP_NUM_PF_WINDOWS
-#define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE 0x1000
-#define PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH \
- (PXP_EXTERNAL_BAR_PF_WINDOW_NUM * \
+#define PXP_PF_WINDOW_ADMIN_GLOBAL_END \
+ (PXP_PF_WINDOW_ADMIN_GLOBAL_START + \
+ PXP_PF_WINDOW_ADMIN_GLOBAL_LENGTH - 1)
+#define PXP_PF_GLOBAL_PRETEND_ADDR 0x1f0
+#define PXP_PF_ME_OPAQUE_MASK_ADDR 0xf4
+#define PXP_PF_ME_OPAQUE_ADDR 0x1f8
+#define PXP_PF_ME_CONCRETE_ADDR 0x1fc
+
+#define PXP_EXTERNAL_BAR_PF_WINDOW_START 0x1000
+#define PXP_EXTERNAL_BAR_PF_WINDOW_NUM PXP_NUM_PF_WINDOWS
+#define PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE 0x1000
+#define PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH \
+ (PXP_EXTERNAL_BAR_PF_WINDOW_NUM * \
PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE)
-#define PXP_EXTERNAL_BAR_PF_WINDOW_END \
- (PXP_EXTERNAL_BAR_PF_WINDOW_START + \
+#define PXP_EXTERNAL_BAR_PF_WINDOW_END \
+ (PXP_EXTERNAL_BAR_PF_WINDOW_START + \
PXP_EXTERNAL_BAR_PF_WINDOW_LENGTH - 1)
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START \
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START \
(PXP_EXTERNAL_BAR_PF_WINDOW_END + 1)
#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM PXP_NUM_GLOBAL_WINDOWS
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE 0x1000
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH \
- (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM * \
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE 0x1000
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH \
+ (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_NUM * \
PXP_EXTERNAL_BAR_GLOBAL_WINDOW_SINGLE_SIZE)
-#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_END \
- (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START + \
+#define PXP_EXTERNAL_BAR_GLOBAL_WINDOW_END \
+ (PXP_EXTERNAL_BAR_GLOBAL_WINDOW_START + \
PXP_EXTERNAL_BAR_GLOBAL_WINDOW_LENGTH - 1)
-#define PXP_ILT_PAGE_SIZE_NUM_BITS_MIN 12
-#define PXP_ILT_BLOCK_FACTOR_MULTIPLIER 1024
+/* PF BAR */
+/*#define PXP_BAR0_START_GRC 0x1000 */
+/*#define PXP_BAR0_GRC_LENGTH 0xBFF000 */
+#define PXP_BAR0_START_GRC 0x0000
+#define PXP_BAR0_GRC_LENGTH 0x1C00000
+#define PXP_BAR0_END_GRC \
+ (PXP_BAR0_START_GRC + PXP_BAR0_GRC_LENGTH - 1)
+
+#define PXP_BAR0_START_IGU 0x1C00000
+#define PXP_BAR0_IGU_LENGTH 0x10000
+#define PXP_BAR0_END_IGU \
+ (PXP_BAR0_START_IGU + PXP_BAR0_IGU_LENGTH - 1)
+
+#define PXP_BAR0_START_TSDM 0x1C80000
+#define PXP_BAR0_SDM_LENGTH 0x40000
+#define PXP_BAR0_SDM_RESERVED_LENGTH 0x40000
+#define PXP_BAR0_END_TSDM \
+ (PXP_BAR0_START_TSDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_START_MSDM 0x1D00000
+#define PXP_BAR0_END_MSDM \
+ (PXP_BAR0_START_MSDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_START_USDM 0x1D80000
+#define PXP_BAR0_END_USDM \
+ (PXP_BAR0_START_USDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_START_XSDM 0x1E00000
+#define PXP_BAR0_END_XSDM \
+ (PXP_BAR0_START_XSDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_START_YSDM 0x1E80000
+#define PXP_BAR0_END_YSDM \
+ (PXP_BAR0_START_YSDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_START_PSDM 0x1F00000
+#define PXP_BAR0_END_PSDM \
+ (PXP_BAR0_START_PSDM + PXP_BAR0_SDM_LENGTH - 1)
+
+#define PXP_BAR0_FIRST_INVALID_ADDRESS \
+ (PXP_BAR0_END_PSDM + 1)
+
+#define PXP_ILT_PAGE_SIZE_NUM_BITS_MIN 12
+#define PXP_ILT_BLOCK_FACTOR_MULTIPLIER 1024
/* ILT Records */
#define PXP_NUM_ILT_RECORDS_BB 7600
#define PXP_NUM_ILT_RECORDS_K2 11000
#define MAX_NUM_ILT_RECORDS MAX(PXP_NUM_ILT_RECORDS_BB, PXP_NUM_ILT_RECORDS_K2)
+
+/* Host Interface */
+#define PXP_QUEUES_ZONE_MAX_NUM 320
+
+
+
+
+/*****************/
+/* PRM CONSTANTS */
+/*****************/
+#define PRM_DMA_PAD_BYTES_NUM 2
+/*****************/
+/* SDMs CONSTANTS */
+/*****************/
+
+
+#define SDM_OP_GEN_TRIG_NONE 0
+#define SDM_OP_GEN_TRIG_WAKE_THREAD 1
+#define SDM_OP_GEN_TRIG_AGG_INT 2
+#define SDM_OP_GEN_TRIG_LOADER 4
+#define SDM_OP_GEN_TRIG_INDICATE_ERROR 6
+#define SDM_OP_GEN_TRIG_RELEASE_THREAD 7
+
+/***********************************************************/
+/* Completion types */
+/***********************************************************/
+
+#define SDM_COMP_TYPE_NONE 0
+#define SDM_COMP_TYPE_WAKE_THREAD 1
+#define SDM_COMP_TYPE_AGG_INT 2
+/* Send direct message to local CM and/or remote CMs. Destinations are defined
+ * by vector in CompParams.
+ */
+#define SDM_COMP_TYPE_CM 3
+#define SDM_COMP_TYPE_LOADER 4
+/* Send direct message to PXP (like "internal write" command) to write to remote
+ * Storm RAM via remote SDM
+ */
+#define SDM_COMP_TYPE_PXP 5
+/* Indicate error per thread */
+#define SDM_COMP_TYPE_INDICATE_ERROR 6
+#define SDM_COMP_TYPE_RELEASE_THREAD 7
+/* Write to local RAM as a completion */
+#define SDM_COMP_TYPE_RAM 8
+
+
/******************/
/* PBF CONSTANTS */
/******************/
@@ -299,12 +627,43 @@
/* PRS CONSTANTS */
/*****************/
+#define PRS_GFT_CAM_LINES_NO_MATCH 31
/* Async data KCQ CQE */
struct async_data {
- __le32 cid;
- __le16 itid;
- u8 error_code;
- u8 fw_debug_param;
+ /* Context ID of the connection */
+ __le32 cid;
+ /* Task Id of the task (for error that happened on a task) */
+ __le16 itid;
+ /* error code - relevant only if the opcode indicates its an error */
+ u8 error_code;
+ /* internal fw debug parameter */
+ u8 fw_debug_param;
+};
+
+/*
+ * Interrupt coalescing TimeSet
+ */
+struct coalescing_timeset {
+ u8 value;
+/* Interrupt coalescing TimeSet (timeout_ticks = TimeSet shl (TimerRes+1)) */
+#define COALESCING_TIMESET_TIMESET_MASK 0x7F
+#define COALESCING_TIMESET_TIMESET_SHIFT 0
+/* Only if this flag is set, timeset will take effect */
+#define COALESCING_TIMESET_VALID_MASK 0x1
+#define COALESCING_TIMESET_VALID_SHIFT 7
+};
+
+struct common_queue_zone {
+ __le16 ring_drv_data_consumer;
+ __le16 reserved;
+};
+
+/*
+ * ETH Rx producers data
+ */
+struct eth_rx_prod_data {
+ __le16 bd_prod /* BD producer. */;
+ __le16 cqe_prod /* CQE producer. */;
};
struct regpair {
@@ -312,24 +671,38 @@ struct regpair {
__le32 hi /* high word for reg-pair */;
};
+/*
+ * Event Ring VF-PF Channel data
+ */
struct vf_pf_channel_eqe_data {
struct regpair msg_addr /* VF-PF message address */;
};
struct iscsi_eqe_data {
__le32 cid /* Context ID of the connection */;
- __le16 conn_id
/* Task Id of the task (for error that happened on a a task) */;
+ __le16 conn_id;
+/* error code - relevant only if the opcode indicates its an error */
u8 error_code;
- u8 reserved0;
+ u8 error_pdu_opcode_reserved;
+/* The processed PDUs opcode on which happened the error - updated for specific
+ * error codes, by default=0xFF
+ */
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_MASK 0x3F
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_SHIFT 0
+/* Indication for driver is the error_pdu_opcode field has valid value */
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_VALID_MASK 0x1
+#define ISCSI_EQE_DATA_ERROR_PDU_OPCODE_VALID_SHIFT 6
+#define ISCSI_EQE_DATA_RESERVED0_MASK 0x1
+#define ISCSI_EQE_DATA_RESERVED0_SHIFT 7
};
/*
* Event Ring malicious VF data
*/
struct malicious_vf_eqe_data {
- u8 vf_id /* Malicious VF ID */; /* WARNING:CAMELCASE */
- u8 err_id /* Malicious VF error */;
+ u8 vfId /* Malicious VF ID */;
+ u8 errId /* Malicious VF error */;
__le16 reserved[3];
};
@@ -337,41 +710,46 @@ struct malicious_vf_eqe_data {
* Event Ring initial cleanup data
*/
struct initial_cleanup_eqe_data {
- u8 vf_id /* VF ID */; /* WARNING:CAMELCASE */
+ u8 vfId /* VF ID */;
u8 reserved[7];
};
-
+/*
+ * Event Data Union
+ */
union event_ring_data {
u8 bytes[8] /* Byte Array */;
struct vf_pf_channel_eqe_data vf_pf_channel /* VF-PF Channel data */;
struct iscsi_eqe_data iscsi_info /* Dedicated fields to iscsi data */;
- struct regpair roce_handle /* WARNING:CAMELCASE */
/* Dedicated field for RoCE affiliated asynchronous error */;
+ struct regpair roceHandle;
struct malicious_vf_eqe_data malicious_vf /* Malicious VF data */;
struct initial_cleanup_eqe_data vf_init_cleanup
/* VF Initial Cleanup data */;
+/* Host handle for the Async Completions */
+ struct regpair iwarp_handle;
};
/* Event Ring Entry */
struct event_ring_entry {
- u8 protocol_id;
- u8 opcode;
- __le16 reserved0;
- __le16 echo;
- u8 fw_return_code;
+ u8 protocol_id /* Event Protocol ID */;
+ u8 opcode /* Event Opcode */;
+ __le16 reserved0 /* Reserved */;
+ __le16 echo /* Echo value from ramrod data on the host */;
+ u8 fw_return_code /* FW return code for SP ramrods */;
u8 flags;
+/* 0: synchronous EQE - a completion of SP message. 1: asynchronous EQE */
#define EVENT_RING_ENTRY_ASYNC_MASK 0x1
#define EVENT_RING_ENTRY_ASYNC_SHIFT 0
#define EVENT_RING_ENTRY_RESERVED1_MASK 0x7F
#define EVENT_RING_ENTRY_RESERVED1_SHIFT 1
- union event_ring_data data;
+ union event_ring_data data;
};
/* Multi function mode */
enum mf_mode {
- SF,
- MF_OVLAN,
- MF_NPAR,
+ ERROR_MODE /* Unsupported mode */,
+ MF_OVLAN /* Multi function based on outer VLAN */,
+ MF_NPAR /* Multi function based on MAC address (NIC partitioning) */,
MAX_MF_MODE
};
@@ -390,35 +768,59 @@ enum protocol_type {
MAX_PROTOCOL_TYPE
};
+/*
+ * Ustorm Queue Zone
+ */
+struct ustorm_eth_queue_zone {
+/* Rx interrupt coalescing TimeSet */
+ struct coalescing_timeset int_coalescing_timeset;
+ u8 reserved[3];
+};
+
+
+struct ustorm_queue_zone {
+ struct ustorm_eth_queue_zone eth;
+ struct common_queue_zone common;
+};
+
/* status block structure */
struct cau_pi_entry {
- u32 prod;
+ __le32 prod;
+/* A per protocol index PROD value. */
#define CAU_PI_ENTRY_PROD_VAL_MASK 0xFFFF
#define CAU_PI_ENTRY_PROD_VAL_SHIFT 0
+/* This value determines the TimeSet that the PI is associated with */
#define CAU_PI_ENTRY_PI_TIMESET_MASK 0x7F
#define CAU_PI_ENTRY_PI_TIMESET_SHIFT 16
+/* Select the FSM within the SB */
#define CAU_PI_ENTRY_FSM_SEL_MASK 0x1
#define CAU_PI_ENTRY_FSM_SEL_SHIFT 23
#define CAU_PI_ENTRY_RESERVED_MASK 0xFF
#define CAU_PI_ENTRY_RESERVED_SHIFT 24
};
/* status block structure */
struct cau_sb_entry {
- u32 data;
+ __le32 data;
+/* The SB PROD index which is sent to the IGU. */
#define CAU_SB_ENTRY_SB_PROD_MASK 0xFFFFFF
#define CAU_SB_ENTRY_SB_PROD_SHIFT 0
-#define CAU_SB_ENTRY_STATE0_MASK 0xF
+#define CAU_SB_ENTRY_STATE0_MASK 0xF /* RX state */
#define CAU_SB_ENTRY_STATE0_SHIFT 24
-#define CAU_SB_ENTRY_STATE1_MASK 0xF
+#define CAU_SB_ENTRY_STATE1_MASK 0xF /* TX state */
#define CAU_SB_ENTRY_STATE1_SHIFT 28
- u32 params;
+ __le32 params;
+/* Indicates the RX TimeSet that this SB is associated with. */
#define CAU_SB_ENTRY_SB_TIMESET0_MASK 0x7F
#define CAU_SB_ENTRY_SB_TIMESET0_SHIFT 0
+/* Indicates the TX TimeSet that this SB is associated with. */
#define CAU_SB_ENTRY_SB_TIMESET1_MASK 0x7F
#define CAU_SB_ENTRY_SB_TIMESET1_SHIFT 7
+/* This value will determine the RX FSM timer resolution in ticks */
#define CAU_SB_ENTRY_TIMER_RES0_MASK 0x3
#define CAU_SB_ENTRY_TIMER_RES0_SHIFT 14
+/* This value will determine the TX FSM timer resolution in ticks */
#define CAU_SB_ENTRY_TIMER_RES1_MASK 0x3
#define CAU_SB_ENTRY_TIMER_RES1_SHIFT 16
#define CAU_SB_ENTRY_VF_NUMBER_MASK 0xFF
@@ -427,6 +829,9 @@ struct cau_sb_entry {
#define CAU_SB_ENTRY_VF_VALID_SHIFT 26
#define CAU_SB_ENTRY_PF_NUMBER_MASK 0xF
#define CAU_SB_ENTRY_PF_NUMBER_SHIFT 27
+/* If set then indicates that the TPH STAG is equal to the SB number. Otherwise
+ * the STAG will be equal to all ones.
+ */
#define CAU_SB_ENTRY_TPH_MASK 0x1
#define CAU_SB_ENTRY_TPH_SHIFT 31
};
@@ -434,125 +839,317 @@ struct cau_sb_entry {
/* core doorbell data */
struct core_db_data {
u8 params;
+/* destination of doorbell (use enum db_dest) */
#define CORE_DB_DATA_DEST_MASK 0x3
#define CORE_DB_DATA_DEST_SHIFT 0
+/* aggregative command to CM (use enum db_agg_cmd_sel) */
#define CORE_DB_DATA_AGG_CMD_MASK 0x3
#define CORE_DB_DATA_AGG_CMD_SHIFT 2
-#define CORE_DB_DATA_BYPASS_EN_MASK 0x1
+#define CORE_DB_DATA_BYPASS_EN_MASK 0x1 /* enable QM bypass */
#define CORE_DB_DATA_BYPASS_EN_SHIFT 4
#define CORE_DB_DATA_RESERVED_MASK 0x1
#define CORE_DB_DATA_RESERVED_SHIFT 5
+/* aggregative value selection */
#define CORE_DB_DATA_AGG_VAL_SEL_MASK 0x3
#define CORE_DB_DATA_AGG_VAL_SEL_SHIFT 6
- u8 agg_flags;
- __le16 spq_prod;
+/* bit for every DQ counter flags in CM context that DQ can increment */
+ u8 agg_flags;
+ __le16 spq_prod;
};
/* Enum of doorbell aggregative command selection */
enum db_agg_cmd_sel {
- DB_AGG_CMD_NOP,
- DB_AGG_CMD_SET,
- DB_AGG_CMD_ADD,
- DB_AGG_CMD_MAX,
+ DB_AGG_CMD_NOP /* No operation */,
+ DB_AGG_CMD_SET /* Set the value */,
+ DB_AGG_CMD_ADD /* Add the value */,
+ DB_AGG_CMD_MAX /* Set max of current and new value */,
MAX_DB_AGG_CMD_SEL
};
/* Enum of doorbell destination */
enum db_dest {
- DB_DEST_XCM,
- DB_DEST_UCM,
- DB_DEST_TCM,
+ DB_DEST_XCM /* TX doorbell to XCM */,
+ DB_DEST_UCM /* RX doorbell to UCM */,
+ DB_DEST_TCM /* RX doorbell to TCM */,
DB_NUM_DESTINATIONS,
MAX_DB_DEST
};
+
+/*
+ * Enum of doorbell DPM types
+ */
+enum db_dpm_type {
+ DPM_LEGACY /* Legacy DPM- to Xstorm RAM */,
+ DPM_ROCE /* RoCE DPM- to NIG */,
+/* L2 DPM inline- to PBF, with packet data on doorbell */
+ DPM_L2_INLINE,
+ DPM_L2_BD /* L2 DPM with BD- to PBF, with TX BD data on doorbell */,
+ MAX_DB_DPM_TYPE
+};
+
+/*
+ * Structure for doorbell data, in L2 DPM mode, for the first doorbell in a DPM
+ * burst
+ */
+struct db_l2_dpm_data {
+ __le16 icid /* internal CID */;
+ __le16 bd_prod /* bd producer value to update */;
+ __le32 params;
+/* Size in QWORD-s of the DPM burst */
+#define DB_L2_DPM_DATA_SIZE_MASK 0x3F
+#define DB_L2_DPM_DATA_SIZE_SHIFT 0
+/* Type of DPM transaction (DPM_L2_INLINE or DPM_L2_BD) (use enum db_dpm_type)
+ */
+#define DB_L2_DPM_DATA_DPM_TYPE_MASK 0x3
+#define DB_L2_DPM_DATA_DPM_TYPE_SHIFT 6
+#define DB_L2_DPM_DATA_NUM_BDS_MASK 0xFF /* number of BD-s */
+#define DB_L2_DPM_DATA_NUM_BDS_SHIFT 8
+/* size of the packet to be transmitted in bytes */
+#define DB_L2_DPM_DATA_PKT_SIZE_MASK 0x7FF
+#define DB_L2_DPM_DATA_PKT_SIZE_SHIFT 16
+#define DB_L2_DPM_DATA_RESERVED0_MASK 0x1
+#define DB_L2_DPM_DATA_RESERVED0_SHIFT 27
+/* In DPM_L2_BD mode: the number of SGE-s */
+#define DB_L2_DPM_DATA_SGE_NUM_MASK 0x7
+#define DB_L2_DPM_DATA_SGE_NUM_SHIFT 28
+#define DB_L2_DPM_DATA_RESERVED1_MASK 0x1
+#define DB_L2_DPM_DATA_RESERVED1_SHIFT 31
+};
+
+/*
+ * Structure for SGE in a DPM doorbell of type DPM_L2_BD
+ */
+struct db_l2_dpm_sge {
+ struct regpair addr /* Single continuous buffer */;
+ __le16 nbytes /* Number of bytes in this BD. */;
+ __le16 bitfields;
+/* The TPH STAG index value */
+#define DB_L2_DPM_SGE_TPH_ST_INDEX_MASK 0x1FF
+#define DB_L2_DPM_SGE_TPH_ST_INDEX_SHIFT 0
+#define DB_L2_DPM_SGE_RESERVED0_MASK 0x3
+#define DB_L2_DPM_SGE_RESERVED0_SHIFT 9
+/* Indicate if ST hint is requested or not */
+#define DB_L2_DPM_SGE_ST_VALID_MASK 0x1
+#define DB_L2_DPM_SGE_ST_VALID_SHIFT 11
+#define DB_L2_DPM_SGE_RESERVED1_MASK 0xF
+#define DB_L2_DPM_SGE_RESERVED1_SHIFT 12
+ __le32 reserved2;
+};
+
/* Structure for doorbell address, in legacy mode */
struct db_legacy_addr {
__le32 addr;
#define DB_LEGACY_ADDR_RESERVED0_MASK 0x3
#define DB_LEGACY_ADDR_RESERVED0_SHIFT 0
+/* doorbell extraction mode specifier- 0 if not used */
#define DB_LEGACY_ADDR_DEMS_MASK 0x7
#define DB_LEGACY_ADDR_DEMS_SHIFT 2
-#define DB_LEGACY_ADDR_ICID_MASK 0x7FFFFFF
+#define DB_LEGACY_ADDR_ICID_MASK 0x7FFFFFF /* internal CID */
#define DB_LEGACY_ADDR_ICID_SHIFT 5
};
+/*
+ * Structure for doorbell address, in PWM mode
+ */
+struct db_pwm_addr {
+ __le32 addr;
+#define DB_PWM_ADDR_RESERVED0_MASK 0x7
+#define DB_PWM_ADDR_RESERVED0_SHIFT 0
+/* Offset in PWM address space */
+#define DB_PWM_ADDR_OFFSET_MASK 0x7F
+#define DB_PWM_ADDR_OFFSET_SHIFT 3
+#define DB_PWM_ADDR_WID_MASK 0x3 /* Window ID */
+#define DB_PWM_ADDR_WID_SHIFT 10
+#define DB_PWM_ADDR_DPI_MASK 0xFFFF /* Doorbell page ID */
+#define DB_PWM_ADDR_DPI_SHIFT 12
+#define DB_PWM_ADDR_RESERVED1_MASK 0xF
+#define DB_PWM_ADDR_RESERVED1_SHIFT 28
+};
+
+/*
+ * Parameters to RoCE firmware, passed in EDPM doorbell
+ */
+struct db_roce_dpm_params {
+ __le32 params;
+/* Size in QWORD-s of the DPM burst */
+#define DB_ROCE_DPM_PARAMS_SIZE_MASK 0x3F
+#define DB_ROCE_DPM_PARAMS_SIZE_SHIFT 0
+/* Type of DPM transaction (DPM_ROCE) (use enum db_dpm_type) */
+#define DB_ROCE_DPM_PARAMS_DPM_TYPE_MASK 0x3
+#define DB_ROCE_DPM_PARAMS_DPM_TYPE_SHIFT 6
+/* opcode for ROCE operation */
+#define DB_ROCE_DPM_PARAMS_OPCODE_MASK 0xFF
+#define DB_ROCE_DPM_PARAMS_OPCODE_SHIFT 8
+/* the size of the WQE payload in bytes */
+#define DB_ROCE_DPM_PARAMS_WQE_SIZE_MASK 0x7FF
+#define DB_ROCE_DPM_PARAMS_WQE_SIZE_SHIFT 16
+#define DB_ROCE_DPM_PARAMS_RESERVED0_MASK 0x1
+#define DB_ROCE_DPM_PARAMS_RESERVED0_SHIFT 27
+/* RoCE completion flag */
+#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_MASK 0x1
+#define DB_ROCE_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
+#define DB_ROCE_DPM_PARAMS_S_FLG_MASK 0x1 /* RoCE S flag */
+#define DB_ROCE_DPM_PARAMS_S_FLG_SHIFT 29
+#define DB_ROCE_DPM_PARAMS_RESERVED1_MASK 0x3
+#define DB_ROCE_DPM_PARAMS_RESERVED1_SHIFT 30
+};
+
+/*
+ * Structure for doorbell data, in ROCE DPM mode, for the first doorbell in a
+ * DPM burst
+ */
+struct db_roce_dpm_data {
+ __le16 icid /* internal CID */;
+ __le16 prod_val /* aggregated value to update */;
+/* parameters passed to RoCE firmware */
+ struct db_roce_dpm_params params;
+};
+
/* Igu interrupt command */
enum igu_int_cmd {
- IGU_INT_ENABLE = 0,
+ IGU_INT_ENABLE = 0,
IGU_INT_DISABLE = 1,
- IGU_INT_NOP = 2,
- IGU_INT_NOP2 = 3,
+ IGU_INT_NOP = 2,
+ IGU_INT_NOP2 = 3,
MAX_IGU_INT_CMD
};
/* IGU producer or consumer update command */
struct igu_prod_cons_update {
- u32 sb_id_and_flags;
+ __le32 sb_id_and_flags;
#define IGU_PROD_CONS_UPDATE_SB_INDEX_MASK 0xFFFFFF
#define IGU_PROD_CONS_UPDATE_SB_INDEX_SHIFT 0
#define IGU_PROD_CONS_UPDATE_UPDATE_FLAG_MASK 0x1
#define IGU_PROD_CONS_UPDATE_UPDATE_FLAG_SHIFT 24
+/* interrupt enable/disable/nop (use enum igu_int_cmd) */
#define IGU_PROD_CONS_UPDATE_ENABLE_INT_MASK 0x3
#define IGU_PROD_CONS_UPDATE_ENABLE_INT_SHIFT 25
+/* (use enum igu_seg_access) */
#define IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_MASK 0x1
#define IGU_PROD_CONS_UPDATE_SEGMENT_ACCESS_SHIFT 27
#define IGU_PROD_CONS_UPDATE_TIMER_MASK_MASK 0x1
#define IGU_PROD_CONS_UPDATE_TIMER_MASK_SHIFT 28
#define IGU_PROD_CONS_UPDATE_RESERVED0_MASK 0x3
#define IGU_PROD_CONS_UPDATE_RESERVED0_SHIFT 29
+/* must always be cleared (use enum command_type_bit) */
#define IGU_PROD_CONS_UPDATE_COMMAND_TYPE_MASK 0x1
#define IGU_PROD_CONS_UPDATE_COMMAND_TYPE_SHIFT 31
- u32 reserved1;
+ __le32 reserved1;
};
/* Igu segments access for default status block only */
enum igu_seg_access {
- IGU_SEG_ACCESS_REG = 0,
- IGU_SEG_ACCESS_ATTN = 1,
+ IGU_SEG_ACCESS_REG = 0,
+ IGU_SEG_ACCESS_ATTN = 1,
MAX_IGU_SEG_ACCESS
};
+
+/*
+ * Enumeration for L3 type field of parsing_and_err_flags_union. L3Type:
+ * 0 - unknown (not ip), 1 - Ipv4, 2 - Ipv6 (this field can be filled according
+ * to the last-ethertype)
+ */
+enum l3_type {
+ e_l3Type_unknown,
+ e_l3Type_ipv4,
+ e_l3Type_ipv6,
+ MAX_L3_TYPE
+};
+
+
+/*
+ * Enumeration for l4Protocol field of parsing_and_err_flags_union. L4-protocol
+ * 0 - none, 1 - TCP, 2 - UDP. If the packet is an IPv4 fragment and it's not
+ * the first fragment, the protocol-type should be set to none.
+ */
+enum l4_protocol {
+ e_l4Protocol_none,
+ e_l4Protocol_tcp,
+ e_l4Protocol_udp,
+ MAX_L4_PROTOCOL
+};
+
+
+/*
+ * Parsing and error flags field.
+ */
struct parsing_and_err_flags {
__le16 flags;
+/* L3Type: 0 - unknown (not ip), 1 - Ipv4, 2 - Ipv6 (this field can be filled
+ * according to the last-ethertype) (use enum l3_type)
+ */
#define PARSING_AND_ERR_FLAGS_L3TYPE_MASK 0x3
#define PARSING_AND_ERR_FLAGS_L3TYPE_SHIFT 0
+/* L4-protocol 0 - none, 1 - TCP, 2 - UDP. If the packet is an IPv4 fragment
+ * and it's not the first fragment, the protocol-type should be set to none.
+ * (use enum l4_protocol)
+ */
#define PARSING_AND_ERR_FLAGS_L4PROTOCOL_MASK 0x3
#define PARSING_AND_ERR_FLAGS_L4PROTOCOL_SHIFT 2
+/* Set if the packet is IPv4 fragment. */
#define PARSING_AND_ERR_FLAGS_IPV4FRAG_MASK 0x1
#define PARSING_AND_ERR_FLAGS_IPV4FRAG_SHIFT 4
+/* Set if VLAN tag exists. Invalid if tunnel type are IP GRE or IP GENEVE. */
#define PARSING_AND_ERR_FLAGS_TAG8021QEXIST_MASK 0x1
#define PARSING_AND_ERR_FLAGS_TAG8021QEXIST_SHIFT 5
+/* Set if L4 checksum was calculated. */
#define PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_MASK 0x1
#define PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_SHIFT 6
+/* Set for PTP packet. */
#define PARSING_AND_ERR_FLAGS_TIMESYNCPKT_MASK 0x1
#define PARSING_AND_ERR_FLAGS_TIMESYNCPKT_SHIFT 7
+/* Set if PTP timestamp recorded. */
#define PARSING_AND_ERR_FLAGS_TIMESTAMPRECORDED_MASK 0x1
#define PARSING_AND_ERR_FLAGS_TIMESTAMPRECORDED_SHIFT 8
+/* Set if either version-mismatch or hdr-len-error or ipv4-cksm is set or ipv6
+ * ver mismatch
+ */
#define PARSING_AND_ERR_FLAGS_IPHDRERROR_MASK 0x1
#define PARSING_AND_ERR_FLAGS_IPHDRERROR_SHIFT 9
+/* Set if L4 checksum validation failed. Valid only if L4 checksum was
+ * calculated.
+ */
#define PARSING_AND_ERR_FLAGS_L4CHKSMERROR_MASK 0x1
#define PARSING_AND_ERR_FLAGS_L4CHKSMERROR_SHIFT 10
+/* Set if GRE/VXLAN/GENEVE tunnel detected. */
#define PARSING_AND_ERR_FLAGS_TUNNELEXIST_MASK 0x1
#define PARSING_AND_ERR_FLAGS_TUNNELEXIST_SHIFT 11
+/* Set if VLAN tag exists in tunnel header. */
#define PARSING_AND_ERR_FLAGS_TUNNEL8021QTAGEXIST_MASK 0x1
#define PARSING_AND_ERR_FLAGS_TUNNEL8021QTAGEXIST_SHIFT 12
+/* Set if either tunnel-ipv4-version-mismatch or tunnel-ipv4-hdr-len-error or
+ * tunnel-ipv4-cksm is set or tunneling ipv6 ver mismatch
+ */
#define PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_MASK 0x1
#define PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_SHIFT 13
+/* Set if GRE or VXLAN/GENEVE UDP checksum was calculated. */
#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_MASK 0x1
#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_SHIFT 14
+/* Set if tunnel L4 checksum validation failed. Valid only if tunnel L4 checksum
+ * was calculated.
+ */
#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_MASK 0x1
#define PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_SHIFT 15
};
+
+/*
+ * Pb context
+ */
+struct pb_context {
+ __le32 crc[4];
+};
+
/* Concrete Function ID. */
struct pxp_concrete_fid {
__le16 fid;
-#define PXP_CONCRETE_FID_PFID_MASK 0xF
+#define PXP_CONCRETE_FID_PFID_MASK 0xF /* Parent PFID */
#define PXP_CONCRETE_FID_PFID_SHIFT 0
-#define PXP_CONCRETE_FID_PORT_MASK 0x3
+#define PXP_CONCRETE_FID_PORT_MASK 0x3 /* port number */
#define PXP_CONCRETE_FID_PORT_SHIFT 4
-#define PXP_CONCRETE_FID_PATH_MASK 0x1
+#define PXP_CONCRETE_FID_PATH_MASK 0x1 /* path number */
#define PXP_CONCRETE_FID_PATH_SHIFT 6
#define PXP_CONCRETE_FID_VFVALID_MASK 0x1
#define PXP_CONCRETE_FID_VFVALID_SHIFT 7
@@ -574,13 +1171,13 @@ struct pxp_pretend_concrete_fid {
union pxp_pretend_fid {
struct pxp_pretend_concrete_fid concrete_fid;
- __le16 opaque_fid;
+ __le16 opaque_fid;
};
/* Pxp Pretend Command Register. */
struct pxp_pretend_cmd {
- union pxp_pretend_fid fid;
- __le16 control;
+ union pxp_pretend_fid fid;
+ __le16 control;
#define PXP_PRETEND_CMD_PATH_MASK 0x1
#define PXP_PRETEND_CMD_PATH_SHIFT 0
#define PXP_PRETEND_CMD_USE_PORT_MASK 0x1
@@ -603,30 +1200,127 @@ struct pxp_pretend_cmd {
/* PTT Record in PXP Admin Window. */
struct pxp_ptt_entry {
- __le32 offset;
+ __le32 offset;
#define PXP_PTT_ENTRY_OFFSET_MASK 0x7FFFFF
#define PXP_PTT_ENTRY_OFFSET_SHIFT 0
#define PXP_PTT_ENTRY_RESERVED0_MASK 0x1FF
#define PXP_PTT_ENTRY_RESERVED0_SHIFT 23
- struct pxp_pretend_cmd pretend;
+ struct pxp_pretend_cmd pretend;
+};
+
+
+/*
+ * VF Zone A Permission Register.
+ */
+struct pxp_vf_zone_a_permission {
+ __le32 control;
+#define PXP_VF_ZONE_A_PERMISSION_VFID_MASK 0xFF
+#define PXP_VF_ZONE_A_PERMISSION_VFID_SHIFT 0
+#define PXP_VF_ZONE_A_PERMISSION_VALID_MASK 0x1
+#define PXP_VF_ZONE_A_PERMISSION_VALID_SHIFT 8
+#define PXP_VF_ZONE_A_PERMISSION_RESERVED0_MASK 0x7F
+#define PXP_VF_ZONE_A_PERMISSION_RESERVED0_SHIFT 9
+#define PXP_VF_ZONE_A_PERMISSION_RESERVED1_MASK 0xFFFF
+#define PXP_VF_ZONE_A_PERMISSION_RESERVED1_SHIFT 16
+};
+
+
+/*
+ * Rdif context
+ */
+struct rdif_task_context {
+ __le32 initialRefTag;
+ __le16 appTagValue;
+ __le16 appTagMask;
+ u8 flags0;
+#define RDIF_TASK_CONTEXT_IGNOREAPPTAG_MASK 0x1
+#define RDIF_TASK_CONTEXT_IGNOREAPPTAG_SHIFT 0
+#define RDIF_TASK_CONTEXT_INITIALREFTAGVALID_MASK 0x1
+#define RDIF_TASK_CONTEXT_INITIALREFTAGVALID_SHIFT 1
+/* 0 = IP checksum, 1 = CRC */
+#define RDIF_TASK_CONTEXT_HOSTGUARDTYPE_MASK 0x1
+#define RDIF_TASK_CONTEXT_HOSTGUARDTYPE_SHIFT 2
+#define RDIF_TASK_CONTEXT_SETERRORWITHEOP_MASK 0x1
+#define RDIF_TASK_CONTEXT_SETERRORWITHEOP_SHIFT 3
+/* 1/2/3 - Protection Type */
+#define RDIF_TASK_CONTEXT_PROTECTIONTYPE_MASK 0x3
+#define RDIF_TASK_CONTEXT_PROTECTIONTYPE_SHIFT 4
+/* 0=0x0000, 1=0xffff */
+#define RDIF_TASK_CONTEXT_CRC_SEED_MASK 0x1
+#define RDIF_TASK_CONTEXT_CRC_SEED_SHIFT 6
+/* Keep reference tag constant */
+#define RDIF_TASK_CONTEXT_KEEPREFTAGCONST_MASK 0x1
+#define RDIF_TASK_CONTEXT_KEEPREFTAGCONST_SHIFT 7
+ u8 partialDifData[7];
+ __le16 partialCrcValue;
+ __le16 partialChecksumValue;
+ __le32 offsetInIO;
+ __le16 flags1;
+#define RDIF_TASK_CONTEXT_VALIDATEGUARD_MASK 0x1
+#define RDIF_TASK_CONTEXT_VALIDATEGUARD_SHIFT 0
+#define RDIF_TASK_CONTEXT_VALIDATEAPPTAG_MASK 0x1
+#define RDIF_TASK_CONTEXT_VALIDATEAPPTAG_SHIFT 1
+#define RDIF_TASK_CONTEXT_VALIDATEREFTAG_MASK 0x1
+#define RDIF_TASK_CONTEXT_VALIDATEREFTAG_SHIFT 2
+#define RDIF_TASK_CONTEXT_FORWARDGUARD_MASK 0x1
+#define RDIF_TASK_CONTEXT_FORWARDGUARD_SHIFT 3
+#define RDIF_TASK_CONTEXT_FORWARDAPPTAG_MASK 0x1
+#define RDIF_TASK_CONTEXT_FORWARDAPPTAG_SHIFT 4
+#define RDIF_TASK_CONTEXT_FORWARDREFTAG_MASK 0x1
+#define RDIF_TASK_CONTEXT_FORWARDREFTAG_SHIFT 5
+/* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
+#define RDIF_TASK_CONTEXT_INTERVALSIZE_MASK 0x7
+#define RDIF_TASK_CONTEXT_INTERVALSIZE_SHIFT 6
+/* 0=None, 1=DIF, 2=DIX */
+#define RDIF_TASK_CONTEXT_HOSTINTERFACE_MASK 0x3
+#define RDIF_TASK_CONTEXT_HOSTINTERFACE_SHIFT 9
+/* DIF tag right at the beginning of DIF interval */
+#define RDIF_TASK_CONTEXT_DIFBEFOREDATA_MASK 0x1
+#define RDIF_TASK_CONTEXT_DIFBEFOREDATA_SHIFT 11
+#define RDIF_TASK_CONTEXT_RESERVED0_MASK 0x1
+#define RDIF_TASK_CONTEXT_RESERVED0_SHIFT 12
+/* 0=None, 1=DIF */
+#define RDIF_TASK_CONTEXT_NETWORKINTERFACE_MASK 0x1
+#define RDIF_TASK_CONTEXT_NETWORKINTERFACE_SHIFT 13
+/* Forward application tag with mask */
+#define RDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_MASK 0x1
+#define RDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_SHIFT 14
+/* Forward reference tag with mask */
+#define RDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_MASK 0x1
+#define RDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_SHIFT 15
+ __le16 state;
+#define RDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFT_MASK 0xF
+#define RDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFT_SHIFT 0
+#define RDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFT_MASK 0xF
+#define RDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFT_SHIFT 4
+#define RDIF_TASK_CONTEXT_ERRORINIO_MASK 0x1
+#define RDIF_TASK_CONTEXT_ERRORINIO_SHIFT 8
+#define RDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_MASK 0x1
+#define RDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_SHIFT 9
+/* mask for reference tag handling */
+#define RDIF_TASK_CONTEXT_REFTAGMASK_MASK 0xF
+#define RDIF_TASK_CONTEXT_REFTAGMASK_SHIFT 10
+#define RDIF_TASK_CONTEXT_RESERVED1_MASK 0x3
+#define RDIF_TASK_CONTEXT_RESERVED1_SHIFT 14
+ __le32 reserved2;
};
/* RSS hash type */
enum rss_hash_type {
- RSS_HASH_TYPE_DEFAULT = 0,
- RSS_HASH_TYPE_IPV4 = 1,
- RSS_HASH_TYPE_TCP_IPV4 = 2,
- RSS_HASH_TYPE_IPV6 = 3,
- RSS_HASH_TYPE_TCP_IPV6 = 4,
- RSS_HASH_TYPE_UDP_IPV4 = 5,
- RSS_HASH_TYPE_UDP_IPV6 = 6,
+ RSS_HASH_TYPE_DEFAULT = 0,
+ RSS_HASH_TYPE_IPV4 = 1,
+ RSS_HASH_TYPE_TCP_IPV4 = 2,
+ RSS_HASH_TYPE_IPV6 = 3,
+ RSS_HASH_TYPE_TCP_IPV6 = 4,
+ RSS_HASH_TYPE_UDP_IPV4 = 5,
+ RSS_HASH_TYPE_UDP_IPV6 = 6,
MAX_RSS_HASH_TYPE
};
/* status block structure */
struct status_block {
- __le16 pi_array[PIS_PER_SB];
- __le32 sb_num;
+ __le16 pi_array[PIS_PER_SB];
+ __le32 sb_num;
#define STATUS_BLOCK_SB_NUM_MASK 0x1FF
#define STATUS_BLOCK_SB_NUM_SHIFT 0
#define STATUS_BLOCK_ZERO_PAD_MASK 0x7F
@@ -640,20 +1334,6 @@ struct status_block {
#define STATUS_BLOCK_ZERO_PAD3_SHIFT 24
};
-/* @DPDK */
-#define X_FINAL_CLEANUP_AGG_INT 1
-#define SDM_COMP_TYPE_AGG_INT 2
-#define MAX_NUM_LL2_RX_QUEUES 32
-#define QM_PQ_ELEMENT_SIZE 4
-#define PXP_VF_BAR0_START_IGU 0
-#define EAGLE_ENG1_WORKAROUND_NIG_FLOWCTRL_MODE 3
-
-#define TSTORM_QZONE_SIZE 8
-#define MSTORM_QZONE_SIZE 16
-#define USTORM_QZONE_SIZE 8
-#define XSTORM_QZONE_SIZE 0
-#define YSTORM_QZONE_SIZE 8
-#define PSTORM_QZONE_SIZE 0
/* VF BAR */
#define PXP_VF_BAR0 0
@@ -708,7 +1388,165 @@ struct status_block {
#define PXP_VF_BAR0_GRC_WINDOW_LENGTH 32
-#define PXP_ILT_PAGE_SIZE_NUM_BITS_MIN 12
-#define PXP_ILT_BLOCK_FACTOR_MULTIPLIER 1024
+/*
+ * Tdif context
+ */
+struct tdif_task_context {
+ __le32 initialRefTag;
+ __le16 appTagValue;
+ __le16 appTagMask;
+ __le16 partialCrcValueB;
+ __le16 partialChecksumValueB;
+ __le16 stateB;
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTB_MASK 0xF
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTB_SHIFT 0
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTB_MASK 0xF
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTB_SHIFT 4
+#define TDIF_TASK_CONTEXT_ERRORINIOB_MASK 0x1
+#define TDIF_TASK_CONTEXT_ERRORINIOB_SHIFT 8
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_MASK 0x1
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOW_SHIFT 9
+#define TDIF_TASK_CONTEXT_RESERVED0_MASK 0x3F
+#define TDIF_TASK_CONTEXT_RESERVED0_SHIFT 10
+ u8 reserved1;
+ u8 flags0;
+#define TDIF_TASK_CONTEXT_IGNOREAPPTAG_MASK 0x1
+#define TDIF_TASK_CONTEXT_IGNOREAPPTAG_SHIFT 0
+#define TDIF_TASK_CONTEXT_INITIALREFTAGVALID_MASK 0x1
+#define TDIF_TASK_CONTEXT_INITIALREFTAGVALID_SHIFT 1
+/* 0 = IP checksum, 1 = CRC */
+#define TDIF_TASK_CONTEXT_HOSTGUARDTYPE_MASK 0x1
+#define TDIF_TASK_CONTEXT_HOSTGUARDTYPE_SHIFT 2
+#define TDIF_TASK_CONTEXT_SETERRORWITHEOP_MASK 0x1
+#define TDIF_TASK_CONTEXT_SETERRORWITHEOP_SHIFT 3
+/* 1/2/3 - Protection Type */
+#define TDIF_TASK_CONTEXT_PROTECTIONTYPE_MASK 0x3
+#define TDIF_TASK_CONTEXT_PROTECTIONTYPE_SHIFT 4
+/* 0=0x0000, 1=0xffff */
+#define TDIF_TASK_CONTEXT_CRC_SEED_MASK 0x1
+#define TDIF_TASK_CONTEXT_CRC_SEED_SHIFT 6
+#define TDIF_TASK_CONTEXT_RESERVED2_MASK 0x1
+#define TDIF_TASK_CONTEXT_RESERVED2_SHIFT 7
+ __le32 flags1;
+#define TDIF_TASK_CONTEXT_VALIDATEGUARD_MASK 0x1
+#define TDIF_TASK_CONTEXT_VALIDATEGUARD_SHIFT 0
+#define TDIF_TASK_CONTEXT_VALIDATEAPPTAG_MASK 0x1
+#define TDIF_TASK_CONTEXT_VALIDATEAPPTAG_SHIFT 1
+#define TDIF_TASK_CONTEXT_VALIDATEREFTAG_MASK 0x1
+#define TDIF_TASK_CONTEXT_VALIDATEREFTAG_SHIFT 2
+#define TDIF_TASK_CONTEXT_FORWARDGUARD_MASK 0x1
+#define TDIF_TASK_CONTEXT_FORWARDGUARD_SHIFT 3
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAG_MASK 0x1
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAG_SHIFT 4
+#define TDIF_TASK_CONTEXT_FORWARDREFTAG_MASK 0x1
+#define TDIF_TASK_CONTEXT_FORWARDREFTAG_SHIFT 5
+/* 0=512B, 1=1KB, 2=2KB, 3=4KB, 4=8KB */
+#define TDIF_TASK_CONTEXT_INTERVALSIZE_MASK 0x7
+#define TDIF_TASK_CONTEXT_INTERVALSIZE_SHIFT 6
+/* 0=None, 1=DIF, 2=DIX */
+#define TDIF_TASK_CONTEXT_HOSTINTERFACE_MASK 0x3
+#define TDIF_TASK_CONTEXT_HOSTINTERFACE_SHIFT 9
+/* DIF tag right at the beginning of DIF interval */
+#define TDIF_TASK_CONTEXT_DIFBEFOREDATA_MASK 0x1
+#define TDIF_TASK_CONTEXT_DIFBEFOREDATA_SHIFT 11
+/* reserved */
+#define TDIF_TASK_CONTEXT_RESERVED3_MASK 0x1
+#define TDIF_TASK_CONTEXT_RESERVED3_SHIFT 12
+/* 0=None, 1=DIF */
+#define TDIF_TASK_CONTEXT_NETWORKINTERFACE_MASK 0x1
+#define TDIF_TASK_CONTEXT_NETWORKINTERFACE_SHIFT 13
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTA_MASK 0xF
+#define TDIF_TASK_CONTEXT_RECEIVEDDIFBYTESLEFTA_SHIFT 14
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTA_MASK 0xF
+#define TDIF_TASK_CONTEXT_TRANSMITEDDIFBYTESLEFTA_SHIFT 18
+#define TDIF_TASK_CONTEXT_ERRORINIOA_MASK 0x1
+#define TDIF_TASK_CONTEXT_ERRORINIOA_SHIFT 22
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOWA_MASK 0x1
+#define TDIF_TASK_CONTEXT_CHECKSUMOVERFLOWA_SHIFT 23
+/* mask for reference tag handling */
+#define TDIF_TASK_CONTEXT_REFTAGMASK_MASK 0xF
+#define TDIF_TASK_CONTEXT_REFTAGMASK_SHIFT 24
+/* Forward application tag with mask */
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_MASK 0x1
+#define TDIF_TASK_CONTEXT_FORWARDAPPTAGWITHMASK_SHIFT 28
+/* Forward reference tag with mask */
+#define TDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_MASK 0x1
+#define TDIF_TASK_CONTEXT_FORWARDREFTAGWITHMASK_SHIFT 29
+/* Keep reference tag constant */
+#define TDIF_TASK_CONTEXT_KEEPREFTAGCONST_MASK 0x1
+#define TDIF_TASK_CONTEXT_KEEPREFTAGCONST_SHIFT 30
+#define TDIF_TASK_CONTEXT_RESERVED4_MASK 0x1
+#define TDIF_TASK_CONTEXT_RESERVED4_SHIFT 31
+ __le32 offsetInIOB;
+ __le16 partialCrcValueA;
+ __le16 partialChecksumValueA;
+ __le32 offsetInIOA;
+ u8 partialDifDataA[8];
+ u8 partialDifDataB[8];
+};
+
+
+/*
+ * Timers context
+ */
+struct timers_context {
+ __le32 logical_client_0;
+/* Expiration time of logical client 0 */
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_MASK 0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC0_SHIFT 0
+/* Valid bit of logical client 0 */
+#define TIMERS_CONTEXT_VALIDLC0_MASK 0x1
+#define TIMERS_CONTEXT_VALIDLC0_SHIFT 28
+/* Active bit of logical client 0 */
+#define TIMERS_CONTEXT_ACTIVELC0_MASK 0x1
+#define TIMERS_CONTEXT_ACTIVELC0_SHIFT 29
+#define TIMERS_CONTEXT_RESERVED0_MASK 0x3
+#define TIMERS_CONTEXT_RESERVED0_SHIFT 30
+ __le32 logical_client_1;
+/* Expiration time of logical client 1 */
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_MASK 0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC1_SHIFT 0
+/* Valid bit of logical client 1 */
+#define TIMERS_CONTEXT_VALIDLC1_MASK 0x1
+#define TIMERS_CONTEXT_VALIDLC1_SHIFT 28
+/* Active bit of logical client 1 */
+#define TIMERS_CONTEXT_ACTIVELC1_MASK 0x1
+#define TIMERS_CONTEXT_ACTIVELC1_SHIFT 29
+#define TIMERS_CONTEXT_RESERVED1_MASK 0x3
+#define TIMERS_CONTEXT_RESERVED1_SHIFT 30
+ __le32 logical_client_2;
+/* Expiration time of logical client 2 */
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_MASK 0xFFFFFFF
+#define TIMERS_CONTEXT_EXPIRATIONTIMELC2_SHIFT 0
+/* Valid bit of logical client 2 */
+#define TIMERS_CONTEXT_VALIDLC2_MASK 0x1
+#define TIMERS_CONTEXT_VALIDLC2_SHIFT 28
+/* Active bit of logical client 2 */
+#define TIMERS_CONTEXT_ACTIVELC2_MASK 0x1
+#define TIMERS_CONTEXT_ACTIVELC2_SHIFT 29
+#define TIMERS_CONTEXT_RESERVED2_MASK 0x3
+#define TIMERS_CONTEXT_RESERVED2_SHIFT 30
+ __le32 host_expiration_fields;
+/* Expiration time on host (closest one) */
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_MASK 0xFFFFFFF
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALUE_SHIFT 0
+/* Valid bit of host expiration */
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_MASK 0x1
+#define TIMERS_CONTEXT_HOSTEXPRIRATIONVALID_SHIFT 28
+#define TIMERS_CONTEXT_RESERVED3_MASK 0x7
+#define TIMERS_CONTEXT_RESERVED3_SHIFT 29
+};
+
+
+/*
+ * Enum for next_protocol field of tunnel_parsing_flags
+ */
+enum tunnel_next_protocol {
+ e_unknown = 0,
+ e_l2 = 1,
+ e_ipv4 = 2,
+ e_ipv6 = 3,
+ MAX_TUNNEL_NEXT_PROTOCOL
+};
#endif /* __COMMON_HSI__ */
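For reference, the new parsing_and_err_flags word above is consumed through
its mask/shift pairs; a minimal standalone sketch, assuming a locally written
GET_FIELD stand-in for the driver's own accessor and a made-up flags value:

    #include <stdint.h>
    #include <stdio.h>

    /* Mask/shift pairs as defined in the common_hsi.h diff above */
    #define PARSING_AND_ERR_FLAGS_L3TYPE_MASK       0x3
    #define PARSING_AND_ERR_FLAGS_L3TYPE_SHIFT      0
    #define PARSING_AND_ERR_FLAGS_L4PROTOCOL_MASK   0x3
    #define PARSING_AND_ERR_FLAGS_L4PROTOCOL_SHIFT  2

    /* Local stand-in for the driver's GET_FIELD accessor */
    #define GET_FIELD(value, name) \
            (((value) >> name##_SHIFT) & name##_MASK)

    int main(void)
    {
            uint16_t flags = 0x0009; /* made-up flags word: IPv4 + UDP */

            /* prints 1 (e_l3Type_ipv4) and 2 (e_l4Protocol_udp) */
            printf("l3=%d l4=%d\n",
                   GET_FIELD(flags, PARSING_AND_ERR_FLAGS_L3TYPE),
                   GET_FIELD(flags, PARSING_AND_ERR_FLAGS_L4PROTOCOL));
            return 0;
    }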
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 46d3e80..9e32279 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3056,8 +3056,6 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
OSAL_MEMSET(p_qzone, 0, qzone_size);
p_coalesce_timeset = p_qzone;
- p_coalesce_timeset->timeset = timeset;
- p_coalesce_timeset->valid = 1;
ecore_memcpy_to(p_hwfn, p_ptt, hw_addr, p_qzone, qzone_size);
return ECORE_SUCCESS;
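The two assignments removed here wrote the old two-byte coalescing_timeset
layout; under the new packed layout the same information would presumably be
written through the mask/shift pairs from common_hsi.h, roughly as in this
sketch (SET_FIELD is a local stand-in for the driver helper, sample values
are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Mask/shift pairs from the reworked struct coalescing_timeset above */
    #define COALESCING_TIMESET_TIMESET_MASK   0x7F
    #define COALESCING_TIMESET_TIMESET_SHIFT  0
    #define COALESCING_TIMESET_VALID_MASK     0x1
    #define COALESCING_TIMESET_VALID_SHIFT    7

    /* Local stand-in for the driver's SET_FIELD helper */
    #define SET_FIELD(var, name, val) \
            ((var) |= (uint8_t)(((val) & name##_MASK) << name##_SHIFT))

    int main(void)
    {
            uint8_t value = 0, timeset = 0x1A, timer_res = 1;

            SET_FIELD(value, COALESCING_TIMESET_TIMESET, timeset);
            SET_FIELD(value, COALESCING_TIMESET_VALID, 1);

            /* timeout_ticks = TimeSet shl (TimerRes+1), per the comment in
             * the coalescing_timeset definition: 26 << 2 = 104 ticks
             */
            printf("value=0x%02x timeout=%u ticks\n", value,
                   (unsigned int)timeset << (timer_res + 1));
            return 0;
    }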
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 877de8b..3c4d7c0 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -25,6 +25,7 @@ enum common_event_opcode {
COMMON_EVENT_VF_FLR,
COMMON_EVENT_PF_UPDATE,
COMMON_EVENT_MALICIOUS_VF,
+ COMMON_EVENT_RL_UPDATE,
COMMON_EVENT_EMPTY,
MAX_COMMON_EVENT_OPCODE
};
@@ -39,6 +40,7 @@ enum common_ramrod_cmd_id {
COMMON_RAMROD_VF_START /* VF Function Start */,
COMMON_RAMROD_VF_STOP /* VF Function Stop Ramrod */,
COMMON_RAMROD_PF_UPDATE /* PF update Ramrod */,
+ COMMON_RAMROD_RL_UPDATE /* QCN/DCQCN RL update Ramrod */,
COMMON_RAMROD_EMPTY /* Empty Ramrod */,
MAX_COMMON_RAMROD_CMD_ID
};
@@ -643,6 +645,15 @@ enum core_ramrod_cmd_id {
};
/*
+ * Core RoCE flavor type for Light L2
+ */
+enum core_roce_flavor_type {
+ CORE_ROCE,
+ CORE_RROCE,
+ MAX_CORE_ROCE_FLAVOR_TYPE
+};
+
+/*
* Specifies how ll2 should deal with packets errors: packet_too_big and no_buff
*/
struct core_rx_action_on_error {
@@ -814,10 +825,32 @@ struct core_tx_bd_flags {
struct core_tx_bd {
struct regpair addr /* Buffer Address */;
__le16 nbytes /* Number of Bytes in Buffer */;
- __le16 vlan /* VLAN to insert to packet (if insertion flag set) */;
- u8 nbds /* Number of BDs that make up one packet */;
+/* Network packets: VLAN to insert into the packet (if insertion flag set).
+ * LoopBack packets: echo data to pass to Rx.
+ */
+ __le16 nw_vlan_or_lb_echo;
+ u8 bitfield0;
+/* Number of BDs that make up one packet - width wide enough to present
+ * X_CORE_LL2_NUM_OF_BDS_ON_ST_CT
+ */
+#define CORE_TX_BD_NBDS_MASK 0xF
+#define CORE_TX_BD_NBDS_SHIFT 0
+/* Use roce_flavor enum - differentiating between RoCE flavors is valid only
+ * when connType is ROCE (use enum core_roce_flavor_type)
+ */
+#define CORE_TX_BD_ROCE_FLAV_MASK 0x1
+#define CORE_TX_BD_ROCE_FLAV_SHIFT 4
+#define CORE_TX_BD_RESERVED0_MASK 0x7
+#define CORE_TX_BD_RESERVED0_SHIFT 5
struct core_tx_bd_flags bd_flags /* BD Flags */;
- __le16 l4_hdr_offset_w;
+ __le16 bitfield1;
+#define CORE_TX_BD_L4_HDR_OFFSET_W_MASK 0x3FFF
+#define CORE_TX_BD_L4_HDR_OFFSET_W_SHIFT 0
+/* Packet destination - Network, LB (use enum core_tx_dest) */
+#define CORE_TX_BD_TX_DST_MASK 0x1
+#define CORE_TX_BD_TX_DST_SHIFT 14
+#define CORE_TX_BD_RESERVED1_MASK 0x1
+#define CORE_TX_BD_RESERVED1_SHIFT 15
};
/*
@@ -830,22 +863,21 @@ enum core_tx_dest {
};
/*
- * Ramrod data for rx queue start ramrod
+ * Ramrod data for tx queue start ramrod
*/
struct core_tx_start_ramrod_data {
struct regpair pbl_base_addr /* Address of the pbl page */;
__le16 mtu /* Maximum transmission unit */;
__le16 sb_id /* Status block ID */;
u8 sb_index /* Status block protocol index */;
- u8 tx_dest /* TX Destination (either Network or LB) */;
u8 stats_en /* Statistics Enable */;
u8 stats_id /* Statistics Counter ID */;
+ u8 conn_type /* connection type that loaded ll2 */;
__le16 pbl_size /* Number of BD pages pointed by PBL */;
__le16 qm_pq_id /* QM PQ ID */;
- u8 conn_type /* connection type that loaded ll2 */;
u8 gsi_offload_flag
/* set when in GSI offload mode on ROCE connection */;
- u8 resrved[2];
+ u8 resrved[3];
};
/*
@@ -855,6 +887,25 @@ struct core_tx_stop_ramrod_data {
__le32 reserved0[2];
};
+/*
+ * Enum flag for what type of dcb data to update
+ */
+enum dcb_dhcp_update_flag {
+/* use when no change should be done to dcb data */
+ DONT_UPDATE_DCB_DHCP,
+ UPDATE_DCB /* use to update only l2 (vlan) priority */,
+	UPDATE_DSCP /* use to update only l3 dscp */,
+	UPDATE_DCB_DSCP /* update vlan pri and dscp */,
+ MAX_DCB_DHCP_UPDATE_FLAG
+};
+
+struct eth_mstorm_per_pf_stat {
+ struct regpair gre_discard_pkts /* Dropped GRE RX packets */;
+ struct regpair vxlan_discard_pkts /* Dropped VXLAN RX packets */;
+ struct regpair geneve_discard_pkts /* Dropped GENEVE RX packets */;
+ struct regpair lb_discard_pkts /* Dropped Tx switched packets */;
+};
+
struct eth_mstorm_per_queue_stat {
struct regpair ttl0_discard;
struct regpair packet_too_big_discard;
@@ -867,6 +918,33 @@ struct eth_mstorm_per_queue_stat {
};
/*
+ * Ethernet TX Per PF
+ */
+struct eth_pstorm_per_pf_stat {
+/* number of total ucast bytes sent on loopback port without errors */
+ struct regpair sent_lb_ucast_bytes;
+/* number of total mcast bytes sent on loopback port without errors */
+ struct regpair sent_lb_mcast_bytes;
+/* number of total bcast bytes sent on loopback port without errors */
+ struct regpair sent_lb_bcast_bytes;
+/* number of total ucast packets sent on loopback port without errors */
+ struct regpair sent_lb_ucast_pkts;
+/* number of total mcast packets sent on loopback port without errors */
+ struct regpair sent_lb_mcast_pkts;
+/* number of total bcast packets sent on loopback port without errors */
+ struct regpair sent_lb_bcast_pkts;
+ struct regpair sent_gre_bytes /* Sent GRE bytes */;
+ struct regpair sent_vxlan_bytes /* Sent VXLAN bytes */;
+ struct regpair sent_geneve_bytes /* Sent GENEVE bytes */;
+ struct regpair sent_gre_pkts /* Sent GRE packets */;
+ struct regpair sent_vxlan_pkts /* Sent VXLAN packets */;
+ struct regpair sent_geneve_pkts /* Sent GENEVE packets */;
+ struct regpair gre_drop_pkts /* Dropped GRE TX packets */;
+ struct regpair vxlan_drop_pkts /* Dropped VXLAN TX packets */;
+ struct regpair geneve_drop_pkts /* Dropped GENEVE TX packets */;
+};
+
+/*
* Ethernet TX Per Queue Stats
*/
struct eth_pstorm_per_queue_stat {
@@ -898,6 +976,27 @@ struct eth_rx_rate_limit {
__le16 reserved1;
};
+struct eth_ustorm_per_pf_stat {
+/* number of total ucast bytes received on loopback port without errors */
+ struct regpair rcv_lb_ucast_bytes;
+/* number of total mcast bytes received on loopback port without errors */
+ struct regpair rcv_lb_mcast_bytes;
+/* number of total bcast bytes received on loopback port without errors */
+ struct regpair rcv_lb_bcast_bytes;
+/* number of total ucast packets received on loopback port without errors */
+ struct regpair rcv_lb_ucast_pkts;
+/* number of total mcast packets received on loopback port without errors */
+ struct regpair rcv_lb_mcast_pkts;
+/* number of total bcast packets received on loopback port without errors */
+ struct regpair rcv_lb_bcast_pkts;
+ struct regpair rcv_gre_bytes /* Received GRE bytes */;
+ struct regpair rcv_vxlan_bytes /* Received VXLAN bytes */;
+ struct regpair rcv_geneve_bytes /* Received GENEVE bytes */;
+ struct regpair rcv_gre_pkts /* Received GRE packets */;
+ struct regpair rcv_vxlan_pkts /* Received VXLAN packets */;
+ struct regpair rcv_geneve_pkts /* Received GENEVE packets */;
+};
+
struct eth_ustorm_per_queue_stat {
struct regpair rcv_ucast_bytes;
struct regpair rcv_mcast_bytes;
@@ -934,6 +1033,14 @@ enum fw_flow_ctrl_mode {
};
/*
+ * Major and Minor hsi Versions
+ */
+struct hsi_fp_ver_struct {
+ u8 minor_ver_arr[2] /* Minor Version of hsi loading pf */;
+ u8 major_ver_arr[2] /* Major Version of driver loading pf */;
+};
+
+/*
* Integration Phase
*/
enum integ_phase {
@@ -944,6 +1051,18 @@ enum integ_phase {
};
/*
+ * iWARP LL2 TX queues
+ */
+enum iwarp_ll2_tx_queues {
+/* LL2 queue for OOO packets sent in-order by the driver */
+ IWARP_LL2_IN_ORDER_TX_QUEUE = 1,
+/* LL2 queue for unaligned packets sent aligned by the driver */
+ IWARP_LL2_ALIGNED_TX_QUEUE,
+ IWARP_LL2_ERROR /* Error indication */,
+ MAX_IWARP_LL2_TX_QUEUES
+};
+
+/*
* Malicious VF error ID
*/
enum malicious_vf_error_id {
@@ -953,7 +1072,7 @@ enum malicious_vf_error_id {
VF_ZONE_MSG_NOT_VALID /* VF channel message is not valid */,
VF_ZONE_FUNC_NOT_ENABLED /* Parent PF of VF channel is not active */,
ETH_PACKET_TOO_SMALL
-/* TX packet is shorter then reported on BDs or from minimal size */
 /* TX packet is shorter than reported on BDs or than the minimal size */
,
ETH_ILLEGAL_VLAN_MODE
 /* Tx packet marked as insert VLAN when it's illegal */,
@@ -975,6 +1094,7 @@ enum malicious_vf_error_id {
ETH_EDPM_OUT_OF_SYNC /* Valid BDs on local ring after EDPM L2 sync */,
ETH_TUNN_IPV6_EXT_NBD_ERR
/* Tunneled packet with IPv6+Ext without a proper number of BDs */,
+ ETH_CONTROL_PACKET_VIOLATION /* VF sent control frame such as PFC */,
MAX_MALICIOUS_VF_ERROR_ID
};
@@ -984,6 +1104,9 @@ enum malicious_vf_error_id {
struct mstorm_non_trigger_vf_zone {
struct eth_mstorm_per_queue_stat eth_queue_stat
/* VF statistic bucket */;
+/* VF RX queues producers */
+ struct eth_rx_prod_data
+ eth_rx_queue_producers[ETH_MAX_NUM_RX_QUEUES_PER_VF_QUAD];
};
/*
@@ -1060,10 +1183,11 @@ struct pf_start_ramrod_data {
u8 allow_npar_tx_switching;
u8 inner_to_outer_pri_map[8];
u8 pri_map_valid
-/* If inner_to_outer_pri_map is initialize then set pri_map_valid */
 /* If inner_to_outer_pri_map is initialized then set pri_map_valid */
;
__le32 outer_tag;
- u8 reserved0[4];
+/* FP HSI version to be used by FW */
+ struct hsi_fp_ver_struct hsi_fp_ver;
};
/*
@@ -1071,9 +1195,11 @@ struct pf_start_ramrod_data {
*/
struct protocol_dcb_data {
u8 dcb_enable_flag /* dcbEnable flag value */;
+ u8 dscp_enable_flag /* If set use dscp value */;
u8 dcb_priority /* dcbPri flag value */;
u8 dcb_tc /* dcb TC value */;
- u8 reserved;
+ u8 dscp_val /* dscp value to write if dscp_enable_flag is set */;
+ u8 reserved0;
};
/*
@@ -1081,6 +1207,14 @@ struct protocol_dcb_data {
*/
struct pf_update_tunnel_config {
u8 update_rx_pf_clss;
+/* Update per PORT default tunnel RX classification scheme for traffic with
+ * unknown unicast outer MAC in NPAR mode.
+ */
+ u8 update_rx_def_ucast_clss;
+/* Update per PORT default tunnel RX classification scheme for traffic with non
+ * unicast outer MAC in NPAR mode.
+ */
+ u8 update_rx_def_non_ucast_clss;
u8 update_tx_pf_clss;
u8 set_vxlan_udp_port_flg
/* Update VXLAN tunnel UDP destination port. */;
@@ -1102,7 +1236,7 @@ struct pf_update_tunnel_config {
u8 tunnel_clss_ipgre /* Classification scheme for ip GRE tunnel. */;
__le16 vxlan_udp_port /* VXLAN tunnel UDP destination port. */;
__le16 geneve_udp_port /* GENEVE tunnel UDP destination port. */;
- __le16 reserved[3];
+ __le16 reserved[2];
};
/*
@@ -1114,9 +1248,10 @@ struct pf_update_ramrod_data {
u8 update_fcoe_dcb_data_flag /* Update FCOE DCB data indication */;
u8 update_iscsi_dcb_data_flag /* Update iSCSI DCB data indication */;
u8 update_roce_dcb_data_flag /* Update ROCE DCB data indication */;
+/* Update RROCE (RoceV2) DCB data indication */
+ u8 update_rroce_dcb_data_flag;
u8 update_iwarp_dcb_data_flag /* Update IWARP DCB data indication */;
u8 update_mf_vlan_flag /* Update MF outer vlan Id */;
- u8 reserved;
struct protocol_dcb_data eth_dcb_data /* core eth related fields */;
struct protocol_dcb_data fcoe_dcb_data /* core fcoe related fields */;
struct protocol_dcb_data iscsi_dcb_data /* core iscsi related fields */
@@ -1124,10 +1259,12 @@ struct pf_update_ramrod_data {
struct protocol_dcb_data roce_dcb_data /* core roce related fields */;
struct protocol_dcb_data iwarp_dcb_data /* core iwarp related fields */
;
+/* core rroce related fields */
+ struct protocol_dcb_data rroce_dcb_data;
__le16 mf_vlan /* new outer vlan id value */;
- __le16 reserved2;
- struct pf_update_tunnel_config tunnel_config /* tunnel configuration. */
- ;
+ __le16 reserved;
+/* tunnel configuration. */
+ struct pf_update_tunnel_config tunnel_config;
};
/*
@@ -1143,6 +1280,15 @@ enum ports_mode {
};
/*
+ * use to index in hsi_fp_[major|minor]_ver_arr per protocol
+ */
+enum protocol_version_array_key {
+ ETH_VER_KEY = 0,
+ ROCE_VER_KEY,
+ MAX_PROTOCOL_VERSION_ARRAY_KEY
+};
+
+/*
* RDMA TX Stats
*/
struct rdma_sent_stats {
@@ -1187,6 +1333,31 @@ struct rdma_rcv_stats {
};
/*
+ * Data for update QCN/DCQCN RL ramrod
+ */
+struct rl_update_ramrod_data {
+ u8 qcn_update_param_flg /* Update QCN global params: timeout. */;
+/* Update DCQCN global params: timeout, g, k. */
+ u8 dcqcn_update_param_flg;
+ u8 rl_init_flg /* Init RL parameters, when RL disabled. */;
+ u8 rl_start_flg /* Start RL in IDLE state. Set rate to maximum. */;
+ u8 rl_stop_flg /* Stop RL. */;
+ u8 rl_id_first /* ID of first or single RL, that will be updated. */;
+/* ID of last RL that will be updated. If clear, single RL will be updated. */
+ u8 rl_id_last;
+	u8 rl_dc_qcn_flg /* If set, RL will be used for DCQCN. */;
+ __le32 rl_bc_rate /* Byte Counter Limit. */;
+ __le16 rl_max_rate /* Maximum rate in 1.6 Mbps resolution. */;
+ __le16 rl_r_ai /* Active increase rate. */;
+ __le16 rl_r_hai /* Hyper active increase rate. */;
+	__le16 dcqcn_g /* DCQCN Alpha update gain in 1/64K resolution. */;
+ __le32 dcqcn_k_us /* DCQCN Alpha update interval. */;
+ __le32 dcqcn_timeuot_us /* DCQCN timeout. */;
+ __le32 qcn_timeuot_us /* QCN timeout. */;
+ __le32 reserved[2];
+};
+
+/*
* Slowpath Element (SPQE)
*/
struct slow_path_element {
@@ -1223,6 +1394,11 @@ struct tstorm_per_port_stat {
;
struct regpair preroce_irregular_pkt
 /* packet is a PREROCE irregular packet */;
+ struct regpair eth_gre_tunn_filter_discard /* GRE dropped packets */;
+/* VXLAN dropped packets */
+ struct regpair eth_vxlan_tunn_filter_discard;
+/* GENEVE dropped packets */
+ struct regpair eth_geneve_tunn_filter_discard;
};
/*
@@ -1244,10 +1420,14 @@ enum tunnel_clss {
TUNNEL_CLSS_MAC_VNI
,
TUNNEL_CLSS_INNER_MAC_VLAN
-/* Use MAC and VLAN from last L2 header for vport classification */
+ /* Use MAC and VLAN from last L2 header for vport classification */
,
TUNNEL_CLSS_INNER_MAC_VNI
,
+/* Use MAC and VLAN from last L2 header for vport classification. If no exact
+ * match, use MAC and VLAN from first L2 header for classification.
+ */
+ TUNNEL_CLSS_MAC_VLAN_DUAL_STAGE,
MAX_TUNNEL_CLSS
};
@@ -1295,7 +1475,9 @@ struct vf_start_ramrod_data {
u8 enable_flr_ack;
__le16 opaque_fid /* VF opaque FID */;
u8 personality /* define what type of personality is new VF */;
- u8 reserved[3];
+ u8 reserved[7];
+/* FP HSI version to be used by FW */
+ struct hsi_fp_ver_struct hsi_fp_ver;
};
/*
@@ -1309,6 +1491,19 @@ struct vf_stop_ramrod_data {
};
/*
+ * VF zone size mode.
+ */
+enum vf_zone_size_mode {
+/* Default VF zone size. Up to 192 VF supported. */
+ VF_ZONE_SIZE_MODE_DEFAULT,
+/* Doubled VF zone size. Up to 96 VF supported. */
+ VF_ZONE_SIZE_MODE_DOUBLE,
+/* Quad VF zone size. Up to 48 VF supported. */
+ VF_ZONE_SIZE_MODE_QUAD,
+ MAX_VF_ZONE_SIZE_MODE
+};
+
+/*
* Attentions status block
*/
struct atten_status_block {
@@ -1319,6 +1514,7 @@ struct atten_status_block {
__le32 reserved1;
};
+
/*
* Igu cleanup bit values to distinguish between clean or producer consumer
*/
@@ -1376,7 +1572,7 @@ struct dmae_cmd {
__le32 src_addr_hi;
__le32 dst_addr_lo;
__le32 dst_addr_hi;
- __le16 length /* Length in DW */;
+ __le16 length_dw /* Length in DW */;
__le16 opcode_b;
#define DMAE_CMD_SRC_VF_ID_MASK 0xFF
#define DMAE_CMD_SRC_VF_ID_SHIFT 0
@@ -1395,10 +1591,62 @@ struct dmae_cmd {
__le16 xsum8 /* checksum8 result */;
};
-struct storm_ram_section {
- __le16 offset
- /* The offset of the section in the RAM (in 64 bit units) */;
- __le16 size /* The size of the section (in 64 bit units) */;
+
+enum dmae_cmd_comp_crc_en_enum {
+ dmae_cmd_comp_crc_disabled /* Do not write a CRC word */,
+ dmae_cmd_comp_crc_enabled /* Write a CRC word */,
+ MAX_DMAE_CMD_COMP_CRC_EN_ENUM
+};
+
+
+enum dmae_cmd_comp_func_enum {
+/* completion word and/or CRC will be sent to SRC-PCI function/SRC VFID */
+ dmae_cmd_comp_func_to_src,
+/* completion word and/or CRC will be sent to DST-PCI function/DST VFID */
+ dmae_cmd_comp_func_to_dst,
+ MAX_DMAE_CMD_COMP_FUNC_ENUM
+};
+
+
+enum dmae_cmd_comp_word_en_enum {
+ dmae_cmd_comp_word_disabled /* Do not write a completion word */,
+ dmae_cmd_comp_word_enabled /* Write the completion word */,
+ MAX_DMAE_CMD_COMP_WORD_EN_ENUM
+};
+
+
+enum dmae_cmd_c_dst_enum {
+ dmae_cmd_c_dst_pcie,
+ dmae_cmd_c_dst_grc,
+ MAX_DMAE_CMD_C_DST_ENUM
+};
+
+
+enum dmae_cmd_dst_enum {
+ dmae_cmd_dst_none_0,
+ dmae_cmd_dst_pcie,
+ dmae_cmd_dst_grc,
+ dmae_cmd_dst_none_3,
+ MAX_DMAE_CMD_DST_ENUM
+};
+
+
+enum dmae_cmd_error_handling_enum {
+/* Send a regular completion (with no error indication) */
+ dmae_cmd_error_handling_send_regular_comp,
+/* Send a completion with an error indication (i.e. set bit 31 of the completion
+ * word)
+ */
+ dmae_cmd_error_handling_send_comp_with_err,
+ dmae_cmd_error_handling_dont_send_comp /* Do not send a completion */,
+ MAX_DMAE_CMD_ERROR_HANDLING_ENUM
+};
+
+
+enum dmae_cmd_src_enum {
+ dmae_cmd_src_pcie /* The source is the PCIe */,
+ dmae_cmd_src_grc /* The source is the GRC */,
+ MAX_DMAE_CMD_SRC_ENUM
};
/*
@@ -1475,6 +1723,7 @@ struct igu_msix_vector {
#define IGU_MSIX_VECTOR_RESERVED1_SHIFT 24
};
+
struct mstorm_core_conn_ag_ctx {
u8 byte0 /* cdu_validation */;
u8 byte1 /* state */;
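As a side note on the core_tx_bd rework in this hunk, nbds and the L4 header
offset now live in packed bitfields; a standalone sketch of composing them,
again with a local SET_FIELD stand-in and made-up values:

    #include <stdint.h>
    #include <stdio.h>

    /* Mask/shift pairs from the reworked struct core_tx_bd above */
    #define CORE_TX_BD_NBDS_MASK              0xF
    #define CORE_TX_BD_NBDS_SHIFT             0
    #define CORE_TX_BD_L4_HDR_OFFSET_W_MASK   0x3FFF
    #define CORE_TX_BD_L4_HDR_OFFSET_W_SHIFT  0
    #define CORE_TX_BD_TX_DST_MASK            0x1
    #define CORE_TX_BD_TX_DST_SHIFT           14

    /* Local stand-in for the driver's SET_FIELD helper */
    #define SET_FIELD(var, name, val) \
            ((var) |= (((val) & name##_MASK) << name##_SHIFT))

    int main(void)
    {
            uint8_t  bitfield0 = 0;
            uint16_t bitfield1 = 0;

            SET_FIELD(bitfield0, CORE_TX_BD_NBDS, 2);   /* 2 BDs per packet */
            SET_FIELD(bitfield1, CORE_TX_BD_L4_HDR_OFFSET_W, 17); /* words */
            SET_FIELD(bitfield1, CORE_TX_BD_TX_DST, 0); /* 0 == network */

            printf("bitfield0=0x%02x bitfield1=0x%04x\n", bitfield0, bitfield1);
            return 0;
    }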
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index e9b96d5..72bc6de 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -461,7 +461,7 @@ ecore_dmae_post_command(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
" src=0x%x:%x dst=0x%x:%x\n",
idx_cmd, (u32)p_command->opcode,
(u16)p_command->opcode_b,
- (int)p_command->length,
+ (int)p_command->length_dw,
(int)p_command->src_addr_hi,
(int)p_command->src_addr_lo,
(int)p_command->dst_addr_hi,
@@ -668,7 +668,7 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
return ECORE_INVAL;
}
- cmd->length = (u16)length;
+ cmd->length_dw = (u16)length;
if (src_type == ECORE_DMAE_ADDRESS_HOST_VIRT ||
src_type == ECORE_DMAE_ADDRESS_HOST_PHYS)
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index 71ef615..bd73f7d 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -59,14 +59,6 @@
#define ETH_TPA_CQE_END_LEN_LIST_SIZE 4
/*
- * Interrupt coalescing TimeSet
- */
-struct coalescing_timeset {
- u8 timeset;
- u8 valid /* Only if this flag is set, timeset will take effect */;
-};
-
-/*
* Destination port mode
*/
enum dest_port_mode {
@@ -364,16 +356,6 @@ struct eth_rx_pmd_cqe {
};
/*
- * ETH Rx producers data
- */
-struct eth_rx_prod_data {
- __le16 bd_prod /* BD producer */;
- __le16 cqe_prod /* CQE producer */;
- __le16 reserved;
- __le16 reserved1 /* FW reserved. */;
-};
-
-/*
* Aggregation end reason.
*/
enum eth_tpa_end_reason {
@@ -487,15 +469,6 @@ struct mstorm_eth_queue_zone {
};
/*
- * Ustorm Queue Zone
- */
-struct ustorm_eth_queue_zone {
- struct coalescing_timeset int_coalescing_timeset
- /* Rx interrupt coalescing TimeSet */;
- __le16 reserved[3];
-};
-
-/*
* Ystorm Queue Zone
*/
struct ystorm_eth_queue_zone {
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 3b25e1a..ab88671 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -6,6 +6,14 @@
* See LICENSE.qede_pmd for copyright and licensing details.
*/
+/*
+ * Copyright (c) 2016 QLogic Corporation.
+ * All rights reserved.
+ * www.qlogic.com
+ *
+ * See LICENSE.qede_pmd for copyright and licensing details.
+ */
+
#define CDU_REG_CID_ADDR_PARAMS_CONTEXT_SIZE_SHIFT \
0
@@ -1105,3 +1113,31 @@
#define CDU_REG_SEGMENT1_PARAMS_T1_TID_SIZE_SHIFT 24
#define CDU_REG_SEGMENT1_PARAMS_T1_TID_BLOCK_WASTE_SHIFT 16
#define DORQ_REG_DB_DROP_DETAILS_ADDRESS 0x100a1cUL
+
+/* 8.10.9.0 FW */
+#define NIG_REG_VXLAN_CTRL 0x50105cUL
+#define PRS_REG_SEARCH_ROCE 0x1f040cUL
+#define PRS_REG_CM_HDR_GFT 0x1f11c8UL
+#define PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT 0
+#define PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT 8
+#define CCFC_REG_WEAK_ENABLE_VF 0x2e0704UL
+#define TCFC_REG_STRONG_ENABLE_VF 0x2d070cUL
+#define TCFC_REG_WEAK_ENABLE_VF 0x2d0704UL
+#define PRS_REG_SEARCH_GFT 0x1f11bcUL
+#define PRS_REG_LOAD_L2_FILTER 0x1f0198UL
+#define PRS_REG_GFT_CAM 0x1f1100UL
+#define PRS_REG_GFT_PROFILE_MASK_RAM 0x1f1000UL
+#define PGLUE_B_REG_MSDM_VF_SHIFT_B 0x2aa1c4UL
+#define PGLUE_B_REG_MSDM_OFFSET_MASK_B 0x2aa1c0UL
+#define PRS_REG_PKT_LEN_STAT_TAGS_NOT_COUNTED_FIRST 0x1f0a0cUL
+#define PRS_REG_SEARCH_FCOE 0x1f0408UL
+#define PGLUE_B_REG_PGL_ADDR_E8_F0 0x2aaf98UL
+#define NIG_REG_DSCP_TO_TC_MAP_ENABLE 0x5088f8UL
+#define PGLUE_B_REG_PGL_ADDR_EC_F0 0x2aaf9cUL
+#define PGLUE_B_REG_PGL_ADDR_F0_F0 0x2aafa0UL
+#define PRS_REG_ROCE_DEST_QP_MAX_PF 0x1f0430UL
+#define PGLUE_B_REG_PGL_ADDR_F4_F0 0x2aafa4UL
+#define IGU_REG_WRITE_DONE_PENDING 0x180900UL
+#define NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR 0x50196cUL
+#define PRS_REG_MSG_INFO 0x1f0a1cUL
+#define BAR0_MAP_REG_XSDM_RAM 0x1e00000UL
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 05/32] net/qede/base: add attention formatting string
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (3 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 04/32] net/qede/base: add HSI changes and register defines Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 06/32] net/qede/base: additional formatting/comment changes Rasesh Mody
` (27 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
In case of an attention from a signal that is represented by multiple bits
in the MISC AEU, add a format string that is populated with the proper bit
index, so the resulting prints show the bit name as a prefix.
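As an illustration, a standalone sketch of the resulting naming scheme; the
"GLB_PAR_%d" pattern is a made-up bit_name, only the snprintf expansion
mirrors the ecore_int.c change below:

    #include <stdio.h>

    int main(void)
    {
            /* Hypothetical AEU entry spanning several bits: its bit_name
             * is a printf-style pattern; the deassertion handler fills in
             * the bit index before printing.
             */
            const char *multi_bit_name = "GLB_PAR_%d";
            char bit_name[30];
            int bit = 3; /* first set bit found in the bitmask */

            snprintf(bit_name, sizeof(bit_name), multi_bit_name, bit);
            printf("Deasserted attention `%s'\n", bit_name); /* GLB_PAR_3 */
            return 0;
    }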
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/base/bcm_osal.c | 21 +++++++++++++++++++++
drivers/net/qede/base/bcm_osal.h | 6 ++++++
drivers/net/qede/base/ecore_int.c | 35 +++++++++++++++++++++++++++++------
3 files changed, 56 insertions(+), 6 deletions(-)
diff --git a/drivers/net/qede/base/bcm_osal.c b/drivers/net/qede/base/bcm_osal.c
index 67270fd..d53dfee 100644
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -65,6 +65,27 @@ inline bool qede_test_bit(u32 nr, unsigned long *addr)
return res;
}
+static inline u32 qede_ffb(unsigned long word)
+{
+ unsigned long first_bit;
+
+ first_bit = __builtin_ffsl(word);
+ return first_bit ? (first_bit - 1) : OSAL_BITS_PER_UL;
+}
+
+inline u32 qede_find_first_bit(unsigned long *addr, u32 limit)
+{
+ u32 i;
+ u32 nwords = 0;
+ OSAL_BUILD_BUG_ON(!limit);
+ nwords = (limit - 1) / OSAL_BITS_PER_UL + 1;
+ for (i = 0; i < nwords; i++)
+ if (addr[i] != 0)
+ break;
+
+ return (i == nwords) ? limit : i * OSAL_BITS_PER_UL + qede_ffb(addr[i]);
+}
+
static inline u32 qede_ffz(unsigned long word)
{
unsigned long first_zero;
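A quick standalone check of the 1-based __builtin_ffsl() result being turned
into a 0-based bit index, as qede_ffb() above does (not driver code):

    #include <stdio.h>

    #define BITS_PER_UL (8 * sizeof(unsigned long))

    /* Same 1-based to 0-based conversion as qede_ffb() */
    static unsigned int ffb(unsigned long word)
    {
            int first_bit = __builtin_ffsl((long)word);

            return first_bit ? (unsigned int)(first_bit - 1)
                             : (unsigned int)BITS_PER_UL;
    }

    int main(void)
    {
            printf("%u\n", ffb(0x18UL)); /* bits 3 and 4 set -> prints 3 */
            printf("%u\n", ffb(0x0UL));  /* no bits set -> 64 on LP64 */
            return 0;
    }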
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 3e2aeb0..a535058 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -301,6 +301,10 @@ bool qede_test_bit(u32, unsigned long *);
#define OSAL_TEST_BIT(bit, bitmap) \
qede_test_bit(bit, bitmap)
+u32 qede_find_first_bit(unsigned long *, u32);
+#define OSAL_FIND_FIRST_BIT(bitmap, length) \
+ qede_find_first_bit(bitmap, length)
+
u32 qede_find_first_zero_bit(unsigned long *, u32);
#define OSAL_FIND_FIRST_ZERO_BIT(bitmap, length) \
qede_find_first_zero_bit(bitmap, length)
@@ -377,6 +381,8 @@ u32 qede_osal_log2(u32);
#define OSAL_ARRAY_SIZE(arr) RTE_DIM(arr)
#define OSAL_SPRINTF(name, pattern, ...) \
sprintf(name, pattern, ##__VA_ARGS__)
+#define OSAL_SNPRINTF(buf, size, format, ...) \
+ snprintf(buf, size, format, ##__VA_ARGS__)
#define OSAL_STRLEN(string) strlen(string)
#define OSAL_STRCPY(dst, string) strcpy(dst, string)
#define OSAL_STRNCPY(dst, string, len) strncpy(dst, string, len)
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index e4c002a..04c4947 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -783,7 +783,9 @@ static void ecore_int_deassertion_print_bit(struct ecore_hwfn *p_hwfn,
static enum _ecore_status_t
ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
struct aeu_invert_reg_bit *p_aeu,
- u32 aeu_en_reg, u32 bitmask)
+ u32 aeu_en_reg,
+ const char *p_bit_name,
+ u32 bitmask)
{
enum _ecore_status_t rc = ECORE_INVAL;
u32 val, mask;
@@ -795,12 +797,12 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
#endif
DP_INFO(p_hwfn, "Deasserted attention `%s'[%08x]\n",
- p_aeu->bit_name, bitmask);
+ p_bit_name, bitmask);
/* Call callback before clearing the interrupt status */
if (p_aeu->cb) {
DP_INFO(p_hwfn, "`%s (attention)': Calling Callback function\n",
- p_aeu->bit_name);
+ p_bit_name);
rc = p_aeu->cb(p_hwfn);
}
@@ -812,7 +814,7 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
/* Reach assertion if attention is fatal */
if (rc != ECORE_SUCCESS) {
DP_NOTICE(p_hwfn, true, "`%s': Fatal attention\n",
- p_aeu->bit_name);
+ p_bit_name);
ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_HW_ATTN);
}
@@ -824,7 +826,7 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg);
ecore_wr(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg, (val & mask));
DP_INFO(p_hwfn, "`%s' - Disabled future attentions\n",
- p_aeu->bit_name);
+ p_bit_name);
}
if (p_aeu->flags & (ATTENTION_FW_DUMP | ATTENTION_PANIC_DUMP)) {
@@ -942,8 +944,8 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
* previous assertion.
*/
for (j = 0, bit_idx = 0; bit_idx < 32; j++) {
+ unsigned long bitmask;
u8 bit, bit_len;
- u32 bitmask;
p_aeu = &sb_attn_sw->p_aeu_desc[i].bits[j];
@@ -961,10 +963,31 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
bitmask = bits & (((1 << bit_len) - 1) << bit);
if (bitmask) {
+ u32 flags = p_aeu->flags;
+ char bit_name[30];
+
+ bit = (u8)OSAL_FIND_FIRST_BIT(&bitmask,
+ bit_len);
+
+				/* Some bits represent more than
+				 * a single interrupt. Correctly print
+ * their name.
+ */
+ if (ATTENTION_LENGTH(flags) > 2 ||
+ ((flags & ATTENTION_PAR_INT) &&
+ ATTENTION_LENGTH(flags) > 1))
+ OSAL_SNPRINTF(bit_name, 30,
+ p_aeu->bit_name,
+ bit);
+ else
+ OSAL_STRNCPY(bit_name,
+ p_aeu->bit_name,
+ 30);
/* Handle source of the attention */
ecore_int_deassertion_aeu_bit(p_hwfn,
p_aeu,
aeu_en,
+ bit_name,
bitmask);
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 06/32] net/qede/base: additional formatting/comment changes
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (4 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 05/32] net/qede/base: add attention formatting string Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 07/32] net/qede: fix 32 bit compilation Rasesh Mody
` (26 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Change details:
- adds new comments
- modifies some of the existing comments
- abstracts repeated code into macros (see the sketch below)
- splits long lines
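For example, the repeated iterate-over-active-ILT-clients loop is folded into
a single macro; a standalone sketch of the idiom (the struct and values are
made up, the macro mirrors for_each_ilt_valid_client in the ecore_cxt.c hunk
below):

    #include <stdio.h>

    struct ilt_client { int active; int first, last; };

    /* Iterate clients, skipping inactive ones, without open-coding the
     * check at every call site.
     */
    #define for_each_valid_client(pos, clients, num) \
            for ((pos) = 0; (pos) < (num); (pos)++) \
                    if (!(clients)[(pos)].active) { \
                            continue; \
                    } else

    int main(void)
    {
            struct ilt_client clients[3] = { {1, 0, 4}, {0, 0, 0}, {1, 5, 9} };
            int i, lines = 0;

            for_each_valid_client(i, clients, 3)
                    lines += clients[i].last - clients[i].first + 1;

            printf("total lines: %d\n", lines); /* 5 + 5 = 10 */
            return 0;
    }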
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/base/ecore.h | 3 +-
drivers/net/qede/base/ecore_chain.h | 14 ++---
drivers/net/qede/base/ecore_cxt.c | 52 +++++++++----------
drivers/net/qede/base/ecore_cxt.h | 3 +-
drivers/net/qede/base/ecore_dcbx.c | 6 ++-
drivers/net/qede/base/ecore_dev.c | 70 +++++++++++++------------
drivers/net/qede/base/ecore_dev_api.h | 33 ++++++++----
drivers/net/qede/base/ecore_hsi_eth.h | 8 +--
drivers/net/qede/base/ecore_hw.c | 2 +-
drivers/net/qede/base/ecore_hw.h | 31 +++++++----
drivers/net/qede/base/ecore_hw_defs.h | 22 ++++----
drivers/net/qede/base/ecore_init_fw_funcs.c | 3 ++
drivers/net/qede/base/ecore_init_fw_funcs.h | 80 +++++++++++++++++++----------
drivers/net/qede/base/ecore_init_ops.h | 8 ++-
drivers/net/qede/base/ecore_int.c | 9 ++--
drivers/net/qede/base/ecore_iov_api.h | 57 ++++++++++++++------
drivers/net/qede/base/ecore_l2.c | 25 +++++----
drivers/net/qede/base/ecore_l2_api.h | 9 ++--
drivers/net/qede/base/ecore_mcp.c | 3 +-
drivers/net/qede/base/ecore_sp_commands.c | 12 ++---
drivers/net/qede/base/ecore_spq.c | 12 ++---
drivers/net/qede/base/ecore_spq.h | 21 +++++---
drivers/net/qede/base/eth_common.h | 15 ++++--
drivers/net/qede/base/nvm_cfg.h | 17 ++++--
24 files changed, 319 insertions(+), 196 deletions(-)
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b9127de..9f456e3 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -94,7 +94,6 @@ static OSAL_INLINE u32 DB_ADDR(u32 cid, u32 DEMS)
return db_addr;
}
-/* @DPDK: This is a backport from latest ecore for TSS fix */
static OSAL_INLINE u32 DB_ADDR_VF(u32 cid, u32 DEMS)
{
u32 db_addr = FIELD_VALUE(DB_LEGACY_ADDR_DEMS, DEMS) |
@@ -107,6 +106,7 @@ static OSAL_INLINE u32 DB_ADDR_VF(u32 cid, u32 DEMS)
((sizeof(type_name) + (u32)(1 << (p_hwfn->p_dev->cache_shift)) - 1) & \
~((1 << (p_hwfn->p_dev->cache_shift)) - 1))
+#ifndef LINUX_REMOVE
#ifndef U64_HI
#define U64_HI(val) ((u32)(((u64)(val)) >> 32))
#endif
@@ -114,6 +114,7 @@ static OSAL_INLINE u32 DB_ADDR_VF(u32 cid, u32 DEMS)
#ifndef U64_LO
#define U64_LO(val) ((u32)(((u64)(val)) & 0xffffffff))
#endif
+#endif
#ifndef __EXTRACT__LINUX__
enum DP_LEVEL {
diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index bc18c41..56b7b4d 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -307,21 +307,23 @@ ecore_chain_advance_page(struct ecore_chain *p_chain, void **p_next_elem,
(((p)->u.chain32.idx & (p)->elem_per_page_mask) == (p)->usable_per_page)
#define is_unusable_next_idx(p, idx) \
- ((((p)->u.chain16.idx + 1) & (p)->elem_per_page_mask) == \
- (p)->usable_per_page)
+ ((((p)->u.chain16.idx + 1) & \
+ (p)->elem_per_page_mask) == (p)->usable_per_page)
#define is_unusable_next_idx_u32(p, idx) \
- ((((p)->u.chain32.idx + 1) & (p)->elem_per_page_mask) \
- == (p)->usable_per_page)
+ ((((p)->u.chain32.idx + 1) & \
+ (p)->elem_per_page_mask) == (p)->usable_per_page)
#define test_and_skip(p, idx) \
do { \
if (is_chain_u16(p)) { \
if (is_unusable_idx(p, idx)) \
- (p)->u.chain16.idx += (p)->elem_unusable; \
+ (p)->u.chain16.idx += \
+ (p)->elem_unusable; \
} else { \
if (is_unusable_idx_u32(p, idx)) \
- (p)->u.chain32.idx += (p)->elem_unusable; \
+ (p)->u.chain32.idx += \
+ (p)->elem_unusable; \
} \
} while (0)
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 415d1c8..22d0b25 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -378,7 +378,7 @@ static void ecore_ilt_cli_blk_fill(struct ecore_ilt_client_cfg *p_cli,
{
u32 ilt_size = ILT_PAGE_IN_BYTES(p_cli->p_size.val);
- /* verfiy called once for each block */
+ /* verify that it's called once for each block */
if (p_blk->total_size)
return;
@@ -405,7 +405,8 @@ static void ecore_ilt_cli_adv_line(struct ecore_hwfn *p_hwfn,
p_cli->last.val = *p_line - 1;
DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
- "ILT[Client %d] - Lines: [%08x - %08x]. Block - Size %08x [Real %08x] Start line %d\n",
+ "ILT[Client %d] - Lines: [%08x - %08x]. Block - Size %08x"
+ " [Real %08x] Start line %d\n",
client_id, p_cli->first.val, p_cli->last.val,
p_blk->total_size, p_blk->real_size_in_page,
p_blk->start_line);
@@ -453,7 +454,7 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
p_mngr->pf_start_line = RESC_START(p_hwfn, ECORE_ILT);
DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
- "hwfn [%d] - Set context manager starting line to be 0x%08x\n",
+ "hwfn [%d] - Set context mngr starting line to be 0x%08x\n",
p_hwfn->my_id, p_hwfn->p_cxt_mngr->pf_start_line);
/* CDUC */
@@ -797,16 +798,20 @@ t2_fail:
return rc;
}
+#define for_each_ilt_valid_client(pos, clients) \
+ for (pos = 0; pos < ILT_CLI_MAX; pos++) \
+ if (!clients[pos].active) { \
+ continue; \
+ } else \
+
+
/* Total number of ILT lines used by this PF */
static u32 ecore_cxt_ilt_shadow_size(struct ecore_ilt_client_cfg *ilt_clients)
{
u32 size = 0;
u32 i;
- for (i = 0; i < ILT_CLI_MAX; i++)
- if (!ilt_clients[i].active)
- continue;
- else
+ for_each_ilt_valid_client(i, ilt_clients)
size += (ilt_clients[i].last.val -
ilt_clients[i].first.val + 1);
@@ -876,9 +881,9 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
ilt_shadow[line].size = size;
DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
- "ILT shadow: Line [%d] Physical 0x%" PRIx64
+ "ILT shadow: Line [%d] Physical 0x%lx"
" Virtual %p Size %d\n",
- line, (u64)p_phys, p_virt, size);
+ line, (unsigned long)p_phys, p_virt, size);
sz_left -= size;
line++;
@@ -892,15 +897,16 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
struct ecore_ilt_client_cfg *clients = p_mngr->clients;
struct ecore_ilt_cli_blk *p_blk;
- enum _ecore_status_t rc;
u32 size, i, j, k;
+ enum _ecore_status_t rc;
size = ecore_cxt_ilt_shadow_size(clients);
p_mngr->ilt_shadow = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
size * sizeof(struct ecore_dma_mem));
if (!p_mngr->ilt_shadow) {
- DP_NOTICE(p_hwfn, true, "Failed to allocate ilt shadow table");
+ DP_NOTICE(p_hwfn, true,
+ "Failed to allocate ilt shadow table\n");
rc = ECORE_NOMEM;
goto ilt_shadow_fail;
}
@@ -909,10 +915,7 @@ static enum _ecore_status_t ecore_ilt_shadow_alloc(struct ecore_hwfn *p_hwfn)
"Allocated 0x%x bytes for ilt shadow\n",
(u32)(size * sizeof(struct ecore_dma_mem)));
- for (i = 0; i < ILT_CLI_MAX; i++)
- if (!clients[i].active) {
- continue;
- } else {
+ for_each_ilt_valid_client(i, clients) {
for (j = 0; j < ILT_CLI_PF_BLOCKS; j++) {
p_blk = &clients[i].pf_blks[j];
rc = ecore_ilt_blk_alloc(p_hwfn, p_blk, i, 0);
@@ -1362,10 +1365,7 @@ static void ecore_ilt_bounds_init(struct ecore_hwfn *p_hwfn)
int i;
ilt_clients = p_hwfn->p_cxt_mngr->clients;
- for (i = 0; i < ILT_CLI_MAX; i++)
- if (!ilt_clients[i].active) {
- continue;
- } else {
+ for_each_ilt_valid_client(i, ilt_clients) {
STORE_RT_REG(p_hwfn,
ilt_clients[i].first.reg,
ilt_clients[i].first.val);
@@ -1448,10 +1448,7 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
p_shdw = p_mngr->ilt_shadow;
clients = p_hwfn->p_cxt_mngr->clients;
- for (i = 0; i < ILT_CLI_MAX; i++)
- if (!clients[i].active) {
- continue;
- } else {
+ for_each_ilt_valid_client(i, clients) {
/* Client's 1st val and RT array are absolute, ILT shadows'
* lines are relative.
*/
@@ -1474,9 +1471,10 @@ static void ecore_ilt_init_pf(struct ecore_hwfn *p_hwfn)
DP_VERBOSE(p_hwfn, ECORE_MSG_ILT,
"Setting RT[0x%08x] from"
" ILT[0x%08x] [Client is %d] to"
- " Physical addr: 0x%" PRIx64 "\n",
+ " Physical addr: 0x%lx\n",
rt_offst, line, i,
- (u64)(p_shdw[line].p_phys >> 12));
+ (unsigned long)(p_shdw[line].
+ p_phys >> 12));
}
STORE_RT_REG_AGG(p_hwfn, rt_offst, ilt_hw_entry);
@@ -1557,7 +1555,7 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.pf_cids);
SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0);
SET_FIELD(cfg_word, TM_CFG_PARENT_PF, 0); /* n/a for PF */
- SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0);
+ SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0); /* scan all */
rt_reg = TM_REG_CONFIG_CONN_MEM_RT_OFFSET +
(sizeof(cfg_word) / sizeof(u32)) *
@@ -1650,7 +1648,7 @@ enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
p_mngr->acquired[type].max_count);
if (rel_cid >= p_mngr->acquired[type].max_count) {
- DP_NOTICE(p_hwfn, false, "no CID available for protocol %d",
+ DP_NOTICE(p_hwfn, false, "no CID available for protocol %d\n",
type);
return ECORE_NORESOURCES;
}
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index 1ac95f9..ba02410 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -152,6 +152,7 @@ enum _ecore_status_t ecore_cxt_free_proto_ilt(struct ecore_hwfn *p_hwfn,
#define ECORE_CTX_FL_MEM 1
enum _ecore_status_t ecore_cxt_get_task_ctx(struct ecore_hwfn *p_hwfn,
u32 tid,
- u8 ctx_type, void **task_ctx);
+ u8 ctx_type,
+ void **task_ctx);
#endif /* _ECORE_CID_ */
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 18843c4..db73658 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -348,14 +348,16 @@ ecore_dcbx_copy_mib(struct ecore_hwfn *p_hwfn,
read_count++;
DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
- "mib type = %d, try count = %d prefix seq num = %d suffix seq num = %d\n",
+ "mib type = %d, try count = %d prefix seq num ="
+ " %d suffix seq num = %d\n",
type, read_count, prefix_seq_num, suffix_seq_num);
} while ((prefix_seq_num != suffix_seq_num) &&
(read_count < ECORE_DCBX_MAX_MIB_READ_TRY));
if (read_count >= ECORE_DCBX_MAX_MIB_READ_TRY) {
DP_ERR(p_hwfn,
- "MIB read err, mib type = %d, try count = %d prefix seq num = %d suffix seq num = %d\n",
+ "MIB read err, mib type = %d, try count ="
+ " %d prefix seq num = %d suffix seq num = %d\n",
type, read_count, prefix_seq_num, suffix_seq_num);
rc = ECORE_IO;
}
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 9e32279..fd38215 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -323,8 +323,8 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
- enum _ecore_status_t rc;
bool b_rc;
+ enum _ecore_status_t rc;
/* qm_info is allocated in ecore_init_qm_info() which is already called
* from ecore_resc_alloc() or previous call of ecore_qm_reconf().
@@ -467,12 +467,20 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
goto alloc_no_mem;
p_hwfn->p_consq = p_consq;
+#ifdef CONFIG_ECORE_LL2
+ if (p_hwfn->using_ll2) {
+ p_ll2_info = ecore_ll2_alloc(p_hwfn);
+ if (!p_ll2_info)
+ goto alloc_no_mem;
+ p_hwfn->p_ll2_info = p_ll2_info;
+ }
+#endif
+
/* DMA info initialization */
rc = ecore_dmae_info_alloc(p_hwfn);
if (rc) {
DP_NOTICE(p_hwfn, true,
- "Failed to allocate memory for"
- " dmae_info structure\n");
+ "Failed to allocate memory for dmae_info structure\n");
goto alloc_err;
}
@@ -480,7 +488,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
rc = ecore_dcbx_info_alloc(p_hwfn);
if (rc) {
DP_NOTICE(p_hwfn, true,
- "Failed to allocate memory for dcbxstruct\n");
+ "Failed to allocate memory for dcbx structure\n");
goto alloc_err;
}
}
@@ -558,9 +566,11 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn,
command |= SDM_COMP_TYPE_AGG_INT << SDM_OP_GEN_COMP_TYPE_SHIFT;
/* Make sure notification is not set before initiating final cleanup */
+
if (REG_RD(p_hwfn, addr)) {
DP_NOTICE(p_hwfn, false,
- "Unexpected; Found final cleanup notification "
+ "Unexpected; Found final cleanup notification");
+ DP_NOTICE(p_hwfn, false,
" before initiating final cleanup\n");
REG_WR(p_hwfn, addr, 0);
}
@@ -742,11 +752,11 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
int hw_mode)
{
struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
- enum _ecore_status_t rc = ECORE_SUCCESS;
struct ecore_dev *p_dev = p_hwfn->p_dev;
u8 vf_id, max_num_vfs;
u16 num_pfs, pf_id;
u32 concrete_fid;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
ecore_init_cau_rt_data(p_dev);
@@ -906,11 +916,15 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
return;
}
+ /* XLPORT MAC MODE *//* 0 Quad, 4 Single... */
ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MODE_REG, (0x4 << 4) | 0x4, 1,
port);
ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_MAC_CONTROL, 0, 1, port);
+ /* XLMAC: SOFT RESET */
ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x40, 0, port);
+ /* XLMAC: Port Speed >= 10Gbps */
ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_MODE, 0x40, 0, port);
+ /* XLMAC: Max Size */
ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_RX_MAX_SIZE, 0x3fff, 0, port);
ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_TX_CTRL,
0x01000000800ULL | (0xa << 12) | ((u64)1 << 38),
@@ -1103,13 +1117,12 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
bool b_hw_start,
enum ecore_int_mode int_mode, bool allow_npar_tx_switch)
{
- enum _ecore_status_t rc = ECORE_SUCCESS;
u8 rel_pf_id = p_hwfn->rel_pf_id;
u32 prs_reg;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
u16 ctrl;
int pos;
- /* ILT/DQ/CM/QM */
if (p_hwfn->mcp_info) {
struct ecore_mcp_function_info *p_info;
@@ -1344,6 +1357,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
if (rc)
break;
+#ifndef REAL_ASIC_ONLY
if (ENABLE_EAGLE_ENG1_WORKAROUND(p_hwfn)) {
struct init_nig_pri_tc_map_req tc_map;
@@ -1360,7 +1374,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
p_hwfn->p_main_ptt,
&tc_map);
}
- /* fallthrough */
+#endif
+ /* Fall into */
case FW_MSG_CODE_DRV_LOAD_FUNCTION:
rc = ecore_hw_init_pf(p_hwfn, p_hwfn->p_main_ptt,
p_tunn, p_hwfn->hw_info.hw_mode,
@@ -1374,7 +1389,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
if (rc != ECORE_SUCCESS)
DP_NOTICE(p_hwfn, true,
- "init phase failed loadcode 0x%x (rc %d)\n",
+ "init phase failed for loadcode 0x%x (rc %d)\n",
load_code, rc);
/* ACK mfw regardless of success or failure of initialization */
@@ -1391,8 +1406,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
/* send DCBX attention request command */
DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
- "sending phony dcbx set command to trigger DCBx"
- " attention handling\n");
+ "sending phony dcbx set command to trigger DCBx attention handling\n");
mfw_rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
DRV_MSG_CODE_SET_DCBX,
1 << DRV_MB_PARAM_DCBX_NOTIFY_SHIFT,
@@ -1419,8 +1433,8 @@ static OSAL_INLINE void ecore_hw_timers_stop(struct ecore_dev *p_dev,
/* close timers */
ecore_wr(p_hwfn, p_ptt, TM_REG_PF_ENABLE_CONN, 0x0);
ecore_wr(p_hwfn, p_ptt, TM_REG_PF_ENABLE_TASK, 0x0);
- for (i = 0; i < ECORE_HW_STOP_RETRY_LIMIT &&
- !p_dev->recov_in_prog; i++) {
+ for (i = 0; i < ECORE_HW_STOP_RETRY_LIMIT && !p_dev->recov_in_prog;
+ i++) {
if ((!ecore_rd(p_hwfn, p_ptt,
TM_REG_PF_SCAN_ACTIVE_CONN)) &&
(!ecore_rd(p_hwfn, p_ptt, TM_REG_PF_SCAN_ACTIVE_TASK)))
@@ -1433,8 +1447,7 @@ static OSAL_INLINE void ecore_hw_timers_stop(struct ecore_dev *p_dev,
}
if (i == ECORE_HW_STOP_RETRY_LIMIT)
DP_NOTICE(p_hwfn, true,
- "Timers linear scans are not over"
- " [Connection %02x Tasks %02x]\n",
+ "Timers linear scans are not over [Connection %02x Tasks %02x]\n",
(u8)ecore_rd(p_hwfn, p_ptt,
TM_REG_PF_SCAN_ACTIVE_CONN),
(u8)ecore_rd(p_hwfn, p_ptt,
@@ -1475,9 +1488,7 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
rc = ecore_sp_pf_stop(p_hwfn);
if (rc)
DP_NOTICE(p_hwfn, true,
- "Failed to close PF against FW. Continue to"
- " stop HW to prevent illegal host access"
- " by the device\n");
+ "Failed to close PF against FW. Continue to stop HW to prevent illegal host access by the device\n");
/* perform debug action after PF stop was sent */
OSAL_AFTER_PF_STOP((void *)p_hwfn->p_dev, p_hwfn->my_id);
@@ -1938,8 +1949,7 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
link->loopback_mode = 0;
DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
- "Read default link: Speed 0x%08x, Adv. Speed 0x%08x,"
- " AN: 0x%02x, PAUSE AN: 0x%02x\n",
+ "Read default link: Speed 0x%08x, Adv. Speed 0x%08x, AN: 0x%02x, PAUSE AN: 0x%02x\n",
link->speed.forced_speed, link->speed.advertised_speeds,
link->speed.autoneg, link->pause.autoneg);
@@ -2217,8 +2227,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
MISCS_REG_CHIP_METAL);
MASK_FIELD(CHIP_METAL, p_dev->chip_metal);
DP_INFO(p_dev->hwfns,
- "Chip details - %s%d, Num: %04x Rev: %04x Bond id: %04x"
- " Metal: %04x\n",
+ "Chip details - %s%d, Num: %04x Rev: %04x Bond id: %04x Metal: %04x\n",
ECORE_IS_BB(p_dev) ? "BB" : "AH",
CHIP_REV_IS_A0(p_dev) ? 0 : 1,
p_dev->chip_num, p_dev->chip_rev, p_dev->chip_bond_id,
@@ -2527,8 +2536,7 @@ ecore_chain_alloc_sanity_check(struct ecore_dev *p_dev,
(cnt_type == ECORE_CHAIN_CNT_TYPE_U32 &&
chain_size > ECORE_U32_MAX)) {
DP_NOTICE(p_dev, true,
- "The actual chain size (0x%lx) is larger than"
- " the maximal possible value\n",
+ "The actual chain size (0x%lx) is larger than the maximal possible value\n",
(unsigned long)chain_size);
return ECORE_INVAL;
}
@@ -2706,8 +2714,7 @@ enum _ecore_status_t ecore_fw_l2_queue(struct ecore_hwfn *p_hwfn,
min = (u16)RESC_START(p_hwfn, ECORE_L2_QUEUE);
max = min + RESC_NUM(p_hwfn, ECORE_L2_QUEUE);
DP_NOTICE(p_hwfn, true,
- "l2_queue id [%d] is not valid, available"
- " indices [%d - %d]\n",
+ "l2_queue id [%d] is not valid, available indices [%d - %d]\n",
src_id, min, max);
return ECORE_INVAL;
@@ -2727,8 +2734,7 @@ enum _ecore_status_t ecore_fw_vport(struct ecore_hwfn *p_hwfn,
min = (u8)RESC_START(p_hwfn, ECORE_VPORT);
max = min + RESC_NUM(p_hwfn, ECORE_VPORT);
DP_NOTICE(p_hwfn, true,
- "vport id [%d] is not valid, available"
- " indices [%d - %d]\n",
+ "vport id [%d] is not valid, available indices [%d - %d]\n",
src_id, min, max);
return ECORE_INVAL;
@@ -2748,7 +2754,7 @@ enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
min = (u8)RESC_START(p_hwfn, ECORE_RSS_ENG);
max = min + RESC_NUM(p_hwfn, ECORE_RSS_ENG);
DP_NOTICE(p_hwfn, true,
- "rss_eng id [%d] is not valid,avail idx [%d - %d]\n",
+ "rss_eng id [%d] is not valid, available indices [%d - %d]\n",
src_id, min, max);
return ECORE_INVAL;
@@ -3333,7 +3339,7 @@ int ecore_configure_vport_wfq(struct ecore_dev *p_dev, u16 vp_id, u32 rate)
/* TBD - for multiple hardware functions - that is 100 gig */
if (p_dev->num_hwfns > 1) {
DP_NOTICE(p_dev, false,
- "WFQ configuration is not supported for this dev\n");
+ "WFQ configuration is not supported for this device\n");
return rc;
}
@@ -3367,7 +3373,7 @@ void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
/* TBD - for multiple hardware functions - that is 100 gig */
if (p_dev->num_hwfns > 1) {
DP_VERBOSE(p_dev, ECORE_MSG_LINK,
- "WFQ configuration is not supported for this dev\n");
+ "WFQ configuration is not supported for this device\n");
return;
}
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 1b78c32..77f4869 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -24,7 +24,9 @@ struct ecore_tunn_start_params;
* @param dp_ctx
*/
void ecore_init_dp(struct ecore_dev *p_dev,
- u32 dp_module, u8 dp_level, void *dp_ctx);
+ u32 dp_module,
+ u8 dp_level,
+ void *dp_ctx);
/**
* @brief ecore_init_struct - initialize the device structure to
@@ -172,7 +174,8 @@ struct ecore_ptt *ecore_ptt_acquire(struct ecore_hwfn *p_hwfn);
* @param p_hwfn
* @param p_ptt
*/
-void ecore_ptt_release(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
+void ecore_ptt_release(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
#ifndef __EXTRACT__LINUX__
struct ecore_eth_stats {
@@ -290,7 +293,9 @@ enum _ecore_status_t
ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u64 source_addr,
- u32 grc_addr, u32 size_in_dwords, u32 flags);
+ u32 grc_addr,
+ u32 size_in_dwords,
+ u32 flags);
/**
* @brief ecore_dmae_grc2host - Read data from dmae data offset
@@ -306,7 +311,9 @@ enum _ecore_status_t
ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 grc_addr,
- dma_addr_t dest_addr, u32 size_in_dwords, u32 flags);
+ dma_addr_t dest_addr,
+ u32 size_in_dwords,
+ u32 flags);
/**
* @brief ecore_dmae_host2host - copy data from to source address
@@ -324,7 +331,8 @@ ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
dma_addr_t source_addr,
dma_addr_t dest_addr,
- u32 size_in_dwords, struct ecore_dmae_params *p_params);
+ u32 size_in_dwords,
+ struct ecore_dmae_params *p_params);
/**
* @brief ecore_chain_alloc - Allocate and initialize a chain
@@ -344,7 +352,8 @@ ecore_chain_alloc(struct ecore_dev *p_dev,
enum ecore_chain_mode mode,
enum ecore_chain_cnt_type cnt_type,
u32 num_elems,
- osal_size_t elem_size, struct ecore_chain *p_chain);
+ osal_size_t elem_size,
+ struct ecore_chain *p_chain);
/**
* @brief ecore_chain_free - Free chain DMA memory
@@ -352,7 +361,8 @@ ecore_chain_alloc(struct ecore_dev *p_dev,
* @param p_hwfn
* @param p_chain
*/
-void ecore_chain_free(struct ecore_dev *p_dev, struct ecore_chain *p_chain);
+void ecore_chain_free(struct ecore_dev *p_dev,
+ struct ecore_chain *p_chain);
/**
* @@brief ecore_fw_l2_queue - Get absolute L2 queue ID
@@ -364,7 +374,8 @@ void ecore_chain_free(struct ecore_dev *p_dev, struct ecore_chain *p_chain);
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_fw_l2_queue(struct ecore_hwfn *p_hwfn,
- u16 src_id, u16 *dst_id);
+ u16 src_id,
+ u16 *dst_id);
/**
* @@brief ecore_fw_vport - Get absolute vport ID
@@ -376,7 +387,8 @@ enum _ecore_status_t ecore_fw_l2_queue(struct ecore_hwfn *p_hwfn,
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_fw_vport(struct ecore_hwfn *p_hwfn,
- u8 src_id, u8 *dst_id);
+ u8 src_id,
+ u8 *dst_id);
/**
* @@brief ecore_fw_rss_eng - Get absolute RSS engine ID
@@ -388,7 +400,8 @@ enum _ecore_status_t ecore_fw_vport(struct ecore_hwfn *p_hwfn,
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
- u8 src_id, u8 *dst_id);
+ u8 src_id,
+ u8 *dst_id);
/**
* @brief ecore_llh_add_mac_filter - configures a MAC filter in llh
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 78cc55d..dd94d31 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -872,7 +872,7 @@ struct eth_vport_tpa_param {
u8 tpa_ipv6_tunn_en_flg /* Enable TPA for IPv6 over tunnel */;
u8 tpa_pkt_split_flg;
u8 tpa_hdr_data_split_flg
-/* If set, put header of first TPA segment on bd and data on SGE */
+ /* If set, put header of first TPA segment on bd and data on SGE */
;
u8 tpa_gro_consistent_flg
/* If set, GRO data consistent will checked for TPA continue */;
@@ -882,10 +882,10 @@ struct eth_vport_tpa_param {
__le16 tpa_min_size_to_start
/* minimum TCP payload size for a packet to start aggregation */;
__le16 tpa_min_size_to_cont
-/* minimum TCP payload size for a packet to continue aggregation */
+ /* minimum TCP payload size for a packet to continue aggregation */
;
u8 max_buff_num
-/* maximal number of buffers that can be used for one aggregation */
+ /* maximal number of buffers that can be used for one aggregation */
;
u8 reserved;
};
@@ -998,7 +998,7 @@ struct rx_queue_start_ramrod_data {
};
/*
- * Ramrod data for rx queue start ramrod
+ * Ramrod data for rx queue stop ramrod
*/
struct rx_queue_stop_ramrod_data {
__le16 rx_queue_id /* ID of RX queue */;
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 72bc6de..04ec1ea 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -580,8 +580,8 @@ void ecore_dmae_info_free(struct ecore_hwfn *p_hwfn)
static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
{
- enum _ecore_status_t ecore_status = ECORE_SUCCESS;
u32 wait_cnt_limit = 10000, wait_cnt = 0;
+ enum _ecore_status_t ecore_status = ECORE_SUCCESS;
#ifndef ASIC_ONLY
u32 factor = (CHIP_REV_IS_EMUL(p_hwfn->p_dev) ?
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 9603c99..154eb3c 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -105,7 +105,8 @@ void ecore_ptt_pool_free(struct ecore_hwfn *p_hwfn);
*
* @return u32
*/
-u32 ecore_ptt_get_hw_addr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
+u32 ecore_ptt_get_hw_addr(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
/**
* @brief ecore_ptt_get_bar_addr - Get PPT's external BAR address
@@ -125,7 +126,8 @@ u32 ecore_ptt_get_bar_addr(struct ecore_ptt *p_ptt);
* @param p_ptt
*/
void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 new_hw_addr);
+ struct ecore_ptt *p_ptt,
+ u32 new_hw_addr);
/**
* @brief ecore_get_reserved_ptt - Get a specific reserved PTT
@@ -147,7 +149,9 @@ struct ecore_ptt *ecore_get_reserved_ptt(struct ecore_hwfn *p_hwfn,
* @param hw_addr
*/
void ecore_wr(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 hw_addr, u32 val);
+ struct ecore_ptt *p_ptt,
+ u32 hw_addr,
+ u32 val);
/**
* @brief ecore_rd - Read value from BAR using the given ptt
@@ -157,7 +161,9 @@ void ecore_wr(struct ecore_hwfn *p_hwfn,
* @param val
* @param hw_addr
*/
-u32 ecore_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr);
+u32 ecore_rd(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u32 hw_addr);
/**
* @brief ecore_memcpy_from - copy n bytes from BAR using the given
@@ -171,7 +177,9 @@ u32 ecore_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr);
*/
void ecore_memcpy_from(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- void *dest, u32 hw_addr, osal_size_t n);
+ void *dest,
+ u32 hw_addr,
+ osal_size_t n);
/**
* @brief ecore_memcpy_to - copy n bytes to BAR using the given
@@ -185,7 +193,9 @@ void ecore_memcpy_from(struct ecore_hwfn *p_hwfn,
*/
void ecore_memcpy_to(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- u32 hw_addr, void *src, osal_size_t n);
+ u32 hw_addr,
+ void *src,
+ osal_size_t n);
/**
* @brief ecore_fid_pretend - pretend to another function when
* accessing the ptt window. There is no way to unpretend
@@ -198,7 +208,8 @@ void ecore_memcpy_to(struct ecore_hwfn *p_hwfn,
* either pf / vf, port/path fields are don't care.
*/
void ecore_fid_pretend(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u16 fid);
+ struct ecore_ptt *p_ptt,
+ u16 fid);
/**
* @brief ecore_port_pretend - pretend to another port when
@@ -209,7 +220,8 @@ void ecore_fid_pretend(struct ecore_hwfn *p_hwfn,
* @param port_id - the port to pretend to
*/
void ecore_port_pretend(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u8 port_id);
+ struct ecore_ptt *p_ptt,
+ u8 port_id);
/**
* @brief ecore_port_unpretend - cancel any previously set port
@@ -218,7 +230,8 @@ void ecore_port_pretend(struct ecore_hwfn *p_hwfn,
* @param p_hwfn
* @param p_ptt
*/
-void ecore_port_unpretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
+void ecore_port_unpretend(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
/**
* @brief ecore_vfid_to_concrete - build a concrete FID for a
diff --git a/drivers/net/qede/base/ecore_hw_defs.h b/drivers/net/qede/base/ecore_hw_defs.h
index 19816ff..deb8e34 100644
--- a/drivers/net/qede/base/ecore_hw_defs.h
+++ b/drivers/net/qede/base/ecore_hw_defs.h
@@ -10,19 +10,19 @@
#define _ECORE_IGU_DEF_H_
/* Fields of IGU PF CONFIGURATION REGISTER */
-#define IGU_PF_CONF_FUNC_EN (0x1 << 0) /* function enable */
-#define IGU_PF_CONF_MSI_MSIX_EN (0x1 << 1) /* MSI/MSIX enable */
-#define IGU_PF_CONF_INT_LINE_EN (0x1 << 2) /* INT enable */
-#define IGU_PF_CONF_ATTN_BIT_EN (0x1 << 3) /* attention enable */
-#define IGU_PF_CONF_SINGLE_ISR_EN (0x1 << 4) /* single ISR mode enable */
-#define IGU_PF_CONF_SIMD_MODE (0x1 << 5) /* simd all ones mode */
+#define IGU_PF_CONF_FUNC_EN (0x1 << 0) /* function enable */
+#define IGU_PF_CONF_MSI_MSIX_EN (0x1 << 1) /* MSI/MSIX enable */
+#define IGU_PF_CONF_INT_LINE_EN (0x1 << 2) /* INT enable */
+#define IGU_PF_CONF_ATTN_BIT_EN (0x1 << 3) /* attention enable */
+#define IGU_PF_CONF_SINGLE_ISR_EN (0x1 << 4) /* single ISR mode enable */
+#define IGU_PF_CONF_SIMD_MODE (0x1 << 5) /* simd all ones mode */
/* Fields of IGU VF CONFIGURATION REGISTER */
-#define IGU_VF_CONF_FUNC_EN (0x1 << 0) /* function enable */
-#define IGU_VF_CONF_MSI_MSIX_EN (0x1 << 1) /* MSI/MSIX enable */
-#define IGU_VF_CONF_SINGLE_ISR_EN (0x1 << 4) /* single ISR mode enable */
-#define IGU_VF_CONF_PARENT_MASK (0xF) /* Parent PF */
-#define IGU_VF_CONF_PARENT_SHIFT 5 /* Parent PF */
+#define IGU_VF_CONF_FUNC_EN (0x1 << 0) /* function enable */
+#define IGU_VF_CONF_MSI_MSIX_EN (0x1 << 1) /* MSI/MSIX enable */
+#define IGU_VF_CONF_SINGLE_ISR_EN (0x1 << 4) /* single ISR mode enable */
+#define IGU_VF_CONF_PARENT_MASK (0xF) /* Parent PF */
+#define IGU_VF_CONF_PARENT_SHIFT 5 /* Parent PF */
/* Igu control commands
*/
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index 0844194..bffc73c 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -1099,6 +1099,8 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt,
BRB_REG_MAIN_TC_GUARANTIED_HYST_0 + reg_offset,
BRB_HYST_BLOCKS);
+/* init pause/full thresholds per physical TC - for loopback traffic */
+
ecore_wr(p_hwfn, p_ptt,
BRB_REG_LB_TC_FULL_XOFF_THRESHOLD_0 +
reg_offset, full_xoff_th);
@@ -1111,6 +1113,7 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt,
BRB_REG_LB_TC_PAUSE_XON_THRESHOLD_0 +
reg_offset, pause_xon_th);
+/* init pause/full thresholds per physical TC - for main traffic */
ecore_wr(p_hwfn, p_ptt,
BRB_REG_MAIN_TC_FULL_XOFF_THRESHOLD_0 +
reg_offset, full_xoff_th);
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 0c8d1fb..f5df764 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -40,7 +40,7 @@ u32 ecore_qm_pf_mem_size(u8 pf_id,
* @param pf_wfq_en - enable per-PF WFQ
* @param vport_rl_en - enable per-VPORT rate limiters
* @param vport_wfq_en - enable per-VPORT WFQ
- * @param port_params- array of size MAX_NUM_PORTS with parameters for each port
+ * @param port_params - array of size MAX_NUM_PORTS with params for each port
*
* @return 0 on success, -1 on error.
*/
@@ -83,7 +83,9 @@ int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
* @return 0 on success, -1 on error.
*/
int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u8 pf_id, u16 pf_wfq);
+ struct ecore_ptt *p_ptt,
+ u8 pf_id,
+ u16 pf_wfq);
/**
* @brief ecore_init_pf_rl Initializes the rate limit of the specified PF
*
@@ -95,9 +97,11 @@ int ecore_init_pf_wfq(struct ecore_hwfn *p_hwfn,
* @return 0 on success, -1 on error.
*/
int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u8 pf_id, u32 pf_rl);
+ struct ecore_ptt *p_ptt,
+ u8 pf_id,
+ u32 pf_rl);
/**
- * @brief ecore_init_vport_wfq Initializes the WFQ weight of the specified VPORT
+ * @brief ecore_init_vport_wfq Initializes the WFQ weight of specified VPORT
*
* @param p_hwfn
* @param p_ptt - ptt window used for writing the registers
@@ -110,7 +114,8 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
*/
int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq);
+ u16 first_tx_pq_id[NUM_OF_TCS],
+ u16 vport_wfq);
/**
* @brief ecore_init_vport_rl Initializes the rate limit of the specified VPORT
*
@@ -122,7 +127,9 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
* @return 0 on success, -1 on error.
*/
int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u8 vport_id, u32 vport_rl);
+ struct ecore_ptt *p_ptt,
+ u8 vport_id,
+ u32 vport_rl);
/**
* @brief ecore_send_qm_stop_cmd Sends a stop command to the QM
*
@@ -133,13 +140,16 @@ int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
* @param start_pq - first PQ ID to stop
* @param num_pqs - Number of PQs to stop, starting from start_pq.
*
- * @return bool, true if successful, false if timeout occurred while
- * waiting for QM command done.
+ * @return bool, true if successful, false if timeout occurred while waiting
+ * for QM command done.
*/
bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
bool is_release_cmd,
- bool is_tx_pq, u16 start_pq, u16 num_pqs);
+ bool is_tx_pq,
+ u16 start_pq,
+ u16 num_pqs);
+#ifndef UNUSED_HSI_FUNC
/**
* @brief ecore_init_nig_ets - initializes the NIG ETS arbiter
*
@@ -153,7 +163,8 @@ bool ecore_send_qm_stop_cmd(struct ecore_hwfn *p_hwfn,
*/
void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- struct init_ets_req *req, bool is_lb);
+ struct init_ets_req *req,
+ bool is_lb);
/**
* @brief ecore_init_nig_lb_rl - initializes the NIG LB RLs
*
@@ -165,6 +176,7 @@ void ecore_init_nig_ets(struct ecore_hwfn *p_hwfn,
void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct init_nig_lb_rl_req *req);
+#endif /* UNUSED_HSI_FUNC */
/**
* @brief ecore_init_nig_pri_tc_map - initializes the NIG priority to TC map.
*
@@ -176,6 +188,7 @@ void ecore_init_nig_lb_rl(struct ecore_hwfn *p_hwfn,
void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct init_nig_pri_tc_map_req *req);
+#ifndef UNUSED_HSI_FUNC
/**
* @brief ecore_init_prs_ets - initializes the PRS Rx ETS arbiter
*
@@ -185,7 +198,10 @@ void ecore_init_nig_pri_tc_map(struct ecore_hwfn *p_hwfn,
* @param req - the PRS ETS initialization requirements.
*/
void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, struct init_ets_req *req);
+ struct ecore_ptt *p_ptt,
+ struct init_ets_req *req);
+#endif /* UNUSED_HSI_FUNC */
+#ifndef UNUSED_HSI_FUNC
/**
* @brief ecore_init_brb_ram - initializes BRB RAM sizes per TC
*
@@ -195,35 +211,43 @@ void ecore_init_prs_ets(struct ecore_hwfn *p_hwfn,
* @param req - the BRB RAM initialization requirements.
*/
void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, struct init_brb_ram_req *req);
+ struct ecore_ptt *p_ptt,
+ struct init_brb_ram_req *req);
+#endif /* UNUSED_HSI_FUNC */
+#ifndef UNUSED_HSI_FUNC
/**
- * @brief ecore_set_engine_mf_ovlan_eth_type - initializes Nig,Prs,Pbf
- * and llh ethType Regs to input ethType
- * should Be called once per engine if engine is in BD mode.
+ * @brief ecore_set_engine_mf_ovlan_eth_type - initializes Nig,Prs,Pbf and llh
+ * ethType Regs to input ethType
+ * should Be called once per engine
+ * if engine
+ * is in BD mode.
*
* @param p_ptt - ptt window used for writing the registers.
* @param ethType - etherType to configure
*/
void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 eth_type);
+ struct ecore_ptt *p_ptt, u32 ethType);
/**
- * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs
- * to input ethType
- * should Be called once per port.
+ * @brief ecore_set_port_mf_ovlan_eth_type - initializes DORQ ethType Regs to
+ * input ethType should Be called
+ * once per port.
*
* @param p_ptt - ptt window used for writing the registers.
* @param ethType - etherType to configure
*/
void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 eth_type);
+ struct ecore_ptt *p_ptt, u32 ethType);
+#endif /* UNUSED_HSI_FUNC */
/**
- * @brief ecore_set_vxlan_dest_port - init vxlan tunnel destination udp port
+ * @brief ecore_set_vxlan_dest_port - initializes vxlan tunnel destination udp
+ * port
*
* @param p_ptt - ptt window used for writing the registers.
* @param dest_port - vxlan destination udp port.
*/
void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u16 dest_port);
+ struct ecore_ptt *p_ptt,
+ u16 dest_port);
/**
* @brief ecore_set_vxlan_enable - enable or disable VXLAN tunnel in HW
*
@@ -231,7 +255,8 @@ void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
* @param vxlan_enable - vxlan enable flag.
*/
void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, bool vxlan_enable);
+ struct ecore_ptt *p_ptt,
+ bool vxlan_enable);
/**
* @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
*
@@ -241,15 +266,18 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
*/
void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- bool eth_gre_enable, bool ip_gre_enable);
+ bool eth_gre_enable,
+ bool ip_gre_enable);
/**
- * @brief ecore_set_geneve_dest_port - init geneve tunnel destination udp port
+ * @brief ecore_set_geneve_dest_port - initializes geneve tunnel destination
+ * udp port
*
* @param p_ptt - ptt window used for writing the registers.
* @param dest_port - geneve destination udp port.
*/
void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u16 dest_port);
+ struct ecore_ptt *p_ptt,
+ u16 dest_port);
/**
* @brief ecore_set_gre_enable - enable or disable GRE tunnel in HW
*
diff --git a/drivers/net/qede/base/ecore_init_ops.h b/drivers/net/qede/base/ecore_init_ops.h
index 8a6fce4..f6b0a2d 100644
--- a/drivers/net/qede/base/ecore_init_ops.h
+++ b/drivers/net/qede/base/ecore_init_ops.h
@@ -68,7 +68,9 @@ void ecore_init_clear_rt_data(struct ecore_hwfn *p_hwfn);
* @param rt_offset
* @param val
*/
-void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn, u32 rt_offset, u32 val);
+void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn,
+ u32 rt_offset,
+ u32 val);
#define STORE_RT_REG(hwfn, offset, val) \
ecore_init_store_rt_reg(hwfn, offset, val)
@@ -87,7 +89,9 @@ void ecore_init_store_rt_reg(struct ecore_hwfn *p_hwfn, u32 rt_offset, u32 val);
*/
void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn,
- u32 rt_offset, u32 *val, osal_size_t size);
+ u32 rt_offset,
+ u32 *val,
+ osal_size_t size);
#define STORE_RT_REG_AGG(hwfn, offset, val) \
ecore_init_store_rt_agg(hwfn, offset, (u32 *)&val, sizeof(val))
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index 04c4947..4d5543a 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -944,7 +944,7 @@ static enum _ecore_status_t ecore_int_deassertion(struct ecore_hwfn *p_hwfn,
* previous assertion.
*/
for (j = 0, bit_idx = 0; bit_idx < 32; j++) {
- unsigned long bitmask;
+ unsigned long int bitmask;
u8 bit, bit_len;
p_aeu = &sb_attn_sw->p_aeu_desc[i].bits[j];
@@ -1021,8 +1021,8 @@ static enum _ecore_status_t ecore_int_attentions(struct ecore_hwfn *p_hwfn)
struct ecore_sb_attn_info *p_sb_attn_sw = p_hwfn->p_sb_attn;
struct atten_status_block *p_sb_attn = p_sb_attn_sw->sb_attn;
u16 index = 0, asserted_bits, deasserted_bits;
- enum _ecore_status_t rc = ECORE_SUCCESS;
u32 attn_bits = 0, attn_acks = 0;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
/* Read current attention bits/acks - safeguard against attentions
* by guaranting work on a synchronized timeframe
@@ -1162,6 +1162,7 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
}
/* Check the validity of the DPC ptt. If not ack interrupts and fail */
+
if (!p_hwfn->p_dpc_ptt) {
DP_NOTICE(p_hwfn->p_dev, true, "Failed to allocate PTT\n");
ecore_sb_ack(sb_info, IGU_INT_ENABLE, 1);
@@ -1582,7 +1583,7 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
&p_phys, SB_ALIGNED_SIZE(p_hwfn));
if (!p_virt) {
- DP_NOTICE(p_hwfn, true, "Failed to allocate status block");
+ DP_NOTICE(p_hwfn, true, "Failed to allocate status block\n");
OSAL_FREE(p_hwfn->p_dev, p_sb);
return ECORE_NOMEM;
}
@@ -1691,6 +1692,7 @@ static void ecore_int_igu_enable_attn(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt, IGU_REG_TRAILING_EDGE_LATCH, 0xfff);
ecore_wr(p_hwfn, p_ptt, IGU_REG_ATTENTION_ENABLE, 0xfff);
+ /* Flush the writes to IGU */
OSAL_MMIOWB(p_hwfn->p_dev);
/* Unmask AEU signals toward IGU */
@@ -1782,6 +1784,7 @@ void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt, IGU_REG_COMMAND_REG_CTRL, cmd_ctrl);
+ /* Flush the write to IGU */
OSAL_MMIOWB(p_hwfn->p_dev);
/* calculate where to read the status bit from */
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 5ad4ec6..0085726 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -53,6 +53,14 @@ struct ecore_mcp_link_capabilities;
struct ecore_vf_acquire_sw_info {
u32 driver_version;
u8 os_type;
+
+ /* We have several close releases that all use ~same FW with different
+ * versions [making it incompatible as the versioning scheme is still
+ * tied directly to FW version], allow to override the checking. Only
+ * those versions would actually support this feature [so it would not
+ * break forward compatibility with newer HV drivers that are no longer
+ * suited].
+ */
bool override_fw_version;
};
@@ -132,7 +140,8 @@ void ecore_iov_set_vf_to_disable(struct ecore_hwfn *p_hwfn,
*/
enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- u16 rel_vf_id, u16 num_rx_queues);
+ u16 rel_vf_id,
+ u16 num_rx_queues);
/**
* @brief ecore_iov_process_mbx_req - process a request received
@@ -143,7 +152,8 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
* @param vfid
*/
void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, int vfid);
+ struct ecore_ptt *p_ptt,
+ int vfid);
/**
* @brief ecore_iov_release_hw_for_vf - called once upper layer
@@ -197,7 +207,8 @@ enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
*/
enum _ecore_status_t
ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u16 rel_vf_id);
+ struct ecore_ptt *p_ptt,
+ u16 rel_vf_id);
/**
* @brief Update the bulletin with link information. Notice this does NOT
@@ -238,7 +249,8 @@ void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
*
* @return bool
*/
-bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn,
+ u16 rel_vf_id);
/**
* @brief Check if given VF ID @vfid is valid
@@ -253,7 +265,8 @@ bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
* @return bool - true for valid VF ID
*/
bool ecore_iov_is_valid_vfid(struct ecore_hwfn *p_hwfn,
- int rel_vf_id, bool b_enabled_only);
+ int rel_vf_id,
+ bool b_enabled_only);
/**
* @brief Get VF's public info structure
@@ -264,9 +277,9 @@ bool ecore_iov_is_valid_vfid(struct ecore_hwfn *p_hwfn,
*
* @return struct ecore_public_vf_info *
*/
-struct ecore_public_vf_info *ecore_iov_get_public_vf_info(struct ecore_hwfn
- *p_hwfn, u16 vfid,
- bool b_enabled_only);
+struct ecore_public_vf_info*
+ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn,
+ u16 vfid, bool b_enabled_only);
/**
* @brief Set pending events bitmap for given @vfid
@@ -295,7 +308,8 @@ void ecore_iov_pf_get_and_clear_pending_events(struct ecore_hwfn *p_hwfn,
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *ptt, int vfid);
+ struct ecore_ptt *ptt,
+ int vfid);
/**
* @brief Set forced MAC address in PFs copy of bulletin board
* and configures FW/HW to support the configuration.
@@ -342,7 +356,9 @@ void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
*/
enum _ecore_status_t
ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
- bool b_untagged_only, int vfid);
+ bool b_untagged_only,
+ int vfid);
+
/**
* @brief Get VFs opaque fid.
*
@@ -486,7 +502,8 @@ u32 ecore_iov_pfvf_msg_length(void);
*
* @return OSAL_NULL if mac isn't forced; Otherwise, returns MAC.
*/
-u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn *p_hwfn,
+ u16 rel_vf_id);
/**
* @brief Returns pvid if one is configured
@@ -535,7 +552,8 @@ enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
*
* @return num of rxqs chains.
*/
-u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn,
+ u16 rel_vf_id);
/**
* @brief - Retrieves num of active rxqs chains
@@ -545,7 +563,8 @@ u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
*
* @return
*/
-u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn,
+ u16 rel_vf_id);
/**
* @brief - Retrieves ctx pointer
@@ -555,7 +574,8 @@ u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
*
* @return
*/
-void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn,
+ u16 rel_vf_id);
/**
* @brief - Retrieves VF`s num sbs
@@ -565,7 +585,8 @@ void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
*
* @return
*/
-u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn,
+ u16 rel_vf_id);
/**
* @brief - Return true if VF is waiting for acquire
@@ -575,7 +596,8 @@ u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
*
* @return
*/
-bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn *p_hwfn,
+ u16 rel_vf_id);
/**
* @brief - Return true if VF is acquired but not initialized
@@ -596,7 +618,8 @@ bool ecore_iov_is_vf_acquired_not_initialized(struct ecore_hwfn *p_hwfn,
*
* @return
*/
-bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn,
+ u16 rel_vf_id);
/**
* @brief - Get VF's vport min rate configured.
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index b31523b..bc6b59d 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -35,9 +35,9 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
{
struct vport_start_ramrod_data *p_ramrod = OSAL_NULL;
struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_sp_init_data init_data;
u8 abs_vport_id = 0;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
u16 rx_mode = 0;
rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
@@ -449,8 +449,8 @@ enum _ecore_status_t ecore_sp_vport_stop(struct ecore_hwfn *p_hwfn,
struct vport_stop_ramrod_data *p_ramrod;
struct ecore_sp_init_data init_data;
struct ecore_spq_entry *p_ent;
- enum _ecore_status_t rc;
u8 abs_vport_id = 0;
+ enum _ecore_status_t rc;
if (IS_VF(p_hwfn->p_dev))
return ecore_vf_pf_vport_stop(p_hwfn);
@@ -703,10 +703,10 @@ ecore_sp_eth_rx_queues_update(struct ecore_hwfn *p_hwfn,
{
struct rx_queue_update_ramrod_data *p_ramrod = OSAL_NULL;
struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_sp_init_data init_data;
struct ecore_hw_cid_data *p_rx_cid;
u16 qid, abs_rx_q_id = 0;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
u8 i;
if (IS_VF(p_hwfn->p_dev))
@@ -758,9 +758,9 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
struct rx_queue_stop_ramrod_data *p_ramrod = OSAL_NULL;
struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_sp_init_data init_data;
u16 abs_rx_q_id = 0;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
if (IS_VF(p_hwfn->p_dev))
return ecore_vf_pf_rxq_stop(p_hwfn, rx_queue_id,
@@ -816,15 +816,16 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
u16 pbl_size,
union ecore_qm_pq_params *p_pq_params)
{
- struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
struct tx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_sp_init_data init_data;
+ struct ecore_hw_cid_data *p_tx_cid;
u16 pq_id, abs_tx_q_id = 0;
u8 abs_vport_id;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
/* Store information for the stop */
+ p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
p_tx_cid->cid = cid;
p_tx_cid->opaque_fid = opaque_fid;
@@ -908,7 +909,8 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
"opaque_fid=0x%x, cid=0x%x, tx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
- opaque_fid, p_tx_cid->cid, tx_queue_id, vport_id, sb);
+ opaque_fid, p_tx_cid->cid, tx_queue_id,
+ vport_id, sb);
/* TODO - set tc in the pq_params for multi-cos */
rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
@@ -919,7 +921,9 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
abs_stats_id,
sb,
sb_index,
- pbl_addr, pbl_size, &pq_params);
+ pbl_addr,
+ pbl_size,
+ &pq_params);
*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
DB_ADDR(p_tx_cid->cid, DQ_DEMS_LEGACY);
@@ -1011,8 +1015,8 @@ ecore_filter_ucast_common(struct ecore_hwfn *p_hwfn,
enum spq_mode comp_mode,
struct ecore_spq_comp_cb *p_comp_data)
{
- struct vport_filter_update_ramrod_data *p_ramrod;
u8 vport_to_add_to = 0, vport_to_remove_from = 0;
+ struct vport_filter_update_ramrod_data *p_ramrod;
struct eth_filter_cmd *p_first_filter;
struct eth_filter_cmd *p_second_filter;
struct ecore_sp_init_data init_data;
@@ -1304,11 +1308,10 @@ ecore_sp_eth_filter_mcast(struct ecore_hwfn *p_hwfn,
0, sizeof(p_ramrod->approx_mcast.bins));
OSAL_MEMSET(bins, 0, sizeof(unsigned long) *
ETH_MULTICAST_MAC_BINS_IN_REGS);
-
- if (p_filter_cmd->opcode == ECORE_FILTER_ADD) {
/* filter ADD op is explicit set op and it removes
* any existing filters for the vport.
*/
+ if (p_filter_cmd->opcode == ECORE_FILTER_ADD) {
for (i = 0; i < p_filter_cmd->num_mc_addrs; i++) {
u32 bit;
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index ab9aca0..65a508c 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -137,7 +137,8 @@ ecore_filter_mcast_cmd(struct ecore_dev *p_dev,
/* Set "accept" filters */
enum _ecore_status_t
-ecore_filter_accept_cmd(struct ecore_dev *p_dev,
+ecore_filter_accept_cmd(
+ struct ecore_dev *p_dev,
u8 vport,
struct ecore_filter_accept_flags accept_flags,
u8 update_accept_any_vlan,
@@ -204,7 +205,8 @@ enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t
ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
u16 rx_queue_id,
- bool eq_completion_only, bool cqe_completion);
+ bool eq_completion_only,
+ bool cqe_completion);
/**
* @brief ecore_sp_eth_tx_queue_start - TX Queue Start Ramrod
@@ -351,7 +353,8 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_sp_vport_stop(struct ecore_hwfn *p_hwfn,
- u16 opaque_fid, u8 vport_id);
+ u16 opaque_fid,
+ u8 vport_id);
enum _ecore_status_t
ecore_sp_eth_filter_ucast(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 2823113..b29e630 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -135,7 +135,8 @@ static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
PUBLIC_DRV_MB));
p_info->drv_mb_addr = SECTION_ADDR(drv_mb_offsize, mcp_pf_id);
DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
- "drv_mb_offsiz = 0x%x, drv_mb_addr = 0x%x mcp_pf_id = 0x%x\n",
+ "drv_mb_offsiz = 0x%x, drv_mb_addr = 0x%x"
+ " mcp_pf_id = 0x%x\n",
drv_mb_offsize, p_info->drv_mb_addr, mcp_pf_id);
/* Set the MFW MB address */
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index e150415..58df3f5 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -323,11 +323,11 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
bool allow_npar_tx_switch)
{
struct pf_start_ramrod_data *p_ramrod = OSAL_NULL;
- struct ecore_spq_entry *p_ent = OSAL_NULL;
u16 sb = ecore_int_get_sp_sb_id(p_hwfn);
u8 sb_index = p_hwfn->p_eq->eq_sb_index;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
+ struct ecore_spq_entry *p_ent = OSAL_NULL;
struct ecore_sp_init_data init_data;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
u8 page_cnt;
/* update initial eq producer */
@@ -416,8 +416,8 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_sp_pf_update(struct ecore_hwfn *p_hwfn)
{
struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_sp_init_data init_data;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
/* Get SPQ entry */
OSAL_MEMSET(&init_data, 0, sizeof(init_data));
@@ -445,8 +445,8 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
struct ecore_spq_comp_cb *p_comp_data)
{
struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_sp_init_data init_data;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
/* Get SPQ entry */
OSAL_MEMSET(&init_data, 0, sizeof(init_data));
@@ -484,9 +484,9 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_sp_pf_stop(struct ecore_hwfn *p_hwfn)
{
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_spq_entry *p_ent = OSAL_NULL;
struct ecore_sp_init_data init_data;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
/* Get SPQ entry */
OSAL_MEMSET(&init_data, 0, sizeof(init_data));
@@ -506,8 +506,8 @@ enum _ecore_status_t ecore_sp_pf_stop(struct ecore_hwfn *p_hwfn)
enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn)
{
struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_sp_init_data init_data;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
/* Get SPQ entry */
OSAL_MEMSET(&init_data, 0, sizeof(init_data));
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 440f5b3..28a658e 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -157,7 +157,7 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
rc = ecore_cxt_get_cid_info(p_hwfn, &cxt_info);
if (rc < 0) {
- DP_NOTICE(p_hwfn, true, "Cannot find context info for cid=%d",
+ DP_NOTICE(p_hwfn, true, "Cannot find context info for cid=%d\n",
p_spq->cid);
return;
}
@@ -352,7 +352,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
ECORE_CHAIN_CNT_TYPE_U16,
num_elem,
sizeof(union event_ring_element), &p_eq->chain)) {
- DP_NOTICE(p_hwfn, true, "Failed to allocate eq chain");
+ DP_NOTICE(p_hwfn, true, "Failed to allocate eq chain\n");
goto eq_allocate_fail;
}
@@ -419,8 +419,8 @@ enum _ecore_status_t ecore_eth_cqe_completion(struct ecore_hwfn *p_hwfn,
***************************************************************************/
void ecore_spq_setup(struct ecore_hwfn *p_hwfn)
{
- struct ecore_spq_entry *p_virt = OSAL_NULL;
struct ecore_spq *p_spq = p_hwfn->p_spq;
+ struct ecore_spq_entry *p_virt = OSAL_NULL;
dma_addr_t p_phys = 0;
u32 i, capacity;
@@ -475,7 +475,7 @@ enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn)
OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(struct ecore_spq));
if (!p_spq) {
DP_NOTICE(p_hwfn, true,
- "Failed to allocate `struct ecore_spq'");
+ "Failed to allocate `struct ecore_spq'\n");
return ECORE_NOMEM;
}
@@ -484,7 +484,7 @@ enum _ecore_status_t ecore_spq_alloc(struct ecore_hwfn *p_hwfn)
ECORE_CHAIN_MODE_SINGLE, ECORE_CHAIN_CNT_TYPE_U16, 0,
/* N/A when the mode is SINGLE */
sizeof(struct slow_path_element), &p_spq->chain)) {
- DP_NOTICE(p_hwfn, true, "Failed to allocate spq chain");
+ DP_NOTICE(p_hwfn, true, "Failed to allocate spq chain\n");
goto spq_allocate_fail;
}
@@ -745,7 +745,7 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
if (p_hwfn->p_dev->recov_in_prog) {
DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
"Recovery is in progress -> skip spq post"
- " [cmd %02x protocol %02x]",
+ " [cmd %02x protocol %02x]\n",
p_ent->elem.hdr.cmd_id, p_ent->elem.hdr.protocol_id);
/* Return success to let the flows to be completed successfully
* w/o any error handling.
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 74484ab..490b7d9 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -175,7 +175,8 @@ void ecore_spq_free(struct ecore_hwfn *p_hwfn);
* @return enum _ecore_status_t
*/
enum _ecore_status_t
-ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent);
+ecore_spq_get_entry(struct ecore_hwfn *p_hwfn,
+ struct ecore_spq_entry **pp_ent);
/**
* @brief ecore_spq_return_entry - Return an entry to spq free
@@ -194,7 +195,8 @@ void ecore_spq_return_entry(struct ecore_hwfn *p_hwfn,
*
* @return struct ecore_eq* - a newly allocated structure; NULL upon error.
*/
-struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem);
+struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn,
+ u16 num_elem);
/**
* @brief ecore_eq_setup - Reset the SPQ to its start state.
@@ -202,7 +204,8 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem);
* @param p_hwfn
* @param p_eq
*/
-void ecore_eq_setup(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq);
+void ecore_eq_setup(struct ecore_hwfn *p_hwfn,
+ struct ecore_eq *p_eq);
/**
* @brief ecore_eq_deallocate - deallocates the given EQ struct.
@@ -210,7 +213,8 @@ void ecore_eq_setup(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq);
* @param p_hwfn
* @param p_eq
*/
-void ecore_eq_free(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq);
+void ecore_eq_free(struct ecore_hwfn *p_hwfn,
+ struct ecore_eq *p_eq);
/**
* @brief ecore_eq_prod_update - update the FW with default EQ producer
@@ -218,7 +222,8 @@ void ecore_eq_free(struct ecore_hwfn *p_hwfn, struct ecore_eq *p_eq);
* @param p_hwfn
* @param prod
*/
-void ecore_eq_prod_update(struct ecore_hwfn *p_hwfn, u16 prod);
+void ecore_eq_prod_update(struct ecore_hwfn *p_hwfn,
+ u16 prod);
/**
* @brief ecore_eq_completion - Completes currently pending EQ elements
@@ -271,7 +276,8 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn);
* @param p_hwfn
* @param p_eq
*/
-void ecore_consq_setup(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq);
+void ecore_consq_setup(struct ecore_hwfn *p_hwfn,
+ struct ecore_consq *p_consq);
/**
* @brief ecore_consq_free - deallocates the given ConsQ struct.
@@ -279,6 +285,7 @@ void ecore_consq_setup(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq);
* @param p_hwfn
* @param p_eq
*/
-void ecore_consq_free(struct ecore_hwfn *p_hwfn, struct ecore_consq *p_consq);
+void ecore_consq_free(struct ecore_hwfn *p_hwfn,
+ struct ecore_consq *p_consq);
#endif /* __ECORE_SPQ_H__ */
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index bd73f7d..cc310e3 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -41,14 +41,18 @@
#define ETH_NUM_VLAN_FILTERS 512
/* approx. multicast constants */
+/* CRC seed for multicast bin calculation */
#define ETH_MULTICAST_BIN_FROM_MAC_SEED 0
#define ETH_MULTICAST_MAC_BINS 256
#define ETH_MULTICAST_MAC_BINS_IN_REGS (ETH_MULTICAST_MAC_BINS / 32)
/* ethernet vport update constants */
#define ETH_FILTER_RULES_COUNT 10
+/* number of RSS indirection table entries, per Vport) */
#define ETH_RSS_IND_TABLE_ENTRIES_NUM 128
+/* Length of RSS key (in regs) */
#define ETH_RSS_KEY_SIZE_REGS 10
+/* number of available RSS engines in K2 */
#define ETH_RSS_ENGINE_NUM_K2 207
#define ETH_RSS_ENGINE_NUM_BB 127
@@ -156,10 +160,10 @@ struct eth_tx_data_2nd_bd {
* Firmware data for L2-EDPM packet.
*/
struct eth_edpm_fw_data {
- struct eth_tx_data_1st_bd data_1st_bd
- /* Parsing information data from the 1st BD. */;
- struct eth_tx_data_2nd_bd data_2nd_bd
- /* Parsing information data from the 2nd BD. */;
+/* Parsing information data from the 1st BD. */
+ struct eth_tx_data_1st_bd data_1st_bd;
+/* Parsing information data from the 2nd BD. */
+ struct eth_tx_data_2nd_bd data_2nd_bd;
__le32 reserved;
};
@@ -348,7 +352,8 @@ enum eth_rx_cqe_type {
};
/*
- * Wrapp for PD RX CQE used in order to cover full cache line when writing CQE
+ * Wrapper for PD RX CQE - used in order to cover full cache line when writing
+ * CQE
*/
struct eth_rx_pmd_cqe {
union eth_rx_cqe cqe /* CQE data itself */;
diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index 8d99880..fe980d5 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -225,7 +225,8 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_I2C 0x1
#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_ONLY 0x2
#define NVM_CFG1_GLOB_ON_CHIP_SENSOR_MODE_INT_EXT_SMBUS 0x3
-#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_MASK 0x06000000
+ #define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_MASK \
+ 0x06000000
#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_OFFSET 25
#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_DISABLE 0x0
#define NVM_CFG1_GLOB_TEMPERATURE_MONITORING_MODE_INTERNAL 0x1
@@ -272,10 +273,12 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_MASK 0x00000080
#define NVM_CFG1_GLOB_TX_LANE3_POL_FLIP_OFFSET 7
/* Control the period between two successive checks */
-#define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_MASK 0x0000FF00
+ #define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_MASK \
+ 0x0000FF00
#define NVM_CFG1_GLOB_TEMPERATURE_PERIOD_BETWEEN_CHECKS_OFFSET 8
/* Set shutdown temperature */
-#define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_MASK \
+ 0x00FF0000
#define NVM_CFG1_GLOB_SHUTDOWN_THRESHOLD_TEMPERATURE_OFFSET 16
/* Set max. count for over operational temperature */
#define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_MASK 0xFF000000
@@ -320,10 +323,12 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_VENDOR_ID_MASK 0x0000FFFF
#define NVM_CFG1_GLOB_VENDOR_ID_OFFSET 0
/* Set caution temperature */
-#define NVM_CFG1_GLOB_CAUTION_THRESHOLD_TEMPERATURE_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_CAUTION_THRESHOLD_TEMPERATURE_MASK \
+ 0x00FF0000
#define NVM_CFG1_GLOB_CAUTION_THRESHOLD_TEMPERATURE_OFFSET 16
/* Set external thermal sensor I2C address */
-#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_MASK \
+ 0xFF000000
#define NVM_CFG1_GLOB_EXTERNAL_THERMAL_SENSOR_ADDRESS_OFFSET 24
u32 pci_subsys_id; /* 0x54 */
#define NVM_CFG1_GLOB_SUBSYSTEM_VENDOR_ID_MASK 0x0000FFFF
@@ -349,6 +354,7 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_8M 0xD
#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_16M 0xE
#define NVM_CFG1_GLOB_EXPANSION_ROM_SIZE_32M 0xF
+ /* BB VF BAR2 size */
#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_MASK 0x000000F0
#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_OFFSET 4
#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_DISABLED 0x0
@@ -367,6 +373,7 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_16M 0xD
#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_32M 0xE
#define NVM_CFG1_GLOB_VF_PCI_BAR2_SIZE_64M 0xF
+ /* BB BAR2 size (global) */
#define NVM_CFG1_GLOB_BAR2_SIZE_MASK 0x00000F00
#define NVM_CFG1_GLOB_BAR2_SIZE_OFFSET 8
#define NVM_CFG1_GLOB_BAR2_SIZE_DISABLED 0x0
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 07/32] net/qede: fix 32 bit compilation
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (5 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 06/32] net/qede/base: additional formatting/comment changes Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-26 16:54 ` Thomas Monjalon
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 08/32] net/qede: change signature of MCP command API Rasesh Mody
` (25 subsequent siblings)
32 siblings, 1 reply; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Fix 32-bit compilation for gcc version 4.3.4. The
-Wno-unused-but-set-variable and -Wno-maybe-uninitialized options are
not supported by that compiler, so probe for each option and add it
only when the compiler accepts it.
Fixes: ec94dbc57362 ("qede: add base driver")
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/Makefile | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index fe449aa..7965a83 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -48,9 +48,13 @@ endif
endif
ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
+ifeq ($(shell gcc -Wno-unused-but-set-variable -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
+endif
CFLAGS_BASE_DRIVER += -Wno-missing-declarations
+ifeq ($(shell gcc -Wno-maybe-uninitialized -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
CFLAGS_BASE_DRIVER += -Wno-maybe-uninitialized
+endif
CFLAGS_BASE_DRIVER += -Wno-strict-prototypes
ifeq ($(shell test $(GCC_VERSION) -ge 60 && echo 1), 1)
CFLAGS_BASE_DRIVER += -Wno-shift-negative-value
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 08/32] net/qede: change signature of MCP command API
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (6 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 07/32] net/qede: fix 32 bit compilation Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 09/32] net/qede: serialize access to MFW mbox Rasesh Mody
` (24 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
Change ecore_mcp_cmd_and_union() to accept a pointer to a structure
rather than multiple arguments. A new struct ecore_mcp_mb_params is
added for that purpose. Also make this function static. This change is
made mostly with future requests in mind, which will need additional
arguments.
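As an illustration, here is a minimal sketch of the resulting calling
convention inside ecore_mcp.c (based on the hunks below; p_hwfn, p_ptt
and union_data are assumed to come from the surrounding driver context):

        struct ecore_mcp_mb_params mb_params;
        enum _ecore_status_t rc;

        OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
        mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;  /* mailbox command */
        mb_params.param = 0;                    /* optional command parameter */
        mb_params.p_data_src = &union_data;     /* optional payload for the MFW */
        mb_params.p_data_dst = OSAL_NULL;       /* optional response payload buffer */
        rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
        if (rc == ECORE_SUCCESS) {
                /* Response code and parameter are returned in the struct */
                u32 resp = mb_params.mcp_resp;
                u32 param = mb_params.mcp_param;
        }

Future mailbox requests that need extra in/out arguments only have to
extend struct ecore_mcp_mb_params; existing callers keep zeroing the
struct and remain unaffected.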
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/base/bcm_osal.h | 2 +
drivers/net/qede/base/ecore_mcp.c | 138 ++++++++++++++++++++++++--------------
drivers/net/qede/base/ecore_mcp.h | 31 +++------
3 files changed, 98 insertions(+), 73 deletions(-)
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index a535058..9d84ae2 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -9,6 +9,8 @@
#ifndef __BCM_OSAL_H
#define __BCM_OSAL_H
+#include <string.h>
+
#include <rte_byteorder.h>
#include <rte_spinlock.h>
#include <rte_malloc.h>
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index b29e630..24211a3 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -313,32 +313,10 @@ static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
return rc;
}
-enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 cmd, u32 param,
- u32 *o_mcp_resp, u32 *o_mcp_param)
-{
-#ifndef ASIC_ONLY
- if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
- if (cmd == DRV_MSG_CODE_UNLOAD_REQ) {
- loaded--;
- loaded_port[p_hwfn->port_id]--;
- DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Unload cnt: 0x%x\n",
- loaded);
- }
- return ECORE_SUCCESS;
- }
-#endif
- return ecore_mcp_cmd_and_union(p_hwfn, p_ptt, cmd, param, OSAL_NULL,
- o_mcp_resp, o_mcp_param);
-}
-
-enum _ecore_status_t ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 cmd, u32 param,
- union drv_union_data *p_union_data,
- u32 *o_mcp_resp,
- u32 *o_mcp_param)
+static enum _ecore_status_t
+ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ struct ecore_mcp_mb_params *p_mb_params)
{
u32 union_data_addr;
enum _ecore_status_t rc;
@@ -354,19 +332,54 @@ enum _ecore_status_t ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
*/
OSAL_SPIN_LOCK(&p_hwfn->mcp_info->lock);
- if (p_union_data != OSAL_NULL) {
union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
OFFSETOF(struct public_drv_mb, union_data);
- ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr, p_union_data,
- sizeof(*p_union_data));
-}
- rc = ecore_do_mcp_cmd(p_hwfn, p_ptt, cmd, param, o_mcp_resp,
- o_mcp_param);
+ if (p_mb_params->p_data_src != OSAL_NULL)
+ ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr,
+ p_mb_params->p_data_src,
+ sizeof(*p_mb_params->p_data_src));
- OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
+ rc = ecore_do_mcp_cmd(p_hwfn, p_ptt, p_mb_params->cmd,
+ p_mb_params->param, &p_mb_params->mcp_resp,
+ &p_mb_params->mcp_param);
+ if (p_mb_params->p_data_dst != OSAL_NULL)
+ ecore_memcpy_from(p_hwfn, p_ptt, p_mb_params->p_data_dst,
+ union_data_addr,
+ sizeof(*p_mb_params->p_data_dst));
+ return rc;
+}
+
+enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt, u32 cmd, u32 param,
+ u32 *o_mcp_resp, u32 *o_mcp_param)
+{
+ struct ecore_mcp_mb_params mb_params;
+ enum _ecore_status_t rc;
+
+#ifndef ASIC_ONLY
+ if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+ if (cmd == DRV_MSG_CODE_UNLOAD_REQ) {
+ loaded--;
+ loaded_port[p_hwfn->port_id]--;
+ DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "Unload cnt: 0x%x\n",
+ loaded);
+ }
+ return ECORE_SUCCESS;
+ }
+#endif
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = cmd;
+ mb_params.param = param;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ if (rc != ECORE_SUCCESS)
return rc;
+
+ *o_mcp_resp = mb_params.mcp_resp;
+ *o_mcp_param = mb_params.mcp_param;
+
+ return ECORE_SUCCESS;
}
enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
@@ -377,12 +390,23 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
u32 *o_mcp_param,
u32 i_txn_size, u32 *i_buf)
{
+ struct ecore_mcp_mb_params mb_params;
union drv_union_data union_data;
+ enum _ecore_status_t rc;
- OSAL_MEMCPY((u32 *)&union_data.raw_data, i_buf, i_txn_size);
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = cmd;
+ mb_params.param = param;
+ OSAL_MEMCPY(&union_data.raw_data, i_buf, i_txn_size);
+ mb_params.p_data_src = &union_data;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ if (rc != ECORE_SUCCESS)
+ return rc;
- return ecore_mcp_cmd_and_union(p_hwfn, p_ptt, cmd, param, &union_data,
- o_mcp_resp, o_mcp_param);
+ *o_mcp_resp = mb_params.mcp_resp;
+ *o_mcp_param = mb_params.mcp_param;
+
+ return ECORE_SUCCESS;
}
enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
@@ -452,6 +476,7 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
u32 *p_load_code)
{
struct ecore_dev *p_dev = p_hwfn->p_dev;
+ struct ecore_mcp_mb_params mb_params;
union drv_union_data union_data;
u32 param;
enum _ecore_status_t rc;
@@ -463,12 +488,13 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
}
#endif
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = DRV_MSG_CODE_LOAD_REQ;
+ mb_params.param = PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
+ p_dev->drv_type;
OSAL_MEMCPY(&union_data.ver_str, p_dev->ver_str, MCP_DRV_VER_STR_SIZE);
-
- rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, DRV_MSG_CODE_LOAD_REQ,
- (PDA_COMP | DRV_ID_MCP_HSI_VER_CURRENT |
- p_dev->drv_type),
- &union_data, p_load_code, &param);
+ mb_params.p_data_src = &union_data;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
/* if mcp fails to respond we must abort */
if (rc != ECORE_SUCCESS) {
@@ -535,6 +561,7 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
u32 mfw_func_offsize = ecore_rd(p_hwfn, p_ptt, addr);
u32 func_addr = SECTION_ADDR(mfw_func_offsize,
MCP_PF_ID(p_hwfn));
+ struct ecore_mcp_mb_params mb_params;
union drv_union_data union_data;
u32 resp, param;
enum _ecore_status_t rc;
@@ -545,11 +572,11 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
"Acking VFs [%08x,...,%08x] - %08x\n",
i * 32, (i + 1) * 32 - 1, vfs_to_ack[i]);
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
OSAL_MEMCPY(&union_data.ack_vf_disabled, vfs_to_ack, VF_MAX_STATIC / 8);
-
- rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt,
- DRV_MSG_CODE_VF_DISABLED_DONE, 0,
- &union_data, &resp, &param);
+ mb_params.p_data_src = &union_data;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
if (rc != ECORE_SUCCESS) {
DP_NOTICE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IOV),
"Failed to pass ACK for VF flr to MFW\n");
@@ -738,6 +765,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, bool b_up)
{
struct ecore_mcp_link_params *params = &p_hwfn->mcp_info->link_input;
+ struct ecore_mcp_mb_params mb_params;
union drv_union_data union_data;
struct pmm_phy_cfg *p_phy_cfg;
u32 param = 0, reply = 0, cmd;
@@ -782,8 +810,10 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
else
DP_VERBOSE(p_hwfn, ECORE_MSG_LINK, "Resetting link\n");
- rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, cmd, 0, &union_data, &reply,
- &param);
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = cmd;
+ mb_params.p_data_src = &union_data;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
/* if mcp fails to respond we must abort */
if (rc != ECORE_SUCCESS) {
@@ -860,6 +890,7 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
{
enum ecore_mcp_protocol_type stats_type;
union ecore_mcp_protocol_stats stats;
+ struct ecore_mcp_mb_params mb_params;
u32 hsi_param, param = 0, reply = 0;
union drv_union_data union_data;
@@ -875,10 +906,12 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
OSAL_GET_PROTOCOL_STATS(p_hwfn->p_dev, stats_type, &stats);
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = DRV_MSG_CODE_GET_STATS;
+ mb_params.param = hsi_param;
OSAL_MEMCPY(&union_data, &stats, sizeof(stats));
-
- ecore_mcp_cmd_and_union(p_hwfn, p_ptt, DRV_MSG_CODE_GET_STATS,
- hsi_param, &union_data, &reply, &param);
+ mb_params.p_data_src = &union_data;
+ ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
}
static u32 ecore_mcp_get_shmem_func(struct ecore_hwfn *p_hwfn,
@@ -1400,6 +1433,7 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
{
u32 param = 0, reply = 0, num_words, i;
struct drv_version_stc *p_drv_version;
+ struct ecore_mcp_mb_params mb_params;
union drv_union_data union_data;
void *p_name;
OSAL_BE32 val;
@@ -1419,8 +1453,10 @@ ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
*(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
}
- rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, DRV_MSG_CODE_SET_VERSION, 0,
- &union_data, &reply, &param);
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = DRV_MSG_CODE_SET_VERSION;
+ mb_params.p_data_src = &union_data;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
if (rc != ECORE_SUCCESS)
DP_ERR(p_hwfn, "MCP response failure, aborting\n");
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 7af4349..28a8f93 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -49,6 +49,15 @@ struct ecore_mcp_info {
u16 mcp_hist;
};
+struct ecore_mcp_mb_params {
+ u32 cmd;
+ u32 param;
+ union drv_union_data *p_data_src;
+ union drv_union_data *p_data_dst;
+ u32 mcp_resp;
+ u32 mcp_param;
+};
+
/**
* @brief Initialize the interface with the MCP
*
@@ -177,28 +186,6 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt);
/**
- * @brief - Sets the union data in the MCP mailbox and sends a mailbox command.
- *
- * @param p_hwfn - hw function
- * @param p_ptt - PTT required for register access
- * @param cmd - command to be sent to the MCP
- * @param param - optional param
- * @param p_union_data - pointer to a drv_union_data
- * @param o_mcp_resp - the MCP response code (exclude sequence)
- * @param o_mcp_param - optional parameter provided by the MCP response
- *
- * @return enum _ecore_status_t -
- * ECORE_SUCCESS - operation was successful
- * ECORE_BUSY - operation failed
- */
-enum _ecore_status_t ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 cmd, u32 param,
- union drv_union_data *p_union_data,
- u32 *o_mcp_resp,
- u32 *o_mcp_param);
-
-/**
* @brief - Sends an NVM write command request to the MFW with
* payload.
*
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 09/32] net/qede: serialize access to MFW mbox
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (7 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 08/32] net/qede: change signature of MCP command API Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 10/32] net/qede: add NIC selftest and query sensor info support Rasesh Mody
` (23 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
Add ecore_mcp_mb_lock() and ecore_mcp_mb_unlock() APIs to ensure that
only a single thread accesses the MFW mailbox at a time.
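In effect, every mailbox access now follows the pattern below (a
condensed sketch of the flow in ecore_mcp_cmd_and_union() as changed by
this patch; the payload copy and the actual command are elided):

        rc = ecore_mcp_mb_lock(p_hwfn, p_mb_params->cmd);
        if (rc != ECORE_SUCCESS)
                return rc;      /* a parallel [UN]LOAD_REQ blocks the mailbox */

        /* ... copy the payload and issue the mailbox command ... */

        ecore_mcp_mb_unlock(p_hwfn, p_mb_params->cmd);

For most commands the lock is a plain spinlock. For [UN]LOAD_REQ the
spinlock is released inside ecore_mcp_mb_lock() and a blocking flag is
set instead, so competing contexts fail with ECORE_BUSY until a
subsequent [UN]LOAD_DONE clears the flag.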
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/base/ecore_mcp.c | 70 ++++++++++++++++++++++++++++++++++-----
drivers/net/qede/base/ecore_mcp.h | 4 +++
2 files changed, 66 insertions(+), 8 deletions(-)
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 24211a3..12e1ec1 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -202,6 +202,51 @@ err:
return ECORE_NOMEM;
}
+/* Locks the MFW mailbox of a PF to ensure a single access.
+ * The lock is achieved in most cases by holding a spinlock, causing other
+ * threads to wait until a previous access is done.
+ * In some cases (currently when [UN]LOAD_REQ commands are sent), the single
+ * access is achieved by setting a blocking flag, which causes other
+ * competing contexts' mailbox sends to fail.
+ */
+static enum _ecore_status_t ecore_mcp_mb_lock(struct ecore_hwfn *p_hwfn,
+ u32 cmd)
+{
+ OSAL_SPIN_LOCK(&p_hwfn->mcp_info->lock);
+
+ /* The spinlock shouldn't be acquired when the mailbox command is
+ * [UN]LOAD_REQ, since the engine is locked by the MFW, and a parallel
+ * pending [UN]LOAD_REQ command of another PF together with a spinlock
+ * (i.e. interrupts are disabled) - can lead to a deadlock.
+ * It is assumed that for a single PF, no other mailbox commands can be
+ * sent from another context while sending LOAD_REQ, and that any
+ * parallel commands to UNLOAD_REQ can be cancelled.
+ */
+ if (cmd == DRV_MSG_CODE_LOAD_DONE || cmd == DRV_MSG_CODE_UNLOAD_DONE)
+ p_hwfn->mcp_info->block_mb_sending = false;
+
+ if (p_hwfn->mcp_info->block_mb_sending) {
+ DP_NOTICE(p_hwfn, false,
+ "Trying to send a MFW mailbox command [0x%x] in parallel to [UN]LOAD_REQ. Aborting.\n",
+ cmd);
+ OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
+ return ECORE_BUSY;
+ }
+
+ if (cmd == DRV_MSG_CODE_LOAD_REQ || cmd == DRV_MSG_CODE_UNLOAD_REQ) {
+ p_hwfn->mcp_info->block_mb_sending = true;
+ OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
+ }
+
+ return ECORE_SUCCESS;
+}
+
+static void ecore_mcp_mb_unlock(struct ecore_hwfn *p_hwfn, u32 cmd)
+{
+ if (cmd != DRV_MSG_CODE_LOAD_REQ && cmd != DRV_MSG_CODE_UNLOAD_REQ)
+ OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
+}
+
enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
@@ -214,8 +259,12 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
delay = EMUL_MCP_RESP_ITER_US;
#endif
-
- OSAL_SPIN_LOCK(&p_hwfn->mcp_info->lock);
+ /* Ensure that only a single thread is accessing the mailbox at any
+ * given time.
+ */
+ rc = ecore_mcp_mb_lock(p_hwfn, DRV_MSG_CODE_MCP_RESET);
+ if (rc != ECORE_SUCCESS)
+ return rc;
/* Set drv command along with the updated sequence */
org_mcp_reset_seq = ecore_rd(p_hwfn, p_ptt, MISCS_REG_GENERIC_POR_0);
@@ -238,7 +287,7 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
rc = ECORE_AGAIN;
}
- OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
+ ecore_mcp_mb_unlock(p_hwfn, DRV_MSG_CODE_MCP_RESET);
return rc;
}
@@ -327,14 +376,16 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
return ECORE_BUSY;
}
- /* Acquiring a spinlock is needed to ensure that only a single thread
- * is accessing the mailbox at a certain time.
- */
- OSAL_SPIN_LOCK(&p_hwfn->mcp_info->lock);
-
union_data_addr = p_hwfn->mcp_info->drv_mb_addr +
OFFSETOF(struct public_drv_mb, union_data);
+ /* Ensure that only a single thread is accessing the mailbox at any
+ * given time.
+ */
+ rc = ecore_mcp_mb_lock(p_hwfn, p_mb_params->cmd);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
if (p_mb_params->p_data_src != OSAL_NULL)
ecore_memcpy_to(p_hwfn, p_ptt, union_data_addr,
p_mb_params->p_data_src,
@@ -348,6 +399,9 @@ ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
ecore_memcpy_from(p_hwfn, p_ptt, p_mb_params->p_data_dst,
union_data_addr,
sizeof(*p_mb_params->p_data_dst));
+
+ ecore_mcp_mb_unlock(p_hwfn, p_mb_params->cmd);
+
return rc;
}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 28a8f93..e055835 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -32,6 +32,10 @@
((_p_hwfn)->p_dev->num_ports_in_engines * 2))
struct ecore_mcp_info {
osal_spinlock_t lock; /* Spinlock used for accessing MCP mailbox */
+
+ /* Flag to indicate whether sending a MFW mailbox is forbidden */
+ bool block_mb_sending;
+
u32 public_base; /* Address of the MCP public area */
u32 drv_mb_addr; /* Address of the driver mailbox */
u32 mfw_mb_addr; /* Address of the MFW mailbox */
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 10/32] net/qede: add NIC selftest and query sensor info support
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (8 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 09/32] net/qede: serialize access to MFW mbox Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 11/32] net/qede/base: update base driver Rasesh Mody
` (22 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
This patch adds API support for NIC selftests (BIST), along with APIs to
retrieve GPIO information and sensor data such as temperature, MBA
versions, and ECC event counts.
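As a hedged usage sketch of one of the new query APIs, the temperature
sensors could be read as follows (ecore_report_temp() is a hypothetical
helper, and current_temp is assumed to be in degrees Celsius):

        struct ecore_temperature_info temp_info;
        enum _ecore_status_t rc;
        u32 i;

        rc = ecore_mcp_get_temperature_info(p_hwfn, p_ptt, &temp_info);
        if (rc != ECORE_SUCCESS)
                return rc;

        for (i = 0; i < temp_info.num_sensors; i++)
                /* sensor_location distinguishes internal/external/SFP sensors */
                ecore_report_temp(temp_info.sensors[i].sensor_location,
                                  temp_info.sensors[i].current_temp);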
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/base/ecore_mcp.c | 314 ++++++++++++++++++++++++++++++++++
drivers/net/qede/base/ecore_mcp_api.h | 155 +++++++++++++++++
drivers/net/qede/base/mcp_public.h | 100 +++++++++++
3 files changed, 569 insertions(+)
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 12e1ec1..5baa5a7 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2021,3 +2021,317 @@ enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
+
+enum _ecore_status_t ecore_mcp_gpio_info(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u16 gpio, u32 *gpio_direction,
+ u32 *gpio_ctrl)
+{
+ u32 drv_mb_param = 0, rsp, val = 0;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+
+ drv_mb_param = gpio << DRV_MB_PARAM_GPIO_NUMBER_SHIFT;
+
+ rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_INFO,
+ drv_mb_param, &rsp, &val);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ *gpio_direction = (val & DRV_MB_PARAM_GPIO_DIRECTION_MASK) >>
+ DRV_MB_PARAM_GPIO_DIRECTION_SHIFT;
+ *gpio_ctrl = (val & DRV_MB_PARAM_GPIO_CTRL_MASK) >>
+ DRV_MB_PARAM_GPIO_CTRL_SHIFT;
+
+ if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
+ return ECORE_UNKNOWN_ERROR;
+
+ return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ u32 drv_mb_param = 0, rsp, param;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+
+ drv_mb_param = (DRV_MB_PARAM_BIST_REGISTER_TEST <<
+ DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+
+ rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
+ drv_mb_param, &rsp, &param);
+
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
+ (param != DRV_MB_PARAM_BIST_RC_PASSED))
+ rc = ECORE_UNKNOWN_ERROR;
+
+ return rc;
+}
+
+enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ u32 drv_mb_param = 0, rsp, param;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+
+ drv_mb_param = (DRV_MB_PARAM_BIST_CLOCK_TEST <<
+ DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+
+ rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
+ drv_mb_param, &rsp, &param);
+
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
+ (param != DRV_MB_PARAM_BIST_RC_PASSED))
+ rc = ECORE_UNKNOWN_ERROR;
+
+ return rc;
+}
+
+enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
+ struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 *num_images)
+{
+ u32 drv_mb_param = 0, rsp;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+
+ drv_mb_param = (DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES <<
+ DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+
+ rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
+ drv_mb_param, &rsp, num_images);
+
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK))
+ rc = ECORE_UNKNOWN_ERROR;
+
+ return rc;
+}
+
+enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
+ struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ struct bist_nvm_image_att *p_image_att, u32 image_index)
+{
+ struct ecore_mcp_nvm_params params;
+ enum _ecore_status_t rc;
+ u32 buf_size;
+
+ OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
+ params.nvm_common.offset = (DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX <<
+ DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+ params.nvm_common.offset |= (image_index <<
+ DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_SHIFT);
+
+ params.type = ECORE_MCP_NVM_RD;
+ params.nvm_rd.buf_size = &buf_size;
+ params.nvm_common.cmd = DRV_MSG_CODE_BIST_TEST;
+ params.nvm_rd.buf = (u32 *)p_image_att;
+
+ rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ if (((params.nvm_common.resp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
+ (p_image_att->return_code != 1))
+ rc = ECORE_UNKNOWN_ERROR;
+
+ return rc;
+}
+
+enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ enum ecore_nvm_images image_id,
+ char *p_buffer, u16 buffer_len)
+{
+ struct bist_nvm_image_att image_att;
+ /* enum nvm_image_type type; */ /* @DPDK */
+ u32 type;
+ u32 num_images, i;
+ enum _ecore_status_t rc;
+
+ OSAL_MEM_ZERO(p_buffer, buffer_len);
+
+ /* Translate image_id into MFW definitions */
+ switch (image_id) {
+ case ECORE_NVM_IMAGE_ISCSI_CFG:
+ /* @DPDK */
+ type = 0x1d; /* NVM_TYPE_ISCSI_CFG; */
+ break;
+ case ECORE_NVM_IMAGE_FCOE_CFG:
+ type = 0x1f; /* NVM_TYPE_FCOE_CFG; */
+ break;
+ default:
+ DP_NOTICE(p_hwfn, false, "Unknown request of image_id %08x\n",
+ image_id);
+ return ECORE_INVAL;
+ }
+
+ /* Learn number of images, then traverse and see if one fits */
+ rc = ecore_mcp_bist_nvm_test_get_num_images(p_hwfn, p_ptt,
+ &num_images);
+ if ((rc != ECORE_SUCCESS) || (!num_images))
+ return ECORE_INVAL;
+
+ for (i = 0; i < num_images; i++) {
+ rc = ecore_mcp_bist_nvm_test_get_image_att(p_hwfn, p_ptt,
+ &image_att, i);
+ if (rc != ECORE_SUCCESS)
+ return ECORE_INVAL;
+
+ if (type == image_att.image_type)
+ break;
+ }
+ if (i == num_images) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
+ "Failed to find nvram image of type %08x\n",
+ image_id);
+ return ECORE_INVAL;
+ }
+
+ /* Validate sizes - both the image's and the supplied buffer's */
+ if (image_att.len <= 4) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
+ "Image [%d] is too small - only %d bytes\n",
+ image_id, image_att.len);
+ return ECORE_INVAL;
+ }
+
+ /* Each NVM image is suffixed by CRC; Upper-layer has no need for it */
+ image_att.len -= 4;
+
+ if (image_att.len > buffer_len) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
+ "Image [%d] is too big - %08x bytes where only %08x are available\n",
+ image_id, image_att.len, buffer_len);
+ return ECORE_NOMEM;
+ }
+
+ return ecore_mcp_nvm_read(p_hwfn->p_dev, image_att.nvm_start_addr,
+ (u8 *)p_buffer, image_att.len);
+}
+
+enum _ecore_status_t
+ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_temperature_info *p_temp_info)
+{
+ struct ecore_temperature_sensor *p_temp_sensor;
+ struct temperature_status_stc *p_mfw_temp_info;
+ struct ecore_mcp_mb_params mb_params;
+ union drv_union_data union_data;
+ u32 val;
+ enum _ecore_status_t rc;
+ u8 i;
+
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = DRV_MSG_CODE_GET_TEMPERATURE;
+ mb_params.p_data_dst = &union_data;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ p_mfw_temp_info = &union_data.temp_info;
+
+ OSAL_BUILD_BUG_ON(ECORE_MAX_NUM_OF_SENSORS != MAX_NUM_OF_SENSORS);
+ p_temp_info->num_sensors = OSAL_MIN_T(u32,
+ p_mfw_temp_info->num_of_sensors,
+ ECORE_MAX_NUM_OF_SENSORS);
+ for (i = 0; i < p_temp_info->num_sensors; i++) {
+ val = p_mfw_temp_info->sensor[i];
+ p_temp_sensor = &p_temp_info->sensors[i];
+ p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
+ SENSOR_LOCATION_SHIFT;
+ p_temp_sensor->threshold_high = (val & THRESHOLD_HIGH_MASK) >>
+ THRESHOLD_HIGH_SHIFT;
+ p_temp_sensor->critical = (val & CRITICAL_TEMPERATURE_MASK) >>
+ CRITICAL_TEMPERATURE_SHIFT;
+ p_temp_sensor->current_temp = (val & CURRENT_TEMP_MASK) >>
+ CURRENT_TEMP_SHIFT;
+ }
+
+ return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_get_mba_versions(
+ struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_mba_vers *p_mba_vers)
+{
+ struct ecore_mcp_nvm_params params;
+ enum _ecore_status_t rc;
+ u32 buf_size;
+
+ OSAL_MEM_ZERO(&params, sizeof(params));
+ params.type = ECORE_MCP_NVM_RD;
+ params.nvm_common.cmd = DRV_MSG_CODE_GET_MBA_VERSION;
+ params.nvm_common.offset = 0;
+ params.nvm_rd.buf = &p_mba_vers->mba_vers[0];
+ params.nvm_rd.buf_size = &buf_size;
+ rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
+
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ if ((params.nvm_common.resp & FW_MSG_CODE_MASK) !=
+ FW_MSG_CODE_NVM_OK)
+ rc = ECORE_UNKNOWN_ERROR;
+
+ if (buf_size != MCP_DRV_NVM_BUF_LEN)
+ rc = ECORE_UNKNOWN_ERROR;
+
+ return rc;
+}
+
+enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u64 *num_events)
+{
+ u32 rsp;
+
+ return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MEM_ECC_EVENTS,
+ 0, &rsp, (u32 *)num_events);
+}
+
+#define ECORE_RESC_ALLOC_VERSION_MAJOR 1
+#define ECORE_RESC_ALLOC_VERSION_MINOR 0
+#define ECORE_RESC_ALLOC_VERSION \
+ ((ECORE_RESC_ALLOC_VERSION_MAJOR << \
+ DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT) | \
+ (ECORE_RESC_ALLOC_VERSION_MINOR << \
+ DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT))
+
+enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct resource_info *p_resc_info,
+ u32 *p_mcp_resp, u32 *p_mcp_param)
+{
+ struct ecore_mcp_mb_params mb_params;
+ union drv_union_data *p_union_data;
+ enum _ecore_status_t rc;
+
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = DRV_MSG_GET_RESOURCE_ALLOC_MSG;
+ mb_params.param = ECORE_RESC_ALLOC_VERSION;
+ p_union_data = (union drv_union_data *)p_resc_info;
+ mb_params.p_data_src = p_union_data;
+ mb_params.p_data_dst = p_union_data;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ *p_mcp_resp = mb_params.mcp_resp;
+ *p_mcp_param = mb_params.mcp_param;
+
+ DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+ "MFW resource_info: version 0x%x, res_id 0x%x, size 0x%x, offset 0x%x, vf_size 0x%x, vf_offset 0x%x, flags 0x%x\n",
+ *p_mcp_param, p_resc_info->res_id, p_resc_info->size,
+ p_resc_info->offset, p_resc_info->vf_size,
+ p_resc_info->vf_offset, p_resc_info->flags);
+
+ return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index 530c0ec..f21f87a 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -115,6 +115,11 @@ struct ecore_mcp_nvm_params {
};
};
+enum ecore_nvm_images {
+ ECORE_NVM_IMAGE_ISCSI_CFG,
+ ECORE_NVM_IMAGE_FCOE_CFG,
+};
+
struct ecore_mcp_drv_version {
u32 version;
u8 name[MCP_DRV_VER_STR_SIZE - 4];
@@ -171,6 +176,35 @@ enum ecore_led_mode {
};
#endif
+struct ecore_temperature_sensor {
+ u8 sensor_location;
+ u8 threshold_high;
+ u8 critical;
+ u8 current_temp;
+};
+
+#define ECORE_MAX_NUM_OF_SENSORS 7
+struct ecore_temperature_info {
+ u32 num_sensors;
+ struct ecore_temperature_sensor sensors[ECORE_MAX_NUM_OF_SENSORS];
+};
+
+enum ecore_mba_img_idx {
+ ECORE_MBA_LEGACY_IDX,
+ ECORE_MBA_PCI3CLP_IDX,
+ ECORE_MBA_PCI3_IDX,
+ ECORE_MBA_FCODE_IDX,
+ ECORE_EFI_X86_IDX,
+ ECORE_EFI_IPF_IDX,
+ ECORE_EFI_EBC_IDX,
+ ECORE_EFI_X64_IDX,
+ ECORE_MAX_NUM_OF_ROMIMG
+};
+
+struct ecore_mba_vers {
+ u32 mba_vers[ECORE_MAX_NUM_OF_ROMIMG];
+};
+
/**
* @brief - returns the link params of the hw function
*
@@ -607,5 +641,126 @@ enum _ecore_status_t ecore_mcp_gpio_read(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u16 gpio, u16 gpio_val);
+/**
+ * @brief Gpio get information
+ *
+ * @param p_hwfn - hw function
+ * @param p_ptt - PTT required for register access
+ * @param gpio - gpio number
+ * @param gpio_direction - gpio is output (0) or input (1)
+ * @param gpio_ctrl - gpio control is uninitialized (0),
+ * path 0 (1), path 1 (2) or shared(3)
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_gpio_info(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u16 gpio, u32 *gpio_direction,
+ u32 *gpio_ctrl);
+
+/**
+ * @brief Bist register test
+ *
+ * @param p_hwfn - hw function
+ * @param p_ptt - PTT required for register access
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_bist_register_test(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
+
+/**
+ * @brief Bist clock test
+ *
+ * @param p_hwfn - hw function
+ * @param p_ptt - PTT required for register access
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
+
+/**
+ * @brief Bist nvm test - get number of images
+ *
+ * @param p_hwfn - hw function
+ * @param p_ptt - PTT required for register access
+ * @param num_images - number of images if operation was
+ * successful. 0 if not.
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t
+ecore_mcp_bist_nvm_test_get_num_images(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u32 *num_images);
+
+/**
+ * @brief Bist nvm test - get image attributes by index
+ *
+ * @param p_hwfn - hw function
+ * @param p_ptt - PTT required for register access
+ * @param p_image_att - Attributes of image
+ * @param image_index - Index of image to get information for
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t
+ecore_mcp_bist_nvm_test_get_image_att(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct bist_nvm_image_att *p_image_att,
+ u32 image_index);
+
+/**
+ * @brief ecore_mcp_get_temperature_info - get the status of the temperature
+ * sensors
+ *
+ * @param p_hwfn - hw function
+ * @param p_ptt - PTT required for register access
+ * @param p_temp_status - A pointer to an ecore_temperature_info structure to
+ * be filled with the temperature data
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t
+ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_temperature_info *p_temp_info);
+/**
+ * @brief Get MBA versions - get MBA sub images versions
+ *
+ * @param p_hwfn - hw function
+ * @param p_ptt - PTT required for register access
+ * @param p_mba_vers - MBA versions array to fill
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_get_mba_versions(
+ struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_mba_vers *p_mba_vers);
+
+/**
+ * @brief Count memory ecc events
+ *
+ * @param p_hwfn - hw function
+ * @param p_ptt - PTT required for register access
+ * @param num_events - number of memory ecc events
+ *
+ * @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
+ */
+enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u64 *num_events);
+
+/**
+ * @brief Sets whether a critical error notification from the MFW is acked
+ * or ignored, thus allowing the MFW crash dump to proceed.
+ *
+ * @param p_dev
+ * @param mdump_enable
+ *
+ */
+void ecore_mcp_mdump_enable(struct ecore_dev *p_dev, bool mdump_enable);
#endif
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6f4b4f8..bdd51d5 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -761,6 +761,13 @@ struct mcp_file_att {
u32 len;
};
+struct bist_nvm_image_att {
+ u32 return_code;
+ u32 image_type; /* Image type */
+ u32 nvm_start_addr; /* NVM address of the image */
+ u32 len; /* Include CRC */
+};
+
#define MCP_DRV_VER_STR_SIZE 16
#define MCP_DRV_VER_STR_SIZE_DWORD (MCP_DRV_VER_STR_SIZE / sizeof(u32))
#define MCP_DRV_NVM_BUF_LEN 32
@@ -783,6 +790,59 @@ struct ocbb_data_stc {
u32 ocsd_req_update_interval;
};
+#define MAX_NUM_OF_SENSORS 7
+#define MFW_SENSOR_LOCATION_INTERNAL 1
+#define MFW_SENSOR_LOCATION_EXTERNAL 2
+#define MFW_SENSOR_LOCATION_SFP 3
+
+#define SENSOR_LOCATION_SHIFT 0
+#define SENSOR_LOCATION_MASK 0x000000ff
+#define THRESHOLD_HIGH_SHIFT 8
+#define THRESHOLD_HIGH_MASK 0x0000ff00
+#define CRITICAL_TEMPERATURE_SHIFT 16
+#define CRITICAL_TEMPERATURE_MASK 0x00ff0000
+#define CURRENT_TEMP_SHIFT 24
+#define CURRENT_TEMP_MASK 0xff000000
+struct temperature_status_stc {
+ u32 num_of_sensors;
+ u32 sensor[MAX_NUM_OF_SENSORS];
+};
+
+enum resource_id_enum {
+ RESOURCE_NUM_SB_E = 0,
+ RESOURCE_NUM_L2_QUEUE_E = 1,
+ RESOURCE_NUM_VPORT_E = 2,
+ RESOURCE_NUM_VMQ_E = 3,
+ RESOURCE_FACTOR_NUM_RSS_PF_E = 4,
+ RESOURCE_FACTOR_RSS_PER_VF_E = 5,
+ RESOURCE_NUM_RL_E = 6,
+ RESOURCE_NUM_PQ_E = 7,
+ RESOURCE_NUM_VF_E = 8,
+ RESOURCE_VFC_FILTER_E = 9,
+ RESOURCE_ILT_E = 10,
+ RESOURCE_CQS_E = 11,
+ RESOURCE_GFT_PROFILES_E = 12,
+ RESOURCE_NUM_TC_E = 13,
+ RESOURCE_NUM_RSS_ENGINES_E = 14,
+ RESOURCE_LL2_QUEUE_E = 15,
+ RESOURCE_RDMA_STATS_QUEUE_E = 16,
+ RESOURCE_MAX_NUM,
+ RESOURCE_NUM_INVALID = 0xFFFFFFFF
+};
+
+/* Resource ID is to be filled by the driver in the MB request
+ * Size, offset & flags to be filled by the MFW in the MB response
+ */
+struct resource_info {
+ enum resource_id_enum res_id;
+ u32 size; /* number of allocated resources */
+ u32 offset; /* Offset of the 1st resource */
+ u32 vf_size;
+ u32 vf_offset;
+ u32 flags;
+#define RESOURCE_ELEMENT_STRICT (1 << 0)
+};
+
union drv_union_data {
u32 ver_str[MCP_DRV_VER_STR_SIZE_DWORD]; /* LOAD_REQ */
struct mcp_mac wol_mac; /* UNLOAD_DONE */
@@ -802,6 +862,9 @@ union drv_union_data {
struct lan_stats_stc lan_stats;
u32 dpdk_rsvd[3];
struct ocbb_data_stc ocbb_info;
+ struct temperature_status_stc temp_info;
+ struct resource_info resource;
+ struct bist_nvm_image_att nvm_image_att;
/* ... */
};
@@ -833,6 +896,8 @@ struct public_drv_mb {
#define DRV_MSG_CODE_NIG_DRAIN 0x30000000
+#define DRV_MSG_GET_RESOURCE_ALLOC_MSG 0x34000000
+
#define DRV_MSG_CODE_INITIATE_FLR 0x02000000
#define DRV_MSG_CODE_VF_DISABLED_DONE 0xc0000000
#define DRV_MSG_CODE_CFG_VF_MSIX 0xc0010000
@@ -873,10 +938,18 @@ struct public_drv_mb {
#define DRV_MSG_CODE_GPIO_READ 0x001c0000
#define DRV_MSG_CODE_GPIO_WRITE 0x001d0000
+#define DRV_MSG_CODE_GPIO_INFO 0x00270000
+
+#define DRV_MSG_CODE_BIST_TEST 0x001e0000
+#define DRV_MSG_CODE_GET_TEMPERATURE 0x001f0000
#define DRV_MSG_CODE_SET_LED_MODE 0x00200000
+#define DRV_MSG_CODE_TIMESTAMP 0x00210000
#define DRV_MSG_CODE_EMPTY_MB 0x00220000
+#define DRV_MSG_CODE_GET_MBA_VERSION 0x00240000
+#define DRV_MSG_CODE_MEM_ECC_EVENTS 0x00260000
+
#define DRV_MSG_SEQ_NUMBER_MASK 0x0000ffff
u32 drv_mb_param;
@@ -991,6 +1064,33 @@ struct public_drv_mb {
#define DRV_MB_PARAM_GPIO_VALUE_SHIFT 16
#define DRV_MB_PARAM_GPIO_VALUE_MASK 0xFFFF0000
+#define DRV_MB_PARAM_GPIO_DIRECTION_SHIFT 16
+#define DRV_MB_PARAM_GPIO_DIRECTION_MASK 0x00FF0000
+#define DRV_MB_PARAM_GPIO_CTRL_SHIFT 24
+#define DRV_MB_PARAM_GPIO_CTRL_MASK 0xFF000000
+
+ /* Resource Allocation params - Driver version support */
+#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK 0xFFFF0000
+#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT 16
+#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK 0x0000FFFF
+#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT 0
+
+#define DRV_MB_PARAM_BIST_UNKNOWN_TEST 0
+#define DRV_MB_PARAM_BIST_REGISTER_TEST 1
+#define DRV_MB_PARAM_BIST_CLOCK_TEST 2
+#define DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES 3
+#define DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX 4
+
+#define DRV_MB_PARAM_BIST_RC_UNKNOWN 0
+#define DRV_MB_PARAM_BIST_RC_PASSED 1
+#define DRV_MB_PARAM_BIST_RC_FAILED 2
+#define DRV_MB_PARAM_BIST_RC_INVALID_PARAMETER 3
+
+#define DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT 0
+#define DRV_MB_PARAM_BIST_TEST_INDEX_MASK 0x000000FF
+#define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_SHIFT 8
+#define DRV_MB_PARAM_BIST_TEST_IMAGE_INDEX_MASK 0x0000FF00
+
u32 fw_mb_header;
#define FW_MSG_CODE_MASK 0xffff0000
#define FW_MSG_CODE_DRV_LOAD_ENGINE 0x10100000
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 11/32] net/qede/base: update base driver
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (9 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 10/32] net/qede: add NIC selftest and query sensor info support Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2021-03-24 14:07 ` Ferruh Yigit
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 12/32] net/qede/base: rename structure and defines Rasesh Mody
` (21 subsequent siblings)
32 siblings, 1 reply; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
This patch updates the base driver and incorporates the changes required
to bring in the new firmware 8.10.9.0.
In addition, it allows the driver to add new functionality that may be
needed in the future.
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
doc/guides/nics/features/qede.ini | 2 +
doc/guides/nics/features/qede_vf.ini | 2 +
doc/guides/nics/qede.rst | 15 +-
drivers/net/qede/base/bcm_osal.h | 6 +-
drivers/net/qede/base/ecore.h | 166 ++-
drivers/net/qede/base/ecore_chain.h | 17 +-
drivers/net/qede/base/ecore_cxt.c | 319 +++++-
drivers/net/qede/base/ecore_cxt.h | 49 +-
drivers/net/qede/base/ecore_cxt_api.h | 15 -
drivers/net/qede/base/ecore_dcbx.c | 581 +++++++++-
drivers/net/qede/base/ecore_dcbx.h | 18 +-
drivers/net/qede/base/ecore_dcbx_api.h | 128 ++-
drivers/net/qede/base/ecore_dev.c | 1551 +++++++++++++++++++-------
drivers/net/qede/base/ecore_dev_api.h | 92 +-
drivers/net/qede/base/ecore_hsi_eth.h | 120 +-
drivers/net/qede/base/ecore_hw.c | 212 ++--
drivers/net/qede/base/ecore_hw.h | 16 +-
drivers/net/qede/base/ecore_init_fw_funcs.c | 324 ++++--
drivers/net/qede/base/ecore_init_fw_funcs.h | 102 +-
drivers/net/qede/base/ecore_init_ops.c | 5 +-
drivers/net/qede/base/ecore_int.c | 271 ++---
drivers/net/qede/base/ecore_int.h | 19 +-
drivers/net/qede/base/ecore_int_api.h | 11 +
drivers/net/qede/base/ecore_iov_api.h | 104 +-
drivers/net/qede/base/ecore_iro.h | 222 ++--
drivers/net/qede/base/ecore_iro_values.h | 108 +-
drivers/net/qede/base/ecore_l2.c | 292 ++---
drivers/net/qede/base/ecore_l2.h | 57 +-
drivers/net/qede/base/ecore_l2_api.h | 9 +-
drivers/net/qede/base/ecore_mcp.c | 350 +++---
drivers/net/qede/base/ecore_mcp.h | 29 +-
drivers/net/qede/base/ecore_mcp_api.h | 81 +-
drivers/net/qede/base/ecore_proto_if.h | 59 +
drivers/net/qede/base/ecore_rt_defs.h | 639 +++++------
drivers/net/qede/base/ecore_sp_commands.c | 85 +-
drivers/net/qede/base/ecore_sp_commands.h | 30 +
drivers/net/qede/base/ecore_spq.c | 167 +--
drivers/net/qede/base/ecore_spq.h | 5 +-
drivers/net/qede/base/ecore_sriov.c | 1596 +++++++++++++++++----------
drivers/net/qede/base/ecore_sriov.h | 149 +--
drivers/net/qede/base/ecore_vf.c | 736 ++++++------
drivers/net/qede/base/ecore_vf.h | 224 +---
drivers/net/qede/base/ecore_vf_api.h | 93 +-
drivers/net/qede/base/ecore_vfpf_if.h | 162 ++-
drivers/net/qede/base/eth_common.h | 203 ++--
drivers/net/qede/base/mcp_public.h | 408 +++++--
drivers/net/qede/base/nvm_cfg.h | 606 +++++++++-
drivers/net/qede/qede_eth_if.c | 1 +
drivers/net/qede/qede_main.c | 20 +-
drivers/net/qede/qede_rxtx.h | 4 +
50 files changed, 6906 insertions(+), 3574 deletions(-)
diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index 0df93a6..7690773 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -19,6 +19,8 @@ VLAN filter = Y
Flow control = Y
CRC offload = Y
VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
Packet type parsing = Y
Basic stats = Y
Extended stats = Y
diff --git a/doc/guides/nics/features/qede_vf.ini b/doc/guides/nics/features/qede_vf.ini
index f925659..aeb20d2 100644
--- a/doc/guides/nics/features/qede_vf.ini
+++ b/doc/guides/nics/features/qede_vf.ini
@@ -20,6 +20,8 @@ VLAN filter = Y
Flow control = Y
CRC offload = Y
VLAN offload = Y
+L3 checksum offload = Y
+L4 checksum offload = Y
Packet type parsing = Y
Basic stats = Y
Extended stats = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 3af755e..d107a7d 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -32,7 +32,7 @@ QEDE Poll Mode Driver
======================
The QEDE poll mode driver library (**librte_pmd_qede**) implements support
-for **QLogic FastLinQ QL4xxxx 25G/40G CNA** family of adapters as well
+for **QLogic FastLinQ QL4xxxx 25G/40G/100G CNA** family of adapters as well
as their virtual functions (VF) in SR-IOV context. It is supported on
several standard Linux distros like RHEL7.x, SLES12.x and Ubuntu.
It is compile-tested under FreeBSD OS.
@@ -55,14 +55,15 @@ Supported Features
- TSS
- Multiple MAC address
- Default pause flow control
-- SR-IOV VF for 25G/40G modes
+- SR-IOV VF
+- MTU change
+- Multiprocess aware
Non-supported Features
----------------------
- Scatter-Gather Rx/Tx frames
- Unequal number of Rx/Tx queues
-- MTU change (dynamic)
- SR-IOV PF
- Tunneling offloads
- Reload of the PMD after a non-graceful termination
@@ -75,10 +76,10 @@ Supported QLogic Adapters
Prerequisites
-------------
-- Requires firmware version **8.7.x.** and management firmware
- version **8.7.x or higher**. Firmware may be available
+- Requires firmware version **8.10.x** and management firmware
+ version **8.10.x or higher**. Firmware may be available
inbox in certain newer Linux distros under the standard directory
- ``E.g. /lib/firmware/qed/qed_init_values_zipped-8.7.7.0.bin``
+ ``E.g. /lib/firmware/qed/qed_init_values_zipped-8.10.9.0.bin``
- If the required firmware files are not available then visit
`QLogic Driver Download Center <http://driverdownloads.qlogic.com>`_.
@@ -120,7 +121,7 @@ enabling debugging options may affect system performance.
- ``CONFIG_RTE_LIBRTE_QEDE_FW`` (default **""**)
Gives absolute path of firmware file.
- ``Eg: "/lib/firmware/qed/qed_init_values_zipped-8.7.7.0.bin"``
+ ``Eg: "/lib/firmware/qed/qed_init_values_zipped-8.10.9.0.bin"``
Empty string indicates driver will pick up the firmware file
from the default location.
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 9d84ae2..0b446f2 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -9,8 +9,6 @@
#ifndef __BCM_OSAL_H
#define __BCM_OSAL_H
-#include <string.h>
-
#include <rte_byteorder.h>
#include <rte_spinlock.h>
#include <rte_malloc.h>
@@ -43,6 +41,8 @@ void qed_link_update(struct ecore_hwfn *hwfn);
#endif
#endif
+#define OSAL_WARN(arg1, arg2, arg3, ...) (0)
+
/* Memory Types */
typedef uint8_t u8;
typedef uint16_t u16;
@@ -330,6 +330,8 @@ u32 qede_find_first_zero_bit(unsigned long *, u32);
#define OSAL_IOV_VF_VPORT_UPDATE(hwfn, vfid, p_params, p_mask) 0
#define OSAL_VF_UPDATE_ACQUIRE_RESC_RESP(_dev_p, _resc_resp) 0
#define OSAL_IOV_GET_OS_TYPE() 0
+#define OSAL_IOV_VF_MSG_TYPE(hwfn, vfid, vf_msg_type) 0
+#define OSAL_IOV_PF_RESP_TYPE(hwfn, vfid, pf_resp_type) 0
u32 qede_unzip_data(struct ecore_hwfn *p_hwfn, u32 input_len,
u8 *input_buf, u32 max_size, u8 *unzip_buf);
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 9f456e3..874c3a3 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -29,12 +29,13 @@
#include "mcp_public.h"
#define MAX_HWFNS_PER_DEVICE (4)
-#define NAME_SIZE 64 /* @DPDK */
+#define NAME_SIZE 128 /* @DPDK */
#define VER_SIZE 16
-/* @DPDK ARRAY_DECL */
#define ECORE_WFQ_UNIT 100
#include "../qede_logs.h" /* @DPDK */
+#define ISCSI_BDQ_ID(_port_id) (_port_id)
+#define FCOE_BDQ_ID(_port_id) (_port_id + 2)
/* Constants */
#define ECORE_WID_SIZE (1024)
@@ -153,8 +154,11 @@ enum DP_MODULE {
ECORE_MSG_IOV = 0x80000,
ECORE_MSG_SP = 0x100000,
ECORE_MSG_STORAGE = 0x200000,
+ ECORE_MSG_OOO = 0x200000,
ECORE_MSG_CXT = 0x800000,
+ ECORE_MSG_LL2 = 0x1000000,
ECORE_MSG_ILT = 0x2000000,
+ ECORE_MSG_RDMA = 0x4000000,
ECORE_MSG_DEBUG = 0x8000000,
/* to be added...up to 0x8000000 */
};
@@ -174,6 +178,7 @@ struct ecore_sb_attn_info;
struct ecore_cxt_mngr;
struct ecore_dma_mem;
struct ecore_sb_sp_info;
+struct ecore_ll2_info;
struct ecore_igu_info;
struct ecore_mcp_info;
struct ecore_dcbx_info;
@@ -196,6 +201,7 @@ enum ecore_tunn_clss {
ECORE_TUNN_CLSS_MAC_VNI,
ECORE_TUNN_CLSS_INNER_MAC_VLAN,
ECORE_TUNN_CLSS_INNER_MAC_VNI,
+ ECORE_TUNN_CLSS_MAC_VLAN_DUAL_STAGE,
MAX_ECORE_TUNN_CLSS,
};
@@ -228,35 +234,16 @@ struct ecore_tunn_update_params {
u8 tunn_clss_ipgre;
};
-struct ecore_hw_sriov_info {
- /* standard SRIOV capability fields, mostly for debugging */
- int pos; /* capability position */
- int nres; /* number of resources */
- u32 cap; /* SR-IOV Capabilities */
- u16 ctrl; /* SR-IOV Control */
- u16 total_vfs; /* total VFs associated with the PF */
- u16 num_vfs; /* number of vfs that have been started */
- u64 active_vfs[3]; /* bitfield of active vfs */
-#define ECORE_IS_VF_ACTIVE(_p_dev, _rel_vf_id) \
- (!!(_p_dev->sriov_info.active_vfs[_rel_vf_id / 64] & \
- (1ULL << (_rel_vf_id % 64))))
- u16 initial_vfs; /* initial VFs associated with the PF */
- u16 nr_virtfn; /* number of VFs available */
- u16 offset; /* first VF Routing ID offset */
- u16 stride; /* following VF stride */
- u16 vf_device_id; /* VF device id */
- u32 pgsz; /* page size for BAR alignment */
- u8 link; /* Function Dependency Link */
-
- bool b_hw_channel; /* Whether PF uses the HW-channel */
-};
-
/* The PCI personality is not quite synonymous to protocol ID:
* 1. All personalities need CORE connections
- * 2. The Ethernet personality may support also the RoCE protocol
+ * 2. The Ethernet personality may also support the RoCE/iWARP protocol
*/
enum ecore_pci_personality {
ECORE_PCI_ETH,
+ ECORE_PCI_FCOE,
+ ECORE_PCI_ISCSI,
+ ECORE_PCI_ETH_ROCE,
+ ECORE_PCI_IWARP,
ECORE_PCI_DEFAULT /* default in shmem */
};
@@ -269,11 +256,10 @@ struct ecore_qm_iids {
#define MAX_PF_PER_PORT 8
-/*@@@TBD MK RESC: need to remove and use MCP interface instead */
/* HW / FW resources, output of features supported below, most information
* is received from MFW.
*/
-enum ECORE_RESOURCES {
+enum ecore_resources {
ECORE_SB,
ECORE_L2_QUEUE,
ECORE_VPORT,
@@ -282,24 +268,30 @@ enum ECORE_RESOURCES {
ECORE_RL,
ECORE_MAC,
ECORE_VLAN,
+ ECORE_RDMA_CNQ_RAM,
ECORE_ILT,
+ ECORE_LL2_QUEUE,
ECORE_CMDQS_CQS,
- ECORE_MAX_RESC,
+ ECORE_RDMA_STATS_QUEUE,
+ ECORE_MAX_RESC, /* must be last */
};
/* Features that require resources, given as input to the resource management
* algorithm, the output are the resources above
*/
-enum ECORE_FEATURE {
+enum ecore_feature {
ECORE_PF_L2_QUE,
ECORE_PF_TC,
ECORE_VF,
ECORE_EXTRA_VF_QUE,
ECORE_VMQ,
+ ECORE_RDMA_CNQ,
+ ECORE_ISCSI_CQ,
+ ECORE_FCOE_CQ,
ECORE_MAX_FEATURES,
};
-enum ECORE_PORT_MODE {
+enum ecore_port_mode {
ECORE_PORT_MODE_DE_2X40G,
ECORE_PORT_MODE_DE_2X50G,
ECORE_PORT_MODE_DE_1X100G,
@@ -308,11 +300,16 @@ enum ECORE_PORT_MODE {
ECORE_PORT_MODE_DE_4X20G,
ECORE_PORT_MODE_DE_1X40G,
ECORE_PORT_MODE_DE_2X25G,
- ECORE_PORT_MODE_DE_1X25G
+ ECORE_PORT_MODE_DE_1X25G,
+ ECORE_PORT_MODE_DE_4X25G,
};
enum ecore_dev_cap {
ECORE_DEV_CAP_ETH,
+ ECORE_DEV_CAP_FCOE,
+ ECORE_DEV_CAP_ISCSI,
+ ECORE_DEV_CAP_ROCE,
+ ECORE_DEV_CAP_IWARP
};
#ifndef __EXTRACT__LINUX__
@@ -341,10 +338,20 @@ struct ecore_hw_info {
RESC_NUM(_p_hwfn, resc))
#define FEAT_NUM(_p_hwfn, resc) ((_p_hwfn)->hw_info.feat_num[resc])
- u8 num_tc;
+ /* Amount of traffic classes HW supports */
+ u8 num_hw_tc;
+
+/* Amount of TCs which should be active according to DCBx or upper layer driver
+ * configuration
+ */
+
+ u8 num_active_tc;
+
+ /* Traffic class used for tcp out of order traffic */
u8 ooo_tc;
+
+ /* The traffic class used by PF for its offloaded protocol */
u8 offload_tc;
- u8 non_offload_tc;
u32 concrete_fid;
u16 opaque_fid;
@@ -352,10 +359,14 @@ struct ecore_hw_info {
u32 part_num[4];
unsigned char hw_mac_addr[ETH_ALEN];
+ u64 node_wwn; /* For FCoE only */
+ u64 port_wwn; /* For FCoE only */
+
+ u16 num_iscsi_conns;
+ u16 num_fcoe_conns;
struct ecore_igu_info *p_igu_info;
/* Sriov */
- u32 first_vf_in_pf;
u8 max_chains_per_vf;
u32 port_mode;
@@ -429,6 +440,7 @@ struct ecore_qm_info {
u8 pf_wfq;
u32 pf_rl;
struct ecore_wfq_data *wfq_data;
+ u8 num_pf_rls;
};
struct storm_stats {
@@ -466,6 +478,7 @@ struct ecore_hwfn {
bool hw_init_done;
u8 num_funcs_on_engine;
+ u8 enabled_func_idx;
/* BAR access */
void OSAL_IOMEM *regview;
@@ -502,9 +515,17 @@ struct ecore_hwfn {
struct ecore_sb_attn_info *p_sb_attn;
/* Protocol related */
+ bool using_ll2;
+ struct ecore_ll2_info *p_ll2_info;
struct ecore_ooo_info *p_ooo_info;
+ struct ecore_iscsi_info *p_iscsi_info;
+ struct ecore_fcoe_info *p_fcoe_info;
+ struct ecore_rdma_info *p_rdma_info;
struct ecore_pf_params pf_params;
+ bool b_rdma_enabled_in_prs;
+ u32 rdma_prs_search_reg;
+
/* Array of sb_info of all status blocks */
struct ecore_sb_info *sbs_info[MAX_SB_PER_PF_MIMD];
u16 num_sbs;
@@ -547,6 +568,10 @@ struct ecore_hwfn {
* calculate the
* doorbell address
*/
+
+ /* If one of the following is set then EDPM shouldn't be used */
+ u8 dcbx_no_edpm;
+ u8 db_bar_no_edpm;
};
#ifndef __EXTRACT__LINUX__
@@ -557,6 +582,23 @@ enum ecore_mf_mode {
};
#endif
+/* @DPDK */
+struct ecore_dbg_feature {
+ u8 *dump_buf;
+ u32 buf_size;
+ u32 dumped_dwords;
+};
+
+enum qed_dbg_features {
+ DBG_FEATURE_BUS,
+ DBG_FEATURE_GRC,
+ DBG_FEATURE_IDLE_CHK,
+ DBG_FEATURE_MCP_TRACE,
+ DBG_FEATURE_REG_FIFO,
+ DBG_FEATURE_PROTECTION_OVERRIDE,
+ DBG_FEATURE_NUM
+};
+
struct ecore_dev {
u32 dp_module;
u8 dp_level;
@@ -608,7 +650,7 @@ struct ecore_dev {
(CHIP_REV_IS_EMUL_B0(_p_dev) || \
CHIP_REV_IS_FPGA_B0(_p_dev) || \
(_p_dev)->chip_rev == 1)
-#define CHIP_REV_IS_ASIC(_p_dev) (!CHIP_REV_IS_SLOW(_p_dev))
+ #define CHIP_REV_IS_ASIC(_p_dev) !CHIP_REV_IS_SLOW(_p_dev)
#else
#define CHIP_REV_IS_A0(_p_dev) (!(_p_dev)->chip_rev)
#define CHIP_REV_IS_B0(_p_dev) ((_p_dev)->chip_rev == 1)
@@ -630,12 +672,14 @@ struct ecore_dev {
enum ecore_mf_mode mf_mode;
#define IS_MF_DEFAULT(_p_hwfn) \
(((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_DEFAULT)
-#define IS_MF_SI(_p_hwfn) (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_NPAR)
-#define IS_MF_SD(_p_hwfn) (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_OVLAN)
+ #define IS_MF_SI(_p_hwfn) \
+ (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_NPAR)
+ #define IS_MF_SD(_p_hwfn) \
+ (((_p_hwfn)->p_dev)->mf_mode == ECORE_MF_OVLAN)
int pcie_width;
int pcie_speed;
- u8 ver_str[VER_SIZE];
+ u8 ver_str[NAME_SIZE]; /* @DPDK */
/* Add MF related configuration */
u8 mcp_rev;
u8 boot_mode;
@@ -644,8 +688,8 @@ struct ecore_dev {
u32 int_mode;
enum ecore_coalescing_mode int_coalescing_mode;
- u8 rx_coalesce_usecs;
- u8 tx_coalesce_usecs;
+ u16 rx_coalesce_usecs;
+ u16 tx_coalesce_usecs;
/* Start Bar offset of first hwfn */
void OSAL_IOMEM *regview;
@@ -665,13 +709,20 @@ struct ecore_dev {
struct ecore_hwfn hwfns[MAX_HWFNS_PER_DEVICE];
/* SRIOV */
- struct ecore_hw_sriov_info sriov_info;
+ struct ecore_hw_sriov_info *p_iov_info;
+#define IS_ECORE_SRIOV(p_dev) (!!(p_dev)->p_iov_info)
+ bool b_hw_channel;
+
unsigned long tunn_mode;
-#define IS_ECORE_SRIOV(edev) (!!((edev)->sriov_info.total_vfs))
+
bool b_is_vf;
u32 drv_type;
+ u32 rdma_max_sge;
+ u32 rdma_max_inline;
+ u32 rdma_max_srq_sge;
+
struct ecore_eth_stats *reset_stats;
struct ecore_fw_data *fw_data;
@@ -680,6 +731,13 @@ struct ecore_dev {
/* Recovery */
bool recov_in_prog;
+/* Indicates whether attentions should be prevented from being reasserted */
+ bool attn_clr_en;
+
+ /* Indicates if the reg_fifo is checked after any register access */
+ bool chk_reg_fifo;
+
#ifndef ASIC_ONLY
bool b_is_emul_full;
#endif
@@ -689,6 +747,9 @@ struct ecore_dev {
u64 fw_len;
#endif
+ /* @DPDK */
+ struct ecore_dbg_feature dbg_features[DBG_FEATURE_NUM];
+ u8 engine_for_debug;
};
#define NUM_OF_VFS(dev) (ECORE_IS_BB(dev) ? MAX_NUM_VFS_BB \
@@ -702,12 +763,14 @@ struct ecore_dev {
#define NUM_OF_ENG_PFS(dev) (ECORE_IS_BB(dev) ? MAX_NUM_PFS_BB \
: MAX_NUM_PFS_K2)
+#ifndef REAL_ASIC_ONLY
#define ENABLE_EAGLE_ENG1_WORKAROUND(p_hwfn) ( \
(ECORE_IS_BB_A0(p_hwfn->p_dev)) && \
(ECORE_PATH_ID(p_hwfn) == 1) && \
((p_hwfn->hw_info.port_mode == ECORE_PORT_MODE_DE_2X40G) || \
(p_hwfn->hw_info.port_mode == ECORE_PORT_MODE_DE_2X50G) || \
(p_hwfn->hw_info.port_mode == ECORE_PORT_MODE_DE_2X25G)))
+#endif
/**
* @brief ecore_concrete_to_sw_fid - get the sw function id from
@@ -736,18 +799,6 @@ static OSAL_INLINE u8 ecore_concrete_to_sw_fid(struct ecore_dev *p_dev,
#define PURE_LB_TC 8
#define OOO_LB_TC 9
-static OSAL_INLINE u16 ecore_sriov_get_next_vf(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- u16 i;
-
- for (i = rel_vf_id; i < p_hwfn->p_dev->sriov_info.total_vfs; i++)
- if (ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, i))
- return i;
-
- return p_hwfn->p_dev->sriov_info.total_vfs;
-}
-
int ecore_configure_vport_wfq(struct ecore_dev *p_dev, u16 vp_id, u32 rate);
void ecore_configure_vp_wfq_on_link_change(struct ecore_dev *p_dev,
u32 min_pf_rate);
@@ -758,11 +809,6 @@ void ecore_clean_wfq_db(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
int ecore_device_num_engines(struct ecore_dev *p_dev);
int ecore_device_num_ports(struct ecore_dev *p_dev);
-#define ecore_for_each_vf(_p_hwfn, _i) \
- for (_i = ecore_sriov_get_next_vf(_p_hwfn, 0); \
- _i < _p_hwfn->p_dev->sriov_info.total_vfs; \
- _i = ecore_sriov_get_next_vf(_p_hwfn, _i + 1))
-
#define ECORE_LEADING_HWFN(dev) (&dev->hwfns[0])
#endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_chain.h b/drivers/net/qede/base/ecore_chain.h
index 56b7b4d..9ad1874 100644
--- a/drivers/net/qede/base/ecore_chain.h
+++ b/drivers/net/qede/base/ecore_chain.h
@@ -118,6 +118,8 @@ struct ecore_chain {
u16 next_page_mask;
struct ecore_chain_pbl pbl;
+
+ void *dp_ctx;
};
#define ECORE_CHAIN_PBL_ENTRY_SIZE (8)
@@ -542,12 +544,13 @@ static OSAL_INLINE void ecore_chain_reset(struct ecore_chain *p_chain)
* @param intended_use
* @param mode
* @param cnt_type
+ * @param dp_ctx
*/
static OSAL_INLINE void
ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
enum ecore_chain_use_mode intended_use,
enum ecore_chain_mode mode,
- enum ecore_chain_cnt_type cnt_type)
+ enum ecore_chain_cnt_type cnt_type, void *dp_ctx)
{
/* chain fixed parameters */
p_chain->p_virt_addr = OSAL_NULL;
@@ -571,6 +574,8 @@ ecore_chain_init_params(struct ecore_chain *p_chain, u32 page_cnt, u8 elem_size,
p_chain->pbl.p_phys_table = 0;
p_chain->pbl.p_virt_table = OSAL_NULL;
p_chain->pbl.pp_virt_addr_tbl = OSAL_NULL;
+
+ p_chain->dp_ctx = dp_ctx;
}
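Since the signature change ripples to every caller, here is a minimal,
illustrative call site (the chain, page count, element size and enum
choices are examples only; dp_ctx is assumed to be the ecore_dev pointer
used for debug prints):

	ecore_chain_init_params(&txq->tx_pbl,
				1 /* page_cnt */,
				sizeof(union eth_tx_bd_types),
				ECORE_CHAIN_USE_TO_CONSUME_PRODUCE,
				ECORE_CHAIN_MODE_PBL,
				ECORE_CHAIN_CNT_TYPE_U16,
				p_hwfn->p_dev /* dp_ctx */);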
/**
@@ -723,4 +728,14 @@ static OSAL_INLINE void ecore_chain_pbl_zero_mem(struct ecore_chain *p_chain)
ECORE_CHAIN_PAGE_SIZE);
}
+int ecore_chain_print(struct ecore_chain *p_chain, char *buffer,
+ u32 buffer_size, u32 *element_indx, u32 stop_indx,
+ bool print_metadata,
+ int (*func_ptr_print_element)(struct ecore_chain *p_chain,
+ void *p_element,
+ char *buffer),
+ int (*func_ptr_print_metadata)(struct ecore_chain
+ *p_chain,
+ char *buffer));
+
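A sketch of an element-print callback matching the new prototype (the
element layout and the sprintf-style return contract are assumptions):

	/* Sketch only: print an opaque element's address and return the
	 * number of characters written, sprintf-style.
	 */
	static int qede_print_chain_elem(struct ecore_chain *p_chain,
					 void *p_element, char *buffer)
	{
		return OSAL_SPRINTF(buffer, "element %p\n", p_element);
	}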
#endif /* __ECORE_CHAIN_H__ */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 22d0b25..3dd953d 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -18,6 +18,7 @@
#include "ecore_cxt.h"
#include "ecore_hw.h"
#include "ecore_dev_api.h"
+#include "ecore_sriov.h"
/* Max number of connection types in HW (DQ/CDU etc.) */
#define MAX_CONN_TYPES PROTOCOLID_COMMON
@@ -60,6 +61,14 @@ union conn_context {
struct eth_conn_context eth_ctx;
};
+/* TYPE-0 task context - iSCSI, FCOE */
+union type0_task_context {
+};
+
+/* TYPE-1 task context - ROCE */
+union type1_task_context {
+};
+
struct src_ent {
u8 opaque[56];
u64 next;
@@ -71,6 +80,14 @@ struct src_ent {
#define CONN_CXT_SIZE(p_hwfn) \
ALIGNED_TYPE_SIZE(union conn_context, p_hwfn)
+#define SRQ_CXT_SIZE (sizeof(struct regpair) * 8) /* @DPDK */
+
+#define TYPE0_TASK_CXT_SIZE(p_hwfn) \
+ ALIGNED_TYPE_SIZE(union type0_task_context, p_hwfn)
+
+/* Alignment is inherent to the type1_task_context structure */
+#define TYPE1_TASK_CXT_SIZE(p_hwfn) sizeof(union type1_task_context)
+
/* PF per protocol configuration object */
#define TASK_SEGMENTS (NUM_TASK_PF_SEGMENTS + NUM_TASK_VF_SEGMENTS)
#define TASK_SEGMENT_VF (NUM_TASK_PF_SEGMENTS)
@@ -96,6 +113,7 @@ struct ecore_conn_type_cfg {
#define ILT_CLI_PF_BLOCKS (1 + NUM_TASK_PF_SEGMENTS * 2)
#define ILT_CLI_VF_BLOCKS (1 + NUM_TASK_VF_SEGMENTS * 2)
#define CDUC_BLK (0)
+#define SRQ_BLK (0)
#define CDUT_SEG_BLK(n) (1 + (u8)(n))
#define CDUT_FL_SEG_BLK(n, X) (1 + (n) + NUM_TASK_##X##_SEGMENTS)
@@ -105,6 +123,7 @@ enum ilt_clients {
ILT_CLI_QM,
ILT_CLI_TM,
ILT_CLI_SRC,
+ ILT_CLI_TSDM,
ILT_CLI_MAX
};
@@ -172,6 +191,9 @@ struct ecore_cxt_mngr {
*/
u32 vf_count;
+ /* total number of SRQ's for this hwfn */
+ u32 srq_count;
+
/* Acquired CIDs */
struct ecore_cid_acquired_map acquired[MAX_CONN_TYPES];
@@ -179,6 +201,9 @@ struct ecore_cxt_mngr {
struct ecore_dma_mem *ilt_shadow;
u32 pf_start_line;
+ /* Mutex for a dynamic ILT allocation */
+ osal_mutex_t mutex;
+
/* SRC T2 */
struct ecore_dma_mem *t2;
u32 t2_num_pages;
@@ -197,6 +222,11 @@ static OSAL_INLINE bool tm_cid_proto(enum protocol_type type)
return type == PROTOCOLID_TOE;
}
+static bool tm_tid_proto(enum protocol_type type)
+{
+ return type == PROTOCOLID_FCOE;
+}
+
/* counts the iids for the CDU/CDUC ILT client configuration */
struct ecore_cdu_iids {
u32 pf_cids;
@@ -255,6 +285,22 @@ static OSAL_INLINE void ecore_cxt_tm_iids(struct ecore_cxt_mngr *p_mngr,
iids->pf_cids += p_cfg->cid_count;
iids->per_vf_cids += p_cfg->cids_per_vf;
}
+
+ if (tm_tid_proto(i)) {
+ struct ecore_tid_seg *segs = p_cfg->tid_seg;
+
+ /* for each segment there is at most one
+ * protocol for which count is not 0.
+ */
+ for (j = 0; j < NUM_TASK_PF_SEGMENTS; j++)
+ iids->pf_tids[j] += segs[j].count;
+
+ /* The last array element is for the VFs. As with the PF
+ * segments, there can be only one protocol for which
+ * this value is not 0.
+ */
+ iids->per_vf_tids += segs[NUM_TASK_PF_SEGMENTS].count;
+ }
}
iids->pf_cids = ROUNDUP(iids->pf_cids, TM_ALIGN);
@@ -317,7 +363,7 @@ static struct ecore_tid_seg *ecore_cxt_tid_seg_info(struct ecore_hwfn *p_hwfn,
}
/* set the iids (cid/tid) count per protocol */
-void ecore_cxt_set_proto_cid_count(struct ecore_hwfn *p_hwfn,
+static void ecore_cxt_set_proto_cid_count(struct ecore_hwfn *p_hwfn,
enum protocol_type type,
u32 cid_count, u32 vf_cid_cnt)
{
@@ -343,7 +389,7 @@ u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
return p_hwfn->p_cxt_mngr->acquired[type].start_cid;
}
-static u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn,
+u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn,
enum protocol_type type)
{
u32 cnt = 0;
@@ -671,12 +717,27 @@ enum _ecore_status_t ecore_cxt_cfg_ilt_compute(struct ecore_hwfn *p_hwfn)
ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
ILT_CLI_TM);
- p_cli->pf_total_lines = curr_line - p_blk->start_line;
for (i = 1; i < p_mngr->vf_count; i++) {
ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
ILT_CLI_TM);
}
+
+ p_cli->vf_total_lines = curr_line - p_blk->start_line;
+ }
+
+ /* TSDM (SRQ CONTEXT) */
+ total = ecore_cxt_get_srq_count(p_hwfn);
+
+ if (total) {
+ p_cli = &p_mngr->clients[ILT_CLI_TSDM];
+ p_blk = &p_cli->pf_blks[SRQ_BLK];
+ ecore_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
+ total * SRQ_CXT_SIZE, SRQ_CXT_SIZE);
+
+ ecore_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
+ ILT_CLI_TSDM);
+ p_cli->pf_total_lines = curr_line - p_blk->start_line;
}
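For sizing intuition: SRQ_CXT_SIZE is sizeof(struct regpair) * 8, i.e.
64 bytes with struct regpair being two 32-bit words, so a default 32K ILT
page covers 32768 / 64 = 512 SRQ contexts and the block fill above
consumes roughly total / 512 ILT lines.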
if (curr_line - p_hwfn->p_cxt_mngr->pf_start_line >
@@ -788,7 +849,7 @@ static enum _ecore_status_t ecore_cxt_src_t2_alloc(struct ecore_hwfn *p_hwfn)
val = 0;
entries[j].next = OSAL_CPU_TO_BE64(val);
- conn_num -= ent_per_page;
+ conn_num -= ent_num;
}
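This one-liner is a genuine fix: on the final T2 page ent_num (apparently
the per-page entry count clamped to the remaining connections) can be
smaller than ent_per_page, so subtracting ent_per_page mis-tracked the
remaining connection count.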
return ECORE_SUCCESS;
@@ -847,7 +908,7 @@ ecore_ilt_blk_alloc(struct ecore_hwfn *p_hwfn,
u32 lines, line, sz_left, lines_to_skip = 0;
/* Special handling for RoCE that supports dynamic allocation */
- if (ilt_client == ILT_CLI_CDUT)
+ if (ilt_client == ILT_CLI_CDUT || ilt_client == ILT_CLI_TSDM)
return ECORE_SUCCESS;
lines_to_skip = p_blk->dynamic_line_cnt;
@@ -994,6 +1055,7 @@ cid_map_fail:
enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
{
+ struct ecore_ilt_client_cfg *clients;
struct ecore_cxt_mngr *p_mngr;
u32 i;
@@ -1005,35 +1067,48 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
}
/* Initialize ILT client registers */
- p_mngr->clients[ILT_CLI_CDUC].first.reg = ILT_CFG_REG(CDUC, FIRST_ILT);
- p_mngr->clients[ILT_CLI_CDUC].last.reg = ILT_CFG_REG(CDUC, LAST_ILT);
- p_mngr->clients[ILT_CLI_CDUC].p_size.reg = ILT_CFG_REG(CDUC, P_SIZE);
+ clients = p_mngr->clients;
+ clients[ILT_CLI_CDUC].first.reg = ILT_CFG_REG(CDUC, FIRST_ILT);
+ clients[ILT_CLI_CDUC].last.reg = ILT_CFG_REG(CDUC, LAST_ILT);
+ clients[ILT_CLI_CDUC].p_size.reg = ILT_CFG_REG(CDUC, P_SIZE);
+
+ clients[ILT_CLI_QM].first.reg = ILT_CFG_REG(QM, FIRST_ILT);
+ clients[ILT_CLI_QM].last.reg = ILT_CFG_REG(QM, LAST_ILT);
+ clients[ILT_CLI_QM].p_size.reg = ILT_CFG_REG(QM, P_SIZE);
- p_mngr->clients[ILT_CLI_QM].first.reg = ILT_CFG_REG(QM, FIRST_ILT);
- p_mngr->clients[ILT_CLI_QM].last.reg = ILT_CFG_REG(QM, LAST_ILT);
- p_mngr->clients[ILT_CLI_QM].p_size.reg = ILT_CFG_REG(QM, P_SIZE);
+ clients[ILT_CLI_TM].first.reg = ILT_CFG_REG(TM, FIRST_ILT);
+ clients[ILT_CLI_TM].last.reg = ILT_CFG_REG(TM, LAST_ILT);
+ clients[ILT_CLI_TM].p_size.reg = ILT_CFG_REG(TM, P_SIZE);
- p_mngr->clients[ILT_CLI_TM].first.reg = ILT_CFG_REG(TM, FIRST_ILT);
- p_mngr->clients[ILT_CLI_TM].last.reg = ILT_CFG_REG(TM, LAST_ILT);
- p_mngr->clients[ILT_CLI_TM].p_size.reg = ILT_CFG_REG(TM, P_SIZE);
+ clients[ILT_CLI_SRC].first.reg = ILT_CFG_REG(SRC, FIRST_ILT);
+ clients[ILT_CLI_SRC].last.reg = ILT_CFG_REG(SRC, LAST_ILT);
+ clients[ILT_CLI_SRC].p_size.reg = ILT_CFG_REG(SRC, P_SIZE);
- p_mngr->clients[ILT_CLI_SRC].first.reg = ILT_CFG_REG(SRC, FIRST_ILT);
- p_mngr->clients[ILT_CLI_SRC].last.reg = ILT_CFG_REG(SRC, LAST_ILT);
- p_mngr->clients[ILT_CLI_SRC].p_size.reg = ILT_CFG_REG(SRC, P_SIZE);
+ clients[ILT_CLI_CDUT].first.reg = ILT_CFG_REG(CDUT, FIRST_ILT);
+ clients[ILT_CLI_CDUT].last.reg = ILT_CFG_REG(CDUT, LAST_ILT);
+ clients[ILT_CLI_CDUT].p_size.reg = ILT_CFG_REG(CDUT, P_SIZE);
- p_mngr->clients[ILT_CLI_CDUT].first.reg = ILT_CFG_REG(CDUT, FIRST_ILT);
- p_mngr->clients[ILT_CLI_CDUT].last.reg = ILT_CFG_REG(CDUT, LAST_ILT);
- p_mngr->clients[ILT_CLI_CDUT].p_size.reg = ILT_CFG_REG(CDUT, P_SIZE);
+ clients[ILT_CLI_TSDM].first.reg = ILT_CFG_REG(TSDM, FIRST_ILT);
+ clients[ILT_CLI_TSDM].last.reg = ILT_CFG_REG(TSDM, LAST_ILT);
+ clients[ILT_CLI_TSDM].p_size.reg = ILT_CFG_REG(TSDM, P_SIZE);
/* default ILT page size for all clients is 32K */
for (i = 0; i < ILT_CLI_MAX; i++)
p_mngr->clients[i].p_size.val = ILT_DEFAULT_HW_P_SIZE;
- /* Initialize task sizes */
+ /* Because the iSCSI/FCoE files were removed, the size of
+ * union type0_task_context is 0, so the task sizes are
+ * hardcoded for now.
+ */
p_mngr->task_type_size[0] = 512; /* @DPDK */
p_mngr->task_type_size[1] = 128; /* @DPDK */
- p_mngr->vf_count = p_hwfn->p_dev->sriov_info.total_vfs;
+ if (p_hwfn->p_dev->p_iov_info)
+ p_mngr->vf_count = p_hwfn->p_dev->p_iov_info->total_vfs;
+
+ /* Initialize the dynamic ILT allocation mutex */
+ OSAL_MUTEX_ALLOC(p_hwfn, &p_mngr->mutex);
+ OSAL_MUTEX_INIT(&p_mngr->mutex);
+
/* Set the cxt manager pointer prior to further allocations */
p_hwfn->p_cxt_mngr = p_mngr;
@@ -1080,6 +1155,7 @@ void ecore_cxt_mngr_free(struct ecore_hwfn *p_hwfn)
ecore_cid_map_free(p_hwfn);
ecore_cxt_src_t2_free(p_hwfn);
ecore_ilt_shadow_free(p_hwfn);
+ OSAL_MUTEX_DEALLOC(&p_mngr->mutex);
OSAL_FREE(p_hwfn->p_dev, p_hwfn->p_cxt_mngr);
p_hwfn->p_cxt_mngr = OSAL_NULL;
@@ -1383,13 +1459,16 @@ static void ecore_ilt_vf_bounds_init(struct ecore_hwfn *p_hwfn)
u32 blk_factor;
/* For simplicity we set the 'block' to be an ILT page */
+ if (p_hwfn->p_dev->p_iov_info) {
+ struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info;
+
STORE_RT_REG(p_hwfn,
PSWRQ2_REG_VF_BASE_RT_OFFSET,
- p_hwfn->hw_info.first_vf_in_pf);
+ p_iov->first_vf_in_pf);
STORE_RT_REG(p_hwfn,
PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET,
- p_hwfn->hw_info.first_vf_in_pf +
- p_hwfn->p_dev->sriov_info.total_vfs);
+ p_iov->first_vf_in_pf + p_iov->total_vfs);
+ }
p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
blk_factor = OSAL_LOG2(ILT_PAGE_IN_BYTES(p_cli->p_size.val) >> 10);
@@ -1543,11 +1622,11 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
SET_FIELD(cfg_word, TM_CFG_NUM_IDS, tm_iids.per_vf_cids);
SET_FIELD(cfg_word, TM_CFG_PRE_SCAN_OFFSET, 0);
SET_FIELD(cfg_word, TM_CFG_PARENT_PF, p_hwfn->rel_pf_id);
- SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0);
+ SET_FIELD(cfg_word, TM_CFG_CID_PRE_SCAN_ROWS, 0); /* scan all */
rt_reg = TM_REG_CONFIG_CONN_MEM_RT_OFFSET +
(sizeof(cfg_word) / sizeof(u32)) *
- (p_hwfn->hw_info.first_vf_in_pf + i);
+ (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i);
STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word);
}
@@ -1581,7 +1660,7 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
rt_reg = TM_REG_CONFIG_TASK_MEM_RT_OFFSET +
(sizeof(cfg_word) / sizeof(u32)) *
- (p_hwfn->hw_info.first_vf_in_pf + i);
+ (p_hwfn->p_dev->p_iov_info->first_vf_in_pf + i);
STORE_RT_REG_AGG(p_hwfn, rt_reg, cfg_word);
}
@@ -1611,15 +1690,26 @@ static void ecore_tm_init_pf(struct ecore_hwfn *p_hwfn)
/* @@@TBD how to enable the scan for the VFs */
}
-static void ecore_prs_init_common(struct ecore_hwfn *p_hwfn)
+static void ecore_prs_init_pf(struct ecore_hwfn *p_hwfn)
{
+ struct ecore_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
+ struct ecore_conn_type_cfg *p_fcoe = &p_mngr->conn_cfg[PROTOCOLID_FCOE];
+ struct ecore_tid_seg *p_tid;
+
+ /* If FCoE is active set the MAX OX_ID (tid) in the Parser */
+ if (!p_fcoe->cid_count)
+ return;
+
+ p_tid = &p_fcoe->tid_seg[ECORE_CXT_FCOE_TID_SEG];
+ STORE_RT_REG_AGG(p_hwfn,
+ PRS_REG_TASK_ID_MAX_INITIATOR_PF_RT_OFFSET,
+ p_tid->count);
}
void ecore_cxt_hw_init_common(struct ecore_hwfn *p_hwfn)
{
/* CDU configuration */
ecore_cdu_init_common(p_hwfn);
- ecore_prs_init_common(p_hwfn);
}
void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn)
@@ -1631,6 +1721,7 @@ void ecore_cxt_hw_init_pf(struct ecore_hwfn *p_hwfn)
ecore_ilt_init_pf(p_hwfn);
ecore_src_init_pf(p_hwfn);
ecore_tm_init_pf(p_hwfn);
+ ecore_prs_init_pf(p_hwfn);
}
enum _ecore_status_t ecore_cxt_acquire_cid(struct ecore_hwfn *p_hwfn,
@@ -1748,6 +1839,20 @@ enum _ecore_status_t ecore_cxt_get_cid_info(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
+void ecore_cxt_set_srq_count(struct ecore_hwfn *p_hwfn, u32 num_srqs)
+{
+ struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
+
+ p_mgr->srq_count = num_srqs;
+}
+
+u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn)
+{
+ struct ecore_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
+
+ return p_mgr->srq_count;
+}
+
enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn)
{
/* Set the number of required CORE connections */
@@ -1818,13 +1923,130 @@ enum _ecore_status_t ecore_cxt_get_tid_mem_info(struct ecore_hwfn *p_hwfn,
/* This function is very RoCE-oriented; if another protocol needs this
 * feature in the future, the function will have to be made more generic.
 */
+enum _ecore_status_t
+ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
+ enum ecore_cxt_elem_type elem_type,
+ u32 iid)
+{
+ u32 reg_offset, shadow_line, elem_size, hw_p_size, elems_per_p, line;
+ struct ecore_ilt_client_cfg *p_cli;
+ struct ecore_ilt_cli_blk *p_blk;
+ struct ecore_ptt *p_ptt;
+ dma_addr_t p_phys;
+ u64 ilt_hw_entry;
+ void *p_virt;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+
+ switch (elem_type) {
+ case ECORE_ELEM_CXT:
+ p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
+ elem_size = CONN_CXT_SIZE(p_hwfn);
+ p_blk = &p_cli->pf_blks[CDUC_BLK];
+ break;
+ case ECORE_ELEM_SRQ:
+ p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM];
+ elem_size = SRQ_CXT_SIZE;
+ p_blk = &p_cli->pf_blks[SRQ_BLK];
+ break;
+ case ECORE_ELEM_TASK:
+ p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT];
+ elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
+ p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ROCE_TID_SEG)];
+ break;
+ default:
+ DP_NOTICE(p_hwfn, false,
+ "ECORE_INVALID elem type = %d", elem_type);
+ return ECORE_INVAL;
+ }
+
+ /* Calculate line in ilt */
+ hw_p_size = p_cli->p_size.val;
+ elems_per_p = ILT_PAGE_IN_BYTES(hw_p_size) / elem_size;
+ line = p_blk->start_line + (iid / elems_per_p);
+ shadow_line = line - p_hwfn->p_cxt_mngr->pf_start_line;
+
+ /* If the line is already allocated, do nothing; otherwise allocate it
+ * and write it to the PSWRQ2 registers.
+ * This section can run in parallel from different contexts, so mutex
+ * protection is needed.
+ */
+
+ OSAL_MUTEX_ACQUIRE(&p_hwfn->p_cxt_mngr->mutex);
+
+ if (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_virt)
+ goto out0;
+
+ p_ptt = ecore_ptt_acquire(p_hwfn);
+ if (!p_ptt) {
+ DP_NOTICE(p_hwfn, false,
+ "ECORE_TIME_OUT on ptt acquire - dynamic allocation");
+ rc = ECORE_TIMEOUT;
+ goto out0;
+ }
+
+ p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+ &p_phys,
+ p_blk->real_size_in_page);
+ if (!p_virt) {
+ rc = ECORE_NOMEM;
+ goto out1;
+ }
+ OSAL_MEM_ZERO(p_virt, p_blk->real_size_in_page);
+
+ p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_virt = p_virt;
+ p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_phys = p_phys;
+ p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].size =
+ p_blk->real_size_in_page;
+
+ /* compute absolute offset */
+ reg_offset = PSWRQ2_REG_ILT_MEMORY +
+ (line * ILT_REG_SIZE_IN_BYTES * ILT_ENTRY_IN_REGS);
+
+ ilt_hw_entry = 0;
+ SET_FIELD(ilt_hw_entry, ILT_ENTRY_VALID, 1ULL);
+ SET_FIELD(ilt_hw_entry,
+ ILT_ENTRY_PHY_ADDR,
+ (p_hwfn->p_cxt_mngr->ilt_shadow[shadow_line].p_phys >> 12));
+
+/* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is wide-bus */
+
+ ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)&ilt_hw_entry,
+ reg_offset, sizeof(ilt_hw_entry) / sizeof(u32),
+ 0 /* no flags */);
+
+ if (elem_type == ECORE_ELEM_CXT) {
+ u32 last_cid_allocated = (1 + (iid / elems_per_p)) *
+ elems_per_p;
+
+ /* Update the relevant register in the parser */
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_ROCE_DEST_QP_MAX_PF,
+ last_cid_allocated - 1);
+
+ if (!p_hwfn->b_rdma_enabled_in_prs) {
+ /* Enable RoCE search */
+ ecore_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 1);
+ p_hwfn->b_rdma_enabled_in_prs = true;
+ }
+ }
+
+out1:
+ ecore_ptt_release(p_hwfn, p_ptt);
+out0:
+ OSAL_MUTEX_RELEASE(&p_hwfn->p_cxt_mngr->mutex);
+
+ return rc;
+}
+
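A minimal caller sketch for the new on-demand path (the SRQ id is
illustrative):

	/* Illustrative: ensure the ILT page backing SRQ context 100 is
	 * present before the context is used.
	 */
	rc = ecore_cxt_dynamic_ilt_alloc(p_hwfn, ECORE_ELEM_SRQ, 100);
	if (rc != ECORE_SUCCESS)
		DP_NOTICE(p_hwfn, false,
			  "dynamic ILT allocation failed, rc = %d\n", rc);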
+/* This function is very RoCE-oriented; if another protocol needs this
+ * feature in the future, the function will have to be made more generic.
+ */
static enum _ecore_status_t
ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
enum ecore_cxt_elem_type elem_type,
u32 start_iid, u32 count)
{
- u32 reg_offset, elem_size, hw_p_size, elems_per_p;
u32 start_line, end_line, shadow_start_line, shadow_end_line;
+ u32 reg_offset, elem_size, hw_p_size, elems_per_p;
struct ecore_ilt_client_cfg *p_cli;
struct ecore_ilt_cli_blk *p_blk;
u32 end_iid = start_iid + count;
@@ -1832,10 +2054,26 @@ ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
u64 ilt_hw_entry = 0;
u32 i;
- if (elem_type == ECORE_ELEM_CXT) {
+ switch (elem_type) {
+ case ECORE_ELEM_CXT:
p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
elem_size = CONN_CXT_SIZE(p_hwfn);
p_blk = &p_cli->pf_blks[CDUC_BLK];
+ break;
+ case ECORE_ELEM_SRQ:
+ p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_TSDM];
+ elem_size = SRQ_CXT_SIZE;
+ p_blk = &p_cli->pf_blks[SRQ_BLK];
+ break;
+ case ECORE_ELEM_TASK:
+ p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUT];
+ elem_size = TYPE1_TASK_CXT_SIZE(p_hwfn);
+ p_blk = &p_cli->pf_blks[CDUT_SEG_BLK(ECORE_CXT_ROCE_TID_SEG)];
+ break;
+ default:
+ DP_NOTICE(p_hwfn, false,
+ "ECORE_INVALID elem type = %d", elem_type);
+ return ECORE_INVAL;
}
/* Calculate line in ilt */
@@ -1874,9 +2112,14 @@ ecore_cxt_free_ilt_range(struct ecore_hwfn *p_hwfn,
((start_line++) * ILT_REG_SIZE_IN_BYTES *
ILT_ENTRY_IN_REGS);
- ecore_wr(p_hwfn, p_ptt, reg_offset, U64_LO(ilt_hw_entry));
- ecore_wr(p_hwfn, p_ptt, reg_offset + ILT_REG_SIZE_IN_BYTES,
- U64_HI(ilt_hw_entry));
+ /* Write via DMAE since the PSWRQ2_REG_ILT_MEMORY line is
+ * wide-bus.
+ */
+ ecore_dmae_host2grc(p_hwfn, p_ptt,
+ (u64)(osal_uintptr_t)&ilt_hw_entry,
+ reg_offset,
+ sizeof(ilt_hw_entry) / sizeof(u32),
+ 0 /* no flags */);
}
ecore_ptt_release(p_hwfn, p_ptt);
@@ -1905,6 +2148,12 @@ enum _ecore_status_t ecore_cxt_free_proto_ilt(struct ecore_hwfn *p_hwfn,
rc = ecore_cxt_free_ilt_range(p_hwfn, ECORE_ELEM_TASK, 0,
ecore_cxt_get_proto_tid_count(p_hwfn,
proto));
+ if (rc)
+ return rc;
+
+ /* Free TSDM CXT */
+ rc = ecore_cxt_free_ilt_range(p_hwfn, ECORE_ELEM_SRQ, 0,
+ ecore_cxt_get_srq_count(p_hwfn));
return rc;
}
diff --git a/drivers/net/qede/base/ecore_cxt.h b/drivers/net/qede/base/ecore_cxt.h
index ba02410..5379d7b 100644
--- a/drivers/net/qede/base/ecore_cxt.h
+++ b/drivers/net/qede/base/ecore_cxt.h
@@ -13,24 +13,38 @@
#include "ecore_proto_if.h"
#include "ecore_cxt_api.h"
+/* Tasks segments definitions */
+#define ECORE_CXT_ISCSI_TID_SEG PROTOCOLID_ISCSI /* 0 */
+#define ECORE_CXT_FCOE_TID_SEG PROTOCOLID_FCOE /* 1 */
+#define ECORE_CXT_ROCE_TID_SEG PROTOCOLID_ROCE /* 2 */
+
enum ecore_cxt_elem_type {
ECORE_ELEM_CXT,
+ ECORE_ELEM_SRQ,
ECORE_ELEM_TASK
};
u32 ecore_cxt_get_proto_cid_count(struct ecore_hwfn *p_hwfn,
- enum protocol_type type, u32 *vf_cid);
+ enum protocol_type type,
+ u32 *vf_cid);
+
+u32 ecore_cxt_get_proto_tid_count(struct ecore_hwfn *p_hwfn,
+ enum protocol_type type);
u32 ecore_cxt_get_proto_cid_start(struct ecore_hwfn *p_hwfn,
enum protocol_type type);
+u32 ecore_cxt_get_srq_count(struct ecore_hwfn *p_hwfn);
+#ifndef LINUX_REMOVE
/**
* @brief ecore_cxt_qm_iids - fills the cid/tid counts for the QM configuration
*
* @param p_hwfn
* @param iids [out], a structure holding all the counters
*/
-void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn, struct ecore_qm_iids *iids);
+void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
+ struct ecore_qm_iids *iids);
+#endif
/**
* @brief ecore_cxt_set_pf_params - Set the PF params for cxt init
@@ -42,18 +56,6 @@ void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn, struct ecore_qm_iids *iids);
enum _ecore_status_t ecore_cxt_set_pf_params(struct ecore_hwfn *p_hwfn);
/**
- * @brief ecore_cxt_set_proto_cid_count - Set the max cids per protocol for cxt
- * init
- *
- * @param p_hwfn
- * @param type
- * @param cid_cnt - number of pf cids
- * @param vf_cid_cnt - number of vf cids
- */
-void ecore_cxt_set_proto_cid_count(struct ecore_hwfn *p_hwfn,
- enum protocol_type type,
- u32 cid_cnt, u32 vf_cid_cnt);
-/**
* @brief ecore_cxt_cfg_ilt_compute - compute ILT init parameters
*
* @param p_hwfn
@@ -134,7 +136,24 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
* @param p_hwfn
* @param cid
*/
-void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn, u32 cid);
+void ecore_cxt_release_cid(struct ecore_hwfn *p_hwfn,
+ u32 cid);
+
+/**
+ * @brief ecore_cxt_dynamic_ilt_alloc - checks whether the ILT page
+ * containing the iid is already allocated and, if it is not,
+ * allocates it.
+ *
+ * @param p_hwfn
+ * @param elem_type
+ * @param iid
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_cxt_dynamic_ilt_alloc(struct ecore_hwfn *p_hwfn,
+ enum ecore_cxt_elem_type elem_type,
+ u32 iid);
/**
* @brief ecore_cxt_free_proto_ilt - function frees ilt pages
diff --git a/drivers/net/qede/base/ecore_cxt_api.h b/drivers/net/qede/base/ecore_cxt_api.h
index 90aff3e..6a50412 100644
--- a/drivers/net/qede/base/ecore_cxt_api.h
+++ b/drivers/net/qede/base/ecore_cxt_api.h
@@ -25,21 +25,6 @@ struct ecore_tid_mem {
u8 *blocks[MAX_TID_BLOCKS]; /* 4K */
};
-static OSAL_INLINE void *get_task_mem(struct ecore_tid_mem *info, u32 tid)
-{
- /* note: waste is superfluous */
- return (void *)(info->blocks[tid / info->num_tids_per_block] +
- (tid % info->num_tids_per_block) * info->tid_size);
-
- /* more elaborate alternative with no modulo
- * u32 mask = info->tid_size * info->num_tids_per_block +
- * info->waste - 1;
- * u32 index = tid / info->num_tids_per_block;
- * u32 offset = tid * info->tid_size + index * info->waste;
- * return (void *)(blocks[index] + (offset & mask));
- */
-}
-
/**
* @brief ecore_cxt_acquire - Acquire a new cid of a specific protocol type
*
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index db73658..8175619 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -15,7 +15,6 @@
#include "ecore_iro.h"
#define ECORE_DCBX_MAX_MIB_READ_TRY (100)
-#define ECORE_MAX_PFC_PRIORITIES 8
#define ECORE_ETH_TYPE_DEFAULT (0)
#define ECORE_DCBX_INVALID_PRIORITY 0xFF
@@ -24,7 +23,7 @@
* the traffic class corresponding to the priority.
*/
#define ECORE_DCBX_PRIO2TC(prio_tc_tbl, prio) \
- ((u32)(pri_tc_tbl >> ((7 - prio) * 4)) & 0x7)
+ ((u32)(prio_tc_tbl >> ((7 - prio) * 4)) & 0x7)
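The fix makes the macro body use its actual parameter name (the old
pri_tc_tbl only compiled when a like-named variable happened to be in
scope at the call site). The table packs eight 4-bit TC values into a
u32 with priority 0 in the most significant nibble; e.g. for
prio_tc_tbl = 0x76543210, ECORE_DCBX_PRIO2TC(tbl, 0) yields
(0x76543210 >> 28) & 0x7 = 7.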
static bool ecore_dcbx_app_ethtype(u32 app_info_bitmap)
{
@@ -38,6 +37,18 @@ static bool ecore_dcbx_app_port(u32 app_info_bitmap)
DCBX_APP_SF_PORT) ? true : false;
}
+static bool ecore_dcbx_ieee_app_port(u32 app_info_bitmap, u8 type)
+{
+ u8 mfw_val = ECORE_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF_IEEE);
+
+ /* Old MFW */
+ if (mfw_val == DCBX_APP_SF_IEEE_RESERVED)
+ return ecore_dcbx_app_port(app_info_bitmap);
+
+ return (mfw_val == type || mfw_val == DCBX_APP_SF_IEEE_TCP_UDP_PORT) ?
+ true : false;
+}
+
static bool ecore_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id)
{
return (ecore_dcbx_app_ethtype(app_info_bitmap) &&
@@ -62,6 +73,12 @@ static bool ecore_dcbx_ieee(u32 dcbx_cfg_bitmap)
DCBX_CONFIG_VERSION_IEEE) ? true : false;
}
+static bool ecore_dcbx_local(u32 dcbx_cfg_bitmap)
+{
+ return (ECORE_MFW_GET_FIELD(dcbx_cfg_bitmap, DCBX_CONFIG_VERSION) ==
+ DCBX_CONFIG_VERSION_STATIC) ? true : false;
+}
+
/* @@@TBD A0 Eagle workaround */
void ecore_dcbx_eagle_workaround(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, bool set_to_pfc)
@@ -83,8 +100,8 @@ ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
{
struct ecore_hw_info *p_info = &p_hwfn->hw_info;
enum dcbx_protocol_type id;
- bool enable, update;
- u8 prio, tc, size;
+ u8 prio, tc, size, update;
+ bool enable;
const char *name; /* @DPDK */
int i;
@@ -102,8 +119,10 @@ ecore_dcbx_dp_protocol(struct ecore_hwfn *p_hwfn,
prio = p_data->arr[id].priority;
DP_INFO(p_hwfn,
- "%s info: update %d, enable %d, prio %d, tc %d, num_tc %d\n",
- name, update, enable, prio, tc, p_info->num_tc);
+ "%s info: update %d, enable %d, prio %d, tc %d,"
+ " num_active_tc %d dscp_enable = %d dscp_val = %d\n",
+ name, update, enable, prio, tc, p_info->num_active_tc,
+ p_data->arr[id].dscp_enable, p_data->arr[id].dscp_val);
}
}
@@ -112,28 +131,42 @@ ecore_dcbx_set_pf_tcs(struct ecore_hw_info *p_info,
u8 tc, enum ecore_pci_personality personality)
{
/* QM reconf data */
- if (p_info->personality == personality) {
- if (personality == ECORE_PCI_ETH)
- p_info->non_offload_tc = tc;
- else
+ if (p_info->personality == personality)
p_info->offload_tc = tc;
}
-}
void
ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
- struct ecore_hw_info *p_info,
+ struct ecore_hwfn *p_hwfn,
bool enable, bool update, u8 prio, u8 tc,
enum dcbx_protocol_type type,
enum ecore_pci_personality personality)
{
+ struct ecore_dcbx_dscp_params *dscp = &p_hwfn->p_dcbx_info->get.dscp;
+
/* PF update ramrod data */
- p_data->arr[type].update = update;
p_data->arr[type].enable = enable;
p_data->arr[type].priority = prio;
p_data->arr[type].tc = tc;
+ p_data->arr[type].dscp_enable = dscp->enabled;
+ if (p_data->arr[type].dscp_enable) {
+ u8 i;
+
+ for (i = 0; i < ECORE_DCBX_DSCP_SIZE; i++)
+ if (prio == dscp->dscp_pri_map[i]) {
+ p_data->arr[type].dscp_val = i;
+ break;
+ }
+ }
+
+ if (enable && p_data->arr[type].dscp_enable)
+ p_data->arr[type].update = UPDATE_DCB_DSCP;
+ else if (enable)
+ p_data->arr[type].update = UPDATE_DCB;
+ else
+ p_data->arr[type].update = DONT_UPDATE_DCB_DHCP;
- ecore_dcbx_set_pf_tcs(p_info, tc, personality);
+ ecore_dcbx_set_pf_tcs(&p_hwfn->hw_info, tc, personality);
}
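Note the update field moves from a boolean to the tri-state ramrod
encoding (DONT_UPDATE_DCB_DHCP / UPDATE_DCB / UPDATE_DCB_DSCP), which is
why ecore_dcbx_app_data.update becomes a u8 in ecore_dcbx_api.h below.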
/* Update app protocol data and hw_info fields with the TLV info */
@@ -143,7 +176,6 @@ ecore_dcbx_update_app_info(struct ecore_dcbx_results *p_data,
bool enable, bool update, u8 prio, u8 tc,
enum dcbx_protocol_type type)
{
- struct ecore_hw_info *p_info = &p_hwfn->hw_info;
enum ecore_pci_personality personality;
enum dcbx_protocol_type id;
const char *name; /* @DPDK */
@@ -161,7 +193,7 @@ ecore_dcbx_update_app_info(struct ecore_dcbx_results *p_data,
personality = ecore_dcbx_app_update[i].personality;
name = ecore_dcbx_app_update[i].name;
- ecore_dcbx_set_params(p_data, p_info, enable, update,
+ ecore_dcbx_set_params(p_data, p_hwfn, enable, update,
prio, tc, type, personality);
}
}
@@ -197,7 +229,8 @@ ecore_dcbx_get_app_priority(u8 pri_bitmap, u8 *priority)
static bool
ecore_dcbx_get_app_protocol_type(struct ecore_hwfn *p_hwfn,
- u32 app_prio_bitmap, u16 id, int *type)
+ u32 app_prio_bitmap, u16 id,
+ enum dcbx_protocol_type *type, bool ieee)
{
bool status = false;
@@ -205,7 +238,11 @@ ecore_dcbx_get_app_protocol_type(struct ecore_hwfn *p_hwfn,
*type = DCBX_PROTOCOL_ETH;
status = true;
} else {
- DP_ERR(p_hwfn, "Unsupported protocol %d\n", id);
+ *type = DCBX_MAX_PROTOCOL_TYPE;
+ DP_ERR(p_hwfn,
+ "No action required, App TLV id = 0x%x"
+ " app_prio_bitmap = 0x%x\n",
+ id, app_prio_bitmap);
}
return status;
@@ -218,16 +255,18 @@ static enum _ecore_status_t
ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn,
struct ecore_dcbx_results *p_data,
struct dcbx_app_priority_entry *p_tbl, u32 pri_tc_tbl,
- int count, bool dcbx_enabled)
+ int count, u8 dcbx_version)
{
enum _ecore_status_t rc = ECORE_SUCCESS;
u8 tc, priority, priority_map;
- int i, type = -1;
+ enum dcbx_protocol_type type;
+ bool enable, ieee;
u16 protocol_id;
- bool enable;
+ int i;
DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "Num APP entries = %d\n", count);
+ ieee = (dcbx_version == DCBX_CONFIG_VERSION_IEEE);
/* Parse APP TLV */
for (i = 0; i < count; i++) {
protocol_id = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
@@ -242,7 +281,8 @@ ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn,
tc = ECORE_DCBX_PRIO2TC(pri_tc_tbl, priority);
if (ecore_dcbx_get_app_protocol_type(p_hwfn, p_tbl[i].entry,
- protocol_id, &type)) {
+ protocol_id, &type,
+ ieee)) {
/* ETH always have the enable bit reset, as it gets
* vlan information per packet. For other protocols,
* should be set according to the dcbx_enabled
@@ -267,7 +307,7 @@ ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn,
if (p_data->arr[type].update)
continue;
- enable = (type == DCBX_PROTOCOL_ETH) ? false : dcbx_enabled;
+ enable = (type == DCBX_PROTOCOL_ETH) ? false : !!dcbx_version;
ecore_dcbx_update_app_info(p_data, p_hwfn, enable, true,
priority, tc, type);
}
@@ -288,14 +328,11 @@ ecore_dcbx_process_mib_info(struct ecore_hwfn *p_hwfn)
struct dcbx_ets_feature *p_ets;
struct ecore_hw_info *p_info;
u32 pri_tc_tbl, flags;
- bool dcbx_enabled;
+ u8 dcbx_version;
int num_entries;
- /* If DCBx version is non zero, then negotiation was
- * successfuly performed
- */
flags = p_hwfn->p_dcbx_info->operational.flags;
- dcbx_enabled = ECORE_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION) != 0;
+ dcbx_version = ECORE_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION);
p_app = &p_hwfn->p_dcbx_info->operational.features.app;
p_tbl = p_app->app_pri_tbl;
@@ -307,13 +344,14 @@ ecore_dcbx_process_mib_info(struct ecore_hwfn *p_hwfn)
num_entries = ECORE_MFW_GET_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES);
rc = ecore_dcbx_process_tlv(p_hwfn, &data, p_tbl, pri_tc_tbl,
- num_entries, dcbx_enabled);
+ num_entries, dcbx_version);
if (rc != ECORE_SUCCESS)
return rc;
- p_info->num_tc = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_ETS_MAX_TCS);
+ p_info->num_active_tc = ECORE_MFW_GET_FIELD(p_ets->flags,
+ DCBX_ETS_MAX_TCS);
data.pf_id = p_hwfn->rel_pf_id;
- data.dcbx_enabled = dcbx_enabled;
+ data.dcbx_enabled = !!dcbx_version;
ecore_dcbx_dp_protocol(p_hwfn, &data);
@@ -386,36 +424,92 @@ static void
ecore_dcbx_get_app_data(struct ecore_hwfn *p_hwfn,
struct dcbx_app_priority_feature *p_app,
struct dcbx_app_priority_entry *p_tbl,
- struct ecore_dcbx_params *p_params)
+ struct ecore_dcbx_params *p_params, bool ieee)
{
+ struct ecore_app_entry *entry;
+ u8 pri_map;
int i;
p_params->app_willing = ECORE_MFW_GET_FIELD(p_app->flags,
DCBX_APP_WILLING);
p_params->app_valid = ECORE_MFW_GET_FIELD(p_app->flags,
DCBX_APP_ENABLED);
+ p_params->app_error = ECORE_MFW_GET_FIELD(p_app->flags, DCBX_APP_ERROR);
p_params->num_app_entries = ECORE_MFW_GET_FIELD(p_app->flags,
- DCBX_APP_ENABLED);
- for (i = 0; i < DCBX_MAX_APP_PROTOCOL; i++)
- p_params->app_bitmap[i] = p_tbl[i].entry;
+ DCBX_APP_NUM_ENTRIES);
+ for (i = 0; i < DCBX_MAX_APP_PROTOCOL; i++) {
+ entry = &p_params->app_entry[i];
+ if (ieee) {
+ u8 sf_ieee;
+ u32 val;
+
+ sf_ieee = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
+ DCBX_APP_SF_IEEE);
+ switch (sf_ieee) {
+ case DCBX_APP_SF_IEEE_RESERVED:
+ /* Old MFW */
+ val = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
+ DCBX_APP_SF);
+ entry->sf_ieee = val ?
+ ECORE_DCBX_SF_IEEE_TCP_UDP_PORT :
+ ECORE_DCBX_SF_IEEE_ETHTYPE;
+ break;
+ case DCBX_APP_SF_IEEE_ETHTYPE:
+ entry->sf_ieee = ECORE_DCBX_SF_IEEE_ETHTYPE;
+ break;
+ case DCBX_APP_SF_IEEE_TCP_PORT:
+ entry->sf_ieee = ECORE_DCBX_SF_IEEE_TCP_PORT;
+ break;
+ case DCBX_APP_SF_IEEE_UDP_PORT:
+ entry->sf_ieee = ECORE_DCBX_SF_IEEE_UDP_PORT;
+ break;
+ case DCBX_APP_SF_IEEE_TCP_UDP_PORT:
+ entry->sf_ieee =
+ ECORE_DCBX_SF_IEEE_TCP_UDP_PORT;
+ break;
+ }
+ } else {
+ entry->ethtype = !(ECORE_MFW_GET_FIELD(p_tbl[i].entry,
+ DCBX_APP_SF));
+ }
+
+ pri_map = ECORE_MFW_GET_FIELD(p_tbl[i].entry, DCBX_APP_PRI_MAP);
+ ecore_dcbx_get_app_priority(pri_map, &entry->prio);
+ entry->proto_id = ECORE_MFW_GET_FIELD(p_tbl[i].entry,
+ DCBX_APP_PROTOCOL_ID);
+ ecore_dcbx_get_app_protocol_type(p_hwfn, p_tbl[i].entry,
+ entry->proto_id,
+ &entry->proto_type, ieee);
+ }
DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
- "APP params: willing %d, valid %d\n",
- p_params->app_willing, p_params->app_valid);
+ "APP params: willing %d, valid %d error = %d\n",
+ p_params->app_willing, p_params->app_valid,
+ p_params->app_error);
}
static void
ecore_dcbx_get_pfc_data(struct ecore_hwfn *p_hwfn,
u32 pfc, struct ecore_dcbx_params *p_params)
{
- p_params->pfc_willing = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_WILLING);
- p_params->max_pfc_tc = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_CAPS);
- p_params->pfc_enabled = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_ENABLED);
- p_params->pfc_bitmap = pfc;
+ u8 pfc_map;
+
+ p_params->pfc.willing = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_WILLING);
+ p_params->pfc.max_tc = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_CAPS);
+ p_params->pfc.enabled = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_ENABLED);
+ pfc_map = ECORE_MFW_GET_FIELD(pfc, DCBX_PFC_PRI_EN_BITMAP);
+ p_params->pfc.prio[0] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_0);
+ p_params->pfc.prio[1] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_1);
+ p_params->pfc.prio[2] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_2);
+ p_params->pfc.prio[3] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_3);
+ p_params->pfc.prio[4] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_4);
+ p_params->pfc.prio[5] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_5);
+ p_params->pfc.prio[6] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_6);
+ p_params->pfc.prio[7] = !!(pfc_map & DCBX_PFC_PRI_EN_BITMAP_PRI_7);
DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
"PFC params: willing %d, pfc_bitmap %d\n",
- p_params->pfc_willing, p_params->pfc_bitmap);
+ p_params->pfc.willing, pfc_map);
}
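For example, pfc_map = 0x18 sets prio[3] and prio[4]: the new structure
exposes one flag per priority instead of the raw pfc_bitmap word.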
static void
@@ -423,28 +517,34 @@ ecore_dcbx_get_ets_data(struct ecore_hwfn *p_hwfn,
struct dcbx_ets_feature *p_ets,
struct ecore_dcbx_params *p_params)
{
+ u32 bw_map[2], tsa_map[2], pri_map;
int i;
p_params->ets_willing = ECORE_MFW_GET_FIELD(p_ets->flags,
DCBX_ETS_WILLING);
p_params->ets_enabled = ECORE_MFW_GET_FIELD(p_ets->flags,
DCBX_ETS_ENABLED);
+ p_params->ets_cbs = ECORE_MFW_GET_FIELD(p_ets->flags, DCBX_ETS_CBS);
p_params->max_ets_tc = ECORE_MFW_GET_FIELD(p_ets->flags,
DCBX_ETS_MAX_TCS);
- p_params->ets_pri_tc_tbl[0] = p_ets->pri_tc_tbl[0];
-
DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
- "ETS params: willing %d, pri_tc_tbl_0 %x max_ets_tc %d\n",
- p_params->ets_willing, p_params->ets_pri_tc_tbl[0],
- p_params->max_ets_tc);
+ "ETS params: willing %d, ets_cbs %d pri_tc_tbl_0 %x"
+ " max_ets_tc %d\n",
+ p_params->ets_willing, p_params->ets_cbs,
+ p_ets->pri_tc_tbl[0], p_params->max_ets_tc);
/* 8 bit tsa and bw data corresponding to each of the 8 TC's are
* encoded in a type u32 array of size 2.
*/
- for (i = 0; i < 2; i++) {
- p_params->ets_tc_tsa_tbl[i] = p_ets->tc_tsa_tbl[i];
- p_params->ets_tc_bw_tbl[i] = p_ets->tc_bw_tbl[i];
-
+ bw_map[0] = OSAL_BE32_TO_CPU(p_ets->tc_bw_tbl[0]);
+ bw_map[1] = OSAL_BE32_TO_CPU(p_ets->tc_bw_tbl[1]);
+ tsa_map[0] = OSAL_BE32_TO_CPU(p_ets->tc_tsa_tbl[0]);
+ tsa_map[1] = OSAL_BE32_TO_CPU(p_ets->tc_tsa_tbl[1]);
+ pri_map = OSAL_BE32_TO_CPU(p_ets->pri_tc_tbl[0]);
+ for (i = 0; i < ECORE_MAX_PFC_PRIORITIES; i++) {
+ p_params->ets_tc_bw_tbl[i] = ((u8 *)bw_map)[i];
+ p_params->ets_tc_tsa_tbl[i] = ((u8 *)tsa_map)[i];
+ p_params->ets_pri_tc_tbl[i] = ECORE_DCBX_PRIO2TC(pri_map, i);
DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
"elem %d bw_tbl %x tsa_tbl %x\n",
i, p_params->ets_tc_bw_tbl[i],
@@ -457,9 +557,10 @@ ecore_dcbx_get_common_params(struct ecore_hwfn *p_hwfn,
struct dcbx_app_priority_feature *p_app,
struct dcbx_app_priority_entry *p_tbl,
struct dcbx_ets_feature *p_ets,
- u32 pfc, struct ecore_dcbx_params *p_params)
+ u32 pfc, struct ecore_dcbx_params *p_params,
+ bool ieee)
{
- ecore_dcbx_get_app_data(p_hwfn, p_app, p_tbl, p_params);
+ ecore_dcbx_get_app_data(p_hwfn, p_app, p_tbl, p_params, ieee);
ecore_dcbx_get_ets_data(p_hwfn, p_ets, p_params);
ecore_dcbx_get_pfc_data(p_hwfn, pfc, p_params);
@@ -485,7 +586,8 @@ ecore_dcbx_get_local_params(struct ecore_hwfn *p_hwfn,
p_ets = &p_hwfn->p_dcbx_info->local_admin.features.ets;
pfc = p_hwfn->p_dcbx_info->local_admin.features.pfc;
- ecore_dcbx_get_common_params(p_hwfn, p_app, p_tbl, p_ets, pfc, p_data);
+ ecore_dcbx_get_common_params(p_hwfn, p_app, p_tbl, p_ets, pfc, p_data,
+ false);
p_local->valid = true;
return ECORE_SUCCESS;
@@ -510,7 +612,8 @@ ecore_dcbx_get_remote_params(struct ecore_hwfn *p_hwfn,
p_ets = &p_hwfn->p_dcbx_info->remote.features.ets;
pfc = p_hwfn->p_dcbx_info->remote.features.pfc;
- ecore_dcbx_get_common_params(p_hwfn, p_app, p_tbl, p_ets, pfc, p_data);
+ ecore_dcbx_get_common_params(p_hwfn, p_app, p_tbl, p_ets, pfc, p_data,
+ false);
p_remote->valid = true;
return ECORE_SUCCESS;
@@ -553,12 +656,15 @@ ecore_dcbx_get_operational_params(struct ecore_hwfn *p_hwfn,
p_operational->ieee = ecore_dcbx_ieee(flags);
p_operational->cee = ecore_dcbx_cee(flags);
+ p_operational->local = ecore_dcbx_local(flags);
DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
- "Version support: ieee %d, cee %d\n",
- p_operational->ieee, p_operational->cee);
+ "Version support: ieee %d, cee %d, static %d\n",
+ p_operational->ieee, p_operational->cee,
+ p_operational->local);
- ecore_dcbx_get_common_params(p_hwfn, p_app, p_tbl, p_ets, pfc, p_data);
+ ecore_dcbx_get_common_params(p_hwfn, p_app, p_tbl, p_ets, pfc, p_data,
+ p_operational->ieee);
ecore_dcbx_get_priority_info(p_hwfn, &p_operational->app_prio,
p_results);
err = ECORE_MFW_GET_FIELD(p_app->flags, DCBX_APP_ERROR);
@@ -570,6 +676,35 @@ ecore_dcbx_get_operational_params(struct ecore_hwfn *p_hwfn,
}
static enum _ecore_status_t
+ecore_dcbx_get_dscp_params(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_dcbx_get *params)
+{
+ struct ecore_dcbx_dscp_params *p_dscp;
+ struct dcb_dscp_map *p_dscp_map;
+ int i, j, entry;
+ u32 pri_map;
+
+ p_dscp = ¶ms->dscp;
+ p_dscp_map = &p_hwfn->p_dcbx_info->dscp_map;
+ p_dscp->enabled = ECORE_MFW_GET_FIELD(p_dscp_map->flags,
+ DCB_DSCP_ENABLE);
+ /* The MFW encodes the 64 DSCP entries into an 8-element array of
+ * u32s, where each u32 holds the 4-bit priority mapping for 8 DSCP
+ * entries.
+ */
+ for (i = 0, entry = 0; i < ECORE_DCBX_DSCP_SIZE / 8; i++) {
+ pri_map = OSAL_BE32_TO_CPU(p_dscp_map->dscp_pri_map[i]);
+ DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "elem %d pri_map 0x%x\n",
+ entry, pri_map);
+ for (j = 0; j < ECORE_DCBX_DSCP_SIZE / 8; j++, entry++)
+ p_dscp->dscp_pri_map[entry] = (u32)(pri_map >>
+ (j * 4)) & 0xf;
+ }
+
+ return ECORE_SUCCESS;
+}
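For concreteness: if the first map word converts to pri_map = 0x76543210,
the inner loop above yields dscp_pri_map[0..7] = {0, 1, 2, 3, 4, 5, 6, 7};
the low nibble maps the lowest-numbered DSCP entry of each group of eight.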
+
+static enum _ecore_status_t
ecore_dcbx_get_local_lldp_params(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_dcbx_get *params)
@@ -670,6 +805,7 @@ ecore_dcbx_read_remote_lldp_mib(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t rc = ECORE_SUCCESS;
struct ecore_dcbx_mib_meta_data data;
+ OSAL_MEM_ZERO(&data, sizeof(data));
data.addr = p_hwfn->mcp_info->port_addr + offsetof(struct public_port,
lldp_status_params);
data.lldp_remote = p_hwfn->p_dcbx_info->lldp_remote;
@@ -687,6 +823,7 @@ ecore_dcbx_read_operational_mib(struct ecore_hwfn *p_hwfn,
struct ecore_dcbx_mib_meta_data data;
enum _ecore_status_t rc = ECORE_SUCCESS;
+ OSAL_MEM_ZERO(&data, sizeof(data));
data.addr = p_hwfn->mcp_info->port_addr +
offsetof(struct public_port, operational_dcbx_mib);
data.mib = &p_hwfn->p_dcbx_info->operational;
@@ -704,6 +841,7 @@ ecore_dcbx_read_remote_mib(struct ecore_hwfn *p_hwfn,
struct ecore_dcbx_mib_meta_data data;
enum _ecore_status_t rc = ECORE_SUCCESS;
+ OSAL_MEM_ZERO(&data, sizeof(data));
data.addr = p_hwfn->mcp_info->port_addr +
offsetof(struct public_port, remote_dcbx_mib);
data.mib = &p_hwfn->p_dcbx_info->remote;
@@ -729,6 +867,18 @@ ecore_dcbx_read_local_mib(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
return rc;
}
+static void
+ecore_dcbx_read_dscp_mib(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+ struct ecore_dcbx_mib_meta_data data;
+
+ data.addr = p_hwfn->mcp_info->port_addr +
+ offsetof(struct public_port, dcb_dscp_map);
+ data.dscp_map = &p_hwfn->p_dcbx_info->dscp_map;
+ data.size = sizeof(struct dcb_dscp_map);
+ ecore_memcpy_from(p_hwfn, p_ptt, data.dscp_map, data.addr, data.size);
+}
+
static enum _ecore_status_t ecore_dcbx_read_mib(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
enum ecore_mib_read_type type)
@@ -737,6 +887,7 @@ static enum _ecore_status_t ecore_dcbx_read_mib(struct ecore_hwfn *p_hwfn,
switch (type) {
case ECORE_DCBX_OPERATIONAL_MIB:
+ ecore_dcbx_read_dscp_mib(p_hwfn, p_ptt);
rc = ecore_dcbx_read_operational_mib(p_hwfn, p_ptt, type);
break;
case ECORE_DCBX_REMOTE_MIB:
@@ -775,6 +926,9 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
return rc;
if (type == ECORE_DCBX_OPERATIONAL_MIB) {
+ ecore_dcbx_get_dscp_params(p_hwfn, p_ptt,
+ &p_hwfn->p_dcbx_info->get);
+
rc = ecore_dcbx_process_mib_info(p_hwfn);
if (!rc) {
bool enabled;
@@ -795,6 +949,14 @@ ecore_dcbx_mib_update_event(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
}
}
ecore_dcbx_get_params(p_hwfn, p_ptt, type);
+
+ /* Update the DSCP to TC mapping bit if required */
+ if ((type == ECORE_DCBX_OPERATIONAL_MIB) &&
+ p_hwfn->p_dcbx_info->dscp_nig_update) {
+ ecore_wr(p_hwfn, p_ptt, NIG_REG_DSCP_TO_TC_MAP_ENABLE, 0x1);
+ p_hwfn->p_dcbx_info->dscp_nig_update = false;
+ }
+
OSAL_DCBX_AEN(p_hwfn, type);
return rc;
@@ -828,6 +990,8 @@ static void ecore_dcbx_update_protocol_data(struct protocol_dcb_data *p_data,
p_data->dcb_enable_flag = p_src->arr[type].enable;
p_data->dcb_priority = p_src->arr[type].priority;
p_data->dcb_tc = p_src->arr[type].tc;
+ p_data->dscp_enable_flag = p_src->arr[type].dscp_enable;
+ p_data->dscp_val = p_src->arr[type].dscp_val;
}
/* Set pf update ramrod command params */
@@ -835,7 +999,7 @@ void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
struct pf_update_ramrod_data *p_dest)
{
struct protocol_dcb_data *p_dcb_data;
- bool update_flag;
+ bool update_flag = false;
p_dest->pf_id = p_src->pf_id;
@@ -887,3 +1051,302 @@ enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *p_hwfn,
return rc;
}
+
+static void
+ecore_dcbx_set_pfc_data(struct ecore_hwfn *p_hwfn,
+ u32 *pfc, struct ecore_dcbx_params *p_params)
+{
+ u8 pfc_map = 0;
+ int i;
+
+ if (p_params->pfc.willing)
+ *pfc |= DCBX_PFC_WILLING_MASK;
+ else
+ *pfc &= ~DCBX_PFC_WILLING_MASK;
+
+ if (p_params->pfc.enabled)
+ *pfc |= DCBX_PFC_ENABLED_MASK;
+ else
+ *pfc &= ~DCBX_PFC_ENABLED_MASK;
+
+ *pfc &= ~DCBX_PFC_CAPS_MASK;
+ *pfc |= (u32)p_params->pfc.max_tc << DCBX_PFC_CAPS_SHIFT;
+
+ for (i = 0; i < ECORE_MAX_PFC_PRIORITIES; i++)
+ if (p_params->pfc.prio[i])
+ pfc_map |= (0x1 << i);
+
+ *pfc |= (pfc_map << DCBX_PFC_PRI_EN_BITMAP_SHIFT);
+
+ DP_VERBOSE(p_hwfn, ECORE_MSG_DCB, "pfc = 0x%x\n", *pfc);
+}
+
+static void
+ecore_dcbx_set_ets_data(struct ecore_hwfn *p_hwfn,
+ struct dcbx_ets_feature *p_ets,
+ struct ecore_dcbx_params *p_params)
+{
+ u8 *bw_map, *tsa_map;
+ int i;
+
+ if (p_params->ets_willing)
+ p_ets->flags |= DCBX_ETS_WILLING_MASK;
+ else
+ p_ets->flags &= ~DCBX_ETS_WILLING_MASK;
+
+ if (p_params->ets_cbs)
+ p_ets->flags |= DCBX_ETS_CBS_MASK;
+ else
+ p_ets->flags &= ~DCBX_ETS_CBS_MASK;
+
+ if (p_params->ets_enabled)
+ p_ets->flags |= DCBX_ETS_ENABLED_MASK;
+ else
+ p_ets->flags &= ~DCBX_ETS_ENABLED_MASK;
+
+ p_ets->flags &= ~DCBX_ETS_MAX_TCS_MASK;
+ p_ets->flags |= (u32)p_params->max_ets_tc << DCBX_ETS_MAX_TCS_SHIFT;
+
+ bw_map = (u8 *)&p_ets->tc_bw_tbl[0];
+ tsa_map = (u8 *)&p_ets->tc_tsa_tbl[0];
+ p_ets->pri_tc_tbl[0] = 0;
+ for (i = 0; i < ECORE_MAX_PFC_PRIORITIES; i++) {
+ bw_map[i] = p_params->ets_tc_bw_tbl[i];
+ tsa_map[i] = p_params->ets_tc_tsa_tbl[i];
+ p_ets->pri_tc_tbl[0] |= (((u32)p_params->ets_pri_tc_tbl[i]) <<
+ ((7 - i) * 4));
+ }
+ p_ets->pri_tc_tbl[0] = OSAL_CPU_TO_BE32(p_ets->pri_tc_tbl[0]);
+ for (i = 0; i < 2; i++) {
+ p_ets->tc_bw_tbl[i] = OSAL_CPU_TO_BE32(p_ets->tc_bw_tbl[i]);
+ p_ets->tc_tsa_tbl[i] = OSAL_CPU_TO_BE32(p_ets->tc_tsa_tbl[i]);
+ }
+}
+
+static void
+ecore_dcbx_set_app_data(struct ecore_hwfn *p_hwfn,
+ struct dcbx_app_priority_feature *p_app,
+ struct ecore_dcbx_params *p_params, bool ieee)
+{
+ u32 *entry;
+ int i;
+
+ if (p_params->app_willing)
+ p_app->flags |= DCBX_APP_WILLING_MASK;
+ else
+ p_app->flags &= ~DCBX_APP_WILLING_MASK;
+
+ if (p_params->app_valid)
+ p_app->flags |= DCBX_APP_ENABLED_MASK;
+ else
+ p_app->flags &= ~DCBX_APP_ENABLED_MASK;
+
+ p_app->flags &= ~DCBX_APP_NUM_ENTRIES_MASK;
+ p_app->flags |= (u32)p_params->num_app_entries <<
+ DCBX_APP_NUM_ENTRIES_SHIFT;
+
+ for (i = 0; i < DCBX_MAX_APP_PROTOCOL; i++) {
+ entry = &p_app->app_pri_tbl[i].entry;
+ if (ieee) {
+ *entry &= ~DCBX_APP_SF_IEEE_MASK;
+ switch (p_params->app_entry[i].sf_ieee) {
+ case ECORE_DCBX_SF_IEEE_ETHTYPE:
+ *entry |= ((u32)DCBX_APP_SF_IEEE_ETHTYPE <<
+ DCBX_APP_SF_IEEE_SHIFT);
+ break;
+ case ECORE_DCBX_SF_IEEE_TCP_PORT:
+ *entry |= ((u32)DCBX_APP_SF_IEEE_TCP_PORT <<
+ DCBX_APP_SF_IEEE_SHIFT);
+ break;
+ case ECORE_DCBX_SF_IEEE_UDP_PORT:
+ *entry |= ((u32)DCBX_APP_SF_IEEE_UDP_PORT <<
+ DCBX_APP_SF_IEEE_SHIFT);
+ break;
+ case ECORE_DCBX_SF_IEEE_TCP_UDP_PORT:
+ *entry |= (u32)DCBX_APP_SF_IEEE_TCP_UDP_PORT <<
+ DCBX_APP_SF_IEEE_SHIFT;
+ break;
+ }
+ } else {
+ *entry &= ~DCBX_APP_SF_MASK;
+ if (p_params->app_entry[i].ethtype)
+ *entry |= ((u32)DCBX_APP_SF_ETHTYPE <<
+ DCBX_APP_SF_SHIFT);
+ else
+ *entry |= ((u32)DCBX_APP_SF_PORT <<
+ DCBX_APP_SF_SHIFT);
+ }
+ *entry &= ~DCBX_APP_PROTOCOL_ID_MASK;
+ *entry |= ((u32)p_params->app_entry[i].proto_id <<
+ DCBX_APP_PROTOCOL_ID_SHIFT);
+ *entry &= ~DCBX_APP_PRI_MAP_MASK;
+ *entry |= ((u32)(p_params->app_entry[i].prio) <<
+ DCBX_APP_PRI_MAP_SHIFT);
+ }
+}
+
+static enum _ecore_status_t
+ecore_dcbx_set_local_params(struct ecore_hwfn *p_hwfn,
+ struct dcbx_local_params *local_admin,
+ struct ecore_dcbx_set *params)
+{
+ bool ieee = false;
+
+ local_admin->flags = 0;
+ OSAL_MEMCPY(&local_admin->features,
+ &p_hwfn->p_dcbx_info->operational.features,
+ sizeof(struct dcbx_features));
+
+ if (params->enabled) {
+ local_admin->config = params->ver_num;
+ ieee = !!(params->ver_num & DCBX_CONFIG_VERSION_IEEE);
+ } else {
+ local_admin->config = DCBX_CONFIG_VERSION_DISABLED;
+ }
+
+ if (params->override_flags & ECORE_DCBX_OVERRIDE_PFC_CFG)
+ ecore_dcbx_set_pfc_data(p_hwfn, &local_admin->features.pfc,
+ ¶ms->config.params);
+
+ if (params->override_flags & ECORE_DCBX_OVERRIDE_ETS_CFG)
+ ecore_dcbx_set_ets_data(p_hwfn, &local_admin->features.ets,
+ ¶ms->config.params);
+
+ if (params->override_flags & ECORE_DCBX_OVERRIDE_APP_CFG)
+ ecore_dcbx_set_app_data(p_hwfn, &local_admin->features.app,
+ ¶ms->config.params, ieee);
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_dcbx_set_dscp_params(struct ecore_hwfn *p_hwfn,
+ struct dcb_dscp_map *p_dscp_map,
+ struct ecore_dcbx_set *p_params)
+{
+ int entry, i, j;
+ u32 val;
+
+ OSAL_MEMCPY(p_dscp_map, &p_hwfn->p_dcbx_info->dscp_map,
+ sizeof(*p_dscp_map));
+
+ if (p_params->dscp.enabled)
+ p_dscp_map->flags |= DCB_DSCP_ENABLE_MASK;
+ else
+ p_dscp_map->flags &= ~DCB_DSCP_ENABLE_MASK;
+
+ for (i = 0, entry = 0; i < 8; i++) {
+ val = 0;
+ for (j = 0; j < 8; j++, entry++)
+ val |= (((u32)p_params->dscp.dscp_pri_map[entry]) <<
+ (j * 4));
+
+ p_dscp_map->dscp_pri_map[i] = OSAL_CPU_TO_BE32(val);
+ }
+
+ p_hwfn->p_dcbx_info->dscp_nig_update = true;
+
+ return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_dcbx_set *params,
+ bool hw_commit)
+{
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+ struct ecore_dcbx_mib_meta_data data;
+ struct dcbx_local_params local_admin;
+ struct dcb_dscp_map dscp_map;
+ u32 resp = 0, param = 0;
+
+ if (!hw_commit) {
+ OSAL_MEMCPY(&p_hwfn->p_dcbx_info->set, params,
+ sizeof(struct ecore_dcbx_set));
+ return ECORE_SUCCESS;
+ }
+
+ /* Clear the set-params cache */
+ OSAL_MEMSET(&p_hwfn->p_dcbx_info->set, 0,
+ sizeof(struct ecore_dcbx_set));
+
+ OSAL_MEMSET(&local_admin, 0, sizeof(local_admin));
+ ecore_dcbx_set_local_params(p_hwfn, &local_admin, params);
+
+ data.addr = p_hwfn->mcp_info->port_addr +
+ offsetof(struct public_port, local_admin_dcbx_mib);
+ data.local_admin = &local_admin;
+ data.size = sizeof(struct dcbx_local_params);
+ ecore_memcpy_to(p_hwfn, p_ptt, data.addr, data.local_admin, data.size);
+
+ if (params->override_flags & ECORE_DCBX_OVERRIDE_DSCP_CFG) {
+ OSAL_MEMSET(&dscp_map, 0, sizeof(dscp_map));
+ ecore_dcbx_set_dscp_params(p_hwfn, &dscp_map, params);
+
+ data.addr = p_hwfn->mcp_info->port_addr +
+ offsetof(struct public_port, dcb_dscp_map);
+ data.dscp_map = &dscp_map;
+ data.size = sizeof(struct dcb_dscp_map);
+ ecore_memcpy_to(p_hwfn, p_ptt, data.addr, data.dscp_map,
+ data.size);
+ }
+
+ rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_SET_DCBX,
+ 1 << DRV_MB_PARAM_LLDP_SEND_SHIFT, &resp, ¶m);
+ if (rc != ECORE_SUCCESS) {
+ DP_NOTICE(p_hwfn, false,
+ "Failed to send DCBX update request\n");
+ return rc;
+ }
+
+ return rc;
+}
+
+enum _ecore_status_t ecore_dcbx_get_config_params(struct ecore_hwfn *p_hwfn,
+ struct ecore_dcbx_set *params)
+{
+ struct ecore_dcbx_get *dcbx_info;
+ int rc;
+
+ if (p_hwfn->p_dcbx_info->set.config.valid) {
+ OSAL_MEMCPY(params, &p_hwfn->p_dcbx_info->set,
+ sizeof(struct ecore_dcbx_set));
+ return ECORE_SUCCESS;
+ }
+
+ dcbx_info = OSAL_ALLOC(p_hwfn->p_dev, GFP_KERNEL,
+ sizeof(struct ecore_dcbx_get));
+ if (!dcbx_info) {
+ DP_ERR(p_hwfn, "Failed to allocate struct ecore_dcbx_get\n");
+ return ECORE_NOMEM;
+ }
+
+ rc = ecore_dcbx_query_params(p_hwfn, dcbx_info,
+ ECORE_DCBX_OPERATIONAL_MIB);
+ if (rc) {
+ OSAL_FREE(p_hwfn->p_dev, dcbx_info);
+ return rc;
+ }
+ p_hwfn->p_dcbx_info->set.override_flags = 0;
+
+ p_hwfn->p_dcbx_info->set.ver_num = DCBX_CONFIG_VERSION_DISABLED;
+ if (dcbx_info->operational.cee)
+ p_hwfn->p_dcbx_info->set.ver_num |= DCBX_CONFIG_VERSION_CEE;
+ if (dcbx_info->operational.ieee)
+ p_hwfn->p_dcbx_info->set.ver_num |= DCBX_CONFIG_VERSION_IEEE;
+ if (dcbx_info->operational.local)
+ p_hwfn->p_dcbx_info->set.ver_num |= DCBX_CONFIG_VERSION_STATIC;
+
+ p_hwfn->p_dcbx_info->set.enabled = dcbx_info->operational.enabled;
+ OSAL_MEMCPY(&p_hwfn->p_dcbx_info->set.config.params,
+ &dcbx_info->operational.params,
+ sizeof(struct ecore_dcbx_admin_params));
+ p_hwfn->p_dcbx_info->set.config.valid = true;
+
+ OSAL_MEMCPY(params, &p_hwfn->p_dcbx_info->set,
+ sizeof(struct ecore_dcbx_set));
+
+ OSAL_FREE(p_hwfn->p_dev, dcbx_info);
+
+ return ECORE_SUCCESS;
+}
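Together these two entry points give the upper layer a read-modify-commit
flow; a minimal sketch (error handling elided, the chosen priority is
illustrative):

	struct ecore_dcbx_set dcbx_set;

	/* Read the current operational config into the set-params cache */
	ecore_dcbx_get_config_params(p_hwfn, &dcbx_set);

	/* Example override: enable PFC on priority 4 */
	dcbx_set.override_flags |= ECORE_DCBX_OVERRIDE_PFC_CFG;
	dcbx_set.config.params.pfc.enabled = true;
	dcbx_set.config.params.pfc.prio[4] = 1;

	/* Commit to the MFW (sends DRV_MSG_CODE_SET_DCBX) */
	ecore_dcbx_config_params(p_hwfn, p_ptt, &dcbx_set, true);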
diff --git a/drivers/net/qede/base/ecore_dcbx.h b/drivers/net/qede/base/ecore_dcbx.h
index d577f4e..1518624 100644
--- a/drivers/net/qede/base/ecore_dcbx.h
+++ b/drivers/net/qede/base/ecore_dcbx.h
@@ -25,6 +25,8 @@ struct ecore_dcbx_info {
struct lldp_config_params_s lldp_local[LLDP_MAX_LLDP_AGENTS];
struct dcbx_local_params local_admin;
struct ecore_dcbx_results results;
+ struct dcb_dscp_map dscp_map;
+ bool dscp_nig_update;
struct dcbx_mib operational;
struct dcbx_mib remote;
struct ecore_dcbx_set set;
@@ -32,10 +34,15 @@ struct ecore_dcbx_info {
u8 dcbx_cap;
};
-/* Upper layer driver interface routines */
-enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *,
- struct ecore_ptt *,
- struct ecore_dcbx_set *);
+struct ecore_dcbx_mib_meta_data {
+ struct lldp_config_params_s *lldp_local;
+ struct lldp_status_params_s *lldp_remote;
+ struct dcbx_local_params *local_admin;
+ struct dcb_dscp_map *dscp_map;
+ struct dcbx_mib *mib;
+ osal_size_t size;
+ u32 addr;
+};
/* ECORE local interface routines */
enum _ecore_status_t
@@ -48,8 +55,11 @@ enum _ecore_status_t ecore_dcbx_info_alloc(struct ecore_hwfn *p_hwfn);
void ecore_dcbx_info_free(struct ecore_hwfn *, struct ecore_dcbx_info *);
void ecore_dcbx_set_pf_update_params(struct ecore_dcbx_results *p_src,
struct pf_update_ramrod_data *p_dest);
+
+#ifndef REAL_ASIC_ONLY
/* @@@TBD eagle phy workaround */
void ecore_dcbx_eagle_workaround(struct ecore_hwfn *, struct ecore_ptt *,
bool set_to_pfc);
+#endif
#endif /* __ECORE_DCBX_H__ */
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index 7cd8ee0..82416e7 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -9,7 +9,7 @@
#ifndef __ECORE_DCBX_API_H__
#define __ECORE_DCBX_API_H__
-#include "ecore.h"
+#include "ecore_status.h"
#define DCBX_CONFIG_MAX_APP_PROTOCOL 4
@@ -23,36 +23,32 @@ enum ecore_mib_read_type {
struct ecore_dcbx_app_data {
bool enable; /* DCB enabled */
- bool update; /* Update indication */
+ u8 update; /* Update indication */
u8 priority; /* Priority */
u8 tc; /* Traffic Class */
+ bool dscp_enable; /* DSCP enabled */
+ u8 dscp_val; /* DSCP value */
};
#ifndef __EXTRACT__LINUX__
enum dcbx_protocol_type {
+ DCBX_PROTOCOL_ISCSI,
+ DCBX_PROTOCOL_FCOE,
+ DCBX_PROTOCOL_ROCE,
+ DCBX_PROTOCOL_ROCE_V2,
DCBX_PROTOCOL_ETH,
DCBX_MAX_PROTOCOL_TYPE
};
-#ifdef LINUX_REMOVE
-/* We can't assume THE HSI values are available to clients, so we need
- * to redefine those here.
- */
-#ifndef LLDP_CHASSIS_ID_STAT_LEN
-#define LLDP_CHASSIS_ID_STAT_LEN 4
-#endif
-#ifndef LLDP_PORT_ID_STAT_LEN
-#define LLDP_PORT_ID_STAT_LEN 4
-#endif
-#ifndef DCBX_MAX_APP_PROTOCOL
-#define DCBX_MAX_APP_PROTOCOL 32
-#endif
-
-#endif
+#define ECORE_LLDP_CHASSIS_ID_STAT_LEN 4
+#define ECORE_LLDP_PORT_ID_STAT_LEN 4
+#define ECORE_DCBX_MAX_APP_PROTOCOL 32
+#define ECORE_MAX_PFC_PRIORITIES 8
+#define ECORE_DCBX_DSCP_SIZE 64
struct ecore_dcbx_lldp_remote {
- u32 peer_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
- u32 peer_port_id[LLDP_PORT_ID_STAT_LEN];
+ u32 peer_chassis_id[ECORE_LLDP_CHASSIS_ID_STAT_LEN];
+ u32 peer_port_id[ECORE_LLDP_PORT_ID_STAT_LEN];
bool enable_rx;
bool enable_tx;
u32 tx_interval;
@@ -60,29 +56,55 @@ struct ecore_dcbx_lldp_remote {
};
struct ecore_dcbx_lldp_local {
- u32 local_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
- u32 local_port_id[LLDP_PORT_ID_STAT_LEN];
+ u32 local_chassis_id[ECORE_LLDP_CHASSIS_ID_STAT_LEN];
+ u32 local_port_id[ECORE_LLDP_PORT_ID_STAT_LEN];
};
struct ecore_dcbx_app_prio {
+ u8 roce;
+ u8 roce_v2;
+ u8 fcoe;
+ u8 iscsi;
u8 eth;
};
+struct ecore_dbcx_pfc_params {
+ bool willing;
+ bool enabled;
+ u8 prio[ECORE_MAX_PFC_PRIORITIES];
+ u8 max_tc;
+};
+
+enum ecore_dcbx_sf_ieee_type {
+ ECORE_DCBX_SF_IEEE_ETHTYPE,
+ ECORE_DCBX_SF_IEEE_TCP_PORT,
+ ECORE_DCBX_SF_IEEE_UDP_PORT,
+ ECORE_DCBX_SF_IEEE_TCP_UDP_PORT
+};
+
+struct ecore_app_entry {
+ bool ethtype;
+ enum ecore_dcbx_sf_ieee_type sf_ieee;
+ bool enabled;
+ u8 prio;
+ u16 proto_id;
+ enum dcbx_protocol_type proto_type;
+};
+
struct ecore_dcbx_params {
- u32 app_bitmap[DCBX_MAX_APP_PROTOCOL];
+ struct ecore_app_entry app_entry[ECORE_DCBX_MAX_APP_PROTOCOL];
u16 num_app_entries;
bool app_willing;
bool app_valid;
+ bool app_error;
bool ets_willing;
bool ets_enabled;
+ bool ets_cbs;
bool valid; /* Indicate validity of params */
- u32 ets_pri_tc_tbl[1];
- u32 ets_tc_bw_tbl[2];
- u32 ets_tc_tsa_tbl[2];
- bool pfc_willing;
- bool pfc_enabled;
- u32 pfc_bitmap;
- u8 max_pfc_tc;
+ u8 ets_pri_tc_tbl[ECORE_MAX_PFC_PRIORITIES];
+ u8 ets_tc_bw_tbl[ECORE_MAX_PFC_PRIORITIES];
+ u8 ets_tc_tsa_tbl[ECORE_MAX_PFC_PRIORITIES];
+ struct ecore_dbcx_pfc_params pfc;
u8 max_ets_tc;
};
@@ -103,22 +125,40 @@ struct ecore_dcbx_operational_params {
bool enabled;
bool ieee;
bool cee;
+ bool local;
u32 err;
};
+struct ecore_dcbx_dscp_params {
+ bool enabled;
+ u8 dscp_pri_map[ECORE_DCBX_DSCP_SIZE];
+};
+
struct ecore_dcbx_get {
struct ecore_dcbx_operational_params operational;
struct ecore_dcbx_lldp_remote lldp_remote;
struct ecore_dcbx_lldp_local lldp_local;
struct ecore_dcbx_remote_params remote;
struct ecore_dcbx_admin_params local;
+ struct ecore_dcbx_dscp_params dscp;
};
#endif
+#define ECORE_DCBX_VERSION_DISABLED 0
+#define ECORE_DCBX_VERSION_IEEE 1
+#define ECORE_DCBX_VERSION_CEE 2
+
struct ecore_dcbx_set {
- struct ecore_dcbx_admin_params config;
+#define ECORE_DCBX_OVERRIDE_STATE (1 << 0)
+#define ECORE_DCBX_OVERRIDE_PFC_CFG (1 << 1)
+#define ECORE_DCBX_OVERRIDE_ETS_CFG (1 << 2)
+#define ECORE_DCBX_OVERRIDE_APP_CFG (1 << 3)
+#define ECORE_DCBX_OVERRIDE_DSCP_CFG (1 << 4)
+ u32 override_flags;
bool enabled;
+ struct ecore_dcbx_admin_params config;
u32 ver_num;
+ struct ecore_dcbx_dscp_params dscp;
};
struct ecore_dcbx_results {
@@ -133,27 +173,23 @@ struct ecore_dcbx_app_metadata {
enum ecore_pci_personality personality;
};
-struct ecore_dcbx_mib_meta_data {
- struct lldp_config_params_s *lldp_local;
- struct lldp_status_params_s *lldp_remote;
- struct dcbx_local_params *local_admin;
- struct dcbx_mib *mib;
- osal_size_t size;
- u32 addr;
-};
-
-void
-ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
- struct ecore_hw_info *p_info,
- bool enable, bool update, u8 prio, u8 tc,
- enum dcbx_protocol_type type,
- enum ecore_pci_personality personality);
-
enum _ecore_status_t ecore_dcbx_query_params(struct ecore_hwfn *,
struct ecore_dcbx_get *,
enum ecore_mib_read_type);
+enum _ecore_status_t ecore_dcbx_get_config_params(struct ecore_hwfn *,
+ struct ecore_dcbx_set *);
+
+enum _ecore_status_t ecore_dcbx_config_params(struct ecore_hwfn *,
+ struct ecore_ptt *,
+ struct ecore_dcbx_set *,
+ bool);
+
static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
+ {DCBX_PROTOCOL_ISCSI, "ISCSI", ECORE_PCI_ISCSI},
+ {DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE},
+ {DCBX_PROTOCOL_ROCE, "ROCE", ECORE_PCI_ETH_ROCE},
+ {DCBX_PROTOCOL_ROCE_V2, "ROCE_V2", ECORE_PCI_ETH_ROCE},
{DCBX_PROTOCOL_ETH, "ETH", ECORE_PCI_ETH}
};
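
For illustration, a minimal caller sketch that exercises only the routines and
flags declared above. This is not part of the patch: the function name is
hypothetical, p_hwfn/p_ptt are assumed to come from the caller's context, and
the trailing bool of ecore_dcbx_config_params() is assumed here to request an
immediate hardware update.

	/* Hypothetical example, using only the API declared above */
	static enum _ecore_status_t
	example_force_ieee_pfc(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
	{
		struct ecore_dcbx_set dcbx_set;
		enum _ecore_status_t rc;

		/* Start from the current admin configuration */
		rc = ecore_dcbx_get_config_params(p_hwfn, &dcbx_set);
		if (rc != ECORE_SUCCESS)
			return rc;

		/* Request a PFC-only override, negotiated as IEEE DCBX */
		dcbx_set.override_flags = ECORE_DCBX_OVERRIDE_PFC_CFG;
		dcbx_set.ver_num = ECORE_DCBX_VERSION_IEEE;

		/* Assumed: 'true' asks for an immediate hardware update */
		return ecore_dcbx_config_params(p_hwfn, p_ptt, &dcbx_set, true);
	}
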
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index fd38215..319edeb 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -29,11 +29,20 @@
#include "ecore_iro.h"
#include "nvm_cfg.h"
#include "ecore_dev_api.h"
-#include "ecore_attn_values.h"
#include "ecore_dcbx.h"
+/* TODO - there's a bug in DCBx re-configuration flows in MF, as the QM
+ * registers involved are not split and thus configuration is a race where
+ * some of the PFs' configuration might be lost.
+ * Eventually, this needs to move into a MFW-covered HW-lock as an
+ * arbitration mechanism, as this doesn't cover some cases [e.g., PDA or
+ * scenarios where there's more than a single compiled ecore component in
+ * the system].
+ */
+static osal_spinlock_t qm_lock;
+static bool qm_lock_init;
+
/* Configurable */
-#define ECORE_MIN_DPIS (4) /* The minimal number of DPIs required
+#define ECORE_MIN_DPIS (4) /* The minimal num of DPIs required to
* load the driver. The number was
* arbitrarily set.
*/
@@ -50,7 +59,17 @@ static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn, enum BAR_ID bar_id)
{
u32 bar_reg = (bar_id == BAR_ID_0 ?
PGLUE_B_REG_PF_BAR0_SIZE : PGLUE_B_REG_PF_BAR1_SIZE);
- u32 val = ecore_rd(p_hwfn, p_hwfn->p_main_ptt, bar_reg);
+ u32 val;
+
+ if (IS_VF(p_hwfn->p_dev)) {
+ /* TODO - assume each VF hwfn has 64Kb for Bar0; Bar1 can be
+ * read from the actual register, but we're currently not
+ * using it for actual doorbelling.
+ */
+ return 1 << 17;
+ }
+
+ val = ecore_rd(p_hwfn, p_hwfn->p_main_ptt, bar_reg);
/* The above registers were updated in the past only in CMT mode. Since
* they were found to be useful MFW started updating them from 8.7.7.0.
@@ -59,16 +78,18 @@ static u32 ecore_hw_bar_size(struct ecore_hwfn *p_hwfn, enum BAR_ID bar_id)
if (!val) {
if (p_hwfn->p_dev->num_hwfns > 1) {
DP_NOTICE(p_hwfn, false,
- "BAR size not configured. Assuming BAR"
- " size of 256kB for GRC and 512kB for DB\n");
+ "BAR size not configured. Assuming BAR size");
+ DP_NOTICE(p_hwfn, false,
+ "of 256kB for GRC and 512kB for DB\n");
return BAR_ID_0 ? 256 * 1024 : 512 * 1024;
- }
-
+ } else {
+ DP_NOTICE(p_hwfn, false,
+ "BAR size not configured. Assuming BAR size");
DP_NOTICE(p_hwfn, false,
- "BAR size not configured. Assuming BAR"
- " size of 512kB for GRC and 512kB for DB\n");
+ "of 512kB for GRC and 512kB for DB\n");
return 512 * 1024;
}
+ }
return 1 << (val + 15);
}
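
As a sanity check of the encoding used above: the PGLUE BAR size registers
hold a log2-based value, so the function returns 2^(val + 15) bytes. A few
illustrative values:

	/* val = 0 -> 1 << 15 = 32kB
	 * val = 1 -> 1 << 16 = 64kB
	 * val = 5 -> 1 << 20 = 1MB
	 * VF path -> fixed 1 << 17 = 128kB, regardless of the register
	 */
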
@@ -156,6 +177,9 @@ void ecore_resc_free(struct ecore_dev *p_dev)
ecore_eq_free(p_hwfn, p_hwfn->p_eq);
ecore_consq_free(p_hwfn, p_hwfn->p_consq);
ecore_int_free(p_hwfn);
+#ifdef CONFIG_ECORE_LL2
+ ecore_ll2_free(p_hwfn, p_hwfn->p_ll2_info);
+#endif
ecore_iov_free(p_hwfn);
ecore_dmae_info_free(p_hwfn);
ecore_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
@@ -166,16 +190,28 @@ void ecore_resc_free(struct ecore_dev *p_dev)
static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
bool b_sleepable)
{
- u8 num_vports, vf_offset = 0, i, vport_id, num_ports;
+ u8 num_vports, vf_offset = 0, i, vport_id, num_ports, curr_queue;
struct ecore_qm_info *qm_info = &p_hwfn->qm_info;
struct init_qm_port_params *p_qm_port;
- u16 num_pqs, multi_cos_tcs = 1;
-#ifdef CONFIG_ECORE_SRIOV
- u16 num_vfs = p_hwfn->p_dev->sriov_info.total_vfs;
-#else
+ bool init_rdma_offload_pq = false;
+ bool init_pure_ack_pq = false;
+ bool init_ooo_pq = false;
+ u16 num_pqs, protocol_pqs;
+ u16 num_pf_rls = 0;
u16 num_vfs = 0;
-#endif
+ u32 pf_rl;
+ u8 pf_wfq;
+ /* @TMP - save the existing min/max bw config before resetting the
+ * qm_info, so that it can be restored afterwards.
+ */
+ pf_rl = qm_info->pf_rl;
+ pf_wfq = qm_info->pf_wfq;
+
+#ifdef CONFIG_ECORE_SRIOV
+ if (p_hwfn->p_dev->p_iov_info)
+ num_vfs = p_hwfn->p_dev->p_iov_info->total_vfs;
+#endif
OSAL_MEM_ZERO(qm_info, sizeof(*qm_info));
#ifndef ASIC_ONLY
@@ -187,20 +223,74 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
}
#endif
- num_pqs = multi_cos_tcs + num_vfs + 1; /* The '1' is for pure-LB */
+ /* Ethernet PFs require a pq per tc. Even if only a subset of the TCs
+ * is active, we want physical queues allocated for all of them, since
+ * we don't have a good recycle flow. Non-ethernet PFs require only a
+ * single physical queue.
+ */
+ if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
+ p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
+ p_hwfn->hw_info.personality == ECORE_PCI_ETH)
+ protocol_pqs = p_hwfn->hw_info.num_hw_tc;
+ else
+ protocol_pqs = 1;
+
+ num_pqs = protocol_pqs + num_vfs + 1; /* The '1' is for pure-LB */
num_vports = (u8)RESC_NUM(p_hwfn, ECORE_VPORT);
+ if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+ num_pqs++; /* for RoCE queue */
+ init_rdma_offload_pq = true;
+ if (p_hwfn->pf_params.rdma_pf_params.enable_dcqcn) {
+ /* Due to FW assumption that rl==vport, we limit the
+ * number of rate limiters by the minimum between its
+ * allocated number and the allocated number of vports.
+ * Another limitation is the number of supported qps
+ * with rate limiters in FW.
+ */
+ num_pf_rls =
+ (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
+ RESC_NUM(p_hwfn, ECORE_VPORT));
+
+ /* we subtract num_vfs because each VF requires a rate
+ * limiter, plus one more for the default rate limiter.
+ */
+ if (num_pf_rls < num_vfs + 1) {
+ DP_ERR(p_hwfn, "No RL for DCQCN");
+ DP_ERR(p_hwfn, "[num_pf_rls %d num_vfs %d]\n",
+ num_pf_rls, num_vfs);
+ return ECORE_INVAL;
+ }
+ num_pf_rls -= num_vfs + 1;
+ }
+
+ num_pqs += num_pf_rls;
+ qm_info->num_pf_rls = (u8)num_pf_rls;
+ }
+
+ if (p_hwfn->hw_info.personality == ECORE_PCI_IWARP) {
+ num_pqs += 3; /* for iwarp queue / pure-ack / ooo */
+ init_rdma_offload_pq = true;
+ init_pure_ack_pq = true;
+ init_ooo_pq = true;
+ }
+
+ if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) {
+ num_pqs += 2; /* for iSCSI pure-ACK / OOO queue */
+ init_pure_ack_pq = true;
+ init_ooo_pq = true;
+ }
+
/* Sanity checking that setup requires legal number of resources */
if (num_pqs > RESC_NUM(p_hwfn, ECORE_PQ)) {
DP_ERR(p_hwfn,
- "Need too many Physical queues - 0x%04x when"
- " only %04x are available\n",
+ "Need too many Physical queues - 0x%04x avail %04x",
num_pqs, RESC_NUM(p_hwfn, ECORE_PQ));
return ECORE_INVAL;
}
/* PQs will be arranged as follows: First per-TC PQ, then pure-LB queue,
- * then special queues, then per-VF PQ.
+ * then special queues (iSCSI pure-ACK / RoCE), then per-VF PQ.
*/
qm_info->qm_pq_params = OSAL_ZALLOC(p_hwfn->p_dev,
b_sleepable ? GFP_KERNEL :
@@ -238,13 +328,40 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
vport_id = (u8)RESC_START(p_hwfn, ECORE_VPORT);
- /* First init per-TC PQs */
- for (i = 0; i < multi_cos_tcs; i++) {
- struct init_qm_pq_params *params = &qm_info->qm_pq_params[i];
+ /* First init rate limited queues (due to the RoCE assumption that
+ * qpid == rlid)
+ */
+ for (curr_queue = 0; curr_queue < num_pf_rls; curr_queue++) {
+ qm_info->qm_pq_params[curr_queue].vport_id = vport_id++;
+ qm_info->qm_pq_params[curr_queue].tc_id =
+ p_hwfn->hw_info.offload_tc;
+ qm_info->qm_pq_params[curr_queue].wrr_group = 1;
+ qm_info->qm_pq_params[curr_queue].rl_valid = 1;
+ }
+
+ /* Protocol PQs */
+ for (i = 0; i < protocol_pqs; i++) {
+ struct init_qm_pq_params *params =
+ &qm_info->qm_pq_params[curr_queue++];
- if (p_hwfn->hw_info.personality == ECORE_PCI_ETH) {
+ if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE ||
+ p_hwfn->hw_info.personality == ECORE_PCI_IWARP ||
+ p_hwfn->hw_info.personality == ECORE_PCI_ETH) {
params->vport_id = vport_id;
- params->tc_id = p_hwfn->hw_info.non_offload_tc;
+ params->tc_id = i;
+ /* Note: this assumes that if we had a configuration
+ * with N TCs and subsequently another configuration
+ * with fewer TCs, the in-flight traffic (in QM queues,
+ * in FW, from driver to FW) will still trickle out and
+ * not get "stuck" in the QM. This is determined by the
+ * NIG_REG_TX_ARB_CLIENT_IS_SUBJECT2WFQ. Unused TCs are
+ * supposed to be cleared in this map, allowing traffic
+ * to flush out. If this is not the case, we would need
+ * to set the TC of unused queues to 0, and reconfigure
+ * QM every time num of TCs changes. Unused queues in
+ * this context would mean those intended for TCs where
+ * tc_id > hw_info.num_active_tcs.
+ */
params->wrr_group = 1; /* @@@TBD ECORE_WRR_MEDIUM */
} else {
params->vport_id = vport_id;
@@ -254,22 +371,50 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
}
/* Then init pure-LB PQ */
- qm_info->pure_lb_pq = i;
- qm_info->qm_pq_params[i].vport_id =
+ qm_info->pure_lb_pq = curr_queue;
+ qm_info->qm_pq_params[curr_queue].vport_id =
(u8)RESC_START(p_hwfn, ECORE_VPORT);
- qm_info->qm_pq_params[i].tc_id = PURE_LB_TC;
- qm_info->qm_pq_params[i].wrr_group = 1;
- i++;
+ qm_info->qm_pq_params[curr_queue].tc_id = PURE_LB_TC;
+ qm_info->qm_pq_params[curr_queue].wrr_group = 1;
+ curr_queue++;
+
+ qm_info->offload_pq = 0; /* Already initialized for iSCSI/FCoE */
+ if (init_rdma_offload_pq) {
+ qm_info->offload_pq = curr_queue;
+ qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
+ qm_info->qm_pq_params[curr_queue].tc_id =
+ p_hwfn->hw_info.offload_tc;
+ qm_info->qm_pq_params[curr_queue].wrr_group = 1;
+ curr_queue++;
+ }
+
+ if (init_pure_ack_pq) {
+ qm_info->pure_ack_pq = curr_queue;
+ qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
+ qm_info->qm_pq_params[curr_queue].tc_id =
+ p_hwfn->hw_info.offload_tc;
+ qm_info->qm_pq_params[curr_queue].wrr_group = 1;
+ curr_queue++;
+ }
+
+ if (init_ooo_pq) {
+ qm_info->ooo_pq = curr_queue;
+ qm_info->qm_pq_params[curr_queue].vport_id = vport_id;
+ qm_info->qm_pq_params[curr_queue].tc_id = DCBX_ISCSI_OOO_TC;
+ qm_info->qm_pq_params[curr_queue].wrr_group = 1;
+ curr_queue++;
+ }
/* Then init per-VF PQs */
- vf_offset = i;
+ vf_offset = curr_queue;
for (i = 0; i < num_vfs; i++) {
/* First vport is used by the PF */
- qm_info->qm_pq_params[vf_offset + i].vport_id = vport_id +
- i + 1;
- qm_info->qm_pq_params[vf_offset + i].tc_id =
- p_hwfn->hw_info.non_offload_tc;
- qm_info->qm_pq_params[vf_offset + i].wrr_group = 1;
+ qm_info->qm_pq_params[curr_queue].vport_id = vport_id + i + 1;
+ /* @@@TBD VF Multi-cos */
+ qm_info->qm_pq_params[curr_queue].tc_id = 0;
+ qm_info->qm_pq_params[curr_queue].wrr_group = 1;
+ qm_info->qm_pq_params[curr_queue].rl_valid = 1;
+ curr_queue++;
};
qm_info->vf_queues_offset = vf_offset;
@@ -281,6 +426,13 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
for (i = 0; i < num_ports; i++) {
p_qm_port = &qm_info->qm_port_params[i];
p_qm_port->active = 1;
+ /* @@@TMP - was NUM_OF_PHYS_TCS; changed temporarily until
+ * DCBX is in place
+ */
+ if (num_ports == 4)
+ p_qm_port->active_phys_tcs = 0xf;
+ else
+ p_qm_port->active_phys_tcs = 0x9f;
p_qm_port->num_pbf_cmd_lines = PBF_MAX_CMD_LINES / num_ports;
p_qm_port->num_btb_blocks = BTB_MAX_BLOCKS / num_ports;
}
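
For reference, the two temporary active_phys_tcs bitmaps above decode as
follows (bit N enables physical TC N; which traffic lands on each TC is left
to the QM/DCBX configuration and is not spelled out in this patch):

	/* 0xf  = 0b00001111 -> physical TCs 0-3 active (4-port mode)
	 * 0x9f = 0b10011111 -> physical TCs 0-4 and 7 active
	 */
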
@@ -298,10 +450,10 @@ static enum _ecore_status_t ecore_init_qm_info(struct ecore_hwfn *p_hwfn,
for (i = 0; i < qm_info->num_vports; i++)
qm_info->qm_vport_params[i].vport_wfq = 1;
- qm_info->pf_wfq = 0;
- qm_info->pf_rl = 0;
qm_info->vport_rl_en = 1;
qm_info->vport_wfq_en = 1;
+ qm_info->pf_rl = pf_rl;
+ qm_info->pf_wfq = pf_wfq;
return ECORE_SUCCESS;
@@ -339,8 +491,10 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
return rc;
/* stop PF's qm queues */
+ OSAL_SPIN_LOCK(&qm_lock);
b_rc = ecore_send_qm_stop_cmd(p_hwfn, p_ptt, false, true,
qm_info->start_pq, qm_info->num_pqs);
+ OSAL_SPIN_UNLOCK(&qm_lock);
if (!b_rc)
return ECORE_INVAL;
@@ -357,9 +511,11 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
return rc;
/* start PF's qm queues */
+ OSAL_SPIN_LOCK(&qm_lock);
b_rc = ecore_send_qm_stop_cmd(p_hwfn, p_ptt, true, true,
qm_info->start_pq, qm_info->num_pqs);
- if (!rc)
+ OSAL_SPIN_UNLOCK(&qm_lock);
+ if (!b_rc)
return ECORE_INVAL;
return ECORE_SUCCESS;
@@ -367,16 +523,19 @@ enum _ecore_status_t ecore_qm_reconf(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
{
- enum _ecore_status_t rc = ECORE_SUCCESS;
struct ecore_consq *p_consq;
struct ecore_eq *p_eq;
+#ifdef CONFIG_ECORE_LL2
+ struct ecore_ll2_info *p_ll2_info;
+#endif
+ enum _ecore_status_t rc = ECORE_SUCCESS;
int i;
if (IS_VF(p_dev))
return rc;
p_dev->fw_data = OSAL_ZALLOC(p_dev, GFP_KERNEL,
- sizeof(struct ecore_fw_data));
+ sizeof(*p_dev->fw_data));
if (!p_dev->fw_data)
return ECORE_NOMEM;
@@ -409,6 +568,7 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
for_each_hwfn(p_dev, i) {
struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+ u32 n_eqes, num_cons;
/* First allocate the context manager structure */
rc = ecore_cxt_mngr_alloc(p_hwfn);
@@ -457,7 +617,60 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
goto alloc_err;
/* EQ */
- p_eq = ecore_eq_alloc(p_hwfn, 256);
+ n_eqes = ecore_chain_get_capacity(&p_hwfn->p_spq->chain);
+ if ((p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) ||
+ (p_hwfn->hw_info.personality == ECORE_PCI_IWARP)) {
+ /* Calculate the EQ size
+ * ---------------------
+ * Each ICID may generate up to one event at a time i.e.
+ * the event must be handled/cleared before a new one
+ * can be generated. We calculate the sum of events per
+ * protocol and create an EQ deep enough to handle the
+ * worst case:
+ * - Core - according to SPQ.
+ * - RoCE - per QP there are a couple of ICIDs, one
+ * responder and one requester, each can
+ * generate an EQE => n_eqes_qp = 2 * n_qp.
+ * Each CQ can generate an EQE. There are 2 CQs
+ * per QP => n_eqes_cq = 2 * n_qp.
+ * Hence the RoCE total is 4 * n_qp or
+ * 2 * num_cons.
+ * - ENet - There can be up to two events per VF. One
+ * for VF-PF channel and another for VF FLR
+ * initial cleanup. The number of VFs is
+ * bounded by MAX_NUM_VFS_BB, and is much
+ * smaller than RoCE's so we avoid exact
+ * calculation.
+ */
+ if (p_hwfn->hw_info.personality == ECORE_PCI_ETH_ROCE) {
+ num_cons =
+ ecore_cxt_get_proto_cid_count(
+ p_hwfn,
+ PROTOCOLID_ROCE,
+ 0);
+ num_cons *= 2;
+ } else {
+ num_cons = ecore_cxt_get_proto_cid_count(
+ p_hwfn,
+ PROTOCOLID_IWARP,
+ 0);
+ }
+ n_eqes += num_cons + 2 * MAX_NUM_VFS_BB;
+ } else if (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) {
+ num_cons =
+ ecore_cxt_get_proto_cid_count(p_hwfn,
+ PROTOCOLID_ISCSI, 0);
+ n_eqes += 2 * num_cons;
+ }
+
+ if (n_eqes > 0xFFFF) {
+ DP_ERR(p_hwfn, "Cannot allocate 0x%x EQ elements."
+ "The maximum of a u16 chain is 0x%x\n",
+ n_eqes, 0xFFFF);
+ goto alloc_err;
+ }
+
+ p_eq = ecore_eq_alloc(p_hwfn, (u16)n_eqes);
if (!p_eq)
goto alloc_no_mem;
p_hwfn->p_eq = p_eq;
@@ -533,6 +746,10 @@ void ecore_resc_setup(struct ecore_dev *p_dev)
ecore_int_setup(p_hwfn, p_hwfn->p_main_ptt);
ecore_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
+#ifdef CONFIG_ECORE_LL2
+ if (p_hwfn->using_ll2)
+ ecore_ll2_setup(p_hwfn, p_hwfn->p_ll2_info);
+#endif
}
}
@@ -598,7 +815,7 @@ enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn,
return rc;
}
-static void ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
+static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
{
int hw_mode = 0;
@@ -611,7 +828,7 @@ static void ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
} else {
DP_NOTICE(p_hwfn, true, "Unknown chip type %#x\n",
p_hwfn->p_dev->type);
- return;
+ return ECORE_INVAL;
}
/* Ports per engine is based on the values in CNIG_REG_NW_PORT_MODE */
@@ -629,7 +846,7 @@ static void ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
DP_NOTICE(p_hwfn, true,
"num_ports_in_engine = %d not supported\n",
p_hwfn->p_dev->num_ports_in_engines);
- return;
+ return ECORE_INVAL;
}
switch (p_hwfn->p_dev->mf_mode) {
@@ -660,8 +877,10 @@ static void ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
#endif
hw_mode |= 1 << MODE_ASIC;
+#ifndef REAL_ASIC_ONLY
if (ENABLE_EAGLE_ENG1_WORKAROUND(p_hwfn))
hw_mode |= 1 << MODE_EAGLE_ENG1_WORKAROUND;
+#endif
if (p_hwfn->p_dev->num_hwfns > 1)
hw_mode |= 1 << MODE_100G;
@@ -671,11 +890,13 @@ static void ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
DP_VERBOSE(p_hwfn, (ECORE_MSG_PROBE | ECORE_MSG_IFUP),
"Configuring function for hw_mode: 0x%08x\n",
p_hwfn->hw_info.hw_mode);
+
+ return ECORE_SUCCESS;
}
#ifndef ASIC_ONLY
/* MFW-replacement initializations for non-ASIC */
-static void ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
+static enum _ecore_status_t ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
u32 pl_hv = 1;
@@ -713,6 +934,8 @@ static void ecore_hw_init_chip(struct ecore_hwfn *p_hwfn,
if (i == 100)
DP_NOTICE(p_hwfn, true,
"RBC done failed to complete in PSWRQ2\n");
+
+ return ECORE_SUCCESS;
}
#endif
@@ -764,8 +987,11 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
ecore_gtt_init(p_hwfn);
#ifndef ASIC_ONLY
- if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
- ecore_hw_init_chip(p_hwfn, p_hwfn->p_main_ptt);
+ if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+ rc = ecore_hw_init_chip(p_hwfn, p_hwfn->p_main_ptt);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+ }
#endif
if (p_hwfn->mcp_info) {
@@ -807,9 +1033,15 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt, PGLUE_B_REG_USE_CLIENTID_IN_TAG, 1);
if (ECORE_IS_BB(p_hwfn->p_dev)) {
+ /* Workaround clears ROCE search for all functions to prevent
+ * involving non initialized function in processing ROCE packet.
+ */
num_pfs = NUM_OF_ENG_PFS(p_hwfn->p_dev);
- if (num_pfs == 1)
- return rc;
+ for (pf_id = 0; pf_id < num_pfs; pf_id++) {
+ ecore_fid_pretend(p_hwfn, p_ptt, pf_id);
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE, 0x0);
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_TCP, 0x0);
+ }
/* pretend to original PF */
ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
}
@@ -826,6 +1058,9 @@ static enum _ecore_status_t ecore_hw_init_common(struct ecore_hwfn *p_hwfn,
concrete_fid = ecore_vfid_to_concrete(p_hwfn, vf_id);
ecore_fid_pretend(p_hwfn, p_ptt, (u16)concrete_fid);
ecore_wr(p_hwfn, p_ptt, CCFC_REG_STRONG_ENABLE_VF, 0x1);
+ ecore_wr(p_hwfn, p_ptt, CCFC_REG_WEAK_ENABLE_VF, 0x0);
+ ecore_wr(p_hwfn, p_ptt, TCFC_REG_STRONG_ENABLE_VF, 0x1);
+ ecore_wr(p_hwfn, p_ptt, TCFC_REG_WEAK_ENABLE_VF, 0x0);
}
/* pretend to original PF */
ecore_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
@@ -933,10 +1168,14 @@ static void ecore_emul_link_init(struct ecore_hwfn *p_hwfn,
ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_PFC_CTRL,
0x30ffffc000ULL, 0, port);
ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x3 | (loopback << 2), 0,
- port);
- ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL, 0x1003 | (loopback << 2),
- 0, port);
+ port); /* XLMAC: TX_EN, RX_EN */
+ /* XLMAC: TX_EN, RX_EN, SW_LINK_STATUS */
+ ecore_wr_nw_port(p_hwfn, p_ptt, XLMAC_CTRL,
+ 0x1003 | (loopback << 2), 0, port);
+ /* Enabled Parallel PFC interface */
ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_FLOW_CONTROL_CONFIG, 1, 0, port);
+
+ /* XLPORT port enable */
ecore_wr_nw_port(p_hwfn, p_ptt, XLPORT_ENABLE_REG, 0xf, 1, port);
}
@@ -991,12 +1230,10 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
{
enum _ecore_status_t rc = ECORE_SUCCESS;
- /* Init sequence */
rc = ecore_init_run(p_hwfn, p_ptt, PHASE_PORT, p_hwfn->port_id,
hw_mode);
if (rc != ECORE_SUCCESS)
return rc;
-
#ifndef ASIC_ONLY
if (CHIP_REV_IS_ASIC(p_hwfn->p_dev))
return ECORE_SUCCESS;
@@ -1035,14 +1272,75 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
}
static enum _ecore_status_t
+ecore_hw_init_dpi_size(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt, u32 pwm_region_size, u32 n_cpus)
+{
+ u32 dpi_page_size_1, dpi_page_size_2, dpi_page_size;
+ u32 dpi_bit_shift, dpi_count;
+ u32 min_dpis;
+
+ /* Calculate DPI size
+ * ------------------
+ * The PWM region contains Doorbell Pages. The first is reserved for
+ * the kernel for, e.g., L2. The others are free to be used by non-
+ * trusted applications, typically from user space. Each page, called a
+ * doorbell page, is sectioned into windows that allow doorbells to be
+ * issued in parallel by the kernel/application. The size of such a
+ * window (a.k.a. WID) is 1kB.
+ * Summary:
+ * 1kB WID x N WIDS = DPI page size
+ * DPI page size x N DPIs = PWM region size
+ * Notes:
+ * The DPI page size must be a multiple of OSAL_PAGE_SIZE in order to
+ * ensure that two applications won't share the same page.
+ * It also must contain at least one WID per CPU to allow parallelism.
+ * It also must be a power of 2, since it is stored as a bit shift.
+ *
+ * The DPI page size is stored in a register as 'dpi_bit_shift' so that
+ * 0 is 4kB, 1 is 8kB, etc. Hence the minimum size is 4,096 bytes,
+ * containing 4 WIDs.
+ */
+ dpi_page_size_1 = ECORE_WID_SIZE * n_cpus;
+ dpi_page_size_2 = OSAL_MAX_T(u32, ECORE_WID_SIZE, OSAL_PAGE_SIZE);
+ dpi_page_size = OSAL_MAX_T(u32, dpi_page_size_1, dpi_page_size_2);
+ dpi_page_size = OSAL_ROUNDUP_POW_OF_TWO(dpi_page_size);
+ dpi_bit_shift = OSAL_LOG2(dpi_page_size / 4096);
+
+ dpi_count = pwm_region_size / dpi_page_size;
+
+ min_dpis = p_hwfn->pf_params.rdma_pf_params.min_dpis;
+ min_dpis = OSAL_MAX_T(u32, ECORE_MIN_DPIS, min_dpis);
+
+ /* Update hwfn */
+ p_hwfn->dpi_size = dpi_page_size;
+ p_hwfn->dpi_count = dpi_count;
+
+ /* Update registers */
+ ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_DPI_BIT_SHIFT, dpi_bit_shift);
+
+ if (dpi_count < min_dpis)
+ return ECORE_NORESOURCES;
+
+ return ECORE_SUCCESS;
+}
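+
+/* A worked example of the calculation above, with illustrative values:
+ * assuming 1kB WIDs (ECORE_WID_SIZE), 4kB OSAL_PAGE_SIZE, 16 active CPUs
+ * and a 512kB PWM region:
+ *   dpi_page_size_1 = 1kB * 16 = 16kB
+ *   dpi_page_size_2 = max(1kB, 4kB) = 4kB
+ *   dpi_page_size = roundup_pow2(max(16kB, 4kB)) = 16kB
+ *   dpi_bit_shift = log2(16kB / 4kB) = 2
+ *   dpi_count = 512kB / 16kB = 32 DPIs
+ */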
+
+enum ECORE_ROCE_EDPM_MODE {
+ ECORE_ROCE_EDPM_MODE_ENABLE = 0,
+ ECORE_ROCE_EDPM_MODE_FORCE_ON = 1,
+ ECORE_ROCE_EDPM_MODE_DISABLE = 2,
+};
+
+static enum _ecore_status_t
ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
u32 pwm_regsize, norm_regsize;
u32 non_pwm_conn, min_addr_reg1;
u32 db_bar_size, n_cpus;
+ u32 roce_edpm_mode;
u32 pf_dems_shift;
int rc = ECORE_SUCCESS;
+ u8 cond;
db_bar_size = ecore_hw_bar_size(p_hwfn, BAR_ID_1);
if (p_hwfn->p_dev->num_hwfns > 1)
@@ -1073,26 +1371,62 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
/* Check that the normal and PWM sizes are valid */
if (db_bar_size < norm_regsize) {
DP_ERR(p_hwfn->p_dev,
- "Doorbell BAR size 0x%x is too"
- " small (normal region is 0x%0x )\n",
+ "Doorbell BAR size 0x%x is too small (normal region is 0x%0x )\n",
db_bar_size, norm_regsize);
return ECORE_NORESOURCES;
}
if (pwm_regsize < ECORE_MIN_PWM_REGION) {
DP_ERR(p_hwfn->p_dev,
- "PWM region size 0x%0x is too small."
- " Should be at least 0x%0x (Doorbell BAR size"
- " is 0x%x and normal region size is 0x%0x)\n",
+ "PWM region size 0x%0x is too small. Should be at least 0x%0x (Doorbell BAR size is 0x%x and normal region size is 0x%0x)\n",
pwm_regsize, ECORE_MIN_PWM_REGION, db_bar_size,
norm_regsize);
return ECORE_NORESOURCES;
}
- /* Update hwfn */
- p_hwfn->dpi_start_offset = norm_regsize; /* this is later used to
- * calculate the doorbell
- * address
+ /* Calculate number of DPIs */
+ roce_edpm_mode = p_hwfn->pf_params.rdma_pf_params.roce_edpm_mode;
+ if ((roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE) ||
+ (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_FORCE_ON)) {
+ /* Either EDPM is mandatory, or we are attempting to allocate a
+ * WID per CPU.
+ */
+ n_cpus = OSAL_NUM_ACTIVE_CPU();
+ rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus);
+ }
+
+ cond = ((rc) && (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_ENABLE)) ||
+ (roce_edpm_mode == ECORE_ROCE_EDPM_MODE_DISABLE);
+ if (cond || p_hwfn->dcbx_no_edpm) {
+ /* Either EDPM is disabled from user configuration, or it is
+ * disabled via DCBx, or it is not mandatory and we failed to
+ * allocate a WID per CPU.
+ */
+ n_cpus = 1;
+ rc = ecore_hw_init_dpi_size(p_hwfn, p_ptt, pwm_regsize, n_cpus);
+
+ /* If we entered this flow due to DCBX then the DPM register is
+ * already configured.
*/
+ }
+
+ DP_INFO(p_hwfn,
+ "doorbell bar: normal_region_size=%d, pwm_region_size=%d",
+ norm_regsize, pwm_regsize);
+ DP_INFO(p_hwfn,
+ " dpi_size=%d, dpi_count=%d, roce_edpm=%s\n",
+ p_hwfn->dpi_size, p_hwfn->dpi_count,
+ ((p_hwfn->dcbx_no_edpm) || (p_hwfn->db_bar_no_edpm)) ?
+ "disabled" : "enabled");
+
+ /* Check return codes from above calls */
+ if (rc) {
+ DP_ERR(p_hwfn,
+ "Failed to allocate enough DPIs\n");
+ return ECORE_NORESOURCES;
+ }
+
+ /* Update hwfn */
+ p_hwfn->dpi_start_offset = norm_regsize;
/* Update registers */
/* DEMS size is configured log2 of DWORDs, hence the division by 4 */
@@ -1100,12 +1434,6 @@ ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_ICID_BIT_SHIFT_NORM, pf_dems_shift);
ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_MIN_ADDR_REG1, min_addr_reg1);
- DP_INFO(p_hwfn,
- "Doorbell size 0x%x, Normal region 0x%x, PWM region 0x%x\n",
- db_bar_size, norm_regsize, pwm_regsize);
- DP_INFO(p_hwfn, "DPI size 0x%x, DPI count 0x%x\n", p_hwfn->dpi_size,
- p_hwfn->dpi_count);
-
return ECORE_SUCCESS;
}
@@ -1131,11 +1459,11 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
p_hwfn->qm_info.pf_wfq = p_info->bandwidth_min;
/* Update rate limit once we actually have a link */
- p_hwfn->qm_info.pf_rl = 100;
+ p_hwfn->qm_info.pf_rl = 100000;
}
ecore_cxt_hw_init_pf(p_hwfn);
- ecore_int_igu_init_rt(p_hwfn); /* @@@TBD TODO MichalS multi hwfn ?? */
+ ecore_int_igu_init_rt(p_hwfn);
/* Set VLAN in NIG if needed */
if (hw_mode & (1 << MODE_MF_SD)) {
@@ -1154,7 +1482,11 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
}
/* Protocol Configuration - @@@TBD - should we set 0 otherwise? */
- STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_TCP_RT_OFFSET, 0);
+ STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_TCP_RT_OFFSET,
+ (p_hwfn->hw_info.personality == ECORE_PCI_ISCSI) ? 1 : 0);
+ STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_FCOE_RT_OFFSET,
+ (p_hwfn->hw_info.personality == ECORE_PCI_FCOE) ? 1 : 0);
+ STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_ROCE_RT_OFFSET, 0);
/* perform debug configuration when chip is out of reset */
OSAL_BEFORE_PF_START((void *)p_hwfn->p_dev, p_hwfn->my_id);
@@ -1184,24 +1516,20 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
* PCI config space.
*/
/* Not in use @DPDK
- * pos = OSAL_PCI_FIND_CAPABILITY(p_hwfn->p_dev, PCI_CAP_ID_EXP);
- * if (!pos) {
- * DP_NOTICE(p_hwfn, true,
- * "Failed to find the PCI Express"
- * " Capability structure in the PCI config space\n");
- * return ECORE_IO;
- * }
- * OSAL_PCI_READ_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL,
- * &ctrl);
- * ctrl &= ~PCI_EXP_DEVCTL_RELAX_EN;
- * OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL,
- * &ctrl);
- */
+ * pos = OSAL_PCI_FIND_CAPABILITY(p_hwfn->p_dev, PCI_CAP_ID_EXP);
+ * if (!pos) {
+ * DP_NOTICE(p_hwfn, true,
+ * "Failed to find the PCIe Cap\n");
+ * return ECORE_IO;
+ * }
+ * OSAL_PCI_READ_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL, &ctrl);
+ * ctrl &= ~PCI_EXP_DEVCTL_RELAX_EN;
+ * OSAL_PCI_WRITE_CONFIG_WORD(p_hwfn->p_dev, pos + PCI_EXP_DEVCTL, ctrl);
+ */
rc = ecore_hw_init_pf_doorbell_bar(p_hwfn, p_ptt);
if (rc)
return rc;
-
if (b_hw_start) {
/* enable interrupts */
ecore_int_igu_enable(p_hwfn, p_ptt, int_mode);
@@ -1217,14 +1545,27 @@ ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
"PRS_REG_SEARCH_TAG1: %x\n", prs_reg);
+ if (p_hwfn->hw_info.personality == ECORE_PCI_FCOE) {
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_TAG1,
+ (1 << 2));
+ ecore_wr(p_hwfn, p_ptt,
+ PRS_REG_PKT_LEN_STAT_TAGS_NOT_COUNTED_FIRST,
+ 0x100);
+ }
DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
- "PRS_REG_SEARCH register after start PFn\n");
+ "PRS_REG_SEARCH registers after start PFn\n");
prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_TCP);
DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
"PRS_REG_SEARCH_TCP: %x\n", prs_reg);
prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_UDP);
DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
"PRS_REG_SEARCH_UDP: %x\n", prs_reg);
+ prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_FCOE);
+ DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
+ "PRS_REG_SEARCH_FCOE: %x\n", prs_reg);
+ prs_reg = ecore_rd(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE);
+ DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
+ "PRS_REG_SEARCH_ROCE: %x\n", prs_reg);
prs_reg = ecore_rd(p_hwfn, p_ptt,
PRS_REG_SEARCH_TCP_FIRST_FRAG);
DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
@@ -1288,6 +1629,12 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
u32 load_code, param;
int i, j;
+ if ((int_mode == ECORE_INT_MODE_MSI) && (p_dev->num_hwfns > 1)) {
+ DP_NOTICE(p_dev, false,
+ "MSI mode is not supported for CMT devices\n");
+ return ECORE_INVAL;
+ }
+
if (IS_PF(p_dev)) {
rc = ecore_init_fw_data(p_dev, bin_fw_data);
if (rc != ECORE_SUCCESS)
@@ -1298,16 +1645,19 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
if (IS_VF(p_dev)) {
- rc = ecore_vf_pf_init(p_hwfn);
- if (rc)
- return rc;
+ p_hwfn->b_int_enabled = 1;
continue;
}
/* Enable DMAE in PXP */
rc = ecore_change_pci_hwfn(p_hwfn, p_hwfn->p_main_ptt, true);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ rc = ecore_calc_hw_mode(p_hwfn);
+ if (rc != ECORE_SUCCESS)
+ return rc;
- ecore_calc_hw_mode(p_hwfn);
/* @@@TBD need to add here:
* Check for fan failure
* Prev_unload
@@ -1344,6 +1694,11 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
p_hwfn->first_on_engine = (load_code ==
FW_MSG_CODE_DRV_LOAD_ENGINE);
+ if (!qm_lock_init) {
+ OSAL_SPIN_LOCK_INIT(&qm_lock);
+ qm_lock_init = true;
+ }
+
switch (load_code) {
case FW_MSG_CODE_DRV_LOAD_ENGINE:
rc = ecore_hw_init_common(p_hwfn, p_hwfn->p_main_ptt,
@@ -1424,7 +1779,7 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
}
#define ECORE_HW_STOP_RETRY_LIMIT (10)
-static OSAL_INLINE void ecore_hw_timers_stop(struct ecore_dev *p_dev,
+static void ecore_hw_timers_stop(struct ecore_dev *p_dev,
struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
@@ -1500,6 +1855,8 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
/* close parser */
ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_TCP, 0x0);
ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_UDP, 0x0);
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_FCOE, 0x0);
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE, 0x0);
ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_OPENFLOW, 0x0);
/* @@@TBD - clean transmission queues (5.b) */
@@ -1553,6 +1910,8 @@ void ecore_hw_stop_fastpath(struct ecore_dev *p_dev)
ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_TCP, 0x0);
ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_UDP, 0x0);
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_FCOE, 0x0);
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_ROCE, 0x0);
ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_OPENFLOW, 0x0);
/* @@@TBD - clean transmission queues (5.b) */
@@ -1573,6 +1932,16 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn)
if (IS_VF(p_hwfn->p_dev))
return;
+ /* If the roce info is allocated it means roce is initialized and
+ * should be enabled in the searcher.
+ */
+ if (p_hwfn->p_rdma_info) {
+ if (p_hwfn->b_rdma_enabled_in_prs)
+ ecore_wr(p_hwfn, p_ptt,
+ p_hwfn->rdma_prs_search_reg, 0x1);
+ ecore_wr(p_hwfn, p_ptt, TM_REG_PF_ENABLE_CONN, 0x1);
+ }
+
/* Re-open incoming traffic */
ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
@@ -1623,15 +1992,10 @@ enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev)
/* Disable PF in HW blocks */
ecore_wr(p_hwfn, p_hwfn->p_main_ptt, DORQ_REG_PF_DB_ENABLE, 0);
ecore_wr(p_hwfn, p_hwfn->p_main_ptt, QM_REG_PF_EN, 0);
- ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
- TCFC_REG_STRONG_ENABLE_PF, 0);
- ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
- CCFC_REG_STRONG_ENABLE_PF, 0);
if (p_dev->recov_in_prog) {
DP_VERBOSE(p_hwfn, ECORE_MSG_IFDOWN,
- "Recovery is in progress -> skip "
- "sending unload_req/done\n");
+ "Recovery is in progress -> skip sending unload_req/done\n");
break;
}
@@ -1672,10 +2036,25 @@ static void ecore_hw_hwfn_free(struct ecore_hwfn *p_hwfn)
static void ecore_hw_hwfn_prepare(struct ecore_hwfn *p_hwfn)
{
/* clear indirect access */
- ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PGLUE_B_REG_PGL_ADDR_88_F0, 0);
- ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PGLUE_B_REG_PGL_ADDR_8C_F0, 0);
- ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PGLUE_B_REG_PGL_ADDR_90_F0, 0);
- ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PGLUE_B_REG_PGL_ADDR_94_F0, 0);
+ if (ECORE_IS_AH(p_hwfn->p_dev)) {
+ ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+ PGLUE_B_REG_PGL_ADDR_E8_F0, 0);
+ ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+ PGLUE_B_REG_PGL_ADDR_EC_F0, 0);
+ ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+ PGLUE_B_REG_PGL_ADDR_F0_F0, 0);
+ ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+ PGLUE_B_REG_PGL_ADDR_F4_F0, 0);
+ } else {
+ ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+ PGLUE_B_REG_PGL_ADDR_88_F0, 0);
+ ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+ PGLUE_B_REG_PGL_ADDR_8C_F0, 0);
+ ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+ PGLUE_B_REG_PGL_ADDR_90_F0, 0);
+ ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
+ PGLUE_B_REG_PGL_ADDR_94_F0, 0);
+ }
/* Clean Previous errors if such exist */
ecore_wr(p_hwfn, p_hwfn->p_main_ptt,
@@ -1695,7 +2074,6 @@ static void get_function_id(struct ecore_hwfn *p_hwfn)
p_hwfn->hw_info.concrete_fid = REG_RD(p_hwfn, PXP_PF_ME_CONCRETE_ADDR);
/* Bits 16-19 from the ME registers are the pf_num */
- /* @@ @TBD - check, may be wrong after B0 implementation for CMT */
p_hwfn->abs_pf_id = (p_hwfn->hw_info.concrete_fid >> 16) & 0xf;
p_hwfn->rel_pf_id = GET_FIELD(p_hwfn->hw_info.concrete_fid,
PXP_CONCRETE_FID_PFID);
@@ -1719,59 +2097,285 @@ static void ecore_hw_set_feat(struct ecore_hwfn *p_hwfn)
RESC_NUM(p_hwfn, ECORE_L2_QUEUE));
DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
- "#PF_L2_QUEUES=%d #SBS=%d num_features=%d\n",
+ "#PF_L2_QUEUES=%d #ROCE_CNQ=%d #SBS=%d num_features=%d\n",
feat_num[ECORE_PF_L2_QUE],
+ feat_num[ECORE_RDMA_CNQ],
RESC_NUM(p_hwfn, ECORE_SB), num_features);
}
-/* @@@TBD MK RESC: This info is currently hard code and set as if we were MF
- * need to read it from shmem...
- */
-static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn)
+static enum resource_id_enum
+ecore_hw_get_mfw_res_id(enum ecore_resources res_id)
+{
+ enum resource_id_enum mfw_res_id = RESOURCE_NUM_INVALID;
+
+ switch (res_id) {
+ case ECORE_SB:
+ mfw_res_id = RESOURCE_NUM_SB_E;
+ break;
+ case ECORE_L2_QUEUE:
+ mfw_res_id = RESOURCE_NUM_L2_QUEUE_E;
+ break;
+ case ECORE_VPORT:
+ mfw_res_id = RESOURCE_NUM_VPORT_E;
+ break;
+ case ECORE_RSS_ENG:
+ mfw_res_id = RESOURCE_NUM_RSS_ENGINES_E;
+ break;
+ case ECORE_PQ:
+ mfw_res_id = RESOURCE_NUM_PQ_E;
+ break;
+ case ECORE_RL:
+ mfw_res_id = RESOURCE_NUM_RL_E;
+ break;
+ case ECORE_MAC:
+ case ECORE_VLAN:
+ /* Each VFC resource can accommodate both a MAC and a VLAN */
+ mfw_res_id = RESOURCE_VFC_FILTER_E;
+ break;
+ case ECORE_ILT:
+ mfw_res_id = RESOURCE_ILT_E;
+ break;
+ case ECORE_LL2_QUEUE:
+ mfw_res_id = RESOURCE_LL2_QUEUE_E;
+ break;
+ case ECORE_RDMA_CNQ_RAM:
+ case ECORE_CMDQS_CQS:
+ /* CNQ/CMDQS are the same resource */
+ mfw_res_id = RESOURCE_CQS_E;
+ break;
+ case ECORE_RDMA_STATS_QUEUE:
+ mfw_res_id = RESOURCE_RDMA_STATS_QUEUE_E;
+ break;
+ default:
+ break;
+ }
+
+ return mfw_res_id;
+}
+
+static u32 ecore_hw_get_dflt_resc_num(struct ecore_hwfn *p_hwfn,
+ enum ecore_resources res_id)
{
- u32 *resc_start = p_hwfn->hw_info.resc_start;
u8 num_funcs = p_hwfn->num_funcs_on_engine;
- u32 *resc_num = p_hwfn->hw_info.resc_num;
- int i, max_vf_vlan_filters;
- struct ecore_sb_cnt_info sb_cnt_info;
bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+ struct ecore_sb_cnt_info sb_cnt_info;
+ u32 dflt_resc_num = 0;
+ switch (res_id) {
+ case ECORE_SB:
OSAL_MEM_ZERO(&sb_cnt_info, sizeof(sb_cnt_info));
-
-#ifdef CONFIG_ECORE_SRIOV
- max_vf_vlan_filters = ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS;
-#else
- max_vf_vlan_filters = 0;
-#endif
-
ecore_int_get_num_sbs(p_hwfn, &sb_cnt_info);
- resc_num[ECORE_SB] = OSAL_MIN_T(u32,
- (MAX_SB_PER_PATH_BB / num_funcs),
- sb_cnt_info.sb_cnt);
-
- resc_num[ECORE_L2_QUEUE] = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
+ dflt_resc_num = sb_cnt_info.sb_cnt;
+ break;
+ case ECORE_L2_QUEUE:
+ dflt_resc_num = (b_ah ? MAX_NUM_L2_QUEUES_K2 :
MAX_NUM_L2_QUEUES_BB) / num_funcs;
- resc_num[ECORE_VPORT] = (b_ah ? MAX_NUM_VPORTS_K2 :
+ break;
+ case ECORE_VPORT:
+ dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
MAX_NUM_VPORTS_BB) / num_funcs;
- resc_num[ECORE_RSS_ENG] = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
+ break;
+ case ECORE_RSS_ENG:
+ dflt_resc_num = (b_ah ? ETH_RSS_ENGINE_NUM_K2 :
ETH_RSS_ENGINE_NUM_BB) / num_funcs;
- resc_num[ECORE_PQ] = (b_ah ? MAX_QM_TX_QUEUES_K2 :
+ break;
+ case ECORE_PQ:
+ dflt_resc_num = (b_ah ? MAX_QM_TX_QUEUES_K2 :
MAX_QM_TX_QUEUES_BB) / num_funcs;
- resc_num[ECORE_RL] = 8;
- resc_num[ECORE_MAC] = ETH_NUM_MAC_FILTERS / num_funcs;
- resc_num[ECORE_VLAN] = (ETH_NUM_VLAN_FILTERS -
- max_vf_vlan_filters +
- 1 /*For vlan0 */) / num_funcs;
-
- /* TODO - there will be a problem in AH - there are only 11k lines */
- resc_num[ECORE_ILT] = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
+ break;
+ case ECORE_RL:
+ dflt_resc_num = MAX_QM_GLOBAL_RLS / num_funcs;
+ break;
+ case ECORE_MAC:
+ case ECORE_VLAN:
+ /* Each VFC resource can accommodate both a MAC and a VLAN */
+ dflt_resc_num = ETH_NUM_MAC_FILTERS / num_funcs;
+ break;
+ case ECORE_ILT:
+ dflt_resc_num = (b_ah ? PXP_NUM_ILT_RECORDS_K2 :
PXP_NUM_ILT_RECORDS_BB) / num_funcs;
+ break;
+ case ECORE_LL2_QUEUE:
+ dflt_resc_num = MAX_NUM_LL2_RX_QUEUES / num_funcs;
+ break;
+ case ECORE_RDMA_CNQ_RAM:
+ case ECORE_CMDQS_CQS:
+ /* CNQ/CMDQS are the same resource */
+ /* @DPDK */
+ dflt_resc_num = (NUM_OF_GLOBAL_QUEUES / 2) / num_funcs;
+ break;
+ case ECORE_RDMA_STATS_QUEUE:
+ /* @DPDK */
+ dflt_resc_num = (b_ah ? MAX_NUM_VPORTS_K2 :
+ MAX_NUM_VPORTS_BB) / num_funcs;
+ break;
+ default:
+ break;
+ }
+
+ return dflt_resc_num;
+}
+
+static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
+ enum ecore_resources res_id,
+ bool drv_resc_alloc)
+{
+ u32 dflt_resc_num = 0, dflt_resc_start = 0, mcp_resp, mcp_param;
+ u32 *p_resc_num, *p_resc_start;
+ struct resource_info resc_info;
+ enum _ecore_status_t rc;
+
+ p_resc_num = &RESC_NUM(p_hwfn, res_id);
+ p_resc_start = &RESC_START(p_hwfn, res_id);
+
+ dflt_resc_num = ecore_hw_get_dflt_resc_num(p_hwfn, res_id);
+ if (!dflt_resc_num) {
+ DP_ERR(p_hwfn, "Failed to get default amount for resource %d\n",
+ res_id);
+ return ECORE_INVAL;
+ }
+ dflt_resc_start = dflt_resc_num * p_hwfn->enabled_func_idx;
+
+#ifndef ASIC_ONLY
+ if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
+ *p_resc_num = dflt_resc_num;
+ *p_resc_start = dflt_resc_start;
+ goto out;
+ }
+#endif
+
+ OSAL_MEM_ZERO(&resc_info, sizeof(resc_info));
+ resc_info.res_id = ecore_hw_get_mfw_res_id(res_id);
+ if (resc_info.res_id == RESOURCE_NUM_INVALID) {
+ DP_ERR(p_hwfn,
+ "Failed to match resource %d with MFW resources\n",
+ res_id);
+ return ECORE_INVAL;
+ }
+
+ rc = ecore_mcp_get_resc_info(p_hwfn, p_hwfn->p_main_ptt, &resc_info,
+ &mcp_resp, &mcp_param);
+ if (rc != ECORE_SUCCESS) {
+ DP_NOTICE(p_hwfn, true,
+ "MFW resp failure for a resc alloc req [res_id %d]\n",
+ res_id);
+ return rc;
+ }
+
+ /* Default driver values are applied in the following cases:
+ * - The resource allocation MB command is not supported by the MFW
+ * - There is an internal error in the MFW while processing the request
+ * - The resource ID is unknown to the MFW
+ */
+ if (mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_OK &&
+ mcp_resp != FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED) {
+ /* @DPDK */
+ DP_INFO(p_hwfn,
+ "No allocation info for resc %d [mcp_resp 0x%x].",
+ res_id, mcp_resp);
+ DP_INFO(p_hwfn,
+ "Applying default values [num %d, start %d].\n",
+ dflt_resc_num, dflt_resc_start);
+
+ *p_resc_num = dflt_resc_num;
+ *p_resc_start = dflt_resc_start;
+ goto out;
+ }
+
+ /* TBD - remove this when revising the handling of the SB resource */
+ if (res_id == ECORE_SB) {
+ /* Excluding the slowpath SB */
+ resc_info.size -= 1;
+ resc_info.offset -= p_hwfn->enabled_func_idx;
+ }
+
+ *p_resc_num = resc_info.size;
+ *p_resc_start = resc_info.offset;
+
+ if (*p_resc_num != dflt_resc_num || *p_resc_start != dflt_resc_start) {
+ DP_NOTICE(p_hwfn, false,
+ "Resource %d: MFW allocation [num %d, start %d]",
+ res_id, *p_resc_num, *p_resc_start);
+ DP_NOTICE(p_hwfn, false,
+ "differs from default values [num %d, start %d]%s\n",
+ dflt_resc_num,
+ dflt_resc_start,
+ drv_resc_alloc ? " - applying default values" : "");
+ if (drv_resc_alloc) {
+ *p_resc_num = dflt_resc_num;
+ *p_resc_start = dflt_resc_start;
+ }
+ }
+ out:
+ return ECORE_SUCCESS;
+}
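+
+/* Illustrative note on the default (non-MFW) path above: resources are
+ * split evenly between the enabled functions on the engine, and each
+ * function's range starts right after the previous one's. E.g., with
+ * num_funcs = 4, each PF gets MAX_QM_TX_QUEUES_BB / 4 PQs, and the PF
+ * at enabled-function index 2 starts at 2 * (MAX_QM_TX_QUEUES_BB / 4),
+ * i.e. it owns the third contiguous quarter of the PQ range.
+ */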
+
+static const char *ecore_hw_get_resc_name(enum ecore_resources res_id)
+{
+ switch (res_id) {
+ case ECORE_SB:
+ return "SB";
+ case ECORE_L2_QUEUE:
+ return "L2_QUEUE";
+ case ECORE_VPORT:
+ return "VPORT";
+ case ECORE_RSS_ENG:
+ return "RSS_ENG";
+ case ECORE_PQ:
+ return "PQ";
+ case ECORE_RL:
+ return "RL";
+ case ECORE_MAC:
+ return "MAC";
+ case ECORE_VLAN:
+ return "VLAN";
+ case ECORE_RDMA_CNQ_RAM:
+ return "RDMA_CNQ_RAM";
+ case ECORE_ILT:
+ return "ILT";
+ case ECORE_LL2_QUEUE:
+ return "LL2_QUEUE";
+ case ECORE_CMDQS_CQS:
+ return "CMDQS_CQS";
+ case ECORE_RDMA_STATS_QUEUE:
+ return "RDMA_STATS_QUEUE";
+ default:
+ return "UNKNOWN_RESOURCE";
+ }
+}
+
+static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
+ bool drv_resc_alloc)
+{
+ bool b_ah = ECORE_IS_AH(p_hwfn->p_dev);
+ enum _ecore_status_t rc;
+ u8 res_id;
+#ifndef ASIC_ONLY
+ u32 *resc_start = p_hwfn->hw_info.resc_start;
+ u32 *resc_num = p_hwfn->hw_info.resc_num;
+ /* For AH, an equal share of the ILT lines between the maximal number of
+ * PFs is not enough for RoCE. This would be solved by the future
+ * resource allocation scheme, but that scheme isn't currently present
+ * for FPGA/emulation. For now we keep a number that is sufficient for
+ * RoCE to work - the BB number of ILT lines divided by its maximal
+ * number of PFs.
+ */
+ u32 roce_min_ilt_lines = PXP_NUM_ILT_RECORDS_BB / MAX_NUM_PFS_BB;
+#endif
+
+ for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++) {
+ rc = ecore_hw_set_resc_info(p_hwfn, res_id, drv_resc_alloc);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+ }
#ifndef ASIC_ONLY
if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
/* Reduced build contains less PQs */
- if (!(p_hwfn->p_dev->b_is_emul_full))
+ if (!(p_hwfn->p_dev->b_is_emul_full)) {
resc_num[ECORE_PQ] = 32;
+ resc_start[ECORE_PQ] = resc_num[ECORE_PQ] *
+ p_hwfn->enabled_func_idx;
+ }
/* For AH emulation, since we have a possible maximal number of
* 16 enabled PFs, in case there are not enough ILT lines -
@@ -1779,19 +2383,17 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn)
* only with less ILT lines.
*/
if (!p_hwfn->rel_pf_id && p_hwfn->p_dev->b_is_emul_full)
- resc_num[ECORE_ILT] = resc_num[ECORE_ILT];
+ resc_num[ECORE_ILT] = OSAL_MAX_T(u32,
+ resc_num[ECORE_ILT],
+ roce_min_ilt_lines);
}
-#endif
- for (i = 0; i < ECORE_MAX_RESC; i++)
- resc_start[i] = resc_num[i] * p_hwfn->rel_pf_id;
-
-#ifndef ASIC_ONLY
/* Correct the common ILT calculation if PF0 has more */
if (CHIP_REV_IS_SLOW(p_hwfn->p_dev) &&
p_hwfn->p_dev->b_is_emul_full &&
- p_hwfn->rel_pf_id && resc_num[ECORE_ILT])
- resc_start[ECORE_ILT] += resc_num[ECORE_ILT];
+ p_hwfn->rel_pf_id && resc_num[ECORE_ILT] < roce_min_ilt_lines)
+ resc_start[ECORE_ILT] += roce_min_ilt_lines -
+ resc_num[ECORE_ILT];
#endif
/* Sanity for ILT */
@@ -1808,29 +2410,12 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn)
ecore_hw_set_feat(p_hwfn);
DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
- "The numbers for each resource are:\n"
- "SB = %d start = %d\n"
- "L2_QUEUE = %d start = %d\n"
- "VPORT = %d start = %d\n"
- "PQ = %d start = %d\n"
- "RL = %d start = %d\n"
- "MAC = %d start = %d\n"
- "VLAN = %d start = %d\n"
- "ILT = %d start = %d\n"
- "CMDQS_CQS = %d start = %d\n",
- RESC_NUM(p_hwfn, ECORE_SB), RESC_START(p_hwfn, ECORE_SB),
- RESC_NUM(p_hwfn, ECORE_L2_QUEUE),
- RESC_START(p_hwfn, ECORE_L2_QUEUE),
- RESC_NUM(p_hwfn, ECORE_VPORT),
- RESC_START(p_hwfn, ECORE_VPORT),
- RESC_NUM(p_hwfn, ECORE_PQ), RESC_START(p_hwfn, ECORE_PQ),
- RESC_NUM(p_hwfn, ECORE_RL), RESC_START(p_hwfn, ECORE_RL),
- RESC_NUM(p_hwfn, ECORE_MAC), RESC_START(p_hwfn, ECORE_MAC),
- RESC_NUM(p_hwfn, ECORE_VLAN),
- RESC_START(p_hwfn, ECORE_VLAN),
- RESC_NUM(p_hwfn, ECORE_ILT), RESC_START(p_hwfn, ECORE_ILT),
- RESC_NUM(p_hwfn, ECORE_CMDQS_CQS),
- RESC_START(p_hwfn, ECORE_CMDQS_CQS));
+ "The numbers for each resource are:\n");
+ for (res_id = 0; res_id < ECORE_MAX_RESC; res_id++)
+ DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE, "%s = %d start = %d\n",
+ ecore_hw_get_resc_name(res_id),
+ RESC_NUM(p_hwfn, res_id),
+ RESC_START(p_hwfn, res_id));
return ECORE_SUCCESS;
}
@@ -1839,19 +2424,20 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
u32 nvm_cfg1_offset, mf_mode, addr, generic_cont0, core_cfg;
- u32 port_cfg_addr, link_temp, device_capabilities;
+ u32 port_cfg_addr, link_temp, nvm_cfg_addr, device_capabilities;
struct ecore_mcp_link_params *link;
/* Read global nvm_cfg address */
- u32 nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
+ nvm_cfg_addr = ecore_rd(p_hwfn, p_ptt, MISC_REG_GEN_PURP_CR0);
/* Verify MCP has initialized it */
- if (nvm_cfg_addr == 0) {
+ if (!nvm_cfg_addr) {
DP_NOTICE(p_hwfn, false, "Shared memory not initialized\n");
return ECORE_INVAL;
}
/* Read nvm_cfg1 (notice this is just the offset, not the offsize) (TBD) */
+
nvm_cfg1_offset = ecore_rd(p_hwfn, p_ptt, nvm_cfg_addr + 4);
addr = MCP_REG_SCRATCH + nvm_cfg1_offset +
@@ -1862,33 +2448,36 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
switch ((core_cfg & NVM_CFG1_GLOB_NETWORK_PORT_MODE_MASK) >>
NVM_CFG1_GLOB_NETWORK_PORT_MODE_OFFSET) {
- case NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_2X40G:
+ case NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_2X40G:
p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_2X40G;
break;
- case NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_2X50G:
+ case NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X50G:
p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_2X50G;
break;
- case NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_1X100G:
+ case NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_1X100G:
p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_1X100G;
break;
- case NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_4X10G_F:
+ case NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X10G_F:
p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_4X10G_F;
break;
- case NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_4X10G_E:
+ case NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X10G_E:
p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_4X10G_E;
break;
- case NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_4X20G:
+ case NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X20G:
p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_4X20G;
break;
- case NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_1X40G:
+ case NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X40G:
p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_1X40G;
break;
- case NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_2X25G:
+ case NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G:
p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_2X25G;
break;
- case NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_1X25G:
+ case NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X25G:
p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_1X25G;
break;
+ case NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X25G:
+ p_hwfn->hw_info.port_mode = ECORE_PORT_MODE_DE_4X25G;
+ break;
default:
DP_NOTICE(p_hwfn, true, "Unknown port mode in 0x%08x\n",
core_cfg);
@@ -1931,13 +2520,18 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
case NVM_CFG1_PORT_DRV_LINK_SPEED_50G:
link->speed.forced_speed = 50000;
break;
- case NVM_CFG1_PORT_DRV_LINK_SPEED_100G:
+ case NVM_CFG1_PORT_DRV_LINK_SPEED_BB_100G:
link->speed.forced_speed = 100000;
break;
default:
DP_NOTICE(p_hwfn, true, "Unknown Speed in 0x%08x\n", link_temp);
}
+ p_hwfn->mcp_info->link_capabilities.default_speed =
+ link->speed.forced_speed;
+ p_hwfn->mcp_info->link_capabilities.default_speed_autoneg =
+ link->speed.autoneg;
+
link_temp &= NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK;
link_temp >>= NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET;
link->pause.autoneg = !!(link_temp &
@@ -1986,6 +2580,18 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET)
OSAL_SET_BIT(ECORE_DEV_CAP_ETH,
&p_hwfn->hw_info.device_capabilities);
+ if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE)
+ OSAL_SET_BIT(ECORE_DEV_CAP_FCOE,
+ &p_hwfn->hw_info.device_capabilities);
+ if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ISCSI)
+ OSAL_SET_BIT(ECORE_DEV_CAP_ISCSI,
+ &p_hwfn->hw_info.device_capabilities);
+ if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ROCE)
+ OSAL_SET_BIT(ECORE_DEV_CAP_ROCE,
+ &p_hwfn->hw_info.device_capabilities);
+ if (device_capabilities & NVM_CFG1_GLOB_DEVICE_CAPABILITIES_IWARP)
+ OSAL_SET_BIT(ECORE_DEV_CAP_IWARP,
+ &p_hwfn->hw_info.device_capabilities);
return ecore_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
}
@@ -1993,52 +2599,69 @@ static enum _ecore_status_t ecore_hw_get_nvm_info(struct ecore_hwfn *p_hwfn,
static void ecore_get_num_funcs(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
- u8 num_funcs;
- u32 tmp, mask;
+ u8 num_funcs, enabled_func_idx = p_hwfn->rel_pf_id;
+ u32 reg_function_hide, tmp, eng_mask, low_pfs_mask;
+ struct ecore_dev *p_dev = p_hwfn->p_dev;
- num_funcs = ECORE_IS_AH(p_hwfn->p_dev) ? MAX_NUM_PFS_K2
- : MAX_NUM_PFS_BB;
+ num_funcs = ECORE_IS_AH(p_dev) ? MAX_NUM_PFS_K2 : MAX_NUM_PFS_BB;
/* Bit 0 of MISCS_REG_FUNCTION_HIDE indicates whether the bypass values
* in the other bits are selected.
* Bits 1-15 are for functions 1-15, respectively, and their value is
* '0' only for enabled functions (function 0 always exists and
* enabled).
- * In case of CMT, only the "even" functions are enabled, and thus the
- * number of functions for both hwfns is learnt from the same bits.
+ * In case of CMT in BB, only the "even" functions are enabled, and thus
+ * the number of functions for both hwfns is learnt from the same bits.
*/
+ reg_function_hide = ecore_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE);
- tmp = ecore_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE);
- if (tmp & 0x1) {
- if (ECORE_PATH_ID(p_hwfn) && p_hwfn->p_dev->num_hwfns == 1) {
- num_funcs = 0;
- mask = 0xaaaa;
+ if (reg_function_hide & 0x1) {
+ if (ECORE_IS_BB(p_dev)) {
+ if (ECORE_PATH_ID(p_hwfn) && p_dev->num_hwfns == 1) {
+ num_funcs = 0;
+ eng_mask = 0xaaaa;
} else {
num_funcs = 1;
- mask = 0x5554;
+ eng_mask = 0x5554;
+ }
+ } else {
+ num_funcs = 1;
+ eng_mask = 0xfffe;
}
- tmp = (tmp ^ 0xffffffff) & mask;
+ /* Get the number of the enabled functions on the engine */
+ tmp = (reg_function_hide ^ 0xffffffff) & eng_mask;
while (tmp) {
if (tmp & 0x1)
num_funcs++;
tmp >>= 0x1;
}
+
+ /* Get the PF index within the enabled functions */
+ low_pfs_mask = (0x1 << p_hwfn->abs_pf_id) - 1;
+ tmp = reg_function_hide & eng_mask & low_pfs_mask;
+ while (tmp) {
+ if (tmp & 0x1)
+ enabled_func_idx--;
+ tmp >>= 0x1;
+ }
}
p_hwfn->num_funcs_on_engine = num_funcs;
+ p_hwfn->enabled_func_idx = enabled_func_idx;
#ifndef ASIC_ONLY
- if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
+ if (CHIP_REV_IS_FPGA(p_dev)) {
DP_NOTICE(p_hwfn, false,
- "FPGA: Limit number of PFs to 4 [would affect"
- " resource allocation, needed for IOV]\n");
+ "FPGA: Limit number of PFs to 4 [would affect resource allocation, needed for IOV]\n");
p_hwfn->num_funcs_on_engine = 4;
}
#endif
- DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE, "num_funcs_on_engine = %d\n",
- p_hwfn->num_funcs_on_engine);
+ DP_VERBOSE(p_hwfn, ECORE_MSG_PROBE,
+ "PF [rel_id %d, abs_id %d] occupies index %d within the %d enabled functions on the engine\n",
+ p_hwfn->rel_pf_id, p_hwfn->abs_pf_id,
+ p_hwfn->enabled_func_idx, p_hwfn->num_funcs_on_engine);
}
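
A worked example of the bit arithmetic above (the register value and PF id
are illustrative, and rel_pf_id == abs_pf_id is assumed):

	/* AH, reg_function_hide = 0xffe5 (PF2 and PFs 5-15 hidden):
	 *   eng_mask = 0xfffe
	 *   (0xffe5 ^ 0xffffffff) & 0xfffe = 0x001a -> PFs 1, 3, 4 enabled
	 *   num_funcs = 1 (PF0 always exists) + 3 = 4
	 * For abs_pf_id = 4: low_pfs_mask = 0xf and
	 *   0xffe5 & 0xfffe & 0xf = 0x4 -> one hidden lower PF, so
	 *   enabled_func_idx = 4 - 1 = 3
	 */
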
static void ecore_hw_info_port_num_bb(struct ecore_hwfn *p_hwfn,
@@ -2080,6 +2703,26 @@ static void ecore_hw_info_port_num_ah(struct ecore_hwfn *p_hwfn,
p_hwfn->p_dev->num_ports_in_engines = 0;
+#ifndef ASIC_ONLY
+ if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+ port = ecore_rd(p_hwfn, p_ptt, MISCS_REG_ECO_RESERVED);
+ switch ((port & 0xf000) >> 12) {
+ case 1:
+ p_hwfn->p_dev->num_ports_in_engines = 1;
+ break;
+ case 3:
+ p_hwfn->p_dev->num_ports_in_engines = 2;
+ break;
+ case 0xf:
+ p_hwfn->p_dev->num_ports_in_engines = 4;
+ break;
+ default:
+ DP_NOTICE(p_hwfn, false,
+ "Unknown port mode in ECO_RESERVED %08x\n",
+ port);
+ }
+ } else
+#endif
for (i = 0; i < MAX_NUM_PORTS_K2; i++) {
port = ecore_rd(p_hwfn, p_ptt,
CNIG_REG_NIG_PORT0_CONF_K2 + (i * 4));
@@ -2098,15 +2741,17 @@ static void ecore_hw_info_port_num(struct ecore_hwfn *p_hwfn,
}
static enum _ecore_status_t
-ecore_get_hw_info(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum ecore_pci_personality personality)
+ecore_get_hw_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ enum ecore_pci_personality personality, bool drv_resc_alloc)
{
enum _ecore_status_t rc;
- rc = ecore_iov_hw_info(p_hwfn, p_hwfn->p_main_ptt);
+ /* Since all information is common, only first hwfns should do this */
+ if (IS_LEAD_HWFN(p_hwfn)) {
+ rc = ecore_iov_hw_info(p_hwfn);
if (rc)
return rc;
+ }
/* TODO In get_hw_info, amongst others:
* Get MCP FW revision and determine according to it the supported
@@ -2158,21 +2803,39 @@ ecore_get_hw_info(struct ecore_hwfn *p_hwfn,
/* To overcome ILT lack for emulation, at least until we have
* a definite answer from the system about it, allow only PF0 to be RoCE.
*/
- if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev))
- p_hwfn->hw_info.personality = ECORE_PCI_ETH;
+ if (CHIP_REV_IS_EMUL(p_hwfn->p_dev) && ECORE_IS_AH(p_hwfn->p_dev)) {
+ if (!p_hwfn->rel_pf_id)
+ p_hwfn->hw_info.personality = ECORE_PCI_ETH_ROCE;
+ else
+ p_hwfn->hw_info.personality = ECORE_PCI_ETH;
+ }
#endif
+ /* although in BB some constellations may support more than 4 tcs,
+ * that can result in a performance penalty in some cases. 4
+ * represents a good tradeoff between performance and flexibility.
+ */
+ p_hwfn->hw_info.num_hw_tc = NUM_PHYS_TCS_4PORT_K2;
+
+ /* start out with a single active tc. This can be increased either
+ * by dcbx negotiation or by upper layer driver
+ */
+ p_hwfn->hw_info.num_active_tc = 1;
+
ecore_get_num_funcs(p_hwfn, p_ptt);
- /* Feat num is dependent on personality and on the number of functions
- * on the engine. Therefore it should be come after personality
- * initialization and after getting the number of functions.
+ /* In case of forcing the driver's default resource allocation, calling
+ * ecore_hw_get_resc() should come after initializing the personality
+ * and after getting the number of functions, since the calculation of
+ * the resources/features depends on them.
+ * This order is not harmful if not forcing.
*/
- return ecore_hw_get_resc(p_hwfn);
+ return ecore_hw_get_resc(p_hwfn, drv_resc_alloc);
}
-/* @TMP - this should move to a proper .h */
-#define CHIP_NUM_AH 0x8070
+#define ECORE_DEV_ID_MASK 0xff00
+#define ECORE_DEV_ID_MASK_BB 0x1600
+#define ECORE_DEV_ID_MASK_AH 0x8000
static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
{
@@ -2185,6 +2848,12 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
OSAL_PCI_READ_CONFIG_WORD(p_dev, PCICFG_DEVICE_ID_OFFSET,
&p_dev->device_id);
+ /* Determine type */
+ if ((p_dev->device_id & ECORE_DEV_ID_MASK) == ECORE_DEV_ID_MASK_AH)
+ p_dev->type = ECORE_DEV_TYPE_AH;
+ else
+ p_dev->type = ECORE_DEV_TYPE_BB;
+
p_dev->chip_num = (u16)ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
MISCS_REG_CHIP_NUM);
p_dev->chip_rev = (u16)ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
@@ -2192,12 +2861,6 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
MASK_FIELD(CHIP_REV, p_dev->chip_rev);
- /* Determine type */
- if (p_dev->device_id == CHIP_NUM_AH)
- p_dev->type = ECORE_DEV_TYPE_AH;
- else
- p_dev->type = ECORE_DEV_TYPE_BB;
-
/* Learn number of HW-functions */
tmp = ecore_rd(p_hwfn, p_hwfn->p_main_ptt,
MISCS_REG_CMT_ENABLED_FOR_PAIR);
@@ -2260,6 +2923,7 @@ static enum _ecore_status_t ecore_get_dev_info(struct ecore_dev *p_dev)
return ECORE_SUCCESS;
}
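
The mask-based detection above keys only on the upper byte of the PCI device
ID, so any 0x80xx device is treated as AH. A one-line standalone check
(0x8070 is the ID the removed CHIP_NUM_AH define used):

#include <stdint.h>
#include <stdio.h>

#define ECORE_DEV_ID_MASK    0xff00
#define ECORE_DEV_ID_MASK_AH 0x8000

int main(void)
{
	uint16_t device_id = 0x8070; /* the ID CHIP_NUM_AH matched exactly */

	printf("type = %s\n",
	       (device_id & ECORE_DEV_ID_MASK) == ECORE_DEV_ID_MASK_AH ?
	       "ECORE_DEV_TYPE_AH" : "ECORE_DEV_TYPE_BB");
	return 0;
}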
+#ifndef LINUX_REMOVE
void ecore_prepare_hibernate(struct ecore_dev *p_dev)
{
int j;
@@ -2275,14 +2939,16 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
p_hwfn->hw_init_done = false;
p_hwfn->first_on_engine = false;
+
+ ecore_ptt_invalidate(p_hwfn);
}
}
+#endif
static enum _ecore_status_t
-ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
- void OSAL_IOMEM *p_regview,
+ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
void OSAL_IOMEM *p_doorbells,
- enum ecore_pci_personality personality)
+ struct ecore_hw_prepare_params *p_params)
{
enum _ecore_status_t rc = ECORE_SUCCESS;
@@ -2290,11 +2956,13 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
p_hwfn->regview = p_regview;
p_hwfn->doorbells = p_doorbells;
+ if (IS_VF(p_hwfn->p_dev))
+ return ecore_vf_hw_prepare(p_hwfn);
+
/* Validate that chip access is feasible */
if (REG_RD(p_hwfn, PXP_PF_ME_OPAQUE_ADDR) == 0xffffffff) {
DP_ERR(p_hwfn,
- "Reading the ME register returns all Fs;"
- " Preventing further chip access\n");
+ "Reading the ME register returns all Fs; Preventing further chip access\n");
return ECORE_INVAL;
}
@@ -2327,7 +2995,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
}
/* Read the device configuration information from the HW and SHMEM */
- rc = ecore_get_hw_info(p_hwfn, p_hwfn->p_main_ptt, personality);
+ rc = ecore_get_hw_info(p_hwfn, p_hwfn->p_main_ptt,
+ p_params->personality, p_params->drv_resc_alloc);
if (rc) {
DP_NOTICE(p_hwfn, true, "Failed to get HW information\n");
goto err2;
@@ -2354,6 +3023,8 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
return rc;
err2:
+ if (IS_LEAD_HWFN(p_hwfn))
+ ecore_iov_free_hw_info(p_hwfn->p_dev);
ecore_mcp_free(p_hwfn);
err1:
ecore_hw_hwfn_free(p_hwfn);
@@ -2361,27 +3032,28 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
return rc;
}
-enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, int personality)
+enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
+ struct ecore_hw_prepare_params *p_params)
{
struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
enum _ecore_status_t rc;
- if (IS_VF(p_dev))
- return ecore_vf_hw_prepare(p_dev);
+ p_dev->chk_reg_fifo = p_params->chk_reg_fifo;
/* Store the precompiled init data ptrs */
+ if (IS_PF(p_dev))
ecore_init_iro_array(p_dev);
/* Initialize the first hwfn - will learn number of hwfns */
rc = ecore_hw_prepare_single(p_hwfn,
p_dev->regview,
- p_dev->doorbells, personality);
+ p_dev->doorbells, p_params);
if (rc != ECORE_SUCCESS)
return rc;
- personality = p_hwfn->hw_info.personality;
+ p_params->personality = p_hwfn->hw_info.personality;
- /* initialalize 2nd hwfn if necessary */
+ /* initialize 2nd hwfn if necessary */
if (p_dev->num_hwfns > 1) {
void OSAL_IOMEM *p_regview, *p_doorbell;
u8 OSAL_IOMEM *addr;
@@ -2397,15 +3069,20 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, int personality)
/* prepare second hw function */
rc = ecore_hw_prepare_single(&p_dev->hwfns[1], p_regview,
- p_doorbell, personality);
+ p_doorbell, p_params);
/* in case of error, need to free the previously
- * initialiazed hwfn 0
+ * initialized hwfn 0.
*/
if (rc != ECORE_SUCCESS) {
- ecore_init_free(p_hwfn);
- ecore_mcp_free(p_hwfn);
- ecore_hw_hwfn_free(p_hwfn);
+ if (IS_PF(p_dev)) {
+ ecore_init_free(p_hwfn);
+ ecore_mcp_free(p_hwfn);
+ ecore_hw_hwfn_free(p_hwfn);
+ } else {
+ DP_NOTICE(p_dev, true,
+ "What do we need to free when VF hwfn1 init fails\n");
+ }
return rc;
}
}
@@ -2431,6 +3108,8 @@ void ecore_hw_remove(struct ecore_dev *p_dev)
OSAL_MUTEX_DEALLOC(&p_hwfn->dmae_info.mutex);
}
+
+ ecore_iov_free_hw_info(p_dev);
}
static void ecore_chain_free_next_ptr(struct ecore_dev *p_dev,
@@ -2611,25 +3290,26 @@ static enum _ecore_status_t ecore_chain_alloc_pbl(struct ecore_dev *p_dev,
pp_virt_addr_tbl = (void **)OSAL_VALLOC(p_dev, size);
if (!pp_virt_addr_tbl) {
DP_NOTICE(p_dev, true,
- "Failed to allocate memory for the chain"
- " virtual addresses table\n");
+ "Failed to allocate memory for the chain virtual addresses table\n");
return ECORE_NOMEM;
}
OSAL_MEM_ZERO(pp_virt_addr_tbl, size);
/* The allocation of the PBL table is done with its full size, since it
* is expected to be successive.
+ * ecore_chain_init_pbl_mem() is called even in a case of an allocation
+ * failure, since pp_virt_addr_tbl was previously allocated, and it
+ * should be saved to allow its freeing during the error flow.
*/
size = page_cnt * ECORE_CHAIN_PBL_ENTRY_SIZE;
p_pbl_virt = OSAL_DMA_ALLOC_COHERENT(p_dev, &p_pbl_phys, size);
+ ecore_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys,
+ pp_virt_addr_tbl);
if (!p_pbl_virt) {
DP_NOTICE(p_dev, true, "Failed to allocate chain pbl memory\n");
return ECORE_NOMEM;
}
- ecore_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys,
- pp_virt_addr_tbl);
-
for (i = 0; i < page_cnt; i++) {
p_virt = OSAL_DMA_ALLOC_COHERENT(p_dev, &p_phys,
ECORE_CHAIN_PAGE_SIZE);
@@ -2675,14 +3355,13 @@ enum _ecore_status_t ecore_chain_alloc(struct ecore_dev *p_dev,
if (rc) {
DP_NOTICE(p_dev, true,
"Cannot allocate a chain with the given arguments:\n"
- " [use_mode %d, mode %d, cnt_type %d, num_elems %d,"
- " elem_size %zu]\n",
+ "[use_mode %d, mode %d, cnt_type %d, num_elems %d, elem_size %zu]\n",
intended_use, mode, cnt_type, num_elems, elem_size);
return rc;
}
ecore_chain_init_params(p_chain, page_cnt, (u8)elem_size, intended_use,
- mode, cnt_type);
+ mode, cnt_type, p_dev->dp_ctx);
switch (mode) {
case ECORE_CHAIN_MODE_NEXT_PTR:
@@ -2853,9 +3532,12 @@ void ecore_llh_remove_mac_filter(struct ecore_hwfn *p_hwfn,
"Tried to remove a non-configured filter\n");
}
-enum _ecore_status_t ecore_llh_add_ethertype_filter(struct ecore_hwfn *p_hwfn,
+enum _ecore_status_t
+ecore_llh_add_protocol_filter(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- u16 filter)
+ u16 source_port_or_eth_type,
+ u16 dest_port,
+ enum ecore_llh_port_filter_type_t type)
{
u32 high, low, en;
int i;
@@ -2863,9 +3545,29 @@ enum _ecore_status_t ecore_llh_add_ethertype_filter(struct ecore_hwfn *p_hwfn,
if (!(IS_MF_SI(p_hwfn) || IS_MF_DEFAULT(p_hwfn)))
return ECORE_SUCCESS;
- high = filter;
+ high = 0;
low = 0;
-
+ switch (type) {
+ case ECORE_LLH_FILTER_ETHERTYPE:
+ high = source_port_or_eth_type;
+ break;
+ case ECORE_LLH_FILTER_TCP_SRC_PORT:
+ case ECORE_LLH_FILTER_UDP_SRC_PORT:
+ low = source_port_or_eth_type << 16;
+ break;
+ case ECORE_LLH_FILTER_TCP_DEST_PORT:
+ case ECORE_LLH_FILTER_UDP_DEST_PORT:
+ low = dest_port;
+ break;
+ case ECORE_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
+ case ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
+ low = (source_port_or_eth_type << 16) | dest_port;
+ break;
+ default:
+ DP_NOTICE(p_hwfn, true,
+ "Non valid LLH protocol filter type %d\n", type);
+ return ECORE_INVAL;
+ }
/* Find a free entry and utilize it */
for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
en = ecore_rd(p_hwfn, p_ptt,
@@ -2882,7 +3584,7 @@ enum _ecore_status_t ecore_llh_add_ethertype_filter(struct ecore_hwfn *p_hwfn,
NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32), 1);
ecore_wr(p_hwfn, p_ptt,
NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE +
- i * sizeof(u32), 1);
+ i * sizeof(u32), 1 << type);
ecore_wr(p_hwfn, p_ptt,
NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32), 1);
break;
@@ -2890,17 +3592,52 @@ enum _ecore_status_t ecore_llh_add_ethertype_filter(struct ecore_hwfn *p_hwfn,
if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE) {
DP_NOTICE(p_hwfn, false,
"Failed to find an empty LLH filter to utilize\n");
- return ECORE_INVAL;
+ return ECORE_NORESOURCES;
}
-
+ switch (type) {
+ case ECORE_LLH_FILTER_ETHERTYPE:
DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "ETH type: %x is added at %d\n", filter, i);
-
+ "ETH type %x is added at %d\n",
+ source_port_or_eth_type, i);
+ break;
+ case ECORE_LLH_FILTER_TCP_SRC_PORT:
+ DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+ "TCP src port %x is added at %d\n",
+ source_port_or_eth_type, i);
+ break;
+ case ECORE_LLH_FILTER_UDP_SRC_PORT:
+ DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+ "UDP src port %x is added at %d\n",
+ source_port_or_eth_type, i);
+ break;
+ case ECORE_LLH_FILTER_TCP_DEST_PORT:
+ DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+ "TCP dst port %x is added at %d\n", dest_port, i);
+ break;
+ case ECORE_LLH_FILTER_UDP_DEST_PORT:
+ DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+ "UDP dst port %x is added at %d\n", dest_port, i);
+ break;
+ case ECORE_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
+ DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+ "TCP src/dst ports %x/%x are added at %d\n",
+ source_port_or_eth_type, dest_port, i);
+ break;
+ case ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
+ DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+ "UDP src/dst ports %x/%x are added at %d\n",
+ source_port_or_eth_type, dest_port, i);
+ break;
+ }
return ECORE_SUCCESS;
}
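
The switch above packs the filter value into two 32-bit register words: an
ethertype goes into the high word, L4 ports into the low word with the source
port in its upper 16 bits. A standalone sketch of the packing (example port
and ethertype values, not driver code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t src_port = 4789, dst_port = 4790, eth_type = 0x0800;
	uint32_t high, low;

	/* ECORE_LLH_FILTER_*_SRC_AND_DEST_PORT flavor */
	high = 0;
	low = ((uint32_t)src_port << 16) | dst_port;
	printf("ports:     high=0x%08x low=0x%08x\n", high, low);

	/* ECORE_LLH_FILTER_ETHERTYPE flavor */
	high = eth_type;
	low = 0;
	printf("ethertype: high=0x%08x low=0x%08x\n", high, low);
	return 0;
}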
-void ecore_llh_remove_ethertype_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u16 filter)
+void
+ecore_llh_remove_protocol_filter(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u16 source_port_or_eth_type,
+ u16 dest_port,
+ enum ecore_llh_port_filter_type_t type)
{
u32 high, low;
int i;
@@ -2908,11 +3645,41 @@ void ecore_llh_remove_ethertype_filter(struct ecore_hwfn *p_hwfn,
if (!(IS_MF_SI(p_hwfn) || IS_MF_DEFAULT(p_hwfn)))
return;
- high = filter;
+ high = 0;
low = 0;
+ switch (type) {
+ case ECORE_LLH_FILTER_ETHERTYPE:
+ high = source_port_or_eth_type;
+ break;
+ case ECORE_LLH_FILTER_TCP_SRC_PORT:
+ case ECORE_LLH_FILTER_UDP_SRC_PORT:
+ low = source_port_or_eth_type << 16;
+ break;
+ case ECORE_LLH_FILTER_TCP_DEST_PORT:
+ case ECORE_LLH_FILTER_UDP_DEST_PORT:
+ low = dest_port;
+ break;
+ case ECORE_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
+ case ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
+ low = (source_port_or_eth_type << 16) | dest_port;
+ break;
+ default:
+ DP_NOTICE(p_hwfn, true,
+ "Non valid LLH protocol filter type %d\n", type);
+ return;
+ }
- /* Find the entry and clean it */
for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
+ if (!ecore_rd(p_hwfn, p_ptt,
+ NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32)))
+ continue;
+ if (!ecore_rd(p_hwfn, p_ptt,
+ NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32)))
+ continue;
+ if (!(ecore_rd(p_hwfn, p_ptt,
+ NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE +
+ i * sizeof(u32)) & (1 << type)))
+ continue;
if (ecore_rd(p_hwfn, p_ptt,
NIG_REG_LLH_FUNC_FILTER_VALUE +
2 * i * sizeof(u32)) != low)
@@ -2925,6 +3692,11 @@ void ecore_llh_remove_ethertype_filter(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt,
NIG_REG_LLH_FUNC_FILTER_EN + i * sizeof(u32), 0);
ecore_wr(p_hwfn, p_ptt,
+ NIG_REG_LLH_FUNC_FILTER_MODE + i * sizeof(u32), 0);
+ ecore_wr(p_hwfn, p_ptt,
+ NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE +
+ i * sizeof(u32), 0);
+ ecore_wr(p_hwfn, p_ptt,
NIG_REG_LLH_FUNC_FILTER_VALUE +
2 * i * sizeof(u32), 0);
ecore_wr(p_hwfn, p_ptt,
@@ -2932,6 +3704,7 @@ void ecore_llh_remove_ethertype_filter(struct ecore_hwfn *p_hwfn,
(2 * i + 1) * sizeof(u32), 0);
break;
}
+
if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE)
DP_NOTICE(p_hwfn, false,
"Tried to remove a non-configured filter\n");
@@ -2957,97 +3730,30 @@ void ecore_llh_clear_all_filters(struct ecore_hwfn *p_hwfn,
}
}
-enum _ecore_status_t ecore_test_registers(struct ecore_hwfn *p_hwfn,
+enum _ecore_status_t
+ecore_llh_set_function_as_default(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
- u32 reg_tbl[] = {
- BRB_REG_HEADER_SIZE,
- BTB_REG_HEADER_SIZE,
- CAU_REG_LONG_TIMEOUT_THRESHOLD,
- CCFC_REG_ACTIVITY_COUNTER,
- CDU_REG_CID_ADDR_PARAMS,
- DBG_REG_CLIENT_ENABLE,
- DMAE_REG_INIT,
- DORQ_REG_IFEN,
- GRC_REG_TIMEOUT_EN,
- IGU_REG_BLOCK_CONFIGURATION,
- MCM_REG_INIT,
- MCP2_REG_DBG_DWORD_ENABLE,
- MISC_REG_PORT_MODE,
- MISCS_REG_CLK_100G_MODE,
- MSDM_REG_ENABLE_IN1,
- MSEM_REG_ENABLE_IN,
- NIG_REG_CM_HDR,
- NCSI_REG_CONFIG,
- PBF_REG_INIT,
- PTU_REG_ATC_INIT_ARRAY,
- PCM_REG_INIT,
- PGLUE_B_REG_ADMIN_PER_PF_REGION,
- PRM_REG_DISABLE_PRM,
- PRS_REG_SOFT_RST,
- PSDM_REG_ENABLE_IN1,
- PSEM_REG_ENABLE_IN,
- PSWRQ_REG_DBG_SELECT,
- PSWRQ2_REG_CDUT_P_SIZE,
- PSWHST_REG_DISCARD_INTERNAL_WRITES,
- PSWHST2_REG_DBGSYN_ALMOST_FULL_THR,
- PSWRD_REG_DBG_SELECT,
- PSWRD2_REG_CONF11,
- PSWWR_REG_USDM_FULL_TH,
- PSWWR2_REG_CDU_FULL_TH2,
- QM_REG_MAXPQSIZE_0,
- RSS_REG_RSS_INIT_EN,
- RDIF_REG_STOP_ON_ERROR,
- SRC_REG_SOFT_RST,
- TCFC_REG_ACTIVITY_COUNTER,
- TCM_REG_INIT,
- TM_REG_PXP_READ_DATA_FIFO_INIT,
- TSDM_REG_ENABLE_IN1,
- TSEM_REG_ENABLE_IN,
- TDIF_REG_STOP_ON_ERROR,
- UCM_REG_INIT,
- UMAC_REG_IPG_HD_BKP_CNTL_BB_B0,
- USDM_REG_ENABLE_IN1,
- USEM_REG_ENABLE_IN,
- XCM_REG_INIT,
- XSDM_REG_ENABLE_IN1,
- XSEM_REG_ENABLE_IN,
- YCM_REG_INIT,
- YSDM_REG_ENABLE_IN1,
- YSEM_REG_ENABLE_IN,
- XYLD_REG_SCBD_STRICT_PRIO,
- TMLD_REG_SCBD_STRICT_PRIO,
- MULD_REG_SCBD_STRICT_PRIO,
- YULD_REG_SCBD_STRICT_PRIO,
- };
- u32 test_val[] = { 0x0, 0x1 };
- u32 val, save_val, i, j;
-
- for (i = 0; i < OSAL_ARRAY_SIZE(test_val); i++) {
- for (j = 0; j < OSAL_ARRAY_SIZE(reg_tbl); j++) {
- save_val = ecore_rd(p_hwfn, p_ptt, reg_tbl[j]);
- ecore_wr(p_hwfn, p_ptt, reg_tbl[j], test_val[i]);
- val = ecore_rd(p_hwfn, p_ptt, reg_tbl[j]);
- /* Restore the original register's value */
- ecore_wr(p_hwfn, p_ptt, reg_tbl[j], save_val);
- if (val != test_val[i]) {
- DP_INFO(p_hwfn->p_dev,
- "offset 0x%x: val 0x%x != 0x%x\n",
- reg_tbl[j], val, test_val[i]);
- return ECORE_AGAIN;
- }
- }
- }
+ if (IS_MF_DEFAULT(p_hwfn) && ECORE_IS_BB(p_hwfn->p_dev)) {
+ ecore_wr(p_hwfn, p_ptt,
+ NIG_REG_LLH_TAGMAC_DEF_PF_VECTOR,
+ 1 << p_hwfn->abs_pf_id / 2);
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_MSG_INFO, 0);
return ECORE_SUCCESS;
}
+ DP_NOTICE(p_hwfn, false,
+ "This function can't be set as default\n");
+ return ECORE_INVAL;
+}
+
static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- u32 hw_addr, void *p_qzone,
- osal_size_t qzone_size,
+ u32 hw_addr, void *p_eth_qzone,
+ osal_size_t eth_qzone_size,
u8 timeset)
{
- struct coalescing_timeset *p_coalesce_timeset;
+ struct coalescing_timeset *p_coal_timeset;
if (IS_VF(p_hwfn->p_dev)) {
DP_NOTICE(p_hwfn, true, "VF coalescing config not supported\n");
@@ -3060,34 +3766,49 @@ static enum _ecore_status_t ecore_set_coalesce(struct ecore_hwfn *p_hwfn,
return ECORE_INVAL;
}
- OSAL_MEMSET(p_qzone, 0, qzone_size);
- p_coalesce_timeset = p_qzone;
- ecore_memcpy_to(p_hwfn, p_ptt, hw_addr, p_qzone, qzone_size);
+ OSAL_MEMSET(p_eth_qzone, 0, eth_qzone_size);
+ p_coal_timeset = p_eth_qzone;
+ SET_FIELD(p_coal_timeset->value, COALESCING_TIMESET_TIMESET, timeset);
+ SET_FIELD(p_coal_timeset->value, COALESCING_TIMESET_VALID, 1);
+ ecore_memcpy_to(p_hwfn, p_ptt, hw_addr, p_eth_qzone, eth_qzone_size);
return ECORE_SUCCESS;
}
enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- u8 coalesce, u8 qid)
+ u16 coalesce, u8 qid, u16 sb_id)
{
- struct ustorm_eth_queue_zone qzone;
+ struct ustorm_eth_queue_zone eth_qzone;
u16 fw_qid = 0;
u32 address;
- u8 timeset;
enum _ecore_status_t rc;
+ u8 timeset, timer_res;
+
+ /* Coalesce = (timeset << timer-resolution), timeset is 7 bits wide */
+ if (coalesce <= 0x7F) {
+ timer_res = 0;
+ } else if (coalesce <= 0xFF) {
+ timer_res = 1;
+ } else if (coalesce <= 0x1FF) {
+ timer_res = 2;
+ } else {
+ DP_ERR(p_hwfn, "Invalid coalesce value - %d\n", coalesce);
+ return ECORE_INVAL;
+ }
+ timeset = (u8)(coalesce >> timer_res);
rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
if (rc != ECORE_SUCCESS)
return rc;
+ rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, false);
+ if (rc != ECORE_SUCCESS)
+ goto out;
+
address = BAR0_MAP_REG_USDM_RAM + USTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
- /* Translate the coalescing time into a timeset, according to:
- * Timeout[Rx] = TimeSet[Rx] << (TimerRes[Rx] + 1)
- */
- timeset = coalesce >> (ECORE_CAU_DEF_RX_TIMER_RES + 1);
- rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &qzone,
+ rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
sizeof(struct ustorm_eth_queue_zone), timeset);
if (rc != ECORE_SUCCESS)
goto out;
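
The timer_res selection above is just a range split: the 7-bit timeset holds
coalesce >> timer_res, so larger coalesce values lose one bit of resolution
per step. A standalone sketch of the mapping with a few sample values:

#include <stdint.h>
#include <stdio.h>

static int split_coalesce(uint16_t coalesce, uint8_t *timer_res,
			  uint8_t *timeset)
{
	if (coalesce <= 0x7F)
		*timer_res = 0;
	else if (coalesce <= 0xFF)
		*timer_res = 1;
	else if (coalesce <= 0x1FF)
		*timer_res = 2;
	else
		return -1; /* out of range, as in the driver check */
	*timeset = (uint8_t)(coalesce >> *timer_res);
	return 0;
}

int main(void)
{
	uint16_t vals[] = { 100, 200, 400 };
	uint8_t res, set;
	unsigned i;

	for (i = 0; i < 3; i++)
		if (!split_coalesce(vals[i], &res, &set))
			printf("%u usec -> timer_res=%u timeset=%u\n",
			       vals[i], res, set);
	return 0;
}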
@@ -3099,26 +3820,40 @@ enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- u8 coalesce, u8 qid)
+ u16 coalesce, u8 qid, u16 sb_id)
{
- struct ystorm_eth_queue_zone qzone;
+ struct xstorm_eth_queue_zone eth_qzone;
u16 fw_qid = 0;
u32 address;
- u8 timeset;
enum _ecore_status_t rc;
+ u8 timeset, timer_res;
+
+ /* Coalesce = (timeset << timer-resolution), timeset is 7 bits wide */
+ if (coalesce <= 0x7F) {
+ timer_res = 0;
+ } else if (coalesce <= 0xFF) {
+ timer_res = 1;
+ } else if (coalesce <= 0x1FF) {
+ timer_res = 2;
+ } else {
+ DP_ERR(p_hwfn, "Invalid coalesce value - %d\n", coalesce);
+ return ECORE_INVAL;
+ }
+
+ timeset = (u8)(coalesce >> timer_res);
rc = ecore_fw_l2_queue(p_hwfn, (u16)qid, &fw_qid);
if (rc != ECORE_SUCCESS)
return rc;
- address = BAR0_MAP_REG_YSDM_RAM + YSTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
- /* Translate the coalescing time into a timeset, according to:
- * Timeout[Tx] = TimeSet[Tx] << (TimerRes[Tx] + 1)
- */
- timeset = coalesce >> (ECORE_CAU_DEF_TX_TIMER_RES + 1);
+ rc = ecore_int_set_timer_res(p_hwfn, p_ptt, timer_res, sb_id, true);
+ if (rc != ECORE_SUCCESS)
+ goto out;
- rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &qzone,
- sizeof(struct ystorm_eth_queue_zone), timeset);
+ address = BAR0_MAP_REG_XSDM_RAM + XSTORM_ETH_QUEUE_ZONE_OFFSET(fw_qid);
+
+ rc = ecore_set_coalesce(p_hwfn, p_ptt, address, &eth_qzone,
+ sizeof(struct xstorm_eth_queue_zone), timeset);
if (rc != ECORE_SUCCESS)
goto out;
@@ -3136,16 +3871,15 @@ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
u32 min_pf_rate)
{
struct init_qm_vport_params *vport_params;
- int i, num_vports;
+ int i;
vport_params = p_hwfn->qm_info.qm_vport_params;
- num_vports = p_hwfn->qm_info.num_vports;
- for (i = 0; i < num_vports; i++) {
+ for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
u32 wfq_speed = p_hwfn->qm_info.wfq_data[i].min_speed;
- vport_params[i].vport_wfq =
- (wfq_speed * ECORE_WFQ_UNIT) / min_pf_rate;
+ vport_params[i].vport_wfq = (wfq_speed * ECORE_WFQ_UNIT) /
+ min_pf_rate;
ecore_init_vport_wfq(p_hwfn, p_ptt,
vport_params[i].first_tx_pq_id,
vport_params[i].vport_wfq);
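
The weight computed above is each vport's share of the PF minimum rate in WFQ
units; with ECORE_WFQ_UNIT at 100 (the one-percent granularity implied by the
validation checks in ecore_init_wfq_param below), the weights are plain
percentages. A standalone arithmetic sketch with example rates:

#include <stdint.h>
#include <stdio.h>

#define ECORE_WFQ_UNIT 100 /* inferred from the one-percent checks below */

int main(void)
{
	uint32_t min_pf_rate = 10000;                /* example: 10G, in Mbps */
	uint32_t wfq_speed[] = { 2000, 3000, 5000 }; /* per-vport min speeds */
	unsigned i;

	for (i = 0; i < 3; i++)
		printf("vport %u: vport_wfq = %u\n", i,
		       wfq_speed[i] * ECORE_WFQ_UNIT / min_pf_rate);
	return 0;
}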
@@ -3155,16 +3889,10 @@ static void ecore_configure_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
static void
ecore_init_wfq_default_param(struct ecore_hwfn *p_hwfn, u32 min_pf_rate)
{
- int i, num_vports;
- u32 min_speed;
-
- num_vports = p_hwfn->qm_info.num_vports;
- min_speed = min_pf_rate / num_vports;
+ int i;
- for (i = 0; i < num_vports; i++) {
+ for (i = 0; i < p_hwfn->qm_info.num_vports; i++)
p_hwfn->qm_info.qm_vport_params[i].vport_wfq = 1;
- p_hwfn->qm_info.wfq_data[i].default_min_speed = min_speed;
- }
}
static void ecore_disable_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
@@ -3172,12 +3900,11 @@ static void ecore_disable_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
u32 min_pf_rate)
{
struct init_qm_vport_params *vport_params;
- int i, num_vports;
+ int i;
vport_params = p_hwfn->qm_info.qm_vport_params;
- num_vports = p_hwfn->qm_info.num_vports;
- for (i = 0; i < num_vports; i++) {
+ for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
ecore_init_wfq_default_param(p_hwfn, min_pf_rate);
ecore_init_vport_wfq(p_hwfn, p_ptt,
vport_params[i].first_tx_pq_id,
@@ -3185,7 +3912,13 @@ static void ecore_disable_wfq_for_all_vports(struct ecore_hwfn *p_hwfn,
}
}
-/* validate wfq for a given vport and required min rate */
+/* This function performs several validations for WFQ
+ * configuration and required min rate for a given vport
+ * 1. req_rate must be greater than one percent of min_pf_rate.
+ * 2. req_rate should not cause the rates of other vports [those not
+ * explicitly configured for WFQ] to drop below one percent of min_pf_rate.
+ * 3. total_req_min_rate [all vports min rate sum] shouldn't exceed min_pf_rate.
+ */
static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn,
u16 vport_id, u32 req_rate,
u32 min_pf_rate)
@@ -3195,7 +3928,8 @@ static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn,
num_vports = p_hwfn->qm_info.num_vports;
- /* Check pre-set data for some of the vports */
+/* Accounting for the vports which are configured for WFQ explicitly */
+
for (i = 0; i < num_vports; i++) {
u32 tmp_speed;
@@ -3209,36 +3943,34 @@ static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn,
/* Include current vport data as well */
req_count++;
total_req_min_rate += req_rate;
- non_requested_count = p_hwfn->qm_info.num_vports - req_count;
+ non_requested_count = num_vports - req_count;
/* validate possible error cases */
if (req_rate > min_pf_rate) {
DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
- "Vport [%d] - Requested rate[%d Mbps] is greater"
- " than configured PF min rate[%d Mbps]\n",
+ "Vport [%d] - Requested rate[%d Mbps] is greater than configured PF min rate[%d Mbps]\n",
vport_id, req_rate, min_pf_rate);
return ECORE_INVAL;
}
- if (req_rate * ECORE_WFQ_UNIT / min_pf_rate < 1) {
+ if (req_rate < min_pf_rate / ECORE_WFQ_UNIT) {
DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
- "Vport [%d] - Requested rate[%d Mbps] is less than"
- " one percent of configured PF min rate[%d Mbps]\n",
+ "Vport [%d] - Requested rate[%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n",
vport_id, req_rate, min_pf_rate);
return ECORE_INVAL;
}
/* TBD - for number of vports greater than 100 */
- if (ECORE_WFQ_UNIT / p_hwfn->qm_info.num_vports < 1) {
+ if (num_vports > ECORE_WFQ_UNIT) {
DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
- "Number of vports are greater than 100\n");
+ "Number of vports is greater than %d\n",
+ ECORE_WFQ_UNIT);
return ECORE_INVAL;
}
if (total_req_min_rate > min_pf_rate) {
DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
- "Total requested min rate for all vports[%d Mbps]"
- "is greater than configured PF min rate[%d Mbps]\n",
+ "Total requested min rate for all vports[%d Mbps] is greater than configured PF min rate[%d Mbps]\n",
total_req_min_rate, min_pf_rate);
return ECORE_INVAL;
}
@@ -3248,8 +3980,12 @@ static enum _ecore_status_t ecore_init_wfq_param(struct ecore_hwfn *p_hwfn,
left_rate_per_vp = total_left_rate / non_requested_count;
/* validate if non requested get < 1% of min bw */
- if (left_rate_per_vp * ECORE_WFQ_UNIT / min_pf_rate < 1)
+ if (left_rate_per_vp < min_pf_rate / ECORE_WFQ_UNIT) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
+ "Non WFQ configured vports rate [%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n",
+ left_rate_per_vp, min_pf_rate);
return ECORE_INVAL;
+ }
/* now req_rate for given vport passes all scenarios.
* assign final wfq rates to all vports.
@@ -3298,27 +4034,27 @@ static int __ecore_configure_vp_wfq_on_link_change(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 min_pf_rate)
{
- int rc = ECORE_SUCCESS;
bool use_wfq = false;
- u16 i, num_vports;
-
- num_vports = p_hwfn->qm_info.num_vports;
+ int rc = ECORE_SUCCESS;
+ u16 i;
/* Validate all pre configured vports for wfq */
- for (i = 0; i < num_vports; i++) {
- if (p_hwfn->qm_info.wfq_data[i].configured) {
- u32 rate = p_hwfn->qm_info.wfq_data[i].min_speed;
+ for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
+ u32 rate;
+ if (!p_hwfn->qm_info.wfq_data[i].configured)
+ continue;
+
+ rate = p_hwfn->qm_info.wfq_data[i].min_speed;
use_wfq = true;
+
rc = ecore_init_wfq_param(p_hwfn, i, rate, min_pf_rate);
- if (rc == ECORE_INVAL) {
+ if (rc != ECORE_SUCCESS) {
DP_NOTICE(p_hwfn, false,
- "Validation failed while"
- " configuring min rate\n");
+ "WFQ validation failed while configuring min rate\n");
break;
}
}
- }
if (rc == ECORE_SUCCESS && use_wfq)
ecore_configure_wfq_for_all_vports(p_hwfn, p_ptt, min_pf_rate);
@@ -3395,12 +4131,21 @@ int __ecore_configure_pf_max_bandwidth(struct ecore_hwfn *p_hwfn,
p_hwfn->mcp_info->func_info.bandwidth_max = max_bw;
- if (!p_link->line_speed)
+ if (!p_link->line_speed && (max_bw != 100))
return rc;
p_link->speed = (p_link->line_speed * max_bw) / 100;
+ p_hwfn->qm_info.pf_rl = p_link->speed;
- rc = ecore_init_pf_rl(p_hwfn, p_ptt, p_hwfn->rel_pf_id, p_link->speed);
+ /* Since the limiter also affects Tx-switched traffic, we don't want it
+ * to limit such traffic in case there's no actual limit.
+ * In that case, set limit to imaginary high boundary.
+ */
+ if (max_bw == 100)
+ p_hwfn->qm_info.pf_rl = 100000;
+
+ rc = ecore_init_pf_rl(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
+ p_hwfn->qm_info.pf_rl);
DP_VERBOSE(p_hwfn, ECORE_MSG_LINK,
"Configured MAX bandwidth to be %08x Mb/sec\n",
@@ -3433,12 +4178,11 @@ int ecore_configure_pf_max_bandwidth(struct ecore_dev *p_dev, u8 max_bw)
rc = __ecore_configure_pf_max_bandwidth(p_hwfn, p_ptt,
p_link, max_bw);
- if (rc != ECORE_SUCCESS) {
- ecore_ptt_release(p_hwfn, p_ptt);
- return rc;
- }
ecore_ptt_release(p_hwfn, p_ptt);
+
+ if (rc != ECORE_SUCCESS)
+ break;
}
return rc;
@@ -3452,6 +4196,7 @@ int __ecore_configure_pf_min_bandwidth(struct ecore_hwfn *p_hwfn,
int rc = ECORE_SUCCESS;
p_hwfn->mcp_info->func_info.bandwidth_min = min_bw;
+ p_hwfn->qm_info.pf_wfq = min_bw;
if (!p_link->line_speed)
return rc;
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 77f4869..e6924bd 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -13,8 +13,6 @@
#include "ecore_chain.h"
#include "ecore_int_api.h"
-struct ecore_tunn_start_params;
-
/**
* @brief ecore_init_dp - initialize the debug level
*
@@ -108,6 +106,7 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev);
*/
void ecore_hw_stop_fastpath(struct ecore_dev *p_dev);
+#ifndef LINUX_REMOVE
/**
* @brief ecore_prepare_hibernate -should be called when
* the system is going into the hibernate state
@@ -116,6 +115,7 @@ void ecore_hw_stop_fastpath(struct ecore_dev *p_dev);
*
*/
void ecore_prepare_hibernate(struct ecore_dev *p_dev);
+#endif
/**
* @brief ecore_hw_start_fastpath -restart fastpath traffic,
@@ -135,15 +135,25 @@ void ecore_hw_start_fastpath(struct ecore_hwfn *p_hwfn);
*/
enum _ecore_status_t ecore_hw_reset(struct ecore_dev *p_dev);
+struct ecore_hw_prepare_params {
+ /* personality to initialize */
+ int personality;
+ /* force the driver's default resource allocation */
+ bool drv_resc_alloc;
+ /* check the reg_fifo after any register access */
+ bool chk_reg_fifo;
+};
+
/**
* @brief ecore_hw_prepare -
*
* @param p_dev
- * @param personality - personality to initialize
+ * @param p_params
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev, int personality);
+enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
+ struct ecore_hw_prepare_params *p_params);
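
For callers, the single int personality argument of the old signature becomes
a parameters bundle. A standalone mock of filling it (the real struct and
ecore_hw_prepare() live in the driver headers; the zeroed personality stands
in for an ecore_pci_personality value such as ECORE_PCI_ETH):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct ecore_hw_prepare_params { /* mock of the struct above */
	int personality;
	bool drv_resc_alloc;
	bool chk_reg_fifo;
};

int main(void)
{
	struct ecore_hw_prepare_params params;

	memset(&params, 0, sizeof(params));
	params.personality = 0;        /* e.g. ECORE_PCI_ETH in the driver */
	params.drv_resc_alloc = false; /* keep the MFW resource allocation */
	params.chk_reg_fifo = false;   /* no reg_fifo debug checks */
	printf("drv_resc_alloc=%d chk_reg_fifo=%d\n",
	       params.drv_resc_alloc, params.chk_reg_fifo);
	return 0;
}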
/**
* @brief ecore_hw_remove -
@@ -422,28 +432,50 @@ enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_hwfn *p_hwfn,
* @param p_filter - MAC to remove
*/
void ecore_llh_remove_mac_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u8 *p_filter);
+ struct ecore_ptt *p_ptt,
+ u8 *p_filter);
+
+enum ecore_llh_port_filter_type_t {
+ ECORE_LLH_FILTER_ETHERTYPE,
+ ECORE_LLH_FILTER_TCP_SRC_PORT,
+ ECORE_LLH_FILTER_TCP_DEST_PORT,
+ ECORE_LLH_FILTER_TCP_SRC_AND_DEST_PORT,
+ ECORE_LLH_FILTER_UDP_SRC_PORT,
+ ECORE_LLH_FILTER_UDP_DEST_PORT,
+ ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT
+};
/**
- * @brief ecore_llh_add_ethertype_filter - configures a ethertype filter in llh
+ * @brief ecore_llh_add_protocol_filter - configures a protocol filter in llh
*
* @param p_hwfn
* @param p_ptt
- * @param filter - ethertype to add
+ * @param source_port_or_eth_type - source port or ethertype to add
+ * @param dest_port - destination port to add
+ * @param type - type of filter and comparison
*/
-enum _ecore_status_t ecore_llh_add_ethertype_filter(struct ecore_hwfn *p_hwfn,
+enum _ecore_status_t
+ecore_llh_add_protocol_filter(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- u16 filter);
+ u16 source_port_or_eth_type,
+ u16 dest_port,
+ enum ecore_llh_port_filter_type_t type);
/**
- * @brief ecore_llh_remove_ethertype_filter - removes a ethertype llh filter
+ * @brief ecore_llh_remove_protocol_filter - remove a protocol filter in llh
*
* @param p_hwfn
* @param p_ptt
- * @param filter - ethertype to remove
+ * @param source_port_or_eth_type - source port or ethertype to remove
+ * @param dest_port - destination port to remove
+ * @param type - type of filter and comparison
*/
-void ecore_llh_remove_ethertype_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u16 filter);
+void
+ecore_llh_remove_protocol_filter(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u16 source_port_or_eth_type,
+ u16 dest_port,
+ enum ecore_llh_port_filter_type_t type);
/**
* @brief ecore_llh_clear_all_filters - removes all MAC filters from llh
@@ -455,56 +487,64 @@ void ecore_llh_clear_all_filters(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt);
/**
-*@brief Cleanup of previous driver remains prior to load
+ * @brief ecore_llh_set_function_as_default - set function as default per port
*
* @param p_hwfn
* @param p_ptt
- * @param id - For PF, engine-relative. For VF, PF-relative.
- * @param is_vf - true iff cleanup is made for a VF.
- *
- * @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 id, bool is_vf);
+enum _ecore_status_t
+ecore_llh_set_function_as_default(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
/**
- * @brief ecore_test_registers - Perform register tests
+ * @brief Cleanup of previous driver remains prior to load
*
* @param p_hwfn
* @param p_ptt
+ * @param id - For PF, engine-relative. For VF, PF-relative.
+ * @param is_vf - true iff cleanup is made for a VF.
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_test_registers(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
+enum _ecore_status_t ecore_final_cleanup(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u16 id,
+ bool is_vf);
/**
* @brief ecore_set_rxq_coalesce - Configure coalesce parameters for an Rx queue
+ * Coalescing can be configured up to 511 usec, but with decreasing
+ * accuracy [the bigger the value, the less accurate], up to an error
+ * of 3 usec for the highest values.
*
* @param p_hwfn
* @param p_ptt
* @param coalesce - Coalesce value in micro seconds.
* @param qid - Queue index.
+ * @param sb_id - SB Id
*
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_set_rxq_coalesce(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- u8 coalesce, u8 qid);
+ u16 coalesce, u8 qid, u16 sb_id);
/**
* @brief ecore_set_txq_coalesce - Configure coalesce parameters for a Tx queue
+ * While the API allows setting coalescing per-qid, all tx queues sharing a
+ * SB should be in the same range [i.e., either 0-0x7f, 0x80-0xff or 0x100-0x1ff],
+ * otherwise configuration would break.
*
* @param p_hwfn
* @param p_ptt
* @param coalesce - Coalesce value in micro seconds.
* @param qid - Queue index.
+ * @param sb_id - SB Id
*
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_set_txq_coalesce(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- u8 coalesce, u8 qid);
+ u16 coalesce, u8 qid, u16 sb_id);
#endif
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index dd94d31..5892f99 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -246,11 +246,11 @@ struct xstorm_eth_conn_ag_ctx {
#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT 6
u8 edpm_event_id /* byte2 */;
__le16 physical_q0 /* physical_q0 */;
- __le16 word1 /* physical_q1 */;
+ __le16 quota /* physical_q1 */;
__le16 edpm_num_bds /* physical_q2 */;
__le16 tx_bd_cons /* word3 */;
__le16 tx_bd_prod /* word4 */;
- __le16 go_to_bd_cons /* word5 */;
+ __le16 tx_class /* word5 */;
__le16 conn_dpi /* conn_dpi */;
u8 byte3 /* byte3 */;
u8 byte4 /* byte4 */;
@@ -306,7 +306,7 @@ struct ystorm_eth_conn_st_ctx {
struct ystorm_eth_conn_ag_ctx {
u8 byte0 /* cdu_validation */;
- u8 byte1 /* state */;
+ u8 state /* state */;
u8 flags0;
#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT 0
@@ -335,7 +335,7 @@ struct ystorm_eth_conn_ag_ctx {
#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT 6
#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT 7
- u8 byte2 /* byte2 */;
+ u8 tx_q0_int_coallecing_timeset /* byte2 */;
u8 byte3 /* byte3 */;
__le16 word0 /* word0 */;
__le32 terminate_spqe /* reg0 */;
@@ -635,9 +635,8 @@ enum eth_event_opcode {
ETH_EVENT_RX_CREATE_OPENFLOW_ACTION,
ETH_EVENT_RX_ADD_UDP_FILTER,
ETH_EVENT_RX_DELETE_UDP_FILTER,
- ETH_EVENT_RX_ADD_GFT_FILTER,
- ETH_EVENT_RX_DELETE_GFT_FILTER,
ETH_EVENT_RX_CREATE_GFT_ACTION,
+ ETH_EVENT_RX_GFT_UPDATE_FILTER,
MAX_ETH_EVENT_OPCODE
};
@@ -732,21 +731,19 @@ enum eth_ramrod_cmd_id {
ETH_RAMROD_TX_QUEUE_STOP /* TX Queue Stop Ramrod */,
ETH_RAMROD_FILTERS_UPDATE /* Add or Remove Mac/Vlan/Pair filters */,
ETH_RAMROD_RX_QUEUE_UPDATE /* RX Queue Update Ramrod */,
- ETH_RAMROD_RX_CREATE_OPENFLOW_ACTION
- /* RX - Create an Openflow Action */,
- ETH_RAMROD_RX_ADD_OPENFLOW_FILTER
- /* RX - Add an Openflow Filter to the Searcher */,
- ETH_RAMROD_RX_DELETE_OPENFLOW_FILTER
- /* RX - Delete an Openflow Filter to the Searcher */,
- ETH_RAMROD_RX_ADD_UDP_FILTER /* RX - Add a UDP Filter to the Searcher */
- ,
- ETH_RAMROD_RX_DELETE_UDP_FILTER
- /* RX - Delete a UDP Filter to the Searcher */,
- ETH_RAMROD_RX_CREATE_GFT_ACTION /* RX - Create an Gft Action */,
- ETH_RAMROD_RX_DELETE_GFT_FILTER
- /* RX - Delete an GFT Filter to the Searcher */,
- ETH_RAMROD_RX_ADD_GFT_FILTER
- /* RX - Add an GFT Filter to the Searcher */,
+/* RX - Create an Openflow Action */
+ ETH_RAMROD_RX_CREATE_OPENFLOW_ACTION,
+/* RX - Add an Openflow Filter to the Searcher */
+ ETH_RAMROD_RX_ADD_OPENFLOW_FILTER,
+/* RX - Delete an Openflow Filter from the Searcher */
+ ETH_RAMROD_RX_DELETE_OPENFLOW_FILTER,
+/* RX - Add a UDP Filter to the Searcher */
+ ETH_RAMROD_RX_ADD_UDP_FILTER,
+/* RX - Delete a UDP Filter from the Searcher */
+ ETH_RAMROD_RX_DELETE_UDP_FILTER,
+ ETH_RAMROD_RX_CREATE_GFT_ACTION /* RX - Create a Gft Action */,
+/* RX - Add/Delete a GFT Filter to the Searcher */
+ ETH_RAMROD_GFT_UPDATE_FILTER,
MAX_ETH_RAMROD_CMD_ID
};
@@ -911,14 +908,21 @@ struct eth_vport_tx_mode {
};
/*
- * Ramrod data for rx add gft filter data
+ * GFT filter update action (add/delete)
*/
-struct rx_add_gft_filter_data {
- struct regpair pkt_hdr_addr /* Packet Header That Defines GFT Filter */
- ;
- __le16 action_icid /* ICID of Action to run for this filter */;
- __le16 pkt_hdr_length /* Packet Header Length */;
- u8 reserved[4];
+enum gft_filter_update_action {
+ GFT_ADD_FILTER,
+ GFT_DELETE_FILTER,
+ MAX_GFT_FILTER_UPDATE_ACTION
+};
+
+/*
+ * GFT logic filter type
+ */
+enum gft_logic_filter_type {
+ GFT_FILTER_TYPE /* flow FW is GFT-logic as well */,
+ RFS_FILTER_TYPE /* flow FW is A-RFS-logic */,
+ MAX_GFT_LOGIC_FILTER_TYPE
};
/*
@@ -990,7 +994,13 @@ struct rx_queue_start_ramrod_data {
u8 notify_en;
u8 toggle_val
/* Initial value for the toggle valid bit - used in PMD mode */;
- u8 reserved[7];
+/* Index of RX producers in VF zone. Used for VF only. */
+ u8 vf_rx_prod_index;
+/* Backward compatibility mode. If set, the unprotected mStorm queue zone will
+ * be used for VF RX producers instead of the VF zone.
+ */
+ u8 vf_rx_prod_use_zone_a;
+ u8 reserved[5];
__le16 reserved1 /* FW reserved. */;
struct regpair cqe_pbl_addr /* Base address on host of CQE PBL */;
struct regpair bd_base /* bd address of the first bd page */;
@@ -1046,13 +1056,33 @@ struct rx_udp_filter_data {
};
/*
- * Ramrod data for rx queue start ramrod
+ * Ramrod to add a filter - the filter is a packet header of the type of
+ * packet wished to pass a certain FW flow
+ */
+struct rx_update_gft_filter_data {
+/* Pointer to Packet Header That Defines GFT Filter */
+ struct regpair pkt_hdr_addr;
+ __le16 pkt_hdr_length /* Packet Header Length */;
+/* If is_rfs flag is set: Queue Id to associate filter with else: action icid */
+ __le16 rx_qid_or_action_icid;
+/* Field is used if is_rfs flag is set: vport Id with which to associate the
+ * filter
+ */
+ u8 vport_id;
+/* Use enum to set type of flow using gft HW logic blocks */
+ u8 filter_type;
+ u8 filter_action /* Used to set the type of action on the filter */;
+ u8 reserved;
+};
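
A standalone mock of filling the new GFT-update ramrod payload for an
RFS-style add; the field names mirror the HSI struct above, while the values
and the simplified regpair layout are illustrative only:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct regpair { uint32_t lo, hi; };

struct rx_update_gft_filter_data {
	struct regpair pkt_hdr_addr;
	uint16_t pkt_hdr_length;
	uint16_t rx_qid_or_action_icid;
	uint8_t vport_id;
	uint8_t filter_type;   /* gft_logic_filter_type */
	uint8_t filter_action; /* gft_filter_update_action */
	uint8_t reserved;
};

int main(void)
{
	struct rx_update_gft_filter_data data;

	memset(&data, 0, sizeof(data));
	data.pkt_hdr_addr.lo = 0x1000;  /* DMA address of the header buffer */
	data.pkt_hdr_length = 42;       /* bytes of header to match on */
	data.rx_qid_or_action_icid = 3; /* Rx queue id (RFS flavor) */
	data.vport_id = 0;
	data.filter_type = 1;           /* RFS_FILTER_TYPE, per enum order */
	data.filter_action = 0;         /* GFT_ADD_FILTER, per enum order */
	printf("qid=%u len=%u\n", data.rx_qid_or_action_icid,
	       data.pkt_hdr_length);
	return 0;
}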
+
+/*
+ * Ramrod data for tx queue start ramrod
*/
struct tx_queue_start_ramrod_data {
__le16 sb_id /* Status block ID */;
u8 sb_index /* Status block protocol index */;
u8 vport_id /* VPort ID */;
- u8 reserved0 /* FW reserved. */;
+ u8 reserved0 /* FW reserved. (qcn_rl_en) */;
u8 stats_counter_id /* Statistics counter ID to use */;
__le16 qm_pq_id /* QM PQ ID */;
u8 flags;
@@ -1076,10 +1106,16 @@ struct tx_queue_start_ramrod_data {
__le16 pxp_st_index /* PXP command Steering tag index */;
__le16 comp_agg_size /* TX completion min agg size - for PMD queues */;
__le16 queue_zone_id /* queue zone ID to use */;
- __le16 test_dup_count /* In Test Mode, number of duplications */;
+ __le16 reserved2 /* FW reserved. (test_dup_count) */;
__le16 pbl_size /* Number of BD pages pointed by PBL */;
__le16 tx_queue_id
/* unique Queue ID - currently used only by PMD flow */;
+/* Unique Same-As-Last Resource ID - improves performance for same-as-last
+ * packets per connection (range 0..ETH_TX_NUM_SAME_AS_LAST_ENTRIES-1 IDs
+ * available)
+ */
+ __le16 same_as_last_id;
+ __le16 reserved[3];
struct regpair pbl_base_addr /* address of the pbl page */;
struct regpair bd_cons_address
/* BD consumer address in host - for PMD queues */;
@@ -1124,12 +1160,16 @@ struct vport_start_ramrod_data {
u8 handle_ptp_pkts /* If set, the vport handles PTP Timesync Packets */
;
u8 silent_vlan_removal_en;
-/* If enable then innerVlan will be striped and not written to cqe */
+ /* If enabled then innerVlan will be stripped and not written to cqe */
u8 untagged;
struct eth_tx_err_vals tx_err_behav
/* Desired behavior per TX error type */;
u8 zero_placement_offset;
- u8 reserved[7];
+/* If set, Control frames will be filtered according to MAC check. */
+ u8 ctl_frame_mac_check_en;
+/* If set, Control frames will be filtered according to ethtype check. */
+ u8 ctl_frame_ethtype_check_en;
+ u8 reserved[5];
};
/*
@@ -1459,8 +1499,8 @@ struct mstorm_eth_conn_ag_ctx {
__le32 reg1 /* reg1 */;
};
-/* @DPDK: xstormEthConnAgCtxDqExtLdPart */
-struct xstorm_eth_conn_ag_ctx_dq_ext_ld_part {
+
+struct xstormEthConnAgCtxDqExtLdPart {
u8 reserved0 /* cdu_validation */;
u8 eth_state /* state */;
u8 flags0;
@@ -1672,11 +1712,11 @@ struct xstorm_eth_conn_ag_ctx_dq_ext_ld_part {
#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT 6
u8 edpm_event_id /* byte2 */;
__le16 physical_q0 /* physical_q0 */;
- __le16 word1 /* physical_q1 */;
+ __le16 quota /* physical_q1 */;
__le16 edpm_num_bds /* physical_q2 */;
__le16 tx_bd_cons /* word3 */;
__le16 tx_bd_prod /* word4 */;
- __le16 go_to_bd_cons /* word5 */;
+ __le16 tx_class /* word5 */;
__le16 conn_dpi /* conn_dpi */;
u8 byte3 /* byte3 */;
u8 byte4 /* byte4 */;
@@ -1901,11 +1941,11 @@ struct xstorm_eth_hw_conn_ag_ctx {
#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT 6
u8 edpm_event_id /* byte2 */;
__le16 physical_q0 /* physical_q0 */;
- __le16 word1 /* physical_q1 */;
+ __le16 quota /* physical_q1 */;
__le16 edpm_num_bds /* physical_q2 */;
__le16 tx_bd_cons /* word3 */;
__le16 tx_bd_prod /* word4 */;
- __le16 go_to_bd_cons /* word5 */;
+ __le16 tx_class /* word5 */;
__le16 conn_dpi /* conn_dpi */;
};
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 04ec1ea..7f4db0a 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -23,27 +23,28 @@
#define ECORE_BAR_ACQUIRE_TIMEOUT 1000
/* Invalid values */
-#define ECORE_BAR_INVALID_OFFSET -1
+#define ECORE_BAR_INVALID_OFFSET (OSAL_CPU_TO_LE32(-1))
struct ecore_ptt {
osal_list_entry_t list_entry;
unsigned int idx;
struct pxp_ptt_entry pxp;
+ u8 hwfn_id;
};
struct ecore_ptt_pool {
osal_list_t free_list;
- osal_spinlock_t lock;
+ osal_spinlock_t lock; /* ptt synchronized access */
struct ecore_ptt ptts[PXP_EXTERNAL_BAR_PF_WINDOW_NUM];
};
enum _ecore_status_t ecore_ptt_pool_alloc(struct ecore_hwfn *p_hwfn)
{
- struct ecore_ptt_pool *p_pool;
+ struct ecore_ptt_pool *p_pool = OSAL_ALLOC(p_hwfn->p_dev,
+ GFP_KERNEL,
+ sizeof(*p_pool));
int i;
- p_pool = OSAL_ALLOC(p_hwfn->p_dev, GFP_KERNEL,
- sizeof(struct ecore_ptt_pool));
if (!p_pool)
return ECORE_NOMEM;
@@ -52,6 +53,7 @@ enum _ecore_status_t ecore_ptt_pool_alloc(struct ecore_hwfn *p_hwfn)
p_pool->ptts[i].idx = i;
p_pool->ptts[i].pxp.offset = ECORE_BAR_INVALID_OFFSET;
p_pool->ptts[i].pxp.pretend.control = 0;
+ p_pool->ptts[i].hwfn_id = p_hwfn->my_id;
/* There are special PTT entries that are taken only by design.
* The rest are added ot the list for general usage.
@@ -95,32 +97,36 @@ struct ecore_ptt *ecore_ptt_acquire(struct ecore_hwfn *p_hwfn)
/* Take the free PTT from the list */
for (i = 0; i < ECORE_BAR_ACQUIRE_TIMEOUT; i++) {
OSAL_SPIN_LOCK(&p_hwfn->p_ptt_pool->lock);
- if (!OSAL_LIST_IS_EMPTY(&p_hwfn->p_ptt_pool->free_list))
- break;
- OSAL_SPIN_UNLOCK(&p_hwfn->p_ptt_pool->lock);
- OSAL_MSLEEP(1);
- }
-
- /* We should not time-out, but it can happen... --> Lock isn't held */
- if (i == ECORE_BAR_ACQUIRE_TIMEOUT) {
- DP_NOTICE(p_hwfn, true, "Failed to allocate PTT\n");
- return OSAL_NULL;
- }
-
- p_ptt = OSAL_LIST_FIRST_ENTRY(&p_hwfn->p_ptt_pool->free_list,
+ if (!OSAL_LIST_IS_EMPTY(&p_hwfn->p_ptt_pool->free_list)) {
+ p_ptt = OSAL_LIST_FIRST_ENTRY(
+ &p_hwfn->p_ptt_pool->free_list,
struct ecore_ptt, list_entry);
OSAL_LIST_REMOVE_ENTRY(&p_ptt->list_entry,
&p_hwfn->p_ptt_pool->free_list);
+
OSAL_SPIN_UNLOCK(&p_hwfn->p_ptt_pool->lock);
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW, "allocated ptt %d\n", p_ptt->idx);
+ DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+ "allocated ptt %d\n", p_ptt->idx);
return p_ptt;
}
+ OSAL_SPIN_UNLOCK(&p_hwfn->p_ptt_pool->lock);
+ OSAL_MSLEEP(1);
+ }
+
+ DP_NOTICE(p_hwfn, true,
+ "PTT acquire timeout - failed to allocate PTT\n");
+ return OSAL_NULL;
+}
+
void ecore_ptt_release(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
{
/* This PTT should not be set to pretend if it is being released */
+ /* TODO - add some pretend sanity checks, to make sure pretend
+ * isn't set on this ptt
+ */
OSAL_SPIN_LOCK(&p_hwfn->p_ptt_pool->lock);
OSAL_LIST_PUSH_HEAD(&p_ptt->list_entry, &p_hwfn->p_ptt_pool->free_list);
@@ -130,7 +136,7 @@ void ecore_ptt_release(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
u32 ecore_ptt_get_hw_addr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
{
/* The HW is using DWORDS and we need to translate it to Bytes */
- return p_ptt->pxp.offset << 2;
+ return OSAL_LE32_TO_CPU(p_ptt->pxp.offset) << 2;
}
static u32 ecore_ptt_config_addr(struct ecore_ptt *p_ptt)
@@ -161,11 +167,12 @@ void ecore_ptt_set_win(struct ecore_hwfn *p_hwfn,
p_ptt->idx, new_hw_addr);
/* The HW is using DWORDS and the address is in Bytes */
- p_ptt->pxp.offset = new_hw_addr >> 2;
+ p_ptt->pxp.offset = OSAL_CPU_TO_LE32(new_hw_addr >> 2);
REG_WR(p_hwfn,
ecore_ptt_config_addr(p_ptt) +
- OFFSETOF(struct pxp_ptt_entry, offset), p_ptt->pxp.offset);
+ OFFSETOF(struct pxp_ptt_entry, offset),
+ OSAL_LE32_TO_CPU(p_ptt->pxp.offset));
}
static u32 ecore_set_ptt(struct ecore_hwfn *p_hwfn,
@@ -176,6 +183,11 @@ static u32 ecore_set_ptt(struct ecore_hwfn *p_hwfn,
offset = hw_addr - win_hw_addr;
+ if (p_ptt->hwfn_id != p_hwfn->my_id)
+ DP_NOTICE(p_hwfn, true,
+ "ptt[%d] of hwfn[%02x] is used by hwfn[%02x]!\n",
+ p_ptt->idx, p_ptt->hwfn_id, p_hwfn->my_id);
+
/* Verify the address is within the window */
if (hw_addr < win_hw_addr ||
offset >= PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE) {
@@ -198,11 +210,37 @@ struct ecore_ptt *ecore_get_reserved_ptt(struct ecore_hwfn *p_hwfn,
return &p_hwfn->p_ptt_pool->ptts[ptt_idx];
}
+static bool ecore_is_reg_fifo_empty(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ bool is_empty = true;
+ u32 bar_addr;
+
+ if (!p_hwfn->p_dev->chk_reg_fifo)
+ goto out;
+
+ /* ecore_rd() cannot be used here since it calls this function */
+ bar_addr = ecore_set_ptt(p_hwfn, p_ptt, GRC_REG_TRACE_FIFO_VALID_DATA);
+ is_empty = REG_RD(p_hwfn, bar_addr) == 0;
+
+#ifndef ASIC_ONLY
+ if (CHIP_REV_IS_SLOW(p_hwfn->p_dev))
+ OSAL_UDELAY(100);
+#endif
+
+out:
+ return is_empty;
+}
+
void ecore_wr(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u32 hw_addr, u32 val)
{
- u32 bar_addr = ecore_set_ptt(p_hwfn, p_ptt, hw_addr);
+ bool prev_fifo_err;
+ u32 bar_addr;
+
+ prev_fifo_err = !ecore_is_reg_fifo_empty(p_hwfn, p_ptt);
+ bar_addr = ecore_set_ptt(p_hwfn, p_ptt, hw_addr);
REG_WR(p_hwfn, bar_addr, val);
DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
"bar_addr 0x%x, hw_addr 0x%x, val 0x%x\n",
@@ -212,12 +250,21 @@ void ecore_wr(struct ecore_hwfn *p_hwfn,
if (CHIP_REV_IS_SLOW(p_hwfn->p_dev))
OSAL_UDELAY(100);
#endif
+
+ OSAL_WARN(!prev_fifo_err && !ecore_is_reg_fifo_empty(p_hwfn, p_ptt),
+ "reg_fifo err was caused by a call to ecore_wr(0x%x, 0x%x)\n",
+ hw_addr, val);
}
u32 ecore_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr)
{
- u32 bar_addr = ecore_set_ptt(p_hwfn, p_ptt, hw_addr);
- u32 val = REG_RD(p_hwfn, bar_addr);
+ bool prev_fifo_err;
+ u32 bar_addr, val;
+
+ prev_fifo_err = !ecore_is_reg_fifo_empty(p_hwfn, p_ptt);
+
+ bar_addr = ecore_set_ptt(p_hwfn, p_ptt, hw_addr);
+ val = REG_RD(p_hwfn, bar_addr);
DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
"bar_addr 0x%x, hw_addr 0x%x, val 0x%x\n",
@@ -228,6 +275,10 @@ u32 ecore_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u32 hw_addr)
OSAL_UDELAY(100);
#endif
+ OSAL_WARN(!prev_fifo_err && !ecore_is_reg_fifo_empty(p_hwfn, p_ptt),
+ "reg_fifo error was caused by a call to ecore_rd(0x%x)\n",
+ hw_addr);
+
return val;
}
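
The wrapping above samples the reg_fifo condition before and after each
access so a warning is raised only for the access that actually caused the
error. A standalone sketch of that check-around-access pattern, with
reg_fifo_empty() as a stub for the GRC_REG_TRACE_FIFO_VALID_DATA read:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool reg_fifo_empty(void) { return true; } /* stub */

static uint32_t traced_read(uint32_t addr)
{
	bool prev_err = !reg_fifo_empty();       /* error already pending? */
	uint32_t val = 0xdeadbeef;               /* stand-in for REG_RD() */

	/* Warn only if this access made the fifo non-empty */
	if (!prev_err && !reg_fifo_empty())
		fprintf(stderr, "reg_fifo error caused by read(0x%x)\n", addr);
	return val;
}

int main(void) { printf("0x%x\n", traced_read(0x1234)); return 0; }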
@@ -292,60 +343,59 @@ void ecore_memcpy_to(struct ecore_hwfn *p_hwfn,
void ecore_fid_pretend(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u16 fid)
{
- void *p_pretend;
u16 control = 0;
SET_FIELD(control, PXP_PRETEND_CMD_IS_CONCRETE, 1);
SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_FUNCTION, 1);
/* Every pretend undos prev pretends, including previous port pretend */
+
SET_FIELD(control, PXP_PRETEND_CMD_PORT, 0);
SET_FIELD(control, PXP_PRETEND_CMD_USE_PORT, 0);
SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_PORT, 1);
- p_ptt->pxp.pretend.control = OSAL_CPU_TO_LE16(control);
if (!GET_FIELD(fid, PXP_CONCRETE_FID_VFVALID))
fid = GET_FIELD(fid, PXP_CONCRETE_FID_PFID);
+ p_ptt->pxp.pretend.control = OSAL_CPU_TO_LE16(control);
p_ptt->pxp.pretend.fid.concrete_fid.fid = OSAL_CPU_TO_LE16(fid);
- p_pretend = &p_ptt->pxp.pretend;
REG_WR(p_hwfn,
ecore_ptt_config_addr(p_ptt) +
- OFFSETOF(struct pxp_ptt_entry, pretend), *(u32 *)p_pretend);
+ OFFSETOF(struct pxp_ptt_entry, pretend),
+ *(u32 *)&p_ptt->pxp.pretend);
}
void ecore_port_pretend(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u8 port_id)
{
- void *p_pretend;
u16 control = 0;
SET_FIELD(control, PXP_PRETEND_CMD_PORT, port_id);
SET_FIELD(control, PXP_PRETEND_CMD_USE_PORT, 1);
SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_PORT, 1);
- p_ptt->pxp.pretend.control = control;
+ p_ptt->pxp.pretend.control = OSAL_CPU_TO_LE16(control);
- p_pretend = &p_ptt->pxp.pretend;
REG_WR(p_hwfn,
ecore_ptt_config_addr(p_ptt) +
- OFFSETOF(struct pxp_ptt_entry, pretend), *(u32 *)p_pretend);
+ OFFSETOF(struct pxp_ptt_entry, pretend),
+ *(u32 *)&p_ptt->pxp.pretend);
}
void ecore_port_unpretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
{
- void *p_pretend;
u16 control = 0;
SET_FIELD(control, PXP_PRETEND_CMD_PORT, 0);
SET_FIELD(control, PXP_PRETEND_CMD_USE_PORT, 0);
SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_PORT, 1);
- p_ptt->pxp.pretend.control = control;
- p_pretend = &p_ptt->pxp.pretend;
+ p_ptt->pxp.pretend.control = OSAL_CPU_TO_LE16(control);
+
REG_WR(p_hwfn,
ecore_ptt_config_addr(p_ptt) +
- OFFSETOF(struct pxp_ptt_entry, pretend), *(u32 *)p_pretend);
+ OFFSETOF(struct pxp_ptt_entry, pretend),
+ *(u32 *)&p_ptt->pxp.pretend);
}
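
The pretend control words above are built with SET_FIELD-style mask/shift
packing and only byte-swapped once, at assignment. A standalone sketch of
that packing idiom (the mask and shift values are made up, not the PXP ones):

#include <stdint.h>
#include <stdio.h>

#define FIELD_MASK  0x3
#define FIELD_SHIFT 4
#define SET_FIELD(var, name, val) \
	((var) = ((var) & ~(name##_MASK << name##_SHIFT)) | \
		 (((val) & name##_MASK) << name##_SHIFT))

int main(void)
{
	uint16_t control = 0;

	SET_FIELD(control, FIELD, 2);
	printf("control=0x%04x\n", control); /* prints 0x0020 */
	return 0;
}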
u32 ecore_vfid_to_concrete(struct ecore_hwfn *p_hwfn, u8 vfid)
@@ -442,15 +492,16 @@ static u32 ecore_dmae_idx_to_go_cmd(u8 idx)
{
OSAL_BUILD_BUG_ON((DMAE_REG_GO_C31 - DMAE_REG_GO_C0) != 31 * 4);
- return DMAE_REG_GO_C0 + idx * 4;
+ /* All the DMAE 'go' registers form an array in internal memory */
+ return DMAE_REG_GO_C0 + (idx << 2);
}
static enum _ecore_status_t
ecore_dmae_post_command(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
{
struct dmae_cmd *p_command = p_hwfn->dmae_info.p_dmae_cmd;
- enum _ecore_status_t ecore_status = ECORE_SUCCESS;
u8 idx_cmd = p_hwfn->dmae_info.channel, i;
+ enum _ecore_status_t ecore_status = ECORE_SUCCESS;
/* verify address is not OSAL_NULL */
if ((((!p_command->dst_addr_lo) && (!p_command->dst_addr_hi)) ||
@@ -459,13 +510,14 @@ ecore_dmae_post_command(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
"source or destination address 0 idx_cmd=%d\n"
"opcode = [0x%08x,0x%04x] len=0x%x"
" src=0x%x:%x dst=0x%x:%x\n",
- idx_cmd, (u32)p_command->opcode,
- (u16)p_command->opcode_b,
- (int)p_command->length_dw,
- (int)p_command->src_addr_hi,
- (int)p_command->src_addr_lo,
- (int)p_command->dst_addr_hi,
- (int)p_command->dst_addr_lo);
+ idx_cmd,
+ OSAL_LE32_TO_CPU(p_command->opcode),
+ OSAL_LE16_TO_CPU(p_command->opcode_b),
+ OSAL_LE16_TO_CPU(p_command->length_dw),
+ OSAL_LE32_TO_CPU(p_command->src_addr_hi),
+ OSAL_LE32_TO_CPU(p_command->src_addr_lo),
+ OSAL_LE32_TO_CPU(p_command->dst_addr_hi),
+ OSAL_LE32_TO_CPU(p_command->dst_addr_lo));
return ECORE_INVAL;
}
@@ -473,12 +525,14 @@ ecore_dmae_post_command(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
"Posting DMAE command [idx %d]: opcode = [0x%08x,0x%04x]"
"len=0x%x src=0x%x:%x dst=0x%x:%x\n",
- idx_cmd, (u32)p_command->opcode,
- (u16)p_command->opcode_b,
- (int)p_command->length,
- (int)p_command->src_addr_hi,
- (int)p_command->src_addr_lo,
- (int)p_command->dst_addr_hi, (int)p_command->dst_addr_lo);
+ idx_cmd,
+ OSAL_LE32_TO_CPU(p_command->opcode),
+ OSAL_LE16_TO_CPU(p_command->opcode_b),
+ OSAL_LE16_TO_CPU(p_command->length_dw),
+ OSAL_LE32_TO_CPU(p_command->src_addr_hi),
+ OSAL_LE32_TO_CPU(p_command->src_addr_lo),
+ OSAL_LE32_TO_CPU(p_command->dst_addr_hi),
+ OSAL_LE32_TO_CPU(p_command->dst_addr_lo));
/* Copy the command to DMAE - need to do it before every call
* for source/dest address no reset.
@@ -514,8 +568,7 @@ enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn)
if (*p_comp == OSAL_NULL) {
DP_NOTICE(p_hwfn, true,
"Failed to allocate `p_completion_word'\n");
- ecore_dmae_info_free(p_hwfn);
- return ECORE_NOMEM;
+ goto err;
}
p_addr = &p_hwfn->dmae_info.dmae_cmd_phys_addr;
@@ -524,8 +577,7 @@ enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn)
if (*p_cmd == OSAL_NULL) {
DP_NOTICE(p_hwfn, true,
"Failed to allocate `struct dmae_cmd'\n");
- ecore_dmae_info_free(p_hwfn);
- return ECORE_NOMEM;
+ goto err;
}
p_addr = &p_hwfn->dmae_info.intermediate_buffer_phys_addr;
@@ -534,14 +586,15 @@ enum _ecore_status_t ecore_dmae_info_alloc(struct ecore_hwfn *p_hwfn)
if (*p_buff == OSAL_NULL) {
DP_NOTICE(p_hwfn, true,
"Failed to allocate `intermediate_buffer'\n");
- ecore_dmae_info_free(p_hwfn);
- return ECORE_NOMEM;
+ goto err;
}
- /* DMAE_E4_TODO : Need to change this to reflect proper channel */
p_hwfn->dmae_info.channel = p_hwfn->rel_pf_id;
return ECORE_SUCCESS;
+err:
+ ecore_dmae_info_free(p_hwfn);
+ return ECORE_NOMEM;
}
void ecore_dmae_info_free(struct ecore_hwfn *p_hwfn)
@@ -598,9 +651,6 @@ static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
*/
OSAL_BARRIER(p_hwfn->p_dev);
while (*p_hwfn->dmae_info.p_completion_word != DMAE_COMPLETION_VAL) {
- /* DMAE_E4_TODO : using OSAL_MSLEEP instead of mm_wait since mm
- * functions are getting depriciated. Need to review for future.
- */
OSAL_UDELAY(DMAE_MIN_WAIT_TIME);
if (++wait_cnt > wait_cnt_limit) {
DP_NOTICE(p_hwfn->p_dev, ECORE_MSG_HW,
@@ -629,7 +679,7 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u64 src_addr,
u64 dst_addr,
- u8 src_type, u8 dst_type, u32 length)
+ u8 src_type, u8 dst_type, u32 length_dw)
{
dma_addr_t phys = p_hwfn->dmae_info.intermediate_buffer_phys_addr;
struct dmae_cmd *cmd = p_hwfn->dmae_info.p_dmae_cmd;
@@ -638,16 +688,16 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
switch (src_type) {
case ECORE_DMAE_ADDRESS_GRC:
case ECORE_DMAE_ADDRESS_HOST_PHYS:
- cmd->src_addr_hi = DMA_HI(src_addr);
- cmd->src_addr_lo = DMA_LO(src_addr);
+ cmd->src_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(src_addr));
+ cmd->src_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(src_addr));
break;
/* for virt source addresses we use the intermediate buffer. */
case ECORE_DMAE_ADDRESS_HOST_VIRT:
- cmd->src_addr_hi = DMA_HI(phys);
- cmd->src_addr_lo = DMA_LO(phys);
+ cmd->src_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(phys));
+ cmd->src_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(phys));
OSAL_MEMCPY(&p_hwfn->dmae_info.p_intermediate_buffer[0],
(void *)(osal_uintptr_t)src_addr,
- length * sizeof(u32));
+ length_dw * sizeof(u32));
break;
default:
return ECORE_INVAL;
@@ -656,26 +706,26 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
switch (dst_type) {
case ECORE_DMAE_ADDRESS_GRC:
case ECORE_DMAE_ADDRESS_HOST_PHYS:
- cmd->dst_addr_hi = DMA_HI(dst_addr);
- cmd->dst_addr_lo = DMA_LO(dst_addr);
+ cmd->dst_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(dst_addr));
+ cmd->dst_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(dst_addr));
break;
/* for virt destination address we use the intermediate buff. */
case ECORE_DMAE_ADDRESS_HOST_VIRT:
- cmd->dst_addr_hi = DMA_HI(phys);
- cmd->dst_addr_lo = DMA_LO(phys);
+ cmd->dst_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(phys));
+ cmd->dst_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(phys));
break;
default:
return ECORE_INVAL;
}
- cmd->length_dw = (u16)length;
+ cmd->length_dw = OSAL_CPU_TO_LE16((u16)length_dw);
if (src_type == ECORE_DMAE_ADDRESS_HOST_VIRT ||
src_type == ECORE_DMAE_ADDRESS_HOST_PHYS)
OSAL_DMA_SYNC(p_hwfn->p_dev,
(void *)HILO_U64(cmd->src_addr_hi,
cmd->src_addr_lo),
- length * sizeof(u32), false);
+ length_dw * sizeof(u32), false);
ecore_dmae_post_command(p_hwfn, p_ptt);
@@ -687,21 +737,21 @@ ecore_dmae_execute_sub_operation(struct ecore_hwfn *p_hwfn,
OSAL_DMA_SYNC(p_hwfn->p_dev,
(void *)HILO_U64(cmd->src_addr_hi,
cmd->src_addr_lo),
- length * sizeof(u32), true);
+ length_dw * sizeof(u32), true);
if (ecore_status != ECORE_SUCCESS) {
DP_NOTICE(p_hwfn, ECORE_MSG_HW,
"ecore_dmae_host2grc: Wait Failed. source_addr"
" 0x%lx, grc_addr 0x%lx, size_in_dwords 0x%x\n",
(unsigned long)src_addr, (unsigned long)dst_addr,
- length);
+ length_dw);
return ecore_status;
}
if (dst_type == ECORE_DMAE_ADDRESS_HOST_VIRT)
OSAL_MEMCPY((void *)(osal_uintptr_t)(dst_addr),
&p_hwfn->dmae_info.p_intermediate_buffer[0],
- length * sizeof(u32));
+ length_dw * sizeof(u32));
return ECORE_SUCCESS;
}
@@ -719,18 +769,18 @@ ecore_dmae_execute_command(struct ecore_hwfn *p_hwfn,
dma_addr_t phys = p_hwfn->dmae_info.completion_word_phys_addr;
u16 length_cur = 0, i = 0, cnt_split = 0, length_mod = 0;
struct dmae_cmd *cmd = p_hwfn->dmae_info.p_dmae_cmd;
- enum _ecore_status_t ecore_status = ECORE_SUCCESS;
u64 src_addr_split = 0, dst_addr_split = 0;
u16 length_limit = DMAE_MAX_RW_SIZE;
+ enum _ecore_status_t ecore_status = ECORE_SUCCESS;
u32 offset = 0;
ecore_dmae_opcode(p_hwfn,
(src_type == ECORE_DMAE_ADDRESS_GRC),
(dst_type == ECORE_DMAE_ADDRESS_GRC), p_params);
- cmd->comp_addr_lo = DMA_LO(phys);
- cmd->comp_addr_hi = DMA_HI(phys);
- cmd->comp_val = DMAE_COMPLETION_VAL;
+ cmd->comp_addr_lo = OSAL_CPU_TO_LE32(DMA_LO(phys));
+ cmd->comp_addr_hi = OSAL_CPU_TO_LE32(DMA_HI(phys));
+ cmd->comp_val = OSAL_CPU_TO_LE32(DMAE_COMPLETION_VAL);
/* Check if the grc_addr is valid like < MAX_GRC_OFFSET */
cnt_split = size_in_dwords / length_limit;
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 154eb3c..0750b2e 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -260,6 +260,10 @@ void ecore_dmae_info_free(struct ecore_hwfn *p_hwfn);
union ecore_qm_pq_params {
struct {
+ u8 q_idx;
+ } iscsi;
+
+ struct {
u8 tc;
} core;
@@ -268,10 +272,20 @@ union ecore_qm_pq_params {
u8 vf_id;
u8 tc;
} eth;
+
+ struct {
+ u8 dcqcn;
+ u8 qpid; /* roce relative */
+ } roce;
+
+ struct {
+ u8 qidx;
+ } iwarp;
};
u16 ecore_get_qm_pq(struct ecore_hwfn *p_hwfn,
- enum protocol_type proto, union ecore_qm_pq_params *params);
+ enum protocol_type proto,
+ union ecore_qm_pq_params *params);
enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
const u8 *fw_data);
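
As a minimal sketch, assuming an initialized p_hwfn, a caller could use the union above to resolve the physical queue (PQ) ID for an Ethernet Tx queue on traffic class 0:

    union ecore_qm_pq_params pq_params;
    u16 pq_id;

    /* select the ETH leg of the union; TC 0, PF-owned (vf_id unused) */
    OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
    pq_params.eth.tc = 0;
    pq_id = ecore_get_qm_pq(p_hwfn, PROTOCOLID_ETH, &pq_params);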
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index bffc73c..e83eeb8 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -13,11 +13,11 @@
#include "ecore_rt_defs.h"
#include "ecore_hsi_common.h"
#include "ecore_hsi_init_func.h"
+#include "ecore_hsi_eth.h"
#include "ecore_hsi_init_tool.h"
+#include "ecore_iro.h"
#include "ecore_init_fw_funcs.h"
-
-/* @DPDK CmInterfaceEnum */
-enum cm_interface_enum {
+enum CmInterfaceEnum {
MCM_SEC,
MCM_PRI,
UCM_SEC,
@@ -51,17 +51,23 @@ enum cm_interface_enum {
#define QM_RL_UPPER_BOUND 62500000
#define QM_RL_PERIOD 5
#define QM_RL_PERIOD_CLK_25M (25 * QM_RL_PERIOD)
-#define QM_RL_INC_VAL(rate) \
-OSAL_MAX_T(u32, (((rate ? rate : 1000000) * QM_RL_PERIOD * 1.01) / 8), 1)
#define QM_RL_MAX_INC_VAL 43750000
+/* RL increment value - the factor of 1.01 was added after observing that
+ * only 99% of line rate was reached on a 25Gbps port in a DPDK RFC 2544
+ * test. In this scenario the PF RL was reducing the line rate to 99%
+ * although the credit increment value was correct and the FW calculated
+ * correct packet sizes. The reason for the RL inaccuracy is unknown at
+ * this point.
+ */
+/* rate in mbps */
+#define QM_RL_INC_VAL(rate) OSAL_MAX_T(u32, (u32)(((rate ? rate : 1000000) * \
+ QM_RL_PERIOD * 101) / (8 * 100)), 1)
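
A quick worked evaluation of the integer form, for a hypothetical 25 Gbps (25000 Mbps) rate:

    /* 25000 * QM_RL_PERIOD(5) * 101 / (8 * 100)
     *   = 12625000 / 800
     *   = 15781 (truncated), well below QM_RL_MAX_INC_VAL
     */
    u32 inc_val = QM_RL_INC_VAL(25000); /* -> 15781 */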
/* AFullOprtnstcCrdMask constants */
#define QM_OPPOR_LINE_VOQ_DEF 1
#define QM_OPPOR_FW_STOP_DEF 0
#define QM_OPPOR_PQ_EMPTY_DEF 1
-#define EAGLE_WORKAROUND_TC 7
/* Command Queue constants */
#define PBF_CMDQ_PURE_LB_LINES 150
-#define PBF_CMDQ_EAGLE_WORKAROUND_LINES 8 /* eagle workaround CmdQ */
#define PBF_CMDQ_LINES_RT_OFFSET(voq) \
(PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET + \
voq * (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET \
@@ -73,8 +79,8 @@ voq * (PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET \
((((pbf_cmd_lines) - 4) * 2) | QM_LINE_CRD_REG_SIGN_BIT)
/* BTB: blocks constants (block size = 256B) */
#define BTB_JUMBO_PKT_BLOCKS 38 /* 256B blocks in 9700B packet */
-#define BTB_HEADROOM_BLOCKS BTB_JUMBO_PKT_BLOCKS /* headroom per-port */
-#define BTB_EAGLE_WORKAROUND_BLOCKS 4 /* eagle workaround blocks */
+/* headroom per-port */
+#define BTB_HEADROOM_BLOCKS BTB_JUMBO_PKT_BLOCKS
#define BTB_PURE_LB_FACTOR 10
#define BTB_PURE_LB_RATIO 7 /* factored (hence really 0.7) */
/* QM stop command constants */
@@ -170,6 +176,9 @@ static void ecore_cmdq_lines_voq_rt_init(struct ecore_hwfn *p_hwfn,
u8 voq, u16 cmdq_lines)
{
u32 qm_line_crd;
+ /* In A0 - limit the size of the PBF queue so that only 511 commands
+ * with the minimum size of 4 (the FCoE minimum size) can fit
+ */
bool is_bb_a0 = ECORE_IS_BB_A0(p_hwfn->p_dev);
if (is_bb_a0)
cmdq_lines = OSAL_MIN_T(u32, cmdq_lines, 1022);
@@ -189,18 +198,16 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
port_params[MAX_NUM_PORTS])
{
u8 tc, voq, port_id, num_tcs_in_port;
- bool eagle_workaround = ENABLE_EAGLE_ENG1_WORKAROUND(p_hwfn);
/* clear PBF lines for all VOQs */
for (voq = 0; voq < MAX_NUM_VOQS; voq++)
STORE_RT_REG(p_hwfn, PBF_CMDQ_LINES_RT_OFFSET(voq), 0);
for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
if (port_params[port_id].active) {
u16 phys_lines, phys_lines_per_tc;
+ /* find #lines to divide between active physical TCs */
phys_lines =
port_params[port_id].num_pbf_cmd_lines -
PBF_CMDQ_PURE_LB_LINES;
- if (eagle_workaround)
- phys_lines -= PBF_CMDQ_EAGLE_WORKAROUND_LINES;
/* find #lines per active physical TC */
num_tcs_in_port = 0;
for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++) {
@@ -222,14 +229,6 @@ static void ecore_cmdq_lines_rt_init(struct ecore_hwfn *p_hwfn,
/* init registers for pure LB TC */
ecore_cmdq_lines_voq_rt_init(p_hwfn, LB_VOQ(port_id),
PBF_CMDQ_PURE_LB_LINES);
- /* init registers for eagle workaround */
- if (eagle_workaround) {
- voq =
- PHYS_VOQ(port_id, EAGLE_WORKAROUND_TC,
- max_phys_tcs_per_port);
- ecore_cmdq_lines_voq_rt_init(p_hwfn, voq,
- PBF_CMDQ_EAGLE_WORKAROUND_LINES);
- }
}
}
}
@@ -262,15 +261,13 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
{
u8 tc, voq, port_id, num_tcs_in_port;
u32 usable_blocks, pure_lb_blocks, phys_blocks;
- bool eagle_workaround = ENABLE_EAGLE_ENG1_WORKAROUND(p_hwfn);
for (port_id = 0; port_id < max_ports_per_engine; port_id++) {
if (port_params[port_id].active) {
/* subtract headroom blocks */
usable_blocks =
port_params[port_id].num_btb_blocks -
BTB_HEADROOM_BLOCKS;
- if (eagle_workaround)
- usable_blocks -= BTB_EAGLE_WORKAROUND_BLOCKS;
+/* find blocks per physical TC. use factor to avoid floating arithmetic */
num_tcs_in_port = 0;
for (tc = 0; tc < NUM_OF_PHYS_TCS; tc++)
@@ -303,18 +300,8 @@ static void ecore_btb_blocks_rt_init(struct ecore_hwfn *p_hwfn,
}
/* init pure LB TC */
STORE_RT_REG(p_hwfn,
- PBF_BTB_GUARANTEED_RT_OFFSET(LB_VOQ
- (port_id)),
- pure_lb_blocks);
- /* init eagle workaround */
- if (eagle_workaround) {
- voq =
- PHYS_VOQ(port_id, EAGLE_WORKAROUND_TC,
- max_phys_tcs_per_port);
- STORE_RT_REG(p_hwfn,
- PBF_BTB_GUARANTEED_RT_OFFSET(voq),
- BTB_EAGLE_WORKAROUND_BLOCKS);
- }
+ PBF_BTB_GUARANTEED_RT_OFFSET(
+ LB_VOQ(port_id)), pure_lb_blocks);
}
}
}
@@ -363,6 +350,10 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
u8 voq =
VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
bool is_vf_pq = (i >= num_pf_pqs);
+ /* added to avoid compilation warning */
+ u32 max_qm_global_rls = MAX_QM_GLOBAL_RLS;
+ bool rl_valid = pq_params[i].rl_valid &&
+ pq_params[i].vport_id < max_qm_global_rls;
/* update first Tx PQ of VPORT/TC */
u8 vport_id_in_pf = pq_params[i].vport_id - start_vport;
u16 first_tx_pq_id =
@@ -379,14 +370,19 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
(voq << QM_WFQ_VP_PQ_VOQ_SHIFT) | (pf_id <<
QM_WFQ_VP_PQ_PF_SHIFT));
}
+ /* check RL ID */
+ if (pq_params[i].rl_valid && pq_params[i].vport_id >=
+ max_qm_global_rls)
+ DP_NOTICE(p_hwfn, true,
+ "Invalid VPORT ID for rate limiter config");
/* fill PQ map entry */
OSAL_MEMSET(&tx_pq_map, 0, sizeof(tx_pq_map));
SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_PQ_VALID, 1);
SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_RL_VALID,
- is_vf_pq ? 1 : 0);
+ rl_valid ? 1 : 0);
SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_VP_PQ_ID, first_tx_pq_id);
SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_RL_ID,
- is_vf_pq ? pq_params[i].vport_id : 0);
+ rl_valid ? pq_params[i].vport_id : 0);
SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_VOQ, voq);
SET_FIELD(tx_pq_map.reg, QM_RF_PQ_MAP_WRR_WEIGHT_GROUP,
pq_params[i].wrr_group);
@@ -398,6 +394,9 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
mem_addr_4kb);
/* check if VF PQ */
if (is_vf_pq) {
+ /* if PQ is associated with a VF, add indication to PQ
+ * VF mask
+ */
tx_pq_vf_mask[pq_id / tx_pq_vf_mask_width] |=
(1 << (pq_id % tx_pq_vf_mask_width));
mem_addr_4kb += vport_pq_mem_4kb;
@@ -409,6 +408,9 @@ static void ecore_tx_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
for (i = 0; i < num_tx_pq_vf_masks; i++) {
if (tx_pq_vf_mask[i]) {
if (is_bb_a0) {
+ /* A0-only: perform read-modify-write
+ * (fixed in B0)
+ */
u32 curr_mask =
is_first_pf ? 0 : ecore_rd(p_hwfn, p_ptt,
QM_REG_MAXPQSIZETXSEL_0
@@ -432,6 +434,8 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
u32 num_tids, u32 base_mem_addr_4kb)
{
u16 i, pq_id;
+/* a single other PQ group is used in each PF, where PQ group i is used in PF i */
+
u16 pq_group = pf_id;
u32 pq_size = num_pf_cids + num_tids;
u32 pq_mem_4kb = QM_PQ_MEM_4KB(pq_size);
@@ -450,7 +454,7 @@ static void ecore_other_pq_map_rt_init(struct ecore_hwfn *p_hwfn,
mem_addr_4kb += pq_mem_4kb;
}
}
-
+/* Prepare PF WFQ runtime init values for specified PF. Return -1 on error. */
static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
u8 port_id,
u8 pf_id,
@@ -474,15 +478,14 @@ static int ecore_pf_wfq_rt_init(struct ecore_hwfn *p_hwfn,
u8 voq =
VOQ(port_id, pq_params[i].tc_id, max_phys_tcs_per_port);
OVERWRITE_RT_REG(p_hwfn, crd_reg_offset + voq * MAX_NUM_PFS_BB,
- QM_WFQ_CRD_REG_SIGN_BIT);
+ (u32)QM_WFQ_CRD_REG_SIGN_BIT);
}
STORE_RT_REG(p_hwfn, QM_REG_WFQPFUPPERBOUND_RT_OFFSET + pf_id,
- QM_WFQ_UPPER_BOUND | QM_WFQ_CRD_REG_SIGN_BIT);
+ QM_WFQ_UPPER_BOUND | (u32)QM_WFQ_CRD_REG_SIGN_BIT);
STORE_RT_REG(p_hwfn, QM_REG_WFQPFWEIGHT_RT_OFFSET + pf_id, inc_val);
return 0;
}
-
-/* Prepare PF RL runtime init values for the specified PF. Return -1 on err */
+/* Prepare PF RL runtime init values for specified PF. Return -1 on error. */
static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl)
{
u32 inc_val = QM_RL_INC_VAL(pf_rl);
@@ -491,13 +494,15 @@ static int ecore_pf_rl_rt_init(struct ecore_hwfn *p_hwfn, u8 pf_id, u32 pf_rl)
return -1;
}
STORE_RT_REG(p_hwfn, QM_REG_RLPFCRD_RT_OFFSET + pf_id,
- QM_RL_CRD_REG_SIGN_BIT);
+ (u32)QM_RL_CRD_REG_SIGN_BIT);
STORE_RT_REG(p_hwfn, QM_REG_RLPFUPPERBOUND_RT_OFFSET + pf_id,
- QM_RL_UPPER_BOUND | QM_RL_CRD_REG_SIGN_BIT);
+ QM_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT);
STORE_RT_REG(p_hwfn, QM_REG_RLPFINCVAL_RT_OFFSET + pf_id, inc_val);
return 0;
}
-
+/* Prepare VPORT WFQ runtime init values for the specified VPORTs. Return -1 on
+ * error.
+ */
static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
u8 num_vports,
struct init_qm_vport_params *vport_params)
@@ -513,6 +518,9 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
"Invalid VPORT WFQ weight config");
return -1;
}
+ /* each VPORT can have several VPORT PQ IDs for
+ * different TCs
+ */
for (tc = 0; tc < NUM_OF_TCS; tc++) {
u16 vport_pq_id =
vport_params[i].first_tx_pq_id[tc];
@@ -520,7 +528,7 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
STORE_RT_REG(p_hwfn,
QM_REG_WFQVPCRD_RT_OFFSET +
vport_pq_id,
- QM_WFQ_CRD_REG_SIGN_BIT);
+ (u32)QM_WFQ_CRD_REG_SIGN_BIT);
STORE_RT_REG(p_hwfn,
QM_REG_WFQVPWEIGHT_RT_OFFSET
+ vport_pq_id, inc_val);
@@ -531,13 +539,20 @@ static int ecore_vp_wfq_rt_init(struct ecore_hwfn *p_hwfn,
return 0;
}
-/* Prepare VPORT RL runtime init values for specified VPORT. Ret -1 on error. */
+/* Prepare VPORT RL runtime init values for the specified VPORTs.
+ * Return -1 on error.
+ */
static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
u8 start_vport,
u8 num_vports,
struct init_qm_vport_params *vport_params)
{
u8 i, vport_id;
+ if (start_vport + num_vports >= MAX_QM_GLOBAL_RLS) {
+ DP_NOTICE(p_hwfn, true,
+ "Invalid VPORT ID for rate limiter configuration");
+ return -1;
+ }
/* go over all PF VPORTs */
for (i = 0, vport_id = start_vport; i < num_vports; i++, vport_id++) {
u32 inc_val = QM_RL_INC_VAL(vport_params[i].vport_rl);
@@ -547,10 +562,10 @@ static int ecore_vport_rl_rt_init(struct ecore_hwfn *p_hwfn,
return -1;
}
STORE_RT_REG(p_hwfn, QM_REG_RLGLBLCRD_RT_OFFSET + vport_id,
- QM_RL_CRD_REG_SIGN_BIT);
+ (u32)QM_RL_CRD_REG_SIGN_BIT);
STORE_RT_REG(p_hwfn,
QM_REG_RLGLBLUPPERBOUND_RT_OFFSET + vport_id,
- QM_RL_UPPER_BOUND | QM_RL_CRD_REG_SIGN_BIT);
+ QM_RL_UPPER_BOUND | (u32)QM_RL_CRD_REG_SIGN_BIT);
STORE_RT_REG(p_hwfn, QM_REG_RLGLBLINCVAL_RT_OFFSET + vport_id,
inc_val);
}
@@ -568,7 +583,7 @@ static bool ecore_poll_on_qm_cmd_ready(struct ecore_hwfn *p_hwfn,
}
/* check if timeout while waiting for SDM command ready */
if (i == QM_STOP_CMD_MAX_POLL_COUNT) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
+ DP_VERBOSE(p_hwfn, ECORE_MSG_DEBUG,
"Timeout waiting for QM SDM cmd ready signal\n");
return false;
}
@@ -610,7 +625,6 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
struct init_qm_port_params
port_params[MAX_NUM_PORTS])
{
- u8 port_id;
/* init AFullOprtnstcCrdMask */
u32 mask =
(QM_OPPOR_LINE_VOQ_DEF << QM_RF_OPPORTUNISTIC_MASK_LINEVOQ_SHIFT) |
@@ -717,7 +731,7 @@ int ecore_init_pf_rl(struct ecore_hwfn *p_hwfn,
return -1;
}
ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFCRD + pf_id * 4,
- QM_RL_CRD_REG_SIGN_BIT);
+ (u32)QM_RL_CRD_REG_SIGN_BIT);
ecore_wr(p_hwfn, p_ptt, QM_REG_RLPFINCVAL + pf_id * 4, inc_val);
return 0;
}
@@ -746,14 +760,20 @@ int ecore_init_vport_wfq(struct ecore_hwfn *p_hwfn,
int ecore_init_vport_rl(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt, u8 vport_id, u32 vport_rl)
{
- u32 inc_val = QM_RL_INC_VAL(vport_rl);
+ u32 inc_val, max_qm_global_rls = MAX_QM_GLOBAL_RLS;
+ if (vport_id >= max_qm_global_rls) {
+ DP_NOTICE(p_hwfn, true,
+ "Invalid VPORT ID for rate limiter configuration");
+ return -1;
+ }
+ inc_val = QM_RL_INC_VAL(vport_rl);
if (inc_val > QM_RL_MAX_INC_VAL) {
DP_NOTICE(p_hwfn, true,
"Invalid VPORT rate-limit configuration");
return -1;
}
ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLCRD + vport_id * 4,
- QM_RL_CRD_REG_SIGN_BIT);
+ (u32)QM_RL_CRD_REG_SIGN_BIT);
ecore_wr(p_hwfn, p_ptt, QM_REG_RLGLBLINCVAL + vport_id * 4, inc_val);
return 0;
}
@@ -1132,22 +1152,22 @@ void ecore_init_brb_ram(struct ecore_hwfn *p_hwfn,
/*In MF should be called once per engine to set EtherType of OuterTag*/
void ecore_set_engine_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 eth_type)
+ struct ecore_ptt *p_ptt, u32 ethType)
{
/* update PRS register */
- STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET, eth_type);
+ STORE_RT_REG(p_hwfn, PRS_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
/* update NIG register */
- STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET, eth_type);
+ STORE_RT_REG(p_hwfn, NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
/* update PBF register */
- STORE_RT_REG(p_hwfn, PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET, eth_type);
+ STORE_RT_REG(p_hwfn, PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET, ethType);
}
/*In MF should be called once per port to set EtherType of OuterTag*/
void ecore_set_port_mf_ovlan_eth_type(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 eth_type)
+ struct ecore_ptt *p_ptt, u32 ethType)
{
/* update DORQ register */
- STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, eth_type);
+ STORE_RT_REG(p_hwfn, DORQ_REG_TAG1_ETHERTYPE_RT_OFFSET, ethType);
}
#define SET_TUNNEL_TYPE_ENABLE_BIT(var, offset, enable) \
@@ -1159,7 +1179,7 @@ void ecore_set_vxlan_dest_port(struct ecore_hwfn *p_hwfn,
/* update PRS register */
ecore_wr(p_hwfn, p_ptt, PRS_REG_VXLAN_PORT, dest_port);
/* update NIG register */
- ecore_wr(p_hwfn, p_ptt, NIG_REG_VXLAN_PORT, dest_port);
+ ecore_wr(p_hwfn, p_ptt, NIG_REG_VXLAN_CTRL, dest_port);
/* update PBF register */
ecore_wr(p_hwfn, p_ptt, PBF_REG_VXLAN_PORT, dest_port);
}
@@ -1176,7 +1196,7 @@ void ecore_set_vxlan_enable(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
if (reg_val) {
ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
- PRS_ETH_TUNN_FIC_FORMAT);
+ (u32)PRS_ETH_TUNN_FIC_FORMAT);
}
/* update NIG register */
reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
@@ -1205,7 +1225,7 @@ void ecore_set_gre_enable(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
if (reg_val) {
ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
- PRS_ETH_TUNN_FIC_FORMAT);
+ (u32)PRS_ETH_TUNN_FIC_FORMAT);
}
/* update NIG register */
reg_val = ecore_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
@@ -1256,7 +1276,7 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
if (reg_val) {
ecore_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
- PRS_ETH_TUNN_FIC_FORMAT);
+ (u32)PRS_ETH_TUNN_FIC_FORMAT);
}
/* update NIG register */
ecore_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE,
@@ -1277,3 +1297,179 @@ void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN,
ip_geneve_enable ? 1 : 0);
}
+
+#define T_ETH_PACKET_ACTION_GFT_EVENTID 23
+#define PARSER_ETH_CONN_GFT_ACTION_CM_HDR 272
+#define T_ETH_PACKET_MATCH_RFS_EVENTID 25
+#define PARSER_ETH_CONN_CM_HDR (0x0)
+#define CAM_LINE_SIZE sizeof(u32)
+#define RAM_LINE_SIZE sizeof(u64)
+#define REG_SIZE sizeof(u32)
+
+void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ /* set RFS event ID to be awakened in Tstorm by PRS */
+ u32 rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
+ rfs_cm_hdr_event_id |= T_ETH_PACKET_ACTION_GFT_EVENTID <<
+ PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
+ rfs_cm_hdr_event_id |= PARSER_ETH_CONN_GFT_ACTION_CM_HDR <<
+ PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
+}
+
+void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u16 pf_id,
+ bool tcp,
+ bool udp,
+ bool ipv4,
+ bool ipv6)
+{
+ u32 rfs_cm_hdr_event_id = ecore_rd(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT);
+ union gft_cam_line_union camLine;
+ struct gft_ram_line ramLine;
+ u32 *ramLinePointer = (u32 *)&ramLine;
+ int i;
+ if (!ipv6 && !ipv4)
+ DP_NOTICE(p_hwfn, true,
+ "set_rfs_mode_enable: must accept at "
+ "least on of - ipv4 or ipv6");
+ if (!tcp && !udp)
+ DP_NOTICE(p_hwfn, true,
+ "set_rfs_mode_enable: must accept at "
+ "least on of - udp or tcp");
+ /* set RFS event ID to be awakened in Tstorm by PRS */
+ rfs_cm_hdr_event_id |= T_ETH_PACKET_MATCH_RFS_EVENTID <<
+ PRS_REG_CM_HDR_GFT_EVENT_ID_SHIFT;
+ rfs_cm_hdr_event_id |= PARSER_ETH_CONN_CM_HDR <<
+ PRS_REG_CM_HDR_GFT_CM_HDR_SHIFT;
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_CM_HDR_GFT, rfs_cm_hdr_event_id);
+ /* Configure Registers for RFS mode */
+/* enable gft search */
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_GFT, 1);
+ /* on a match, load only the cid in PRS, not the full context */
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_LOAD_L2_FILTER, 0);
+ camLine.cam_line_mapped.camline = 0;
+ /* cam line is now valid!! */
+ SET_FIELD(camLine.cam_line_mapped.camline,
+ GFT_CAM_LINE_MAPPED_VALID, 1);
+ /* filters are per PF!! */
+ SET_FIELD(camLine.cam_line_mapped.camline,
+ GFT_CAM_LINE_MAPPED_PF_ID_MASK, 1);
+ SET_FIELD(camLine.cam_line_mapped.camline,
+ GFT_CAM_LINE_MAPPED_PF_ID, pf_id);
+ if (!(tcp && udp)) {
+ SET_FIELD(camLine.cam_line_mapped.camline,
+ GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK, 1);
+ if (tcp)
+ SET_FIELD(camLine.cam_line_mapped.camline,
+ GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
+ GFT_PROFILE_TCP_PROTOCOL);
+ else
+ SET_FIELD(camLine.cam_line_mapped.camline,
+ GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE,
+ GFT_PROFILE_UDP_PROTOCOL);
+ }
+ if (!(ipv4 && ipv6)) {
+ SET_FIELD(camLine.cam_line_mapped.camline,
+ GFT_CAM_LINE_MAPPED_IP_VERSION_MASK, 1);
+ if (ipv4)
+ SET_FIELD(camLine.cam_line_mapped.camline,
+ GFT_CAM_LINE_MAPPED_IP_VERSION,
+ GFT_PROFILE_IPV4);
+ else
+ SET_FIELD(camLine.cam_line_mapped.camline,
+ GFT_CAM_LINE_MAPPED_IP_VERSION,
+ GFT_PROFILE_IPV6);
+ }
+ /* write characteristics to cam */
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id,
+ camLine.cam_line_mapped.camline);
+ camLine.cam_line_mapped.camline =
+ ecore_rd(p_hwfn, p_ptt, PRS_REG_GFT_CAM + CAM_LINE_SIZE * pf_id);
+ /* write line to RAM - compare to filter 4 tuple */
+ ramLine.low32bits = 0;
+ ramLine.high32bits = 0;
+ SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_DST_IP, 1);
+ SET_FIELD(ramLine.high32bits, GFT_RAM_LINE_SRC_IP, 1);
+ SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_SRC_PORT, 1);
+ SET_FIELD(ramLine.low32bits, GFT_RAM_LINE_DST_PORT, 1);
+ /* each iteration write to reg */
+ for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
+ RAM_LINE_SIZE * pf_id +
+ i * REG_SIZE, *(ramLinePointer + i));
+ /* set default profile so that no filter match will happen */
+ ramLine.low32bits = 0xffff;
+ ramLine.high32bits = 0xffff;
+ for (i = 0; i < RAM_LINE_SIZE / REG_SIZE; i++)
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_GFT_PROFILE_MASK_RAM +
+ RAM_LINE_SIZE * PRS_GFT_CAM_LINES_NO_MATCH +
+ i * REG_SIZE, *(ramLinePointer + i));
+}
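
As a minimal usage sketch, assuming an initialized p_hwfn and an acquired p_ptt window, a PF could enable RFS for TCP and UDP over both IP versions:

    ecore_set_gft_event_id_cm_hdr(p_hwfn, p_ptt);
    ecore_set_rfs_mode_enable(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
                              true /* tcp */, true /* udp */,
                              true /* ipv4 */, true /* ipv6 */);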
+
+/* Configure VF zone size mode*/
+void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt, u16 mode,
+ bool runtime_init)
+{
+ u32 msdm_vf_size_log = MSTORM_VF_ZONE_DEFAULT_SIZE_LOG;
+ u32 msdm_vf_offset_mask;
+ if (mode == VF_ZONE_SIZE_MODE_DOUBLE)
+ msdm_vf_size_log += 1;
+ else if (mode == VF_ZONE_SIZE_MODE_QUAD)
+ msdm_vf_size_log += 2;
+ msdm_vf_offset_mask = (1 << msdm_vf_size_log) - 1;
+ if (runtime_init) {
+ STORE_RT_REG(p_hwfn,
+ PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET,
+ msdm_vf_size_log);
+ STORE_RT_REG(p_hwfn,
+ PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET,
+ msdm_vf_offset_mask);
+ } else {
+ ecore_wr(p_hwfn, p_ptt,
+ PGLUE_B_REG_MSDM_VF_SHIFT_B, msdm_vf_size_log);
+ ecore_wr(p_hwfn, p_ptt,
+ PGLUE_B_REG_MSDM_OFFSET_MASK_B, msdm_vf_offset_mask);
+ }
+}
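
A minimal sketch of the runtime-init path, assuming it runs before the first ETH queue is started (per the header comment, p_ptt is a don't-care when runtime_init is set):

    /* double every VF zone: size_log grows by 1 and the offset mask
     * widens accordingly
     */
    ecore_config_vf_zone_size_mode(p_hwfn, OSAL_NULL,
                                   VF_ZONE_SIZE_MODE_DOUBLE,
                                   true /* runtime_init */);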
+
+/* get mstorm statistics for offset by VF zone size mode*/
+u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
+ u16 stat_cnt_id,
+ u16 vf_zone_size_mode)
+{
+ u32 offset = MSTORM_QUEUE_STAT_OFFSET(stat_cnt_id);
+ if ((vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) &&
+ (stat_cnt_id > MAX_NUM_PFS)) {
+ if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
+ offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
+ (stat_cnt_id - MAX_NUM_PFS);
+ else if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_QUAD)
+ offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
+ (stat_cnt_id - MAX_NUM_PFS);
+ }
+ return offset;
+}
+
+/* get mstorm VF producer offset by VF zone size mode*/
+u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn,
+ u8 vf_id,
+ u8 vf_queue_id,
+ u16 vf_zone_size_mode)
+{
+ u32 offset = MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id);
+ if (vf_zone_size_mode != VF_ZONE_SIZE_MODE_DEFAULT) {
+ if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_DOUBLE)
+ offset += (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
+ vf_id;
+ else if (vf_zone_size_mode == VF_ZONE_SIZE_MODE_QUAD)
+ offset += 3 * (1 << MSTORM_VF_ZONE_DEFAULT_SIZE_LOG) *
+ vf_id;
+ }
+ return offset;
+}
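
To make the scaling concrete, a sketch assuming MSTORM_VF_ZONE_DEFAULT_SIZE_LOG is 7 (a 128B default zone) and a hypothetical VF counter stat_cnt_id > MAX_NUM_PFS:

    /* QUAD mode adds 3 * (1 << 7) = 384 bytes per VF counter beyond
     * the default layout
     */
    u32 off = ecore_get_mstorm_queue_stat_offset(p_hwfn, stat_cnt_id,
                                                 VF_ZONE_SIZE_MODE_QUAD);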
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index f5df764..9df0e7d 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -28,10 +28,12 @@ struct init_qm_pq_params;
u32 ecore_qm_pf_mem_size(u8 pf_id,
u32 num_pf_cids,
u32 num_vf_cids,
- u32 num_tids, u16 num_pf_pqs, u16 num_vf_pqs);
+ u32 num_tids,
+ u16 num_pf_pqs,
+ u16 num_vf_pqs);
/**
- * @brief ecore_qm_common_rt_init -
- * Prepare QM runtime init values for the engine phase
+ * @brief ecore_qm_common_rt_init - Prepare QM runtime init values for engine
+ * phase
*
* @param p_hwfn
* @param max_ports_per_engine - max number of ports per engine in HW
@@ -51,9 +53,35 @@ int ecore_qm_common_rt_init(struct ecore_hwfn *p_hwfn,
bool pf_wfq_en,
bool vport_rl_en,
bool vport_wfq_en,
- struct init_qm_port_params
- port_params[MAX_NUM_PORTS]);
-
+ struct init_qm_port_params port_params[MAX_NUM_PORTS]);
+/**
+ * @brief ecore_qm_pf_rt_init - Prepare QM runtime init values for the PF phase
+ *
+ * @param p_hwfn
+ * @param p_ptt - ptt window used for writing the registers
+ * @param port_id - port ID
+ * @param pf_id - PF ID
+ * @param max_phys_tcs_per_port - max number of physical TCs per port in HW
+ * @param is_first_pf - 1 = first PF in engine, 0 = otherwise
+ * @param num_pf_cids - number of connections used by this PF
+ * @param num_vf_cids - number of connections used by VFs of this PF
+ * @param num_tids - number of tasks used by this PF
+ * @param start_pq - first Tx PQ ID associated with this PF
+ * @param num_pf_pqs - number of Tx PQs associated with this PF (non-VF)
+ * @param num_vf_pqs - number of Tx PQs associated with a VF
+ * @param start_vport - first VPORT ID associated with this PF
+ * @param num_vports - number of VPORTs associated with this PF
+ * @param pf_wfq - WFQ weight. if PF WFQ is globally disabled, the weight must
+ * be 0. otherwise, the weight must be non-zero.
+ * @param pf_rl - rate limit in Mb/sec units. a value of 0 means don't
+ * configure. ignored if PF RL is globally disabled.
+ * @param pq_params - array of size (num_pf_pqs+num_vf_pqs) with parameters for
+ * each Tx PQ associated with the specified PF.
+ * @param vport_params - array of size num_vports with parameters for each
+ * associated VPORT.
+ *
+ * @return 0 on success, -1 on error.
+ */
int ecore_qm_pf_rt_init(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u8 port_id,
@@ -287,5 +315,65 @@ void ecore_set_geneve_dest_port(struct ecore_hwfn *p_hwfn,
*/
void ecore_set_geneve_enable(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- bool eth_geneve_enable, bool ip_geneve_enable);
+ bool eth_geneve_enable,
+ bool ip_geneve_enable);
+#ifndef UNUSED_HSI_FUNC
+/**
+* @brief ecore_set_gft_event_id_cm_hdr - configure GFT event id and cm header
+*
+* @param p_ptt - ptt window used for writing the registers.
+*/
+void ecore_set_gft_event_id_cm_hdr(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
+/**
+* @brief ecore_set_rfs_mode_enable - enable and configure HW for RFS
+*
+*
+* @param p_ptt - ptt window used for writing the registers.
+* @param pf_id - pf on which to enable RFS.
+* @param tcp - set profile tcp packets.
+* @param udp - set profile udp packet.
+* @param ipv4 - set profile ipv4 packet.
+* @param ipv6 - set profile ipv6 packet.
+*/
+void ecore_set_rfs_mode_enable(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u16 pf_id,
+ bool tcp,
+ bool udp,
+ bool ipv4,
+ bool ipv6);
+#endif /* UNUSED_HSI_FUNC */
+/**
+* @brief ecore_config_vf_zone_size_mode - Configure VF zone size mode. Must be
+* used before first ETH queue started.
+*
+*
+* @param p_ptt - ptt window used for writing the registers. Don't care
+* if runtime_init used
+* @param mode - VF zone size mode. Use enum vf_zone_size_mode.
+* @param runtime_init - Set 1 to init runtime registers in engine phase. Set 0
+* if VF zone size mode configured after engine phase.
+*/
+void ecore_config_vf_zone_size_mode(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt, u16 mode,
+ bool runtime_init);
+/**
+* @brief ecore_get_mstorm_queue_stat_offset - get mstorm statistics offset by VF
+* zone size mode.
+*
+* @param stat_cnt_id - statistic counter id
+* @param vf_zone_size_mode - VF zone size mode. Use enum vf_zone_size_mode.
+*/
+u32 ecore_get_mstorm_queue_stat_offset(struct ecore_hwfn *p_hwfn,
+ u16 stat_cnt_id, u16 vf_zone_size_mode);
+/**
+* @brief ecore_get_mstorm_eth_vf_prods_offset - VF producer offset by VF zone
+* size mode.
+*
+* @param vf_id - vf id.
+* @param vf_queue_id - per VF rx queue id.
+* @param vf_zone_size_mode - vf zone size mode. Use enum vf_zone_size_mode.
+*/
+u32 ecore_get_mstorm_eth_vf_prods_offset(struct ecore_hwfn *p_hwfn, u8 vf_id,
+ u8 vf_queue_id, u16 vf_zone_size_mode);
#endif
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index 71bad30..351e946 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -525,7 +525,7 @@ void ecore_gtt_init(struct ecore_hwfn *p_hwfn)
* not too bright, but it should work on the simple FPGA/EMUL
* scenarios.
*/
- bool initialized = false; /* @DPDK */
+ static bool initialized;
int poll_cnt = 500;
u32 val;
@@ -573,7 +573,8 @@ enum _ecore_status_t ecore_init_fw_data(struct ecore_dev *p_dev,
return ECORE_INVAL;
}
- buf_hdr = (struct bin_buffer_hdr *)(uintptr_t)data;
+ /* First Dword contains metadata and should be skipped */
+ buf_hdr = (struct bin_buffer_hdr *)((uintptr_t)(data + sizeof(u32)));
offset = buf_hdr[BIN_BUF_INIT_FW_VER_INFO].offset;
fw->fw_ver_info = (struct fw_ver_info *)((uintptr_t)(data + offset));
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index 4d5543a..6fb037d 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -21,7 +21,6 @@
#include "ecore_hw_defs.h"
#include "ecore_hsi_common.h"
#include "ecore_mcp.h"
-#include "ecore_attn_values.h"
struct ecore_pi_info {
ecore_int_comp_cb_t comp_cb;
@@ -61,8 +60,6 @@ struct aeu_invert_reg_bit {
#define ATTENTION_OFFSET_SHIFT (12)
#define ATTENTION_CLEAR_ENABLE (1 << 28)
-#define ATTENTION_FW_DUMP (1 << 29)
-#define ATTENTION_PANIC_DUMP (1 << 30)
unsigned int flags;
/* Callback to call if attention will be triggered */
@@ -726,46 +723,12 @@ static enum _ecore_status_t ecore_int_assertion(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-static void ecore_int_deassertion_print_bit(struct ecore_hwfn *p_hwfn,
- struct attn_hw_reg *p_reg_desc,
- struct attn_hw_block *p_block,
- enum ecore_attention_type type,
- u32 val, u32 mask)
+static void ecore_int_attn_print(struct ecore_hwfn *p_hwfn,
+ enum block_id id, enum dbg_attn_type type,
+ bool b_clear)
{
- int j;
-#ifdef ATTN_DESC
- const char **description;
-
- if (type == ECORE_ATTN_TYPE_ATTN)
- description = p_block->int_desc;
- else
- description = p_block->prty_desc;
-#endif
-
- for (j = 0; j < p_reg_desc->num_of_bits; j++) {
- if (val & (1 << j)) {
-#ifdef ATTN_DESC
- DP_NOTICE(p_hwfn, false,
- "%s (%s): %s [reg %d [0x%08x], bit %d]%s\n",
- p_block->name,
- type == ECORE_ATTN_TYPE_ATTN ? "Interrupt" :
- "Parity",
- description[p_reg_desc->bit_attn_idx[j]],
- p_reg_desc->reg_idx,
- p_reg_desc->sts_addr, j,
- (mask & (1 << j)) ? " [MASKED]" : "");
-#else
- DP_NOTICE(p_hwfn->p_dev, false,
- "%s (%s): [reg %d [0x%08x], bit %d]%s\n",
- p_block->name,
- type == ECORE_ATTN_TYPE_ATTN ? "Interrupt" :
- "Parity",
- p_reg_desc->reg_idx,
- p_reg_desc->sts_addr, j,
- (mask & (1 << j)) ? " [MASKED]" : "");
-#endif
- }
- }
+ /* @DPDK */
+ DP_NOTICE(p_hwfn->p_dev, false, "[block_id %d type %d]\n", id, type);
}
/**
@@ -788,13 +751,7 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
u32 bitmask)
{
enum _ecore_status_t rc = ECORE_INVAL;
- u32 val, mask;
-
-#ifndef REMOVE_DBG
- u32 interrupts[20]; /* TODO- change into HSI define once supplied */
-
- OSAL_MEMSET(interrupts, 0, sizeof(u32) * 20); /* FIXME real size) */
-#endif
+ bool b_fatal = false;
DP_INFO(p_hwfn, "Deasserted attention `%s'[%08x]\n",
p_bit_name, bitmask);
@@ -806,13 +763,17 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
rc = p_aeu->cb(p_hwfn);
}
+ if (rc != ECORE_SUCCESS)
+ b_fatal = true;
+
/* Print HW block interrupt registers */
- if (p_aeu->block_index != MAX_BLOCK_ID)
- DP_NOTICE(p_hwfn->p_dev, false, "[block_id %d type %d]\n",
- p_aeu->block_index, ATTN_TYPE_INTERRUPT);
+ if (p_aeu->block_index != MAX_BLOCK_ID) {
+ ecore_int_attn_print(p_hwfn, p_aeu->block_index,
+ ATTN_TYPE_INTERRUPT, !b_fatal);
+}
/* Reach assertion if attention is fatal */
- if (rc != ECORE_SUCCESS) {
+ if (b_fatal) {
DP_NOTICE(p_hwfn, true, "`%s': Fatal attention\n",
p_bit_name);
@@ -820,7 +781,8 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
}
/* Prevent this Attention from being asserted in the future */
- if (p_aeu->flags & ATTENTION_CLEAR_ENABLE) {
+ if (p_aeu->flags & ATTENTION_CLEAR_ENABLE ||
+ p_hwfn->p_dev->attn_clr_en) {
u32 val;
u32 mask = ~bitmask;
val = ecore_rd(p_hwfn, p_hwfn->p_dpc_ptt, aeu_en_reg);
@@ -829,13 +791,6 @@ ecore_int_deassertion_aeu_bit(struct ecore_hwfn *p_hwfn,
p_bit_name);
}
- if (p_aeu->flags & (ATTENTION_FW_DUMP | ATTENTION_PANIC_DUMP)) {
- /* @@@TODO - what to dump? <yuvalmin 04/02/13> */
- DP_ERR(p_hwfn->p_dev, "`%s' - Dumps aren't implemented yet\n",
- p_aeu->bit_name);
- return ECORE_NOTIMPL;
- }
-
return rc;
}
@@ -856,15 +811,18 @@ static void ecore_int_deassertion_parity(struct ecore_hwfn *p_hwfn,
DP_INFO(p_hwfn->p_dev, "%s[%d] parity attention is set\n",
p_aeu->bit_name, bit_index);
- if (block_id != MAX_BLOCK_ID)
+ if (block_id == MAX_BLOCK_ID)
return;
+ ecore_int_attn_print(p_hwfn, block_id,
+ ATTN_TYPE_PARITY, false);
+
/* In A0, there's a single parity bit for several blocks */
if (block_id == BLOCK_BTB) {
- DP_NOTICE(p_hwfn->p_dev, false, "[block_id %d type %d]\n",
- BLOCK_OPTE, ATTN_TYPE_PARITY);
- DP_NOTICE(p_hwfn->p_dev, false, "[block_id %d type %d]\n",
- BLOCK_MCP, ATTN_TYPE_PARITY);
+ ecore_int_attn_print(p_hwfn, BLOCK_OPTE,
+ ATTN_TYPE_PARITY, false);
+ ecore_int_attn_print(p_hwfn, BLOCK_MCP,
+ ATTN_TYPE_PARITY, false);
}
}
@@ -1094,13 +1052,11 @@ void ecore_int_sp_dpc(osal_int_ptr_t hwfn_cookie)
struct ecore_pi_info *pi_info = OSAL_NULL;
struct ecore_sb_attn_info *sb_attn;
struct ecore_sb_info *sb_info;
- static int arr_size;
+ int arr_size;
u16 rc = 0;
- if (!p_hwfn) {
- DP_ERR(p_hwfn->p_dev, "DPC called - no hwfn!\n");
+ if (!p_hwfn)
return;
- }
if (!p_hwfn->p_sp_sb) {
DP_ERR(p_hwfn->p_dev, "DPC called - no p_sp_sb\n");
@@ -1275,7 +1231,7 @@ static enum _ecore_status_t ecore_int_sb_attn_alloc(struct ecore_hwfn *p_hwfn,
void *p_virt;
/* SB struct */
- p_sb = OSAL_ALLOC(p_dev, GFP_KERNEL, sizeof(struct ecore_sb_attn_info));
+ p_sb = OSAL_ALLOC(p_dev, GFP_KERNEL, sizeof(*p_sb));
if (!p_sb) {
DP_NOTICE(p_dev, true,
"Failed to allocate `struct ecore_sb_attn_info'");
@@ -1300,17 +1256,8 @@ static enum _ecore_status_t ecore_int_sb_attn_alloc(struct ecore_hwfn *p_hwfn,
}
/* coalescing timeout = timeset << (timer_res + 1) */
-#ifdef RTE_LIBRTE_QEDE_RX_COAL_US
-#define ECORE_CAU_DEF_RX_USECS RTE_LIBRTE_QEDE_RX_COAL_US
-#else
#define ECORE_CAU_DEF_RX_USECS 24
-#endif
-
-#ifdef RTE_LIBRTE_QEDE_TX_COAL_US
-#define ECORE_CAU_DEF_TX_USECS RTE_LIBRTE_QEDE_TX_COAL_US
-#else
#define ECORE_CAU_DEF_TX_USECS 48
-#endif
void ecore_init_cau_sb_entry(struct ecore_hwfn *p_hwfn,
struct cau_sb_entry *p_sb_entry,
@@ -1318,6 +1265,7 @@ void ecore_init_cau_sb_entry(struct ecore_hwfn *p_hwfn,
{
struct ecore_dev *p_dev = p_hwfn->p_dev;
u32 cau_state;
+ u8 timer_res;
OSAL_MEMSET(p_sb_entry, 0, sizeof(*p_sb_entry));
@@ -1327,28 +1275,33 @@ void ecore_init_cau_sb_entry(struct ecore_hwfn *p_hwfn,
SET_FIELD(p_sb_entry->params, CAU_SB_ENTRY_SB_TIMESET0, 0x7F);
SET_FIELD(p_sb_entry->params, CAU_SB_ENTRY_SB_TIMESET1, 0x7F);
- /* setting the time resultion to a fixed value ( = 1) */
- SET_FIELD(p_sb_entry->params, CAU_SB_ENTRY_TIMER_RES0,
- ECORE_CAU_DEF_RX_TIMER_RES);
- SET_FIELD(p_sb_entry->params, CAU_SB_ENTRY_TIMER_RES1,
- ECORE_CAU_DEF_TX_TIMER_RES);
-
cau_state = CAU_HC_DISABLE_STATE;
if (p_dev->int_coalescing_mode == ECORE_COAL_MODE_ENABLE) {
cau_state = CAU_HC_ENABLE_STATE;
- if (!p_dev->rx_coalesce_usecs) {
+ if (!p_dev->rx_coalesce_usecs)
p_dev->rx_coalesce_usecs = ECORE_CAU_DEF_RX_USECS;
- DP_INFO(p_dev, "Coalesce params rx-usecs=%u\n",
- p_dev->rx_coalesce_usecs);
- }
- if (!p_dev->tx_coalesce_usecs) {
+ if (!p_dev->tx_coalesce_usecs)
p_dev->tx_coalesce_usecs = ECORE_CAU_DEF_TX_USECS;
- DP_INFO(p_dev, "Coalesce params tx-usecs=%u\n",
- p_dev->tx_coalesce_usecs);
- }
}
+ /* Coalesce = (timeset << timer-res), timeset is 7bit wide */
+ if (p_dev->rx_coalesce_usecs <= 0x7F)
+ timer_res = 0;
+ else if (p_dev->rx_coalesce_usecs <= 0xFF)
+ timer_res = 1;
+ else
+ timer_res = 2;
+ SET_FIELD(p_sb_entry->params, CAU_SB_ENTRY_TIMER_RES0, timer_res);
+
+ if (p_dev->tx_coalesce_usecs <= 0x7F)
+ timer_res = 0;
+ else if (p_dev->tx_coalesce_usecs <= 0xFF)
+ timer_res = 1;
+ else
+ timer_res = 2;
+ SET_FIELD(p_sb_entry->params, CAU_SB_ENTRY_TIMER_RES1, timer_res);
+
SET_FIELD(p_sb_entry->data, CAU_SB_ENTRY_STATE0, cau_state);
SET_FIELD(p_sb_entry->data, CAU_SB_ENTRY_STATE1, cau_state);
}
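
A hypothetical helper mirroring the selection logic above, plus two worked values:

    static u8 cau_timer_res(u32 usecs)
    {
        /* coalesce = timeset << timer_res; timeset is 7 bits wide */
        return (usecs <= 0x7F) ? 0 : (usecs <= 0xFF) ? 1 : 2;
    }

    /* usecs = 96  -> timer_res 0, timeset 96
     * usecs = 300 -> timer_res 2, timeset = 300 >> 2 = 75
     */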
@@ -1388,17 +1341,32 @@ void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
/* Configure pi coalescing if set */
if (p_hwfn->p_dev->int_coalescing_mode == ECORE_COAL_MODE_ENABLE) {
- u8 num_tc = 1; /* @@@TBD aelior ECORE_MULTI_COS */
- u8 timeset = p_hwfn->p_dev->rx_coalesce_usecs >>
- (ECORE_CAU_DEF_RX_TIMER_RES + 1);
+ /* eth will open queues for all tcs, so configure all of them
+ * properly, rather than just the active ones
+ */
+ u8 num_tc = p_hwfn->hw_info.num_hw_tc;
+
+ u8 timeset, timer_res;
u8 i;
+ /* timeset = (coalesce >> timer-res), timeset is 7bit wide */
+ if (p_hwfn->p_dev->rx_coalesce_usecs <= 0x7F)
+ timer_res = 0;
+ else if (p_hwfn->p_dev->rx_coalesce_usecs <= 0xFF)
+ timer_res = 1;
+ else
+ timer_res = 2;
+ timeset = (u8)(p_hwfn->p_dev->rx_coalesce_usecs >> timer_res);
ecore_int_cau_conf_pi(p_hwfn, p_ptt, igu_sb_id, RX_PI,
ECORE_COAL_RX_STATE_MACHINE, timeset);
- timeset = p_hwfn->p_dev->tx_coalesce_usecs >>
- (ECORE_CAU_DEF_TX_TIMER_RES + 1);
-
+ if (p_hwfn->p_dev->tx_coalesce_usecs <= 0x7F)
+ timer_res = 0;
+ else if (p_hwfn->p_dev->tx_coalesce_usecs <= 0xFF)
+ timer_res = 1;
+ else
+ timer_res = 2;
+ timeset = (u8)(p_hwfn->p_dev->tx_coalesce_usecs >> timer_res);
for (i = 0; i < num_tc; i++) {
ecore_int_cau_conf_pi(p_hwfn, p_ptt,
igu_sb_id, TX_PI(i),
@@ -1572,10 +1540,10 @@ static enum _ecore_status_t ecore_int_sp_sb_alloc(struct ecore_hwfn *p_hwfn,
/* SB struct */
p_sb =
OSAL_ALLOC(p_hwfn->p_dev, GFP_KERNEL,
- sizeof(struct ecore_sb_sp_info));
+ sizeof(*p_sb));
if (!p_sb) {
DP_NOTICE(p_hwfn, true,
- "Failed to allocate `struct ecore_sb_info'");
+ "Failed to allocate `struct ecore_sb_info'\n");
return ECORE_NOMEM;
}
@@ -1644,14 +1612,14 @@ void ecore_int_igu_enable_int(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
enum ecore_int_mode int_mode)
{
- u32 igu_pf_conf = IGU_PF_CONF_FUNC_EN;
+ u32 igu_pf_conf = IGU_PF_CONF_FUNC_EN | IGU_PF_CONF_ATTN_BIT_EN;
#ifndef ASIC_ONLY
- if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
+ if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
DP_INFO(p_hwfn, "FPGA - don't enable ATTN generation in IGU\n");
- else
+ igu_pf_conf &= ~IGU_PF_CONF_ATTN_BIT_EN;
+ }
#endif
- igu_pf_conf |= IGU_PF_CONF_ATTN_BIT_EN;
p_hwfn->p_dev->int_mode = int_mode;
switch (p_hwfn->p_dev->int_mode) {
@@ -1706,12 +1674,6 @@ ecore_int_igu_enable(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
enum _ecore_status_t rc = ECORE_SUCCESS;
u32 tmp;
- /* @@@tmp - Mask General HW attentions 0-31, Enable 32-36 */
- tmp = ecore_rd(p_hwfn, p_ptt, MISC_REG_AEU_ENABLE4_IGU_OUT_0);
- tmp |= 0xf;
- ecore_wr(p_hwfn, p_ptt, MISC_REG_AEU_ENABLE3_IGU_OUT_0, 0);
- ecore_wr(p_hwfn, p_ptt, MISC_REG_AEU_ENABLE4_IGU_OUT_0, tmp);
-
/* @@@tmp - Starting with MFW 8.2.1.0 we've started hitting AVS stop
* attentions. Since we're waiting for BRCM answer regarding this
* attention, in the meanwhile we simply mask it.
@@ -1752,7 +1714,7 @@ void ecore_int_igu_disable_int(struct ecore_hwfn *p_hwfn,
}
#define IGU_CLEANUP_SLEEP_LENGTH (1000)
-void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
+static void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 sb_id, bool cleanup_set, u16 opaque_fid)
{
@@ -1811,7 +1773,7 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 sb_id, u16 opaque, bool b_set)
{
- int pi;
+ int pi, i;
/* Set */
if (b_set)
@@ -1820,6 +1782,23 @@ void ecore_int_igu_init_pure_rt_single(struct ecore_hwfn *p_hwfn,
/* Clear */
ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, sb_id, 0, opaque);
+ /* Wait for the IGU SB to cleanup */
+ for (i = 0; i < IGU_CLEANUP_SLEEP_LENGTH; i++) {
+ u32 val;
+
+ val = ecore_rd(p_hwfn, p_ptt,
+ IGU_REG_WRITE_DONE_PENDING +
+ ((sb_id / 32) * 4));
+ if (val & (1 << (sb_id % 32)))
+ OSAL_UDELAY(10);
+ else
+ break;
+ }
+ if (i == IGU_CLEANUP_SLEEP_LENGTH)
+ DP_NOTICE(p_hwfn, true,
+ "Failed SB[0x%08x] still appearing in WRITE_DONE_PENDING\n",
+ sb_id);
+
/* Clear the CAU for the SB */
for (pi = 0; pi < 12; pi++)
ecore_wr(p_hwfn, p_ptt,
@@ -1895,8 +1874,8 @@ enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn *p_hwfn,
{
struct ecore_igu_info *p_igu_info;
struct ecore_igu_block *p_block;
+ u32 min_vf = 0, max_vf = 0, val;
u16 sb_id, last_iov_sb_id = 0;
- u32 min_vf, max_vf, val;
u16 prev_sb_id = 0xFF;
p_hwfn->hw_info.p_igu_info = OSAL_ALLOC(p_hwfn->p_dev,
@@ -1915,16 +1894,14 @@ enum _ecore_status_t ecore_int_igu_read_cam(struct ecore_hwfn *p_hwfn,
p_igu_info->igu_dsb_id = 0xffff;
p_igu_info->igu_base_sb_iov = 0xffff;
-#ifdef CONFIG_ECORE_SRIOV
- min_vf = p_hwfn->hw_info.first_vf_in_pf;
- max_vf = p_hwfn->hw_info.first_vf_in_pf +
- p_hwfn->p_dev->sriov_info.total_vfs;
-#else
- min_vf = 0;
- max_vf = 0;
-#endif
+ if (p_hwfn->p_dev->p_iov_info) {
+ struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info;
- for (sb_id = 0; sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
+ min_vf = p_iov->first_vf_in_pf;
+ max_vf = p_iov->first_vf_in_pf + p_iov->total_vfs;
+ }
+ for (sb_id = 0;
+ sb_id < ECORE_MAPPING_MEMORY_SIZE(p_hwfn->p_dev);
sb_id++) {
p_block = &p_igu_info->igu_map.igu_blocks[sb_id];
val = ecore_int_igu_read_cam_block(p_hwfn, p_ptt, sb_id);
@@ -2126,12 +2103,12 @@ u16 ecore_int_queue_id_from_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
} else if ((sb_id >= p_info->igu_base_sb_iov) &&
(sb_id < p_info->igu_base_sb_iov + p_info->igu_sb_cnt_iov)) {
return sb_id - p_info->igu_base_sb_iov + p_info->igu_sb_cnt;
- }
-
+ } else {
DP_NOTICE(p_hwfn, true, "SB %d not in range for function\n",
sb_id);
return 0;
}
+}
void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
{
@@ -2140,3 +2117,45 @@ void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev)
for_each_hwfn(p_dev, i)
p_dev->hwfns[i].b_int_requested = false;
}
+
+void ecore_int_attn_clr_enable(struct ecore_dev *p_dev, bool clr_enable)
+{
+ p_dev->attn_clr_en = clr_enable;
+}
+
+enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u8 timer_res, u16 sb_id, bool tx)
+{
+ enum _ecore_status_t rc;
+ struct cau_sb_entry sb_entry;
+
+ if (!p_hwfn->hw_init_done) {
+ DP_ERR(p_hwfn, "hardware not initialized yet\n");
+ return ECORE_INVAL;
+ }
+
+ rc = ecore_dmae_grc2host(p_hwfn, p_ptt, CAU_REG_SB_VAR_MEMORY +
+ sb_id * sizeof(u64),
+ (u64)(osal_uintptr_t)&sb_entry, 2, 0);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(p_hwfn, "dmae_grc2host failed %d\n", rc);
+ return rc;
+ }
+
+ if (tx)
+ SET_FIELD(sb_entry.params, CAU_SB_ENTRY_TIMER_RES1, timer_res);
+ else
+ SET_FIELD(sb_entry.params, CAU_SB_ENTRY_TIMER_RES0, timer_res);
+
+ rc = ecore_dmae_host2grc(p_hwfn, p_ptt,
+ (u64)(osal_uintptr_t)&sb_entry,
+ CAU_REG_SB_VAR_MEMORY +
+ sb_id * sizeof(u64), 2, 0);
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(p_hwfn, "dmae_host2grc failed %d\n", rc);
+ return rc;
+ }
+
+ return rc;
+}
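
A minimal usage sketch, assuming hw init has completed and sb_id 5 is a valid status block; tx=false selects the Rx timer (TIMER_RES0):

    enum _ecore_status_t rc;

    rc = ecore_int_set_timer_res(p_hwfn, p_ptt, 2 /* timer_res */,
                                 5 /* sb_id */, false /* rx */);
    if (rc != ECORE_SUCCESS)
        DP_ERR(p_hwfn, "failed to set SB timer resolution\n");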
diff --git a/drivers/net/qede/base/ecore_int.h b/drivers/net/qede/base/ecore_int.h
index eeec8ca..45358b9 100644
--- a/drivers/net/qede/base/ecore_int.h
+++ b/drivers/net/qede/base/ecore_int.h
@@ -122,22 +122,6 @@ u16 ecore_int_get_sp_sb_id(struct ecore_hwfn *p_hwfn);
* @param p_hwfn
* @param p_ptt
* @param sb_id - igu status block id
- * @param cleanup_set - set(1) / clear(0)
- * @param opaque_fid - the function for which to perform
- * cleanup, for example a PF on behalf of
- * its VFs.
- */
-void ecore_int_igu_cleanup_sb(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 sb_id, bool cleanup_set, u16 opaque_fid);
-
-/**
- * @brief Status block cleanup. Should be called for each status
- * block that will be used -> both PF / VF
- *
- * @param p_hwfn
- * @param p_ptt
- * @param sb_id - igu status block id
* @param opaque - opaque fid of the sb owner.
* @param cleanup_set - set(1) / clear(0)
*/
@@ -223,6 +207,9 @@ void ecore_init_cau_sb_entry(struct ecore_hwfn *p_hwfn,
struct cau_sb_entry *p_sb_entry, u8 pf_id,
u16 vf_number, u8 vf_valid);
+enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u8 timer_res, u16 sb_id, bool tx);
#ifndef ASIC_ONLY
#define ECORE_MAPPING_MEMORY_SIZE(dev) \
((CHIP_REV_IS_SLOW(dev) && (!(dev)->b_is_emul_full)) ? \
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index f6db807..fc873e7 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -274,4 +274,15 @@ void ecore_int_get_num_sbs(struct ecore_hwfn *p_hwfn,
*/
void ecore_int_disable_post_isr_release(struct ecore_dev *p_dev);
+/**
+ * @brief ecore_int_attn_clr_enable - sets whether the general behavior is
+ * preventing attentions from being reasserted, or following the
+ * attributes of the specific attention.
+ *
+ * @param p_dev
+ * @param clr_enable
+ *
+ */
+void ecore_int_attn_clr_enable(struct ecore_dev *p_dev, bool clr_enable);
+
#endif
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 0085726..14f3f47 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -9,14 +9,17 @@
#ifndef __ECORE_SRIOV_API_H__
#define __ECORE_SRIOV_API_H__
+#include "common_hsi.h"
#include "ecore_status.h"
+#define ECORE_ETH_VF_NUM_MAC_FILTERS 1
+#define ECORE_ETH_VF_NUM_VLAN_FILTERS 2
#define ECORE_VF_ARRAY_LENGTH (3)
#define IS_VF(p_dev) ((p_dev)->b_is_vf)
#define IS_PF(p_dev) (!((p_dev)->b_is_vf))
#ifdef CONFIG_ECORE_SRIOV
-#define IS_PF_SRIOV(p_hwfn) (!!((p_hwfn)->p_dev->sriov_info.total_vfs))
+#define IS_PF_SRIOV(p_hwfn) (!!((p_hwfn)->p_dev->p_iov_info))
#else
#define IS_PF_SRIOV(p_hwfn) (0)
#endif
@@ -39,6 +42,18 @@ enum ecore_iov_vport_update_flag {
ECORE_IOV_VP_UPDATE_MAX = 8,
};
+/* PF to VF STATUS is part of vfpf-channel API
+ * and must be forward compatible
+*/
+enum ecore_iov_pf_to_vf_status {
+ PFVF_STATUS_WAITING = 0,
+ PFVF_STATUS_SUCCESS,
+ PFVF_STATUS_FAILURE,
+ PFVF_STATUS_NOT_SUPPORTED,
+ PFVF_STATUS_NO_RESOURCE,
+ PFVF_STATUS_FORCED,
+};
+
struct ecore_mcp_link_params;
struct ecore_mcp_link_state;
struct ecore_mcp_link_capabilities;
@@ -100,32 +115,58 @@ struct ecore_iov_sw_mbx {
*
* @return struct ecore_iov_sw_mbx*
*/
-struct ecore_iov_sw_mbx *ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
+struct ecore_iov_sw_mbx*
+ecore_iov_get_vf_sw_mbx(struct ecore_hwfn *p_hwfn,
u16 rel_vf_id);
#endif
+/* This struct is part of ecore_dev and contains data relevant to all hwfns;
+ * Initialized only if SR-IOV capability is exposed in PCIe config space.
+ */
+struct ecore_hw_sriov_info {
+ /* standard SRIOV capability fields, mostly for debugging */
+ int pos; /* capability position */
+ int nres; /* number of resources */
+ u32 cap; /* SR-IOV Capabilities */
+ u16 ctrl; /* SR-IOV Control */
+ u16 total_vfs; /* total VFs associated with the PF */
+ u16 num_vfs; /* number of vfs that have been started */
+ u16 initial_vfs; /* initial VFs associated with the PF */
+ u16 nr_virtfn; /* number of VFs available */
+ u16 offset; /* first VF Routing ID offset */
+ u16 stride; /* following VF stride */
+ u16 vf_device_id; /* VF device id */
+ u32 pgsz; /* page size for BAR alignment */
+ u8 link; /* Function Dependency Link */
+
+ u32 first_vf_in_pf;
+};
+
#ifdef CONFIG_ECORE_SRIOV
+#ifndef LINUX_REMOVE
/**
* @brief mark/clear all VFs before/after an incoming PCIe sriov
* disable.
*
- * @param p_hwfn
+ * @param p_dev
* @param to_disable
*/
-void ecore_iov_set_vfs_to_disable(struct ecore_hwfn *p_hwfn, u8 to_disable);
+void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev,
+ u8 to_disable);
/**
- * @brief mark/clear chosen VFs before/after an incoming PCIe
+ * @brief mark/clear chosen VF before/after an incoming PCIe
* sriov disable.
*
- * @param p_hwfn
+ * @param p_dev
+ * @param rel_vf_id
* @param to_disable
*/
-void ecore_iov_set_vf_to_disable(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id, u8 to_disable);
+void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
+ u16 rel_vf_id,
+ u8 to_disable);
/**
- *
* @brief ecore_iov_init_hw_for_vf - initialize the HW for
* enabling access of a VF. Also includes preparing the
* IGU for VF access. This needs to be called AFTER hw is
@@ -171,7 +212,6 @@ enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u16 rel_vf_id);
-#ifndef LINUX_REMOVE
/**
* @brief ecore_iov_set_vf_ctx - set a context for a given VF
*
@@ -182,8 +222,8 @@ enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
- u16 vf_id, void *ctx);
-#endif
+ u16 vf_id,
+ void *ctx);
/**
* @brief FLR cleanup for all VFs
@@ -333,17 +373,6 @@ enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn,
u8 *mac, int vfid);
/**
- * @brief Set forced VLAN [pvid] in PFs copy of bulletin board
- * and configures FW/HW to support the configuration.
- * Setting of pvid 0 would clear the feature.
- * @param p_hwfn
- * @param pvid
- * @param vfid
- */
-void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
- u16 pvid, int vfid);
-
-/**
* @brief Set default behaviour of VF in case no vlans are configured for it
* whether to accept only untagged traffic or all.
* Must be called prior to the VF vport-start.
@@ -380,6 +409,17 @@ void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
u8 *p_vport_id);
/**
+ * @brief Set forced VLAN [pvid] in PFs copy of bulletin board
+ * and configures FW/HW to support the configuration.
+ * Setting of pvid 0 would clear the feature.
+ * @param p_hwfn
+ * @param pvid
+ * @param vfid
+ */
+void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
+ u16 pvid, int vfid);
+
+/**
* @brief Check if VF has VPORT instance. This can be used
* to check if VPORT is active.
*
@@ -953,4 +993,22 @@ static OSAL_INLINE enum _ecore_status_t ecore_iov_configure_min_tx_rate(
return ECORE_INVAL;
}
#endif
+
+/**
+ * @brief - Given a VF index, return index of next [including that] active VF.
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return MAX_NUM_VFS in case no further active VFs, otherwise index.
+ */
+u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id);
+
+#endif /* CONFIG_ECORE_SRIOV */
+
+#define ecore_for_each_vf(_p_hwfn, _i) \
+ for (_i = ecore_iov_get_next_active_vf(_p_hwfn, 0); \
+ _i < MAX_NUM_VFS; \
+ _i = ecore_iov_get_next_active_vf(_p_hwfn, _i + 1))
+
#endif
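
A minimal sketch of iterating active VFs with the new helper; handle_vf() is a hypothetical per-VF callback:

    u16 i;

    ecore_for_each_vf(p_hwfn, i) {
        /* i is the relative index of an active VF */
        handle_vf(p_hwfn, i); /* hypothetical */
    }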
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index 7cabdf7..aad9012 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -13,103 +13,177 @@
#define YSTORM_FLOW_CONTROL_MODE_OFFSET (IRO[0].base)
#define YSTORM_FLOW_CONTROL_MODE_SIZE (IRO[0].size)
/* Tstorm port statistics */
-#define TSTORM_PORT_STAT_OFFSET(port_id) \
-(IRO[1].base + ((port_id) * IRO[1].m1))
+#define TSTORM_PORT_STAT_OFFSET(port_id) (IRO[1].base + ((port_id) * IRO[1].m1))
#define TSTORM_PORT_STAT_SIZE (IRO[1].size)
+/* Tstorm ll2 port statistics */
+#define TSTORM_LL2_PORT_STAT_OFFSET(port_id) (IRO[2].base + \
+ ((port_id) * IRO[2].m1))
+#define TSTORM_LL2_PORT_STAT_SIZE (IRO[2].size)
/* Ustorm VF-PF Channel ready flag */
-#define USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) \
-(IRO[3].base + ((vf_id) * IRO[3].m1))
+#define USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) (IRO[3].base + \
+ ((vf_id) * IRO[3].m1))
#define USTORM_VF_PF_CHANNEL_READY_SIZE (IRO[3].size)
/* Ustorm Final flr cleanup ack */
-#define USTORM_FLR_FINAL_ACK_OFFSET(pf_id) \
-(IRO[4].base + ((pf_id) * IRO[4].m1))
+#define USTORM_FLR_FINAL_ACK_OFFSET(pf_id) (IRO[4].base + ((pf_id) * IRO[4].m1))
#define USTORM_FLR_FINAL_ACK_SIZE (IRO[4].size)
/* Ustorm Event ring consumer */
-#define USTORM_EQE_CONS_OFFSET(pf_id) \
-(IRO[5].base + ((pf_id) * IRO[5].m1))
+#define USTORM_EQE_CONS_OFFSET(pf_id) (IRO[5].base + ((pf_id) * IRO[5].m1))
#define USTORM_EQE_CONS_SIZE (IRO[5].size)
+/* Ustorm eth queue zone */
+#define USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id) (IRO[6].base + \
+ ((queue_zone_id) * IRO[6].m1))
+#define USTORM_ETH_QUEUE_ZONE_SIZE (IRO[6].size)
/* Ustorm Common Queue ring consumer */
-#define USTORM_COMMON_QUEUE_CONS_OFFSET(global_queue_id) \
-(IRO[6].base + ((global_queue_id) * IRO[6].m1))
-#define USTORM_COMMON_QUEUE_CONS_SIZE (IRO[6].size)
+#define USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id) (IRO[7].base + \
+ ((queue_zone_id) * IRO[7].m1))
+#define USTORM_COMMON_QUEUE_CONS_SIZE (IRO[7].size)
/* Xstorm Integration Test Data */
-#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[7].base)
-#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[7].size)
+#define XSTORM_INTEG_TEST_DATA_OFFSET (IRO[8].base)
+#define XSTORM_INTEG_TEST_DATA_SIZE (IRO[8].size)
/* Ystorm Integration Test Data */
-#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[8].base)
-#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[8].size)
+#define YSTORM_INTEG_TEST_DATA_OFFSET (IRO[9].base)
+#define YSTORM_INTEG_TEST_DATA_SIZE (IRO[9].size)
/* Pstorm Integration Test Data */
-#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[9].base)
-#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[9].size)
+#define PSTORM_INTEG_TEST_DATA_OFFSET (IRO[10].base)
+#define PSTORM_INTEG_TEST_DATA_SIZE (IRO[10].size)
/* Tstorm Integration Test Data */
-#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[10].base)
-#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[10].size)
+#define TSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base)
+#define TSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size)
/* Mstorm Integration Test Data */
-#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[11].base)
-#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[11].size)
+#define MSTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base)
+#define MSTORM_INTEG_TEST_DATA_SIZE (IRO[12].size)
/* Ustorm Integration Test Data */
-#define USTORM_INTEG_TEST_DATA_OFFSET (IRO[12].base)
-#define USTORM_INTEG_TEST_DATA_SIZE (IRO[12].size)
+#define USTORM_INTEG_TEST_DATA_OFFSET (IRO[13].base)
+#define USTORM_INTEG_TEST_DATA_SIZE (IRO[13].size)
+/* Tstorm producers */
+#define TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) (IRO[14].base + \
+ ((core_rx_queue_id) * IRO[14].m1))
+#define TSTORM_LL2_RX_PRODS_SIZE (IRO[14].size)
+/* Tstorm LightL2 queue statistics */
+#define CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \
+ (IRO[15].base + ((core_rx_queue_id) * IRO[15].m1))
+#define CORE_LL2_TSTORM_PER_QUEUE_STAT_SIZE (IRO[15].size)
+/* Ustorm LiteL2 queue statistics */
+#define CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) \
+ (IRO[16].base + ((core_rx_queue_id) * IRO[16].m1))
+#define CORE_LL2_USTORM_PER_QUEUE_STAT_SIZE (IRO[16].size)
+/* Pstorm LiteL2 queue statistics */
+#define CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) \
+ (IRO[17].base + ((core_tx_stats_id) * IRO[17].m1))
+#define CORE_LL2_PSTORM_PER_QUEUE_STAT_SIZE (IRO[17].size)
/* Mstorm queue statistics */
-#define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) \
-(IRO[17].base + ((stat_counter_id) * IRO[17].m1))
-#define MSTORM_QUEUE_STAT_SIZE (IRO[17].size)
-/* Mstorm producers */
-#define MSTORM_PRODS_OFFSET(queue_id) \
-(IRO[18].base + ((queue_id) * IRO[18].m1))
-#define MSTORM_PRODS_SIZE (IRO[18].size)
+#define MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[18].base + \
+ ((stat_counter_id) * IRO[18].m1))
+#define MSTORM_QUEUE_STAT_SIZE (IRO[18].size)
+/* Mstorm ETH PF queues producers */
+#define MSTORM_ETH_PF_PRODS_OFFSET(queue_id) (IRO[19].base + \
+ ((queue_id) * IRO[19].m1))
+#define MSTORM_ETH_PF_PRODS_SIZE (IRO[19].size)
+/* Mstorm ETH VF queues producers offset in RAM. Used in default VF zone size
+ * mode.
+ */
+#define MSTORM_ETH_VF_PRODS_OFFSET(vf_id, vf_queue_id) (IRO[20].base + \
+ ((vf_id) * IRO[20].m1) + ((vf_queue_id) * IRO[20].m2))
+#define MSTORM_ETH_VF_PRODS_SIZE (IRO[20].size)
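
To illustrate the two-parameter expansion, where m1 strides per VF and m2 per queue within a VF zone:

    /* producer of queue 1 on VF 3 expands to
     * IRO[20].base + 3 * IRO[20].m1 + 1 * IRO[20].m2
     */
    u32 prod = MSTORM_ETH_VF_PRODS_OFFSET(3, 1);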
/* TPA agregation timeout in us resolution (on ASIC) */
-#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[19].base)
-#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[19].size)
+#define MSTORM_TPA_TIMEOUT_US_OFFSET (IRO[21].base)
+#define MSTORM_TPA_TIMEOUT_US_SIZE (IRO[21].size)
+/* Mstorm pf statistics */
+#define MSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[22].base + ((pf_id) * IRO[22].m1))
+#define MSTORM_ETH_PF_STAT_SIZE (IRO[22].size)
/* Ustorm queue statistics */
-#define USTORM_QUEUE_STAT_OFFSET(stat_counter_id) \
-(IRO[20].base + ((stat_counter_id) * IRO[20].m1))
-#define USTORM_QUEUE_STAT_SIZE (IRO[20].size)
-/* Ustorm queue zone */
-#define USTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) \
-(IRO[21].base + ((queue_id) * IRO[21].m1))
-#define USTORM_ETH_QUEUE_ZONE_SIZE (IRO[21].size)
+#define USTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[23].base + \
+ ((stat_counter_id) * IRO[23].m1))
+#define USTORM_QUEUE_STAT_SIZE (IRO[23].size)
+/* Ustorm pf statistics */
+#define USTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[24].base + ((pf_id) * IRO[24].m1))
+#define USTORM_ETH_PF_STAT_SIZE (IRO[24].size)
/* Pstorm queue statistics */
-#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) \
-(IRO[22].base + ((stat_counter_id) * IRO[22].m1))
-#define PSTORM_QUEUE_STAT_SIZE (IRO[22].size)
+#define PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) (IRO[25].base + \
+ ((stat_counter_id) * IRO[25].m1))
+#define PSTORM_QUEUE_STAT_SIZE (IRO[25].size)
+/* Pstorm pf statistics */
+#define PSTORM_ETH_PF_STAT_OFFSET(pf_id) (IRO[26].base + ((pf_id) * IRO[26].m1))
+#define PSTORM_ETH_PF_STAT_SIZE (IRO[26].size)
+/* Control frame's EthType configuration for TX control frame security */
+#define PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) (IRO[27].base + \
+ ((ethType_id) * IRO[27].m1))
+#define PSTORM_CTL_FRAME_ETHTYPE_SIZE (IRO[27].size)
/* Tstorm last parser message */
-#define TSTORM_ETH_PRS_INPUT_OFFSET (IRO[23].base)
-#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[23].size)
+#define TSTORM_ETH_PRS_INPUT_OFFSET (IRO[28].base)
+#define TSTORM_ETH_PRS_INPUT_SIZE (IRO[28].size)
/* Tstorm Eth limit Rx rate */
-#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) \
-(IRO[24].base + ((pf_id) * IRO[24].m1))
-#define ETH_RX_RATE_LIMIT_SIZE (IRO[24].size)
-/* Ystorm queue zone */
-#define YSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) \
-(IRO[25].base + ((queue_id) * IRO[25].m1))
-#define YSTORM_ETH_QUEUE_ZONE_SIZE (IRO[25].size)
+#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) (IRO[29].base + ((pf_id) * IRO[29].m1))
+#define ETH_RX_RATE_LIMIT_SIZE (IRO[29].size)
+/* Xstorm queue zone */
+#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) (IRO[30].base + \
+ ((queue_id) * IRO[30].m1))
+#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[30].size)
/* Ystorm cqe producer */
-#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) \
-(IRO[26].base + ((rss_id) * IRO[26].m1))
-#define YSTORM_TOE_CQ_PROD_SIZE (IRO[26].size)
+#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[31].base + \
+ ((rss_id) * IRO[31].m1))
+#define YSTORM_TOE_CQ_PROD_SIZE (IRO[31].size)
/* Ustorm cqe producer */
-#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) \
-(IRO[27].base + ((rss_id) * IRO[27].m1))
-#define USTORM_TOE_CQ_PROD_SIZE (IRO[27].size)
+#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[32].base + \
+ ((rss_id) * IRO[32].m1))
+#define USTORM_TOE_CQ_PROD_SIZE (IRO[32].size)
/* Ustorm grq producer */
-#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) \
-(IRO[28].base + ((pf_id) * IRO[28].m1))
-#define USTORM_TOE_GRQ_PROD_SIZE (IRO[28].size)
+#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) (IRO[33].base + \
+ ((pf_id) * IRO[33].m1))
+#define USTORM_TOE_GRQ_PROD_SIZE (IRO[33].size)
/* Tstorm cmdq-cons of given command queue-id */
-#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) \
-(IRO[29].base + ((cmdq_queue_id) * IRO[29].m1))
-#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[29].size)
-#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) \
-(IRO[30].base + ((func_id) * IRO[30].m1) + ((bdq_id) * IRO[30].m2))
-#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[30].size)
-/* Mstorm rq-cons of given queue-id */
-#define MSTORM_SCSI_RQ_CONS_OFFSET(rq_queue_id) \
-(IRO[31].base + ((rq_queue_id) * IRO[31].m1))
-#define MSTORM_SCSI_RQ_CONS_SIZE (IRO[31].size)
-/* Mstorm bdq-external-producer of given BDQ function ID, BDqueue-id */
-#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) \
-(IRO[32].base + ((func_id) * IRO[32].m1) + ((bdq_id) * IRO[32].m2))
-#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[32].size)
+#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) (IRO[34].base + \
+ ((cmdq_queue_id) * IRO[34].m1))
+#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[34].size)
+/* Tstorm (reflects M-Storm) bdq-external-producer of given function ID,
+ * BDqueue-id
+ */
+#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) (IRO[35].base + \
+ ((func_id) * IRO[35].m1) + ((bdq_id) * IRO[35].m2))
+#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[35].size)
+/* Mstorm bdq-external-producer of given BDQ resource ID, BDqueue-id */
+#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) (IRO[36].base + \
+ ((func_id) * IRO[36].m1) + ((bdq_id) * IRO[36].m2))
+#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[36].size)
+/* Tstorm iSCSI RX stats */
+#define TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[37].base + \
+ ((pf_id) * IRO[37].m1))
+#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[37].size)
+/* Mstorm iSCSI RX stats */
+#define MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[38].base + \
+ ((pf_id) * IRO[38].m1))
+#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[38].size)
+/* Ustorm iSCSI RX stats */
+#define USTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[39].base + \
+ ((pf_id) * IRO[39].m1))
+#define USTORM_ISCSI_RX_STATS_SIZE (IRO[39].size)
+/* Xstorm iSCSI TX stats */
+#define XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[40].base + \
+ ((pf_id) * IRO[40].m1))
+#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[40].size)
+/* Ystorm iSCSI TX stats */
+#define YSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[41].base + \
+ ((pf_id) * IRO[41].m1))
+#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[41].size)
+/* Pstorm iSCSI TX stats */
+#define PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[42].base + \
+ ((pf_id) * IRO[42].m1))
+#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[42].size)
+/* Tstorm FCoE RX stats */
+#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) (IRO[43].base + \
+ ((pf_id) * IRO[43].m1))
+#define TSTORM_FCOE_RX_STATS_SIZE (IRO[43].size)
+/* Pstorm FCoE TX stats */
+#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) (IRO[44].base + \
+ ((pf_id) * IRO[44].m1))
+#define PSTORM_FCOE_TX_STATS_SIZE (IRO[44].size)
+/* Pstorm RDMA queue statistics */
+#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) \
+ (IRO[45].base + ((rdma_stat_counter_id) * IRO[45].m1))
+#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[45].size)
+/* Tstorm RDMA queue statistics */
+#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \
+ ((rdma_stat_counter_id) * IRO[46].m1))
+#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size)
#endif /* __IRO_H__ */
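
For readers tracking the renumbering above: each macro is plain index
arithmetic over the firmware-generated IRO table (see ecore_iro_values.h
below). A minimal standalone sketch of how one of the two-dimensional
entries resolves; the struct mirrors the driver's struct iro, but the
program itself and its printed value are illustrative only:

#include <stdio.h>

struct iro {
	unsigned base;	/* region base offset in storm RAM */
	unsigned m1;	/* stride for the first index (e.g. vf_id) */
	unsigned m2;	/* stride for the second index (e.g. vf_queue_id) */
	unsigned m3;	/* unused by this entry */
	unsigned size;	/* size of one element */
};

/* Hypothetical copy of the IRO[20] entry (MSTORM_ETH_VF_PRODS) for the demo */
static const struct iro vf_prods = { 0x53a0, 0x80, 0x4, 0x0, 0x4 };

static unsigned mstorm_eth_vf_prods_offset(unsigned vf_id,
					   unsigned vf_queue_id)
{
	/* Same arithmetic as the macro: base + vf_id*m1 + vf_queue_id*m2 */
	return vf_prods.base + vf_id * vf_prods.m1 + vf_queue_id * vf_prods.m2;
}

int main(void)
{
	printf("VF 3 queue 2 producer at 0x%x\n",
	       mstorm_eth_vf_prods_offset(3, 2));
	return 0;
}
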
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index 548ad14..43e01e4 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -9,51 +9,101 @@
#ifndef __IRO_VALUES_H__
#define __IRO_VALUES_H__
-static const struct iro iro_arr[44] = {
+static const struct iro iro_arr[47] = {
+/* YSTORM_FLOW_CONTROL_MODE_OFFSET */
{ 0x0, 0x0, 0x0, 0x0, 0x8},
- {0x4db0, 0x60, 0x0, 0x0, 0x60},
- {0x6418, 0x20, 0x0, 0x0, 0x20},
- {0x500, 0x8, 0x0, 0x0, 0x4},
- {0x480, 0x8, 0x0, 0x0, 0x4},
+/* TSTORM_PORT_STAT_OFFSET(port_id) */
+ { 0x4cb0, 0x78, 0x0, 0x0, 0x78},
+/* TSTORM_LL2_PORT_STAT_OFFSET(port_id) */
+ { 0x6318, 0x20, 0x0, 0x0, 0x20},
+/* USTORM_VF_PF_CHANNEL_READY_OFFSET(vf_id) */
+ { 0xb00, 0x8, 0x0, 0x0, 0x4},
+/* USTORM_FLR_FINAL_ACK_OFFSET(pf_id) */
+ { 0xa80, 0x8, 0x0, 0x0, 0x4},
+/* USTORM_EQE_CONS_OFFSET(pf_id) */
{ 0x0, 0x8, 0x0, 0x0, 0x2},
- {0x80, 0x8, 0x0, 0x0, 0x2},
- {0x4938, 0x0, 0x0, 0x0, 0x78},
+/* USTORM_ETH_QUEUE_ZONE_OFFSET(queue_zone_id) */
+ { 0x80, 0x8, 0x0, 0x0, 0x4},
+/* USTORM_COMMON_QUEUE_CONS_OFFSET(queue_zone_id) */
+ { 0x84, 0x8, 0x0, 0x0, 0x2},
+/* XSTORM_INTEG_TEST_DATA_OFFSET */
+ { 0x4bc0, 0x0, 0x0, 0x0, 0x78},
+/* YSTORM_INTEG_TEST_DATA_OFFSET */
{ 0x3df0, 0x0, 0x0, 0x0, 0x78},
+/* PSTORM_INTEG_TEST_DATA_OFFSET */
{ 0x29b0, 0x0, 0x0, 0x0, 0x78},
- {0x4d38, 0x0, 0x0, 0x0, 0x78},
- {0x56c8, 0x0, 0x0, 0x0, 0x78},
+/* TSTORM_INTEG_TEST_DATA_OFFSET */
+ { 0x4c38, 0x0, 0x0, 0x0, 0x78},
+/* MSTORM_INTEG_TEST_DATA_OFFSET */
+ { 0x4990, 0x0, 0x0, 0x0, 0x78},
+/* USTORM_INTEG_TEST_DATA_OFFSET */
{ 0x7e48, 0x0, 0x0, 0x0, 0x78},
+/* TSTORM_LL2_RX_PRODS_OFFSET(core_rx_queue_id) */
{ 0xa28, 0x8, 0x0, 0x0, 0x8},
- {0x61f8, 0x10, 0x0, 0x0, 0x10},
- {0xb500, 0x30, 0x0, 0x0, 0x30},
+/* CORE_LL2_TSTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
+ { 0x60f8, 0x10, 0x0, 0x0, 0x10},
+/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
+ { 0xb820, 0x30, 0x0, 0x0, 0x30},
+/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) */
{ 0x95b8, 0x30, 0x0, 0x0, 0x30},
- {0x5898, 0x40, 0x0, 0x0, 0x40},
- {0x1f8, 0x10, 0x0, 0x0, 0x8},
- {0xa228, 0x0, 0x0, 0x0, 0x4},
+/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
+ { 0x4b60, 0x80, 0x0, 0x0, 0x40},
+/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id) */
+ { 0x1f8, 0x4, 0x0, 0x0, 0x4},
+/* MSTORM_ETH_VF_PRODS_OFFSET(vf_id,vf_queue_id) */
+ { 0x53a0, 0x80, 0x4, 0x0, 0x4},
+/* MSTORM_TPA_TIMEOUT_US_OFFSET */
+ { 0xc8f0, 0x0, 0x0, 0x0, 0x4},
+/* MSTORM_ETH_PF_STAT_OFFSET(pf_id) */
+ { 0x4ba0, 0x80, 0x0, 0x0, 0x20},
+/* USTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
{ 0x8050, 0x40, 0x0, 0x0, 0x30},
- {0xcf8, 0x8, 0x0, 0x0, 0x8},
+/* USTORM_ETH_PF_STAT_OFFSET(pf_id) */
+ { 0xe770, 0x60, 0x0, 0x0, 0x60},
+/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
{ 0x2b48, 0x80, 0x0, 0x0, 0x38},
- {0xadf0, 0x0, 0x0, 0x0, 0xf0},
- {0xaee0, 0x8, 0x0, 0x0, 0x8},
- {0x80, 0x8, 0x0, 0x0, 0x8},
+/* PSTORM_ETH_PF_STAT_OFFSET(pf_id) */
+ { 0xf188, 0x78, 0x0, 0x0, 0x78},
+/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) */
+ { 0x1f8, 0x4, 0x0, 0x0, 0x4},
+/* TSTORM_ETH_PRS_INPUT_OFFSET */
+ { 0xacf0, 0x0, 0x0, 0x0, 0xf0},
+/* ETH_RX_RATE_LIMIT_OFFSET(pf_id) */
+ { 0xade0, 0x8, 0x0, 0x0, 0x8},
+/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) */
+ { 0x1f8, 0x8, 0x0, 0x0, 0x8},
+/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id) */
{ 0xac0, 0x8, 0x0, 0x0, 0x8},
+/* USTORM_TOE_CQ_PROD_OFFSET(rss_id) */
{ 0x2578, 0x8, 0x0, 0x0, 0x8},
+/* USTORM_TOE_GRQ_PROD_OFFSET(pf_id) */
{ 0x24f8, 0x8, 0x0, 0x0, 0x8},
+/* TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) */
{ 0x0, 0x8, 0x0, 0x0, 0x8},
+/* TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
{ 0x200, 0x10, 0x8, 0x0, 0x8},
- {0x17f8, 0x8, 0x0, 0x0, 0x2},
- {0x19f8, 0x10, 0x8, 0x0, 0x2},
- {0xd988, 0x38, 0x0, 0x0, 0x24},
- {0x11040, 0x10, 0x0, 0x0, 0x8},
- {0x11670, 0x38, 0x0, 0x0, 0x18},
- {0xaeb8, 0x30, 0x0, 0x0, 0x10},
+/* MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id,bdq_id) */
+ { 0xb78, 0x10, 0x8, 0x0, 0x2},
+/* TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
+ { 0xd888, 0x38, 0x0, 0x0, 0x24},
+/* MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
+ { 0x12c38, 0x10, 0x0, 0x0, 0x8},
+/* USTORM_ISCSI_RX_STATS_OFFSET(pf_id) */
+ { 0x11aa0, 0x38, 0x0, 0x0, 0x18},
+/* XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
+ { 0xa8c0, 0x30, 0x0, 0x0, 0x10},
+/* YSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
{ 0x86f8, 0x28, 0x0, 0x0, 0x18},
- {0xebf8, 0x10, 0x0, 0x0, 0x10},
- {0xde08, 0x40, 0x0, 0x0, 0x30},
- {0x121a0, 0x38, 0x0, 0x0, 0x8},
- {0xf060, 0x20, 0x0, 0x0, 0x20},
+/* PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
+ { 0x101f8, 0x10, 0x0, 0x0, 0x10},
+/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id) */
+ { 0xdd08, 0x48, 0x0, 0x0, 0x38},
+/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id) */
+ { 0x10660, 0x20, 0x0, 0x0, 0x20},
+/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
{ 0x2b80, 0x80, 0x0, 0x0, 0x10},
- {0x50a0, 0x10, 0x0, 0x0, 0x10},
+/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
+ { 0x5000, 0x10, 0x0, 0x0, 0x10},
};
#endif /* __IRO_VALUES_H__ */
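
Since ecore_iro.h addresses this table purely by position (IRO[0]..IRO[46]),
the two generated files must stay in lock-step; the per-entry comments added
above make drift easier to spot. A hedged sketch of a compile-time guard one
could imagine (C11 _Static_assert; not something this patch adds):

/* Illustrative only; names below are demo stand-ins */
struct iro_demo { unsigned base, m1, m2, m3, size; };

static const struct iro_demo iro_arr_demo[47] = {
	/* index 0: YSTORM_FLOW_CONTROL_MODE; remaining entries elided */
	{ 0x0, 0x0, 0x0, 0x0, 0x8 },
};

_Static_assert(sizeof(iro_arr_demo) / sizeof(iro_arr_demo[0]) == 47,
	       "IRO table must keep the 47 entries ecore_iro.h indexes");
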
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index bc6b59d..b1190e4 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -84,6 +84,8 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
p_ramrod->tpa_param.tpa_min_size_to_start = p_params->mtu / 2;
p_ramrod->tpa_param.tpa_ipv4_en_flg = 1;
p_ramrod->tpa_param.tpa_ipv6_en_flg = 1;
+ p_ramrod->tpa_param.tpa_ipv4_tunn_en_flg = 1;
+ p_ramrod->tpa_param.tpa_ipv6_tunn_en_flg = 1;
p_ramrod->tpa_param.tpa_pkt_split_flg = 1;
p_ramrod->tpa_param.tpa_gro_consistent_flg = 1;
break;
@@ -97,6 +99,9 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
p_ramrod->tx_switching_en = 0;
#endif
+ p_ramrod->ctl_frame_mac_check_en = !!p_params->check_mac;
+ p_ramrod->ctl_frame_ethtype_check_en = !!p_params->check_ethtype;
+
/* Software Function ID in hwfn (PFs are 0 - 15, VFs are 16 - 135) */
p_ramrod->sw_fid = ecore_concrete_to_sw_fid(p_hwfn->p_dev,
p_params->concrete_fid);
@@ -202,10 +207,12 @@ ecore_sp_vport_update_rss(struct ecore_hwfn *p_hwfn,
static void
ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
struct vport_update_ramrod_data *p_ramrod,
- struct ecore_filter_accept_flags flags)
+ struct ecore_filter_accept_flags accept_flags)
{
- p_ramrod->common.update_rx_mode_flg = flags.update_rx_mode_config;
- p_ramrod->common.update_tx_mode_flg = flags.update_tx_mode_config;
+ p_ramrod->common.update_rx_mode_flg =
+ accept_flags.update_rx_mode_config;
+ p_ramrod->common.update_tx_mode_flg =
+ accept_flags.update_tx_mode_config;
#ifndef ASIC_ONLY
/* On B0 emulation we cannot enable Tx, since this would cause writes
@@ -220,74 +227,55 @@ ecore_sp_update_accept_mode(struct ecore_hwfn *p_hwfn,
/* Set Rx mode accept flags */
if (p_ramrod->common.update_rx_mode_flg) {
- __le16 *state = &p_ramrod->rx_mode.state;
- u8 accept_filter = flags.rx_accept_filter;
-
-/*
- * SET_FIELD(*state, ETH_VPORT_RX_MODE_UCAST_DROP_ALL,
- * !!(accept_filter & ECORE_ACCEPT_NONE));
- */
-
- SET_FIELD(*state, ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL,
- (!!(accept_filter & ECORE_ACCEPT_UCAST_MATCHED) &&
- !!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED)));
+ u8 accept_filter = accept_flags.rx_accept_filter;
+ u16 state = 0;
- SET_FIELD(*state, ETH_VPORT_RX_MODE_UCAST_DROP_ALL,
+ SET_FIELD(state, ETH_VPORT_RX_MODE_UCAST_DROP_ALL,
!(!!(accept_filter & ECORE_ACCEPT_UCAST_MATCHED) ||
!!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED)));
- SET_FIELD(*state, ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED,
+ SET_FIELD(state, ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED,
!!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED));
-/*
- * SET_FIELD(*state, ETH_VPORT_RX_MODE_MCAST_DROP_ALL,
- * !!(accept_filter & ECORE_ACCEPT_NONE));
- */
- SET_FIELD(*state, ETH_VPORT_RX_MODE_MCAST_DROP_ALL,
+
+ SET_FIELD(state, ETH_VPORT_RX_MODE_MCAST_DROP_ALL,
!(!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) ||
!!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED)));
- SET_FIELD(*state, ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL,
+ SET_FIELD(state, ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL,
(!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) &&
!!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED)));
- SET_FIELD(*state, ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL,
+ SET_FIELD(state, ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL,
!!(accept_filter & ECORE_ACCEPT_BCAST));
+ p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(state);
DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
"p_ramrod->rx_mode.state = 0x%x\n",
- p_ramrod->rx_mode.state);
+ state);
}
/* Set Tx mode accept flags */
if (p_ramrod->common.update_tx_mode_flg) {
- __le16 *state = &p_ramrod->tx_mode.state;
- u8 accept_filter = flags.tx_accept_filter;
+ u8 accept_filter = accept_flags.tx_accept_filter;
+ u16 state = 0;
- SET_FIELD(*state, ETH_VPORT_TX_MODE_UCAST_DROP_ALL,
+ SET_FIELD(state, ETH_VPORT_TX_MODE_UCAST_DROP_ALL,
!!(accept_filter & ECORE_ACCEPT_NONE));
- SET_FIELD(*state, ETH_VPORT_TX_MODE_MCAST_DROP_ALL,
+ SET_FIELD(state, ETH_VPORT_TX_MODE_MCAST_DROP_ALL,
!!(accept_filter & ECORE_ACCEPT_NONE));
- SET_FIELD(*state, ETH_VPORT_TX_MODE_MCAST_ACCEPT_ALL,
+ SET_FIELD(state, ETH_VPORT_TX_MODE_MCAST_ACCEPT_ALL,
(!!(accept_filter & ECORE_ACCEPT_MCAST_MATCHED) &&
!!(accept_filter & ECORE_ACCEPT_MCAST_UNMATCHED)));
- SET_FIELD(*state, ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL,
+ SET_FIELD(state, ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL,
!!(accept_filter & ECORE_ACCEPT_BCAST));
- /* @DPDK */
- /* ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL and
- * ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL
- * needs to be set for VF-VF communication to work
- * when dest macaddr is unknown.
- */
- SET_FIELD(*state, ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL,
- (!!(accept_filter & ECORE_ACCEPT_UCAST_MATCHED) &&
- !!(accept_filter & ECORE_ACCEPT_UCAST_UNMATCHED)));
+ p_ramrod->tx_mode.state = OSAL_CPU_TO_LE16(state);
DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
"p_ramrod->tx_mode.state = 0x%x\n",
- p_ramrod->tx_mode.state);
+ state);
}
}
@@ -351,12 +339,12 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
struct ecore_spq_comp_cb *p_comp_data)
{
struct ecore_rss_params *p_rss_params = p_params->rss_params;
+ struct vport_update_ramrod_data_cmn *p_cmn;
+ struct ecore_sp_init_data init_data;
struct vport_update_ramrod_data *p_ramrod = OSAL_NULL;
struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
- struct ecore_sp_init_data init_data;
u8 abs_vport_id = 0, val;
- u16 wordval;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
if (IS_VF(p_hwfn->p_dev)) {
rc = ecore_vf_pf_vport_update(p_hwfn, p_params);
@@ -382,30 +370,31 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
/* Copy input params to ramrod according to FW struct */
p_ramrod = &p_ent->ramrod.vport_update;
+ p_cmn = &p_ramrod->common;
- p_ramrod->common.vport_id = abs_vport_id;
+ p_cmn->vport_id = abs_vport_id;
+
+ p_cmn->rx_active_flg = p_params->vport_active_rx_flg;
+ p_cmn->update_rx_active_flg = p_params->update_vport_active_rx_flg;
+ p_cmn->tx_active_flg = p_params->vport_active_tx_flg;
+ p_cmn->update_tx_active_flg = p_params->update_vport_active_tx_flg;
+
+ p_cmn->accept_any_vlan = p_params->accept_any_vlan;
+ val = p_params->update_accept_any_vlan_flg;
+ p_cmn->update_accept_any_vlan_flg = val;
- p_ramrod->common.rx_active_flg = p_params->vport_active_rx_flg;
- p_ramrod->common.tx_active_flg = p_params->vport_active_tx_flg;
- val = p_params->update_vport_active_rx_flg;
- p_ramrod->common.update_rx_active_flg = val;
- val = p_params->update_vport_active_tx_flg;
- p_ramrod->common.update_tx_active_flg = val;
+ p_cmn->inner_vlan_removal_en = p_params->inner_vlan_removal_flg;
val = p_params->update_inner_vlan_removal_flg;
- p_ramrod->common.update_inner_vlan_removal_en_flg = val;
- val = p_params->inner_vlan_removal_flg;
- p_ramrod->common.inner_vlan_removal_en = val;
- val = p_params->silent_vlan_removal_flg;
- p_ramrod->common.silent_vlan_removal_en = val;
- val = p_params->update_tx_switching_flg;
- p_ramrod->common.update_tx_switching_en_flg = val;
+ p_cmn->update_inner_vlan_removal_en_flg = val;
+
+ p_cmn->default_vlan_en = p_params->default_vlan_enable_flg;
val = p_params->update_default_vlan_enable_flg;
- p_ramrod->common.update_default_vlan_en_flg = val;
- p_ramrod->common.default_vlan_en = p_params->default_vlan_enable_flg;
- val = p_params->update_default_vlan_flg;
- p_ramrod->common.update_default_vlan_flg = val;
- wordval = p_params->default_vlan;
- p_ramrod->common.default_vlan = OSAL_CPU_TO_LE16(wordval);
+ p_cmn->update_default_vlan_en_flg = val;
+
+ p_cmn->default_vlan = OSAL_CPU_TO_LE16(p_params->default_vlan);
+ p_cmn->update_default_vlan_flg = p_params->update_default_vlan_flg;
+
+ p_cmn->silent_vlan_removal_en = p_params->silent_vlan_removal_flg;
p_ramrod->common.tx_switching_en = p_params->tx_switching_flg;
@@ -419,13 +408,11 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
p_ramrod->common.update_tx_switching_en_flg = 1;
}
#endif
+ p_cmn->update_tx_switching_en_flg = p_params->update_tx_switching_flg;
+ p_cmn->anti_spoofing_en = p_params->anti_spoofing_en;
val = p_params->update_anti_spoofing_en_flg;
p_ramrod->common.update_anti_spoofing_en_flg = val;
- p_ramrod->common.anti_spoofing_en = p_params->anti_spoofing_en;
- p_ramrod->common.accept_any_vlan = p_params->accept_any_vlan;
- val = p_params->update_accept_any_vlan_flg;
- p_ramrod->common.update_accept_any_vlan_flg = val;
rc = ecore_sp_vport_update_rss(p_hwfn, p_ramrod, p_rss_params);
if (rc != ECORE_SUCCESS) {
@@ -499,20 +486,20 @@ ecore_filter_accept_cmd(struct ecore_dev *p_dev,
enum spq_mode comp_mode,
struct ecore_spq_comp_cb *p_comp_data)
{
- struct ecore_sp_vport_update_params update_params;
+ struct ecore_sp_vport_update_params vport_update_params;
int i, rc;
/* Prepare and send the vport rx_mode change */
- OSAL_MEMSET(&update_params, 0, sizeof(update_params));
- update_params.vport_id = vport;
- update_params.accept_flags = accept_flags;
- update_params.update_accept_any_vlan_flg = update_accept_any_vlan;
- update_params.accept_any_vlan = accept_any_vlan;
+ OSAL_MEMSET(&vport_update_params, 0, sizeof(vport_update_params));
+ vport_update_params.vport_id = vport;
+ vport_update_params.accept_flags = accept_flags;
+ vport_update_params.update_accept_any_vlan_flg = update_accept_any_vlan;
+ vport_update_params.accept_any_vlan = accept_any_vlan;
for_each_hwfn(p_dev, i) {
struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
- update_params.opaque_fid = p_hwfn->hw_info.opaque_fid;
+ vport_update_params.opaque_fid = p_hwfn->hw_info.opaque_fid;
if (IS_VF(p_dev)) {
rc = ecore_vf_pf_accept_flags(p_hwfn, &accept_flags);
@@ -521,7 +508,7 @@ ecore_filter_accept_cmd(struct ecore_dev *p_dev,
continue;
}
- rc = ecore_sp_vport_update(p_hwfn, &update_params,
+ rc = ecore_sp_vport_update(p_hwfn, &vport_update_params,
comp_mode, p_comp_data);
if (rc != ECORE_SUCCESS) {
DP_ERR(p_dev, "Update rx_mode failed %d\n", rc);
@@ -557,23 +544,26 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
u16 opaque_fid,
u32 cid,
u16 rx_queue_id,
+ u8 vf_rx_queue_id,
u8 vport_id,
u8 stats_id,
u16 sb,
u8 sb_index,
u16 bd_max_bytes,
dma_addr_t bd_chain_phys_addr,
- dma_addr_t cqe_pbl_addr, u16 cqe_pbl_size)
+ dma_addr_t cqe_pbl_addr,
+ u16 cqe_pbl_size, bool b_use_zone_a_prod)
{
- struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
struct rx_queue_start_ramrod_data *p_ramrod = OSAL_NULL;
struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_sp_init_data init_data;
+ struct ecore_hw_cid_data *p_rx_cid;
u16 abs_rx_q_id = 0;
u8 abs_vport_id = 0;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
/* Store information for the stop */
+ p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
p_rx_cid->cid = cid;
p_rx_cid->opaque_fid = opaque_fid;
p_rx_cid->vport_id = vport_id;
@@ -618,6 +608,15 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
+ if (vf_rx_queue_id || b_use_zone_a_prod) {
+ p_ramrod->vf_rx_prod_index = vf_rx_queue_id;
+ DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+ "Queue%s is meant for VF rxq[%02x]\n",
+ b_use_zone_a_prod ? " [legacy]" : "",
+ vf_rx_queue_id);
+ p_ramrod->vf_rx_prod_use_zone_a = b_use_zone_a_prod;
+ }
+
return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
}
@@ -634,11 +633,11 @@ enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
u16 cqe_pbl_size,
void OSAL_IOMEM **pp_prod)
{
- struct ecore_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
- u8 abs_stats_id = 0;
+ struct ecore_hw_cid_data *p_rx_cid;
+ u32 init_prod_val = 0;
u16 abs_l2_queue = 0;
+ u8 abs_stats_id = 0;
enum _ecore_status_t rc;
- u64 init_prod_val = 0;
if (IS_VF(p_hwfn->p_dev)) {
return ecore_vf_pf_rxq_start(p_hwfn,
@@ -660,14 +659,17 @@ enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
return rc;
*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
- GTT_BAR0_MAP_REG_MSDM_RAM + MSTORM_PRODS_OFFSET(abs_l2_queue);
+ GTT_BAR0_MAP_REG_MSDM_RAM +
+ MSTORM_ETH_PF_PRODS_OFFSET(abs_l2_queue);
/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
- __internal_ram_wr(p_hwfn, *pp_prod, sizeof(u64),
+ __internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
(u32 *)(&init_prod_val));
/* Allocate a CID for the queue */
- rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_rx_cid->cid);
+ p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
+ rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
+ &p_rx_cid->cid);
if (rc != ECORE_SUCCESS) {
DP_NOTICE(p_hwfn, true, "Failed to acquire cid\n");
return rc;
@@ -678,13 +680,16 @@ enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
opaque_fid,
p_rx_cid->cid,
rx_queue_id,
+ 0,
vport_id,
abs_stats_id,
sb,
sb_index,
bd_max_bytes,
bd_chain_phys_addr,
- cqe_pbl_addr, cqe_pbl_size);
+ cqe_pbl_addr,
+ cqe_pbl_size,
+ false);
if (rc != ECORE_SUCCESS)
ecore_sp_release_queue_cid(p_hwfn, p_rx_cid);
@@ -859,8 +864,7 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(abs_tx_q_id);
p_ramrod->pbl_size = OSAL_CPU_TO_LE16(pbl_size);
- p_ramrod->pbl_base_addr.hi = DMA_HI_LE(pbl_addr);
- p_ramrod->pbl_base_addr.lo = DMA_LO_LE(pbl_addr);
+ DMA_REGPAIR_LE(p_ramrod->pbl_base_addr, pbl_addr);
pq_id = ecore_get_qm_pq(p_hwfn, PROTOCOLID_ETH, p_pq_params);
p_ramrod->qm_pq_id = OSAL_CPU_TO_LE16(pq_id);
@@ -875,30 +879,36 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
u8 stats_id,
u16 sb,
u8 sb_index,
+ u8 tc,
dma_addr_t pbl_addr,
u16 pbl_size,
void OSAL_IOMEM **pp_doorbell)
{
- struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
+ struct ecore_hw_cid_data *p_tx_cid;
union ecore_qm_pq_params pq_params;
- enum _ecore_status_t rc;
u8 abs_stats_id = 0;
+ enum _ecore_status_t rc;
if (IS_VF(p_hwfn->p_dev)) {
return ecore_vf_pf_txq_start(p_hwfn,
tx_queue_id,
sb,
sb_index,
- pbl_addr, pbl_size, pp_doorbell);
+ pbl_addr,
+ pbl_size,
+ pp_doorbell);
}
rc = ecore_fw_vport(p_hwfn, stats_id, &abs_stats_id);
if (rc != ECORE_SUCCESS)
return rc;
+ p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
+ pq_params.eth.tc = tc;
+
/* Allocate a CID for the queue */
rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH, &p_tx_cid->cid);
if (rc != ECORE_SUCCESS) {
@@ -943,10 +953,9 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
u16 tx_queue_id)
{
struct ecore_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
- struct tx_queue_stop_ramrod_data *p_ramrod = OSAL_NULL;
struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_sp_init_data init_data;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
if (IS_VF(p_hwfn->p_dev))
return ecore_vf_pf_txq_stop(p_hwfn, tx_queue_id);
@@ -963,8 +972,6 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_stop(struct ecore_hwfn *p_hwfn,
if (rc != ECORE_SUCCESS)
return rc;
- p_ramrod = &p_ent->ramrod.tx_queue_stop;
-
rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
if (rc != ECORE_SUCCESS)
return rc;
@@ -1247,7 +1254,7 @@ static u32 ecore_calc_crc32c(u8 *crc32_packet,
return crc32_result;
}
-static OSAL_INLINE u32 ecore_crc32c_le(u32 seed, u8 *mac, u32 len)
+static u32 ecore_crc32c_le(u32 seed, u8 *mac, u32 len)
{
u32 packet_buf[2] = { 0 };
@@ -1270,18 +1277,22 @@ ecore_sp_eth_filter_mcast(struct ecore_hwfn *p_hwfn,
enum spq_mode comp_mode,
struct ecore_spq_comp_cb *p_comp_data)
{
- struct vport_update_ramrod_data *p_ramrod = OSAL_NULL;
unsigned long bins[ETH_MULTICAST_MAC_BINS_IN_REGS];
+ struct vport_update_ramrod_data *p_ramrod = OSAL_NULL;
struct ecore_spq_entry *p_ent = OSAL_NULL;
struct ecore_sp_init_data init_data;
- enum _ecore_status_t rc;
u8 abs_vport_id = 0;
+ enum _ecore_status_t rc;
int i;
+ if (p_filter_cmd->opcode == ECORE_FILTER_ADD)
+ rc = ecore_fw_vport(p_hwfn,
+ p_filter_cmd->vport_to_add_to,
+ &abs_vport_id);
+ else
rc = ecore_fw_vport(p_hwfn,
- (p_filter_cmd->opcode == ECORE_FILTER_ADD) ?
- p_filter_cmd->vport_to_add_to :
- p_filter_cmd->vport_to_remove_from, &abs_vport_id);
+ p_filter_cmd->vport_to_remove_from,
+ &abs_vport_id);
if (rc != ECORE_SUCCESS)
return rc;
@@ -1356,14 +1367,16 @@ ecore_filter_mcast_cmd(struct ecore_dev *p_dev,
for_each_hwfn(p_dev, i) {
struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+ u16 opaque_fid;
if (IS_VF(p_dev)) {
ecore_vf_pf_filter_mcast(p_hwfn, p_filter_cmd);
continue;
}
+ opaque_fid = p_hwfn->hw_info.opaque_fid;
rc = ecore_sp_eth_filter_mcast(p_hwfn,
- p_hwfn->hw_info.opaque_fid,
+ opaque_fid,
p_filter_cmd,
comp_mode, p_comp_data);
if (rc != ECORE_SUCCESS)
@@ -1384,14 +1397,16 @@ ecore_filter_ucast_cmd(struct ecore_dev *p_dev,
for_each_hwfn(p_dev, i) {
struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+ u16 opaque_fid;
if (IS_VF(p_dev)) {
rc = ecore_vf_pf_filter_ucast(p_hwfn, p_filter_cmd);
continue;
}
+ opaque_fid = p_hwfn->hw_info.opaque_fid;
rc = ecore_sp_eth_filter_ucast(p_hwfn,
- p_hwfn->hw_info.opaque_fid,
+ opaque_fid,
p_filter_cmd,
comp_mode, p_comp_data);
if (rc != ECORE_SUCCESS)
@@ -1401,77 +1416,6 @@ ecore_filter_ucast_cmd(struct ecore_dev *p_dev,
return rc;
}
-/* IOV related */
-enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
- u32 concrete_vfid, u16 opaque_vfid)
-{
- struct vf_start_ramrod_data *p_ramrod = OSAL_NULL;
- struct ecore_spq_entry *p_ent = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_NOTIMPL;
- struct ecore_sp_init_data init_data;
-
- /* Get SPQ entry */
- OSAL_MEMSET(&init_data, 0, sizeof(init_data));
- init_data.cid = ecore_spq_get_cid(p_hwfn);
- init_data.opaque_fid = opaque_vfid;
- init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
-
- rc = ecore_sp_init_request(p_hwfn, &p_ent,
- COMMON_RAMROD_VF_START,
- PROTOCOLID_COMMON, &init_data);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- p_ramrod = &p_ent->ramrod.vf_start;
-
- p_ramrod->vf_id = GET_FIELD(concrete_vfid, PXP_CONCRETE_FID_VFID);
- p_ramrod->opaque_fid = OSAL_CPU_TO_LE16(opaque_vfid);
-
- switch (p_hwfn->hw_info.personality) {
- case ECORE_PCI_ETH:
- p_ramrod->personality = PERSONALITY_ETH;
- break;
- default:
- DP_NOTICE(p_hwfn, true, "Unknown VF personality %d\n",
- p_hwfn->hw_info.personality);
- return ECORE_INVAL;
- }
-
- return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-}
-
-enum _ecore_status_t ecore_sp_vf_update(struct ecore_hwfn *p_hwfn)
-{
- return ECORE_NOTIMPL;
-}
-
-enum _ecore_status_t ecore_sp_vf_stop(struct ecore_hwfn *p_hwfn,
- u32 concrete_vfid, u16 opaque_vfid)
-{
- enum _ecore_status_t rc = ECORE_NOTIMPL;
- struct vf_stop_ramrod_data *p_ramrod = OSAL_NULL;
- struct ecore_spq_entry *p_ent = OSAL_NULL;
- struct ecore_sp_init_data init_data;
-
- /* Get SPQ entry */
- OSAL_MEMSET(&init_data, 0, sizeof(init_data));
- init_data.cid = ecore_spq_get_cid(p_hwfn);
- init_data.opaque_fid = opaque_vfid;
- init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
-
- rc = ecore_sp_init_request(p_hwfn, &p_ent,
- COMMON_RAMROD_VF_STOP,
- PROTOCOLID_COMMON, &init_data);
- if (rc != ECORE_SUCCESS)
- return rc;
-
- p_ramrod = &p_ent->ramrod.vf_stop;
-
- p_ramrod->vf_id = GET_FIELD(concrete_vfid, PXP_CONCRETE_FID_VFID);
-
- return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
-}
-
/* Statistics related code */
static void __ecore_get_vport_pstats_addrlen(struct ecore_hwfn *p_hwfn,
u32 *p_addr, u32 *p_len,
@@ -1740,7 +1684,7 @@ static void _ecore_get_vport_stats(struct ecore_dev *p_dev,
IS_PF(p_dev) ? true : false);
out:
- if (IS_PF(p_dev))
+ if (IS_PF(p_dev) && p_ptt)
ecore_ptt_release(p_hwfn, p_ptt);
}
}
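
The ecore_sp_update_accept_mode() rework above stops applying SET_FIELD()
directly to the __le16 ramrod field, which is only correct on little-endian
hosts, and instead accumulates the bits in a host-order local that is
converted once on assignment. A stripped-down sketch of the pattern; the
masks, shifts and the SET_FIELD/cpu_to_le16 stand-ins are assumptions for
illustration, not the HSI definitions:

#include <stdint.h>

/* Hypothetical field layout; the real masks come from the FW HSI headers */
#define DEMO_UCAST_DROP_ALL_MASK	0x1
#define DEMO_UCAST_DROP_ALL_SHIFT	0
#define DEMO_BCAST_ACCEPT_ALL_MASK	0x1
#define DEMO_BCAST_ACCEPT_ALL_SHIFT	1

/* Same shape as the driver's SET_FIELD(): mask, shift, or into the word */
#define DEMO_SET_FIELD(value, name, flag) \
	((value) |= (((flag) & name##_MASK) << name##_SHIFT))

/* Stand-in for OSAL_CPU_TO_LE16; a no-op on a little-endian host */
static inline uint16_t demo_cpu_to_le16(uint16_t v) { return v; }

static void demo_fill_rx_mode(uint16_t *le_state, int drop_ucast,
			      int acc_bcast)
{
	uint16_t state = 0;	/* accumulate in host byte order... */

	DEMO_SET_FIELD(state, DEMO_UCAST_DROP_ALL, !!drop_ucast);
	DEMO_SET_FIELD(state, DEMO_BCAST_ACCEPT_ALL, !!acc_bcast);

	*le_state = demo_cpu_to_le16(state);	/* ...convert exactly once */
}
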
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index 5594a08..c8419a3 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -9,62 +9,13 @@
#ifndef __ECORE_L2_H__
#define __ECORE_L2_H__
+
#include "ecore.h"
#include "ecore_hw.h"
#include "ecore_spq.h"
#include "ecore_l2_api.h"
/**
- * @brief ecore_sp_vf_start - VF Function Start
- *
- * This ramrod is sent to initialize a virtual function (VF) is loaded.
- * It will configure the function related parameters.
- *
- * @note Final phase API.
- *
- * @param p_hwfn
- * @param concrete_vfid VF ID
- * @param opaque_vfid
- *
- * @return enum _ecore_status_t
- */
-
-enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
- u32 concrete_vfid, u16 opaque_vfid);
-
-/**
- * @brief ecore_sp_vf_update - VF Function Update Ramrod
- *
- * This ramrod performs updates of a virtual function (VF).
- * It currently contains no functionality.
- *
- * @note Final phase API.
- *
- * @param p_hwfn
- *
- * @return enum _ecore_status_t
- */
-
-enum _ecore_status_t ecore_sp_vf_update(struct ecore_hwfn *p_hwfn);
-
-/**
- * @brief ecore_sp_vf_stop - VF Function Stop Ramrod
- *
- * This ramrod is sent to unload a virtual function (VF).
- *
- * @note Final phase API.
- *
- * @param p_hwfn
- * @param concrete_vfid
- * @param opaque_vfid
- *
- * @return enum _ecore_status_t
- */
-
-enum _ecore_status_t ecore_sp_vf_stop(struct ecore_hwfn *p_hwfn,
- u32 concrete_vfid, u16 opaque_vfid);
-
-/**
* @brief ecore_sp_eth_tx_queue_update -
*
* This ramrod updates a TX queue. It is used for setting the active
@@ -98,7 +49,7 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
* @param bd_chain_phys_addr
* @param cqe_pbl_addr
* @param cqe_pbl_size
- * @param leading
+ * @param b_use_zone_a_prod - support legacy VF producers
*
* @return enum _ecore_status_t
*/
@@ -107,13 +58,15 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
u16 opaque_fid,
u32 cid,
u16 rx_queue_id,
+ u8 vf_rx_queue_id,
u8 vport_id,
u8 stats_id,
u16 sb,
u8 sb_index,
u16 bd_max_bytes,
dma_addr_t bd_chain_phys_addr,
- dma_addr_t cqe_pbl_addr, u16 cqe_pbl_size);
+ dma_addr_t cqe_pbl_addr,
+ u16 cqe_pbl_size, bool b_use_zone_a_prod);
/**
* @brief - Starts a Tx queue; Should be used where contexts are handled
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 65a508c..09688eb 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -218,11 +218,12 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
* @param opaque_fid
* @param tx_queue_id TX Queue ID
* @param vport_id VPort ID
- * @param stats_id VPort ID which the queue stats
+ * @param stats_id VPort ID to which the queue stats
 * will be added
* @param sb Status Block of the Function Event Ring
* @param sb_index Index into the status block of the Function
* Event Ring
+ * @param tc traffic class to use with this L2 txq
* @param pbl_addr address of the pbl array
* @param pbl_size number of entries in pbl
* @param pp_doorbell Pointer to place doorbell pointer (May be NULL).
@@ -238,10 +239,10 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
u8 stats_id,
u16 sb,
u8 sb_index,
+ u8 tc,
dma_addr_t pbl_addr,
u16 pbl_size,
- void OSAL_IOMEM * *
- pp_doorbell);
+ void OSAL_IOMEM **pp_doorbell);
/**
* @brief ecore_sp_eth_tx_queue_stop -
@@ -277,6 +278,8 @@ struct ecore_sp_vport_start_params {
u8 vport_id; /* VPORT ID */
u16 mtu; /* VPORT MTU */
bool zero_placement_offset;
+ bool check_mac;
+ bool check_ethtype;
};
/**
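
ecore_sp_eth_tx_queue_start(), declared above, now takes an explicit tc that
ends up in pq_params.eth.tc and selects the QM physical queue the Tx queue
binds to. A hedged, stubbed illustration of the new call-site shape; every
type and the 8-TC bound here are stand-ins, not driver definitions:

/* All types stubbed so the snippet stands alone; illustrative only */
typedef unsigned char u8;
typedef unsigned short u16;
typedef unsigned long long dma_addr_t;

static int demo_tx_queue_start(u8 vport_id, u8 stats_id, u16 sb, u8 sb_index,
			       u8 tc, dma_addr_t pbl_addr, u16 pbl_size)
{
	/* In the driver the new 'tc' lands in pq_params.eth.tc and picks
	 * the QM physical queue; here we only bounds-check it. The 8-TC
	 * limit is an assumption for the demo. */
	(void)vport_id; (void)stats_id; (void)sb; (void)sb_index;
	(void)pbl_addr; (void)pbl_size;
	return tc < 8 ? 0 : -1;
}

int demo_start_default_tc_txq(void)
{
	/* Callers that don't do DCB simply pass tc = 0 */
	return demo_tx_queue_start(0, 0, 0, 0, /* tc */ 0, 0x1000, 16);
}
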
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 5baa5a7..5002ee3 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -15,6 +15,7 @@
#include "ecore_hw.h"
#include "ecore_init_fw_funcs.h"
#include "ecore_sriov.h"
+#include "ecore_vf.h"
#include "ecore_iov_api.h"
#include "ecore_gtt_reg_addr.h"
#include "ecore_iro.h"
@@ -227,7 +228,8 @@ static enum _ecore_status_t ecore_mcp_mb_lock(struct ecore_hwfn *p_hwfn,
if (p_hwfn->mcp_info->block_mb_sending) {
DP_NOTICE(p_hwfn, false,
- "Trying to send a MFW mailbox command [0x%x] in parallel to [UN]LOAD_REQ. Aborting.\n",
+ "Trying to send a MFW mailbox command [0x%x]"
+ " in parallel to [UN]LOAD_REQ. Aborting.\n",
cmd);
OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
return ECORE_BUSY;
@@ -259,6 +261,7 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
delay = EMUL_MCP_RESP_ITER_US;
#endif
+
/* Ensure that only a single thread is accessing the mailbox at a
* certain time.
*/
@@ -292,7 +295,6 @@ enum _ecore_status_t ecore_mcp_reset(struct ecore_hwfn *p_hwfn,
return rc;
}
-/* Should be called while the dedicated spinlock is acquired */
static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 cmd, u32 param,
@@ -300,12 +302,16 @@ static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
u32 *o_mcp_param)
{
u32 delay = CHIP_MCP_RESP_ITER_US;
+ u32 max_retries = ECORE_DRV_MB_MAX_RETRIES;
u32 seq, cnt = 1, actual_mb_seq;
enum _ecore_status_t rc = ECORE_SUCCESS;
#ifndef ASIC_ONLY
if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
delay = EMUL_MCP_RESP_ITER_US;
+ /* There is a built-in delay of 100usec in each MFW response read */
+ if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
+ max_retries /= 10;
#endif
/* Get actual driver mailbox sequence */
@@ -329,10 +335,6 @@ static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
/* Set drv command along with the updated sequence */
DRV_MB_WR(p_hwfn, p_ptt, drv_mb_header, (cmd | seq));
- DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
- "wrote command (%x) to MFW MB param 0x%08x\n",
- (cmd | seq), param);
-
do {
/* Wait for MFW response */
OSAL_UDELAY(delay);
@@ -340,11 +342,7 @@ static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
/* Give the FW up to 5 seconds (500 * 10ms) */
} while ((seq != (*o_mcp_resp & FW_MSG_SEQ_NUMBER_MASK)) &&
- (cnt++ < ECORE_DRV_MB_MAX_RETRIES));
-
- DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
- "[after %d ms] read (%x) seq is (%x) from FW MB\n",
- cnt * delay, *o_mcp_resp, seq);
+ (cnt++ < max_retries));
/* Is this a reply to our command? */
if (seq == (*o_mcp_resp & FW_MSG_SEQ_NUMBER_MASK)) {
@@ -362,9 +360,9 @@ static enum _ecore_status_t ecore_do_mcp_cmd(struct ecore_hwfn *p_hwfn,
return rc;
}
-
static enum _ecore_status_t
-ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ecore_mcp_cmd_and_union(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
struct ecore_mcp_mb_params *p_mb_params)
{
u32 union_data_addr;
@@ -423,6 +421,7 @@ enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
#endif
+
OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
mb_params.cmd = cmd;
mb_params.param = param;
@@ -451,7 +450,7 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
mb_params.cmd = cmd;
mb_params.param = param;
- OSAL_MEMCPY(&union_data.raw_data, i_buf, i_txn_size);
+ OSAL_MEMCPY((u32 *)&union_data.raw_data, i_buf, i_txn_size);
mb_params.p_data_src = &union_data;
rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
if (rc != ECORE_SUCCESS)
@@ -471,30 +470,25 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
u32 *o_mcp_param,
u32 *o_txn_size, u32 *o_buf)
{
+ struct ecore_mcp_mb_params mb_params;
+ union drv_union_data union_data;
enum _ecore_status_t rc;
- u32 i;
-
- /* MCP not initialized */
- if (!ecore_mcp_is_init(p_hwfn)) {
- DP_NOTICE(p_hwfn, true, "MFW is not initialized !\n");
- return ECORE_BUSY;
- }
- OSAL_SPIN_LOCK(&p_hwfn->mcp_info->lock);
- rc = ecore_do_mcp_cmd(p_hwfn, p_ptt, cmd, param, o_mcp_resp,
- o_mcp_param);
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = cmd;
+ mb_params.param = param;
+ mb_params.p_data_dst = &union_data;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
if (rc != ECORE_SUCCESS)
- goto out;
+ return rc;
+
+ *o_mcp_resp = mb_params.mcp_resp;
+ *o_mcp_param = mb_params.mcp_param;
- /* Get payload after operation completes successfully */
*o_txn_size = *o_mcp_param;
- for (i = 0; i < *o_txn_size; i += 4)
- o_buf[i / sizeof(u32)] = DRV_MB_RD(p_hwfn, p_ptt,
- union_data.raw_data[i]);
+ OSAL_MEMCPY(o_buf, (u32 *)&union_data.raw_data, *o_txn_size);
-out:
- OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);
- return rc;
+ return ECORE_SUCCESS;
}
#ifndef ASIC_ONLY
@@ -532,7 +526,6 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
struct ecore_dev *p_dev = p_hwfn->p_dev;
struct ecore_mcp_mb_params mb_params;
union drv_union_data union_data;
- u32 param;
enum _ecore_status_t rc;
#ifndef ASIC_ONLY
@@ -556,6 +549,8 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
return rc;
}
+ *p_load_code = mb_params.mcp_resp;
+
/* If MFW refused (e.g. other port is in diagnostic mode) we
* must abort. This can happen in the following cases:
* - Other port is in diagnostic mode
@@ -617,7 +612,6 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
MCP_PF_ID(p_hwfn));
struct ecore_mcp_mb_params mb_params;
union drv_union_data union_data;
- u32 resp, param;
enum _ecore_status_t rc;
int i;
@@ -630,9 +624,10 @@ enum _ecore_status_t ecore_mcp_ack_vf_flr(struct ecore_hwfn *p_hwfn,
mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
OSAL_MEMCPY(&union_data.ack_vf_disabled, vfs_to_ack, VF_MAX_STATIC / 8);
mb_params.p_data_src = &union_data;
- rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt,
+ &mb_params);
if (rc != ECORE_SUCCESS) {
- DP_NOTICE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IOV),
+ DP_NOTICE(p_hwfn, false,
"Failed to pass ACK for VF flr to MFW\n");
return ECORE_TIMEOUT;
}
@@ -673,9 +668,11 @@ static void ecore_mcp_handle_transceiver_change(struct ecore_hwfn *p_hwfn,
}
static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, bool b_reset)
+ struct ecore_ptt *p_ptt,
+ bool b_reset)
{
struct ecore_mcp_link_state *p_link;
+ u8 max_bw, min_bw;
u32 status = 0;
p_link = &p_hwfn->mcp_info->link_output;
@@ -739,23 +736,18 @@ static void ecore_mcp_handle_link_change(struct ecore_hwfn *p_hwfn,
else
p_link->line_speed = 0;
- /* Correct speed according to bandwidth allocation */
- if (p_hwfn->mcp_info->func_info.bandwidth_max && p_link->speed) {
- u8 max_bw = p_hwfn->mcp_info->func_info.bandwidth_max;
+ max_bw = p_hwfn->mcp_info->func_info.bandwidth_max;
+ min_bw = p_hwfn->mcp_info->func_info.bandwidth_min;
+ /* Max bandwidth configuration */
__ecore_configure_pf_max_bandwidth(p_hwfn, p_ptt,
p_link, max_bw);
- }
-
- if (p_hwfn->mcp_info->func_info.bandwidth_min && p_link->speed) {
- u8 min_bw = p_hwfn->mcp_info->func_info.bandwidth_min;
+ /* Min bandwidth configuration */
__ecore_configure_pf_min_bandwidth(p_hwfn, p_ptt,
p_link, min_bw);
-
ecore_configure_vp_wfq_on_link_change(p_hwfn->p_dev,
p_link->min_pf_rate);
- }
p_link->an = !!(status & LINK_STATUS_AUTO_NEGOTIATE_ENABLED);
p_link->an_complete = !!(status & LINK_STATUS_AUTO_NEGOTIATE_COMPLETE);
@@ -822,8 +814,8 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
struct ecore_mcp_mb_params mb_params;
union drv_union_data union_data;
struct pmm_phy_cfg *p_phy_cfg;
- u32 param = 0, reply = 0, cmd;
enum _ecore_status_t rc = ECORE_SUCCESS;
+ u32 cmd;
#ifndef ASIC_ONLY
if (CHIP_REV_IS_EMUL(p_hwfn->p_dev))
@@ -841,16 +833,6 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
p_phy_cfg->pause |= (params->pause.forced_tx) ? PMM_PAUSE_TX : 0;
p_phy_cfg->adv_speed = params->speed.advertised_speeds;
p_phy_cfg->loopback_mode = params->loopback_mode;
-
-#ifndef ASIC_ONLY
- if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
- DP_INFO(p_hwfn,
- "Link on FPGA - Ask for loopback mode '5' at 10G\n");
- p_phy_cfg->loopback_mode = 5;
- p_phy_cfg->speed = 10000;
- }
-#endif
-
p_hwfn->b_drv_link_init = b_up;
if (b_up)
@@ -945,8 +927,8 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
enum ecore_mcp_protocol_type stats_type;
union ecore_mcp_protocol_stats stats;
struct ecore_mcp_mb_params mb_params;
- u32 hsi_param, param = 0, reply = 0;
union drv_union_data union_data;
+ u32 hsi_param;
switch (type) {
case MFW_DRV_MSG_GET_LAN_STATS:
@@ -968,26 +950,6 @@ static void ecore_mcp_send_protocol_stats(struct ecore_hwfn *p_hwfn,
ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
}
-static u32 ecore_mcp_get_shmem_func(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct public_func *p_data, int pfid)
-{
- u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
- PUBLIC_FUNC);
- u32 mfw_path_offsize = ecore_rd(p_hwfn, p_ptt, addr);
- u32 func_addr = SECTION_ADDR(mfw_path_offsize, pfid);
- u32 i, size;
-
- OSAL_MEM_ZERO(p_data, sizeof(*p_data));
-
- size = OSAL_MIN_T(u32, sizeof(*p_data), SECTION_SIZE(mfw_path_offsize));
- for (i = 0; i < size / sizeof(u32); i++)
- ((u32 *)p_data)[i] = ecore_rd(p_hwfn, p_ptt,
- func_addr + (i << 2));
-
- return size;
-}
-
static void
ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
struct public_func *p_shmem_info)
@@ -1023,6 +985,28 @@ ecore_read_pf_bandwidth(struct ecore_hwfn *p_hwfn,
}
}
+static u32 ecore_mcp_get_shmem_func(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct public_func *p_data,
+ int pfid)
+{
+ u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+ PUBLIC_FUNC);
+ u32 mfw_path_offsize = ecore_rd(p_hwfn, p_ptt, addr);
+ u32 func_addr = SECTION_ADDR(mfw_path_offsize, pfid);
+ u32 i, size;
+
+ OSAL_MEM_ZERO(p_data, sizeof(*p_data));
+
+ size = OSAL_MIN_T(u32, sizeof(*p_data),
+ SECTION_SIZE(mfw_path_offsize));
+ for (i = 0; i < size / sizeof(u32); i++)
+ ((u32 *)p_data)[i] = ecore_rd(p_hwfn, p_ptt,
+ func_addr + (i << 2));
+
+ return size;
+}
+
static void
ecore_mcp_update_bw(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
{
@@ -1102,6 +1086,9 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
ecore_dcbx_mib_update_event(p_hwfn, p_ptt,
ECORE_DCBX_OPERATIONAL_MIB);
break;
+ case MFW_DRV_MSG_TRANSCEIVER_STATE_CHANGE:
+ ecore_mcp_handle_transceiver_change(p_hwfn, p_ptt);
+ break;
case MFW_DRV_MSG_ERROR_RECOVERY:
ecore_mcp_handle_process_kill(p_hwfn, p_ptt);
break;
@@ -1114,9 +1101,6 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
case MFW_DRV_MSG_BW_UPDATE:
ecore_mcp_update_bw(p_hwfn, p_ptt);
break;
- case MFW_DRV_MSG_TRANSCEIVER_STATE_CHANGE:
- ecore_mcp_handle_transceiver_change(p_hwfn, p_ptt);
- break;
case MFW_DRV_MSG_FAILURE_DETECTED:
ecore_mcp_handle_fan_failure(p_hwfn, p_ptt);
break;
@@ -1152,34 +1136,33 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
return rc;
}
-enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_dev *p_dev,
+enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 *p_mfw_ver,
u32 *p_running_bundle_id)
{
- struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
u32 global_offsize;
#ifndef ASIC_ONLY
- if (CHIP_REV_IS_EMUL(p_dev)) {
- DP_NOTICE(p_dev, false, "Emulation - can't get MFW version\n");
+ if (CHIP_REV_IS_EMUL(p_hwfn->p_dev)) {
+ DP_NOTICE(p_hwfn, false, "Emulation - can't get MFW version\n");
return ECORE_SUCCESS;
}
#endif
- if (IS_VF(p_dev)) {
+ if (IS_VF(p_hwfn->p_dev)) {
if (p_hwfn->vf_iov_info) {
struct pfvf_acquire_resp_tlv *p_resp;
p_resp = &p_hwfn->vf_iov_info->acquire_resp;
*p_mfw_ver = p_resp->pfdev_info.mfw_ver;
return ECORE_SUCCESS;
- }
-
- DP_VERBOSE(p_dev, ECORE_MSG_IOV,
- "VF requested MFW vers prior to ACQUIRE\n");
+ } else {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "VF requested MFW version prior to ACQUIRE\n");
return ECORE_INVAL;
}
+ }
global_offsize = ecore_rd(p_hwfn, p_ptt,
SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->
@@ -1280,18 +1263,25 @@ enum _ecore_status_t ecore_mcp_fill_shmem_func_info(struct ecore_hwfn *p_hwfn,
DP_NOTICE(p_hwfn, false, "MAC is 0 in shmem\n");
}
+ /* TODO - are these calculations true for BE machine? */
+ info->wwn_port = (u64)shmem_info.fcoe_wwn_port_name_upper |
+ (((u64)shmem_info.fcoe_wwn_port_name_lower) << 32);
+ info->wwn_node = (u64)shmem_info.fcoe_wwn_node_name_upper |
+ (((u64)shmem_info.fcoe_wwn_node_name_lower) << 32);
+
info->ovlan = (u16)(shmem_info.ovlan_stag & FUNC_MF_CFG_OV_STAG_MASK);
DP_VERBOSE(p_hwfn, (ECORE_MSG_SP | ECORE_MSG_IFUP),
"Read configuration from shmem: pause_on_host %02x"
" protocol %02x BW [%02x - %02x]"
- " MAC %02x:%02x:%02x:%02x:%02x:%02x wwn port %" PRIx64
- " node %" PRIx64 " ovlan %04x\n",
+ " MAC %02x:%02x:%02x:%02x:%02x:%02x wwn port %lx"
+ " node %lx ovlan %04x\n",
info->pause_on_host, info->protocol,
info->bandwidth_min, info->bandwidth_max,
info->mac[0], info->mac[1], info->mac[2],
info->mac[3], info->mac[4], info->mac[5],
- info->wwn_port, info->wwn_node, info->ovlan);
+ (unsigned long)info->wwn_port,
+ (unsigned long)info->wwn_node, info->ovlan);
return ECORE_SUCCESS;
}
@@ -1331,14 +1321,14 @@ struct ecore_mcp_link_capabilities
enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
- enum _ecore_status_t rc;
u32 resp = 0, param = 0;
+ enum _ecore_status_t rc;
rc = ecore_mcp_cmd(p_hwfn, p_ptt,
- DRV_MSG_CODE_NIG_DRAIN, 100, &resp, &param);
+ DRV_MSG_CODE_NIG_DRAIN, 1000, &resp, &param);
/* Wait for the drain to complete before returning */
- OSAL_MSLEEP(120);
+ OSAL_MSLEEP(1020);
return rc;
}
@@ -1464,6 +1454,12 @@ enum _ecore_status_t ecore_mcp_config_vf_msix(struct ecore_hwfn *p_hwfn,
u32 resp = 0, param = 0, rc_param = 0;
enum _ecore_status_t rc;
+/* Only the leader hwfn can configure MSI-X; CMT must be taken into account */
+
+ if (!IS_LEAD_HWFN(p_hwfn))
+ return ECORE_SUCCESS;
+ num *= p_hwfn->p_dev->num_hwfns;
+
param |= (vf_id << DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_SHIFT) &
DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK;
param |= (num << DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_SHIFT) &
@@ -1476,6 +1472,10 @@ enum _ecore_status_t ecore_mcp_config_vf_msix(struct ecore_hwfn *p_hwfn,
DP_NOTICE(p_hwfn, true, "VF[%d]: MFW failed to set MSI-X\n",
vf_id);
rc = ECORE_INVAL;
+ } else {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "Requested 0x%02x MSI-x interrupts from VF 0x%02x\n",
+ num, vf_id);
}
return rc;
@@ -1485,10 +1485,10 @@ enum _ecore_status_t
ecore_mcp_send_drv_version(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
struct ecore_mcp_drv_version *p_ver)
{
- u32 param = 0, reply = 0, num_words, i;
struct drv_version_stc *p_drv_version;
struct ecore_mcp_mb_params mb_params;
union drv_union_data union_data;
+ u32 num_words, i;
void *p_name;
OSAL_BE32 val;
enum _ecore_status_t rc;
@@ -1705,6 +1705,14 @@ enum _ecore_status_t ecore_mcp_nvm_read(struct ecore_dev *p_dev, u32 addr,
DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
break;
}
+
+ /* This can be a lengthy process, and it's possible the scheduler
+ * isn't preemptible. Sleep a bit to prevent CPU hogging.
+ */
+ if (bytes_left % 0x1000 <
+ (bytes_left - *params.nvm_rd.buf_size) % 0x1000)
+ OSAL_MSLEEP(1);
+
offset += *params.nvm_rd.buf_size;
bytes_left -= *params.nvm_rd.buf_size;
}
@@ -1842,6 +1850,13 @@ enum _ecore_status_t ecore_mcp_nvm_write(struct ecore_dev *p_dev, u32 cmd,
FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK)))
DP_NOTICE(p_dev, false, "MCP command rc = %d\n", rc);
+ /* This can be a lengthy process, and it's possible the scheduler
+ * isn't preemptible. Sleep a bit to prevent CPU hogging.
+ */
+ if (buf_idx % 0x1000 >
+ (buf_idx + buf_size) % 0x1000)
+ OSAL_MSLEEP(1);
+
buf_idx += buf_size;
}
@@ -1912,10 +1927,9 @@ enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
u32 bytes_left, bytes_to_copy, buf_size;
OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
- SET_FIELD(params.nvm_common.offset,
- DRV_MB_PARAM_TRANSCEIVER_PORT, port);
- SET_FIELD(params.nvm_common.offset,
- DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS, addr);
+ params.nvm_common.offset =
+ (port << DRV_MB_PARAM_TRANSCEIVER_PORT_SHIFT) |
+ (addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_SHIFT);
addr = offset;
offset = 0;
bytes_left = len;
@@ -1926,10 +1940,14 @@ enum _ecore_status_t ecore_mcp_phy_sfp_read(struct ecore_hwfn *p_hwfn,
bytes_to_copy = OSAL_MIN_T(u32, bytes_left,
MAX_I2C_TRANSACTION_SIZE);
params.nvm_rd.buf = (u32 *)(p_buf + offset);
- SET_FIELD(params.nvm_common.offset,
- DRV_MB_PARAM_TRANSCEIVER_OFFSET, addr + offset);
- SET_FIELD(params.nvm_common.offset,
- DRV_MB_PARAM_TRANSCEIVER_SIZE, bytes_to_copy);
+ params.nvm_common.offset &=
+ (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
+ DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
+ params.nvm_common.offset |=
+ ((addr + offset) <<
+ DRV_MB_PARAM_TRANSCEIVER_OFFSET_SHIFT);
+ params.nvm_common.offset |=
+ (bytes_to_copy << DRV_MB_PARAM_TRANSCEIVER_SIZE_SHIFT);
rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
if ((params.nvm_common.resp & FW_MSG_CODE_MASK) ==
FW_MSG_CODE_TRANSCEIVER_NOT_PRESENT) {
@@ -1955,20 +1973,23 @@ enum _ecore_status_t ecore_mcp_phy_sfp_write(struct ecore_hwfn *p_hwfn,
u32 buf_idx, buf_size;
OSAL_MEMSET(&params, 0, sizeof(struct ecore_mcp_nvm_params));
- SET_FIELD(params.nvm_common.offset,
- DRV_MB_PARAM_TRANSCEIVER_PORT, port);
- SET_FIELD(params.nvm_common.offset,
- DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS, addr);
+ params.nvm_common.offset =
+ (port << DRV_MB_PARAM_TRANSCEIVER_PORT_SHIFT) |
+ (addr << DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_SHIFT);
params.type = ECORE_MCP_NVM_WR;
params.nvm_common.cmd = DRV_MSG_CODE_TRANSCEIVER_WRITE;
buf_idx = 0;
while (buf_idx < len) {
buf_size = OSAL_MIN_T(u32, (len - buf_idx),
MAX_I2C_TRANSACTION_SIZE);
- SET_FIELD(params.nvm_common.offset,
- DRV_MB_PARAM_TRANSCEIVER_OFFSET, offset + buf_idx);
- SET_FIELD(params.nvm_common.offset,
- DRV_MB_PARAM_TRANSCEIVER_SIZE, buf_size);
+ params.nvm_common.offset &=
+ (DRV_MB_PARAM_TRANSCEIVER_I2C_ADDRESS_MASK |
+ DRV_MB_PARAM_TRANSCEIVER_PORT_MASK);
+ params.nvm_common.offset |=
+ ((offset + buf_idx) <<
+ DRV_MB_PARAM_TRANSCEIVER_OFFSET_SHIFT);
+ params.nvm_common.offset |=
+ (buf_size << DRV_MB_PARAM_TRANSCEIVER_SIZE_SHIFT);
params.nvm_wr.buf_size = buf_size;
params.nvm_wr.buf = (u32 *)&p_buf[buf_idx];
rc = ecore_mcp_nvm_command(p_hwfn, p_ptt, &params);
@@ -1992,11 +2013,14 @@ enum _ecore_status_t ecore_mcp_gpio_read(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t rc = ECORE_SUCCESS;
u32 drv_mb_param = 0, rsp;
- SET_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_NUMBER, gpio);
+ drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_SHIFT);
rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_READ,
drv_mb_param, &rsp, gpio_val);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
return ECORE_UNKNOWN_ERROR;
@@ -2010,12 +2034,15 @@ enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t rc = ECORE_SUCCESS;
u32 drv_mb_param = 0, param, rsp;
- SET_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_NUMBER, gpio);
- SET_FIELD(drv_mb_param, DRV_MB_PARAM_GPIO_VALUE, gpio_val);
+ drv_mb_param = (gpio << DRV_MB_PARAM_GPIO_NUMBER_SHIFT) |
+ (gpio_val << DRV_MB_PARAM_GPIO_VALUE_SHIFT);
rc = ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_WRITE,
drv_mb_param, &rsp, &param);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
if ((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_GPIO_OK)
return ECORE_UNKNOWN_ERROR;
@@ -2143,78 +2170,6 @@ enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
return rc;
}
-enum _ecore_status_t ecore_mcp_get_nvm_image(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum ecore_nvm_images image_id,
- char *p_buffer, u16 buffer_len)
-{
- struct bist_nvm_image_att image_att;
- /* enum nvm_image_type type; */ /* @DPDK */
- u32 type;
- u32 num_images, i;
- enum _ecore_status_t rc;
-
- OSAL_MEM_ZERO(p_buffer, buffer_len);
-
- /* Translate image_id into MFW definitions */
- switch (image_id) {
- case ECORE_NVM_IMAGE_ISCSI_CFG:
- /* @DPDK */
- type = 0x1d; /* NVM_TYPE_ISCSI_CFG; */
- break;
- case ECORE_NVM_IMAGE_FCOE_CFG:
- type = 0x1f; /* NVM_TYPE_FCOE_CFG; */
- break;
- default:
- DP_NOTICE(p_hwfn, false, "Unknown request of image_id %08x\n",
- image_id);
- return ECORE_INVAL;
- }
-
- /* Learn number of images, then traverse and see if one fits */
- rc = ecore_mcp_bist_nvm_test_get_num_images(p_hwfn, p_ptt,
- &num_images);
- if ((rc != ECORE_SUCCESS) || (!num_images))
- return ECORE_INVAL;
-
- for (i = 0; i < num_images; i++) {
- rc = ecore_mcp_bist_nvm_test_get_image_att(p_hwfn, p_ptt,
- &image_att, i);
- if (rc != ECORE_SUCCESS)
- return ECORE_INVAL;
-
- if (type == image_att.image_type)
- break;
- }
- if (i == num_images) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
- "Failed to find nvram image of type %08x\n",
- image_id);
- return ECORE_INVAL;
- }
-
- /* Validate sizes - both the image's and the supplied buffer's */
- if (image_att.len <= 4) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
- "Image [%d] is too small - only %d bytes\n",
- image_id, image_att.len);
- return ECORE_INVAL;
- }
-
- /* Each NVM image is suffixed by CRC; Upper-layer has no need for it */
- image_att.len -= 4;
-
- if (image_att.len > buffer_len) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_STORAGE,
- "Image [%d] is too big - %08x bytes where only %08x are available\n",
- image_id, image_att.len, buffer_len);
- return ECORE_NOMEM;
- }
-
- return ecore_mcp_nvm_read(p_hwfn->p_dev, image_att.nvm_start_addr,
- (u8 *)p_buffer, image_att.len);
-}
-
enum _ecore_status_t
ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
@@ -2245,9 +2200,9 @@ ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
val = p_mfw_temp_info->sensor[i];
p_temp_sensor = &p_temp_info->sensors[i];
p_temp_sensor->sensor_location = (val & SENSOR_LOCATION_MASK) >>
- SENSOR_LOCATION_SHIFT;
+ SENSOR_LOCATION_SHIFT;
p_temp_sensor->threshold_high = (val & THRESHOLD_HIGH_MASK) >>
- THRESHOLD_HIGH_SHIFT;
+ THRESHOLD_HIGH_SHIFT;
p_temp_sensor->critical = (val & CRITICAL_TEMPERATURE_MASK) >>
CRITICAL_TEMPERATURE_SHIFT;
p_temp_sensor->current_temp = (val & CURRENT_TEMP_MASK) >>
@@ -2297,12 +2252,12 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
0, &rsp, (u32 *)num_events);
}
-#define ECORE_RESC_ALLOC_VERSION_MAJOR 1
-#define ECORE_RESC_ALLOC_VERSION_MINOR 0
-#define ECORE_RESC_ALLOC_VERSION \
- ((ECORE_RESC_ALLOC_VERSION_MAJOR << \
- DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT) | \
- (ECORE_RESC_ALLOC_VERSION_MINOR << \
+#define ECORE_RESC_ALLOC_VERSION_MAJOR 1
+#define ECORE_RESC_ALLOC_VERSION_MINOR 0
+#define ECORE_RESC_ALLOC_VERSION \
+ ((ECORE_RESC_ALLOC_VERSION_MAJOR << \
+ DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT) | \
+ (ECORE_RESC_ALLOC_VERSION_MINOR << \
DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT))
enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
@@ -2328,7 +2283,8 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
*p_mcp_param = mb_params.mcp_param;
DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
- "MFW resource_info: version 0x%x, res_id 0x%x, size 0x%x, offset 0x%x, vf_size 0x%x, vf_offset 0x%x, flags 0x%x\n",
+ "MFW resource_info: version 0x%x, res_id 0x%x, size 0x%x,"
+ " offset 0x%x, vf_size 0x%x, vf_offset 0x%x, flags 0x%x\n",
*p_mcp_param, p_resc_info->res_id, p_resc_info->size,
p_resc_info->offset, p_resc_info->vf_size,
p_resc_info->vf_offset, p_resc_info->flags);
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index e055835..8a2c84c 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -25,27 +25,24 @@
rel_pfid)
#define MCP_PF_ID(p_hwfn) MCP_PF_ID_BY_REL(p_hwfn, (p_hwfn)->rel_pf_id)
-/* TODO - this is only correct as long as only BB is supported, and
- * no port-swapping is implemented; Afterwards we'll need to fix it.
- */
#define MFW_PORT(_p_hwfn) ((_p_hwfn)->abs_pf_id % \
- ((_p_hwfn)->p_dev->num_ports_in_engines * 2))
+ ((_p_hwfn)->p_dev->num_ports_in_engines * \
+ ecore_device_num_engines((_p_hwfn)->p_dev)))
+
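As an aside, a minimal stand-alone sketch of how the reworked MFW_PORT mapping behaves; the engine/port counts below are assumed example values, not taken from this patch:

/* Illustrative only: with 2 ports per engine and 2 engines (assumed),
 * MFW ports wrap every 4 PFs.
 */
#include <stdio.h>

int main(void)
{
	unsigned int ports_in_engine = 2;	/* assumed */
	unsigned int num_engines = 2;		/* assumed */
	unsigned int abs_pf_id;

	for (abs_pf_id = 0; abs_pf_id < 8; abs_pf_id++)
		printf("abs_pf_id %u -> MFW port %u\n", abs_pf_id,
		       abs_pf_id % (ports_in_engine * num_engines));
	return 0;
}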
struct ecore_mcp_info {
osal_spinlock_t lock; /* Spinlock used for accessing MCP mailbox */
-
/* Flag to indicate whether sending a MFW mailbox is forbidden */
bool block_mb_sending;
-
u32 public_base; /* Address of the MCP public area */
u32 drv_mb_addr; /* Address of the driver mailbox */
u32 mfw_mb_addr; /* Address of the MFW mailbox */
u32 port_addr; /* Address of the port configuration (link) */
u16 drv_mb_seq; /* Current driver mailbox sequence */
u16 drv_pulse_seq; /* Current driver pulse sequence */
- struct ecore_mcp_link_params link_input;
- struct ecore_mcp_link_state link_output;
+ struct ecore_mcp_link_params link_input;
+ struct ecore_mcp_link_state link_output;
struct ecore_mcp_link_capabilities link_capabilities;
- struct ecore_mcp_function_info func_info;
+ struct ecore_mcp_function_info func_info;
u8 *mfw_mb_cur;
u8 *mfw_mb_shadow;
@@ -153,7 +150,8 @@ enum _ecore_status_t ecore_mcp_load_req(struct ecore_hwfn *p_hwfn,
* @param p_hwfn
* @param p_ptt
*/
-void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
+void ecore_mcp_read_mb(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
/**
* @brief Ack to mfw that driver finished FLR process for VFs
@@ -211,7 +209,8 @@ enum _ecore_status_t ecore_mcp_nvm_wr_cmd(struct ecore_hwfn *p_hwfn,
u32 param,
u32 *o_mcp_resp,
u32 *o_mcp_param,
- u32 i_txn_size, u32 *i_buf);
+ u32 i_txn_size,
+ u32 *i_buf);
/**
* @brief - Sends an NVM read command request to the MFW to get
@@ -235,7 +234,8 @@ enum _ecore_status_t ecore_mcp_nvm_rd_cmd(struct ecore_hwfn *p_hwfn,
u32 param,
u32 *o_mcp_resp,
u32 *o_mcp_param,
- u32 *o_txn_size, u32 *o_buf);
+ u32 *o_txn_size,
+ u32 *o_buf);
/**
* @brief indicates whether the MFW objects [under mcp_info] are accessible
@@ -292,4 +292,9 @@ int __ecore_configure_pf_min_bandwidth(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_mcp_mask_parities(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 mask_parities);
+enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct resource_info *p_resc_info,
+ u32 *p_mcp_resp, u32 *p_mcp_param);
+
#endif /* __ECORE_MCP_H__ */
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index f21f87a..ff4f1ca 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -31,6 +31,8 @@ struct ecore_mcp_link_params {
struct ecore_mcp_link_capabilities {
u32 speed_capabilities;
+ bool default_speed_autoneg;
+ u32 default_speed; /* In Mb/s */
};
struct ecore_mcp_link_state {
@@ -115,10 +117,12 @@ struct ecore_mcp_nvm_params {
};
};
+#ifndef __EXTRACT__LINUX__
enum ecore_nvm_images {
ECORE_NVM_IMAGE_ISCSI_CFG,
ECORE_NVM_IMAGE_FCOE_CFG,
};
+#endif
struct ecore_mcp_drv_version {
u32 version;
@@ -133,13 +137,39 @@ struct ecore_mcp_lan_stats {
#ifndef ECORE_PROTO_STATS
#define ECORE_PROTO_STATS
+struct ecore_mcp_fcoe_stats {
+ u64 rx_pkts;
+ u64 tx_pkts;
+ u32 fcs_err;
+ u32 login_failure;
+};
+
+struct ecore_mcp_iscsi_stats {
+ u64 rx_pdus;
+ u64 tx_pdus;
+ u64 rx_bytes;
+ u64 tx_bytes;
+};
+
+struct ecore_mcp_rdma_stats {
+ u64 rx_pkts;
+ u64 tx_pkts;
+ u64 rx_bytes;
+ u64 tx_bytes;
+};
enum ecore_mcp_protocol_type {
ECORE_MCP_LAN_STATS,
+ ECORE_MCP_FCOE_STATS,
+ ECORE_MCP_ISCSI_STATS,
+ ECORE_MCP_RDMA_STATS
};
union ecore_mcp_protocol_stats {
struct ecore_mcp_lan_stats lan_stats;
+ struct ecore_mcp_fcoe_stats fcoe_stats;
+ struct ecore_mcp_iscsi_stats iscsi_stats;
+ struct ecore_mcp_rdma_stats rdma_stats;
};
#endif
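A hedged usage sketch for the extended stats union; the helper below is hypothetical, and the lan_stats field name is an assumption, not shown in this hunk:

/* Hypothetical dispatch helper: pick the union member matching the
 * requested protocol type.
 */
static u64 example_get_rx_count(enum ecore_mcp_protocol_type type,
				union ecore_mcp_protocol_stats *stats)
{
	switch (type) {
	case ECORE_MCP_FCOE_STATS:
		return stats->fcoe_stats.rx_pkts;
	case ECORE_MCP_ISCSI_STATS:
		return stats->iscsi_stats.rx_pdus;
	case ECORE_MCP_RDMA_STATS:
		return stats->rdma_stats.rx_pkts;
	case ECORE_MCP_LAN_STATS:
	default:
		return stats->lan_stats.ucast_rx_pkts; /* assumed field */
	}
}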
@@ -183,7 +213,7 @@ struct ecore_temperature_sensor {
u8 current_temp;
};
-#define ECORE_MAX_NUM_OF_SENSORS 7
+#define ECORE_MAX_NUM_OF_SENSORS 7
struct ecore_temperature_info {
u32 num_sensors;
struct ecore_temperature_sensor sensors[ECORE_MAX_NUM_OF_SENSORS];
@@ -243,19 +273,20 @@ struct ecore_mcp_link_capabilities
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, bool b_up);
+ struct ecore_ptt *p_ptt,
+ bool b_up);
/**
* @brief Get the management firmware version value
*
- * @param p_dev - ecore dev pointer
+ * @param p_hwfn
* @param p_ptt
* @param p_mfw_ver - mfw version value
* @param p_running_bundle_id - image id in nvram; Optional.
*
* @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
*/
-enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_dev *p_dev,
+enum _ecore_status_t ecore_mcp_get_mfw_ver(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 *p_mfw_ver,
u32 *p_running_bundle_id);
@@ -301,6 +332,7 @@ enum _ecore_status_t ecore_mcp_cmd(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt);
+#ifndef LINUX_REMOVE
/**
* @brief - return the mcp function info of the hw function
*
@@ -310,6 +342,7 @@ enum _ecore_status_t ecore_mcp_drain(struct ecore_hwfn *p_hwfn,
*/
const struct ecore_mcp_function_info
*ecore_mcp_get_function_info(struct ecore_hwfn *p_hwfn);
+#endif
/**
* @brief - Function for reading/manipulating the nvram. Following are supported
@@ -349,6 +382,7 @@ enum _ecore_status_t ecore_mcp_nvm_command(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_mcp_nvm_params *params);
+#ifndef LINUX_REMOVE
/**
* @brief - count number of functions with a matching personality on engine.
*
@@ -360,7 +394,9 @@ enum _ecore_status_t ecore_mcp_nvm_command(struct ecore_hwfn *p_hwfn,
* the bitmasks.
*/
int ecore_mcp_get_personality_cnt(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 personalities);
+ struct ecore_ptt *p_ptt,
+ u32 personalities);
+#endif
/**
* @brief Get the flash size value
@@ -539,7 +575,8 @@ enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
*
* @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
*/
-enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev, u32 addr);
+enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev,
+ u32 addr);
/**
* @brief Check latest response
@@ -641,6 +678,7 @@ enum _ecore_status_t ecore_mcp_gpio_read(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_mcp_gpio_write(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u16 gpio, u16 gpio_val);
+
/**
* @brief Gpio get information
*
@@ -686,14 +724,14 @@ enum _ecore_status_t ecore_mcp_bist_clock_test(struct ecore_hwfn *p_hwfn,
* @param p_hwfn - hw function
* @param p_ptt - PTT required for register access
* @param num_images - number of images if operation was
- * successful. 0 if not.
+ * successful. 0 if not.
*
* @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
*/
-enum _ecore_status_t
-ecore_mcp_bist_nvm_test_get_num_images(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 *num_images);
+enum _ecore_status_t ecore_mcp_bist_nvm_test_get_num_images(
+ struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u32 *num_images);
/**
* @brief Bist nvm test - get image attributes by index
@@ -705,11 +743,11 @@ ecore_mcp_bist_nvm_test_get_num_images(struct ecore_hwfn *p_hwfn,
*
* @return enum _ecore_status_t - ECORE_SUCCESS - operation was successful.
*/
-enum _ecore_status_t
-ecore_mcp_bist_nvm_test_get_image_att(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct bist_nvm_image_att *p_image_att,
- u32 image_index);
+enum _ecore_status_t ecore_mcp_bist_nvm_test_get_image_att(
+ struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct bist_nvm_image_att *p_image_att,
+ u32 image_index);
/**
* @brief ecore_mcp_get_temperature_info - get the status of the temperature
@@ -726,6 +764,7 @@ enum _ecore_status_t
ecore_mcp_get_temperature_info(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_temperature_info *p_temp_info);
+
/**
* @brief Get MBA versions - get MBA sub images versions
*
@@ -753,14 +792,4 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u64 *num_events);
-/**
- * @brief Sets whether a critical error notification from the MFW is acked, or
- * is it being ignored and thus allowing the MFW crash dump.
- *
- * @param p_dev
- * @param mdump_enable
- *
- */
-void ecore_mcp_mdump_enable(struct ecore_dev *p_dev, bool mdump_enable);
-
#endif
diff --git a/drivers/net/qede/base/ecore_proto_if.h b/drivers/net/qede/base/ecore_proto_if.h
index bcbd9f0..e252d52 100644
--- a/drivers/net/qede/base/ecore_proto_if.h
+++ b/drivers/net/qede/base/ecore_proto_if.h
@@ -13,6 +13,8 @@
* PF parameters (according to personality/protocol)
*/
+#define ECORE_ROCE_PROTOCOL_INDEX (3)
+
struct ecore_eth_pf_params {
/* The following parameters are used during HW-init
* and these parameters need to be passed as arguments
@@ -21,8 +23,65 @@ struct ecore_eth_pf_params {
u16 num_cons;
};
+/* Most of the parameters below are described in the FW iSCSI / TCP HSI */
+struct ecore_iscsi_pf_params {
+ u64 glbl_q_params_addr;
+ u64 bdq_pbl_base_addr[2];
+ u16 cq_num_entries;
+ u16 cmdq_num_entries;
+ u32 two_msl_timer;
+ u16 tx_sws_timer;
+ /* The following parameters are used during HW-init
+ * and these parameters need to be passed as arguments
+ * to update_pf_params routine invoked before slowpath start
+ */
+ u16 num_cons;
+ u16 num_tasks;
+
+ /* The following parameters are used during protocol-init */
+ u16 half_way_close_timeout;
+ u16 bdq_xoff_threshold[2];
+ u16 bdq_xon_threshold[2];
+ u16 cmdq_xoff_threshold;
+ u16 cmdq_xon_threshold;
+ u16 rq_buffer_size;
+
+ u8 num_sq_pages_in_ring;
+ u8 num_r2tq_pages_in_ring;
+ u8 num_uhq_pages_in_ring;
+ u8 num_queues;
+ u8 log_page_size;
+ u8 rqe_log_size;
+ u8 max_fin_rt;
+ u8 gl_rq_pi;
+ u8 gl_cmd_pi;
+ u8 debug_mode;
+ u8 ll2_ooo_queue_id;
+ u8 ooo_enable;
+
+ u8 is_target;
+ u8 bdq_pbl_num_entries[2];
+};
+
+struct ecore_rdma_pf_params {
+ /* Supplied to ECORE during resource allocation (may affect the ILT and
+ * the doorbell BAR).
+ */
+ u32 min_dpis; /* number of requested DPIs */
+ u32 num_mrs; /* number of requested memory regions */
+ u32 num_qps; /* number of requested Queue Pairs */
+ u32 num_srqs; /* number of requested SRQs */
+ u8 roce_edpm_mode; /* see QED_ROCE_EDPM_MODE_ENABLE */
+ u8 gl_pi; /* protocol index */
+
+ /* Will allocate rate limiters to be used with QPs */
+ u8 enable_dcqcn;
+};
+
struct ecore_pf_params {
struct ecore_eth_pf_params eth_pf_params;
+ struct ecore_iscsi_pf_params iscsi_pf_params;
+ struct ecore_rdma_pf_params rdma_pf_params;
};
#endif
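To make the intent of the new structures concrete, a hedged sketch of a caller filling them before slowpath start; the function name and values are illustrative only, not part of this patch:

/* Sketch: populate PF params ahead of HW init. Example values only;
 * the surrounding init flow is not shown here.
 */
static void example_setup_pf_params(struct ecore_pf_params *p_params)
{
	OSAL_MEMSET(p_params, 0, sizeof(*p_params));
	p_params->eth_pf_params.num_cons = 64;		/* example */
	p_params->rdma_pf_params.min_dpis = 8;		/* example */
	p_params->rdma_pf_params.num_qps = 1024;	/* example */
	p_params->rdma_pf_params.gl_pi = ECORE_ROCE_PROTOCOL_INDEX;
}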
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index cc8a8ed..01a29e3 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -49,6 +49,10 @@
#define PRS_REG_TASK_ID_MAX_TARGET_PF_RT_OFFSET 6652
#define PRS_REG_TASK_ID_MAX_TARGET_VF_RT_OFFSET 6653
#define PRS_REG_SEARCH_TCP_RT_OFFSET 6654
+#define PRS_REG_SEARCH_FCOE_RT_OFFSET 6655
+#define PRS_REG_SEARCH_ROCE_RT_OFFSET 6656
+#define PRS_REG_ROCE_DEST_QP_MAX_VF_RT_OFFSET 6657
+#define PRS_REG_ROCE_DEST_QP_MAX_PF_RT_OFFSET 6658
#define PRS_REG_SEARCH_OPENFLOW_RT_OFFSET 6659
#define PRS_REG_SEARCH_NON_IP_AS_OPENFLOW_RT_OFFSET 6660
#define PRS_REG_OPENFLOW_SUPPORT_ONLY_KNOWN_OVER_IP_RT_OFFSET 6661
@@ -97,350 +101,353 @@
#define PSWRQ2_REG_ILT_MEMORY_RT_OFFSET 6704
#define PSWRQ2_REG_ILT_MEMORY_RT_SIZE 22000
#define PGLUE_REG_B_VF_BASE_RT_OFFSET 28704
-#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET 28705
-#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET 28706
-#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET 28707
-#define PGLUE_REG_B_VF_BAR1_SIZE_RT_OFFSET 28708
-#define TM_REG_VF_ENABLE_CONN_RT_OFFSET 28709
-#define TM_REG_PF_ENABLE_CONN_RT_OFFSET 28710
-#define TM_REG_PF_ENABLE_TASK_RT_OFFSET 28711
-#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET 28712
-#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET 28713
-#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET 28714
+#define PGLUE_REG_B_MSDM_OFFSET_MASK_B_RT_OFFSET 28705
+#define PGLUE_REG_B_MSDM_VF_SHIFT_B_RT_OFFSET 28706
+#define PGLUE_REG_B_CACHE_LINE_SIZE_RT_OFFSET 28707
+#define PGLUE_REG_B_PF_BAR0_SIZE_RT_OFFSET 28708
+#define PGLUE_REG_B_PF_BAR1_SIZE_RT_OFFSET 28709
+#define PGLUE_REG_B_VF_BAR1_SIZE_RT_OFFSET 28710
+#define TM_REG_VF_ENABLE_CONN_RT_OFFSET 28711
+#define TM_REG_PF_ENABLE_CONN_RT_OFFSET 28712
+#define TM_REG_PF_ENABLE_TASK_RT_OFFSET 28713
+#define TM_REG_GROUP_SIZE_RESOLUTION_CONN_RT_OFFSET 28714
+#define TM_REG_GROUP_SIZE_RESOLUTION_TASK_RT_OFFSET 28715
+#define TM_REG_CONFIG_CONN_MEM_RT_OFFSET 28716
#define TM_REG_CONFIG_CONN_MEM_RT_SIZE 416
-#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET 29130
+#define TM_REG_CONFIG_TASK_MEM_RT_OFFSET 29132
#define TM_REG_CONFIG_TASK_MEM_RT_SIZE 512
-#define QM_REG_MAXPQSIZE_0_RT_OFFSET 29642
-#define QM_REG_MAXPQSIZE_1_RT_OFFSET 29643
-#define QM_REG_MAXPQSIZE_2_RT_OFFSET 29644
-#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET 29645
-#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET 29646
-#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET 29647
-#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET 29648
-#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET 29649
-#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET 29650
-#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET 29651
-#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET 29652
-#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET 29653
-#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET 29654
-#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET 29655
-#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET 29656
-#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET 29657
-#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET 29658
-#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET 29659
-#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET 29660
-#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET 29661
-#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET 29662
-#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET 29663
-#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET 29664
-#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET 29665
-#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET 29666
-#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET 29667
-#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET 29668
-#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET 29669
-#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET 29670
-#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET 29671
-#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET 29672
-#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET 29673
-#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET 29674
-#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET 29675
-#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET 29676
-#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET 29677
-#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET 29678
-#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET 29679
-#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET 29680
-#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET 29681
-#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET 29682
-#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET 29683
-#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET 29684
-#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET 29685
-#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET 29686
-#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET 29687
-#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET 29688
-#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET 29689
-#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET 29690
-#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET 29691
-#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET 29692
-#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET 29693
-#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET 29694
-#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET 29695
-#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET 29696
-#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET 29697
-#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET 29698
-#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET 29699
-#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET 29700
-#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET 29701
-#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET 29702
-#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET 29703
-#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET 29704
-#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET 29705
-#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET 29706
-#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET 29707
-#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET 29708
-#define QM_REG_BASEADDROTHERPQ_RT_OFFSET 29709
+#define QM_REG_MAXPQSIZE_0_RT_OFFSET 29644
+#define QM_REG_MAXPQSIZE_1_RT_OFFSET 29645
+#define QM_REG_MAXPQSIZE_2_RT_OFFSET 29646
+#define QM_REG_MAXPQSIZETXSEL_0_RT_OFFSET 29647
+#define QM_REG_MAXPQSIZETXSEL_1_RT_OFFSET 29648
+#define QM_REG_MAXPQSIZETXSEL_2_RT_OFFSET 29649
+#define QM_REG_MAXPQSIZETXSEL_3_RT_OFFSET 29650
+#define QM_REG_MAXPQSIZETXSEL_4_RT_OFFSET 29651
+#define QM_REG_MAXPQSIZETXSEL_5_RT_OFFSET 29652
+#define QM_REG_MAXPQSIZETXSEL_6_RT_OFFSET 29653
+#define QM_REG_MAXPQSIZETXSEL_7_RT_OFFSET 29654
+#define QM_REG_MAXPQSIZETXSEL_8_RT_OFFSET 29655
+#define QM_REG_MAXPQSIZETXSEL_9_RT_OFFSET 29656
+#define QM_REG_MAXPQSIZETXSEL_10_RT_OFFSET 29657
+#define QM_REG_MAXPQSIZETXSEL_11_RT_OFFSET 29658
+#define QM_REG_MAXPQSIZETXSEL_12_RT_OFFSET 29659
+#define QM_REG_MAXPQSIZETXSEL_13_RT_OFFSET 29660
+#define QM_REG_MAXPQSIZETXSEL_14_RT_OFFSET 29661
+#define QM_REG_MAXPQSIZETXSEL_15_RT_OFFSET 29662
+#define QM_REG_MAXPQSIZETXSEL_16_RT_OFFSET 29663
+#define QM_REG_MAXPQSIZETXSEL_17_RT_OFFSET 29664
+#define QM_REG_MAXPQSIZETXSEL_18_RT_OFFSET 29665
+#define QM_REG_MAXPQSIZETXSEL_19_RT_OFFSET 29666
+#define QM_REG_MAXPQSIZETXSEL_20_RT_OFFSET 29667
+#define QM_REG_MAXPQSIZETXSEL_21_RT_OFFSET 29668
+#define QM_REG_MAXPQSIZETXSEL_22_RT_OFFSET 29669
+#define QM_REG_MAXPQSIZETXSEL_23_RT_OFFSET 29670
+#define QM_REG_MAXPQSIZETXSEL_24_RT_OFFSET 29671
+#define QM_REG_MAXPQSIZETXSEL_25_RT_OFFSET 29672
+#define QM_REG_MAXPQSIZETXSEL_26_RT_OFFSET 29673
+#define QM_REG_MAXPQSIZETXSEL_27_RT_OFFSET 29674
+#define QM_REG_MAXPQSIZETXSEL_28_RT_OFFSET 29675
+#define QM_REG_MAXPQSIZETXSEL_29_RT_OFFSET 29676
+#define QM_REG_MAXPQSIZETXSEL_30_RT_OFFSET 29677
+#define QM_REG_MAXPQSIZETXSEL_31_RT_OFFSET 29678
+#define QM_REG_MAXPQSIZETXSEL_32_RT_OFFSET 29679
+#define QM_REG_MAXPQSIZETXSEL_33_RT_OFFSET 29680
+#define QM_REG_MAXPQSIZETXSEL_34_RT_OFFSET 29681
+#define QM_REG_MAXPQSIZETXSEL_35_RT_OFFSET 29682
+#define QM_REG_MAXPQSIZETXSEL_36_RT_OFFSET 29683
+#define QM_REG_MAXPQSIZETXSEL_37_RT_OFFSET 29684
+#define QM_REG_MAXPQSIZETXSEL_38_RT_OFFSET 29685
+#define QM_REG_MAXPQSIZETXSEL_39_RT_OFFSET 29686
+#define QM_REG_MAXPQSIZETXSEL_40_RT_OFFSET 29687
+#define QM_REG_MAXPQSIZETXSEL_41_RT_OFFSET 29688
+#define QM_REG_MAXPQSIZETXSEL_42_RT_OFFSET 29689
+#define QM_REG_MAXPQSIZETXSEL_43_RT_OFFSET 29690
+#define QM_REG_MAXPQSIZETXSEL_44_RT_OFFSET 29691
+#define QM_REG_MAXPQSIZETXSEL_45_RT_OFFSET 29692
+#define QM_REG_MAXPQSIZETXSEL_46_RT_OFFSET 29693
+#define QM_REG_MAXPQSIZETXSEL_47_RT_OFFSET 29694
+#define QM_REG_MAXPQSIZETXSEL_48_RT_OFFSET 29695
+#define QM_REG_MAXPQSIZETXSEL_49_RT_OFFSET 29696
+#define QM_REG_MAXPQSIZETXSEL_50_RT_OFFSET 29697
+#define QM_REG_MAXPQSIZETXSEL_51_RT_OFFSET 29698
+#define QM_REG_MAXPQSIZETXSEL_52_RT_OFFSET 29699
+#define QM_REG_MAXPQSIZETXSEL_53_RT_OFFSET 29700
+#define QM_REG_MAXPQSIZETXSEL_54_RT_OFFSET 29701
+#define QM_REG_MAXPQSIZETXSEL_55_RT_OFFSET 29702
+#define QM_REG_MAXPQSIZETXSEL_56_RT_OFFSET 29703
+#define QM_REG_MAXPQSIZETXSEL_57_RT_OFFSET 29704
+#define QM_REG_MAXPQSIZETXSEL_58_RT_OFFSET 29705
+#define QM_REG_MAXPQSIZETXSEL_59_RT_OFFSET 29706
+#define QM_REG_MAXPQSIZETXSEL_60_RT_OFFSET 29707
+#define QM_REG_MAXPQSIZETXSEL_61_RT_OFFSET 29708
+#define QM_REG_MAXPQSIZETXSEL_62_RT_OFFSET 29709
+#define QM_REG_MAXPQSIZETXSEL_63_RT_OFFSET 29710
+#define QM_REG_BASEADDROTHERPQ_RT_OFFSET 29711
#define QM_REG_BASEADDROTHERPQ_RT_SIZE 128
-#define QM_REG_VOQCRDLINE_RT_OFFSET 29837
+#define QM_REG_VOQCRDLINE_RT_OFFSET 29839
#define QM_REG_VOQCRDLINE_RT_SIZE 20
-#define QM_REG_VOQINITCRDLINE_RT_OFFSET 29857
+#define QM_REG_VOQINITCRDLINE_RT_OFFSET 29859
#define QM_REG_VOQINITCRDLINE_RT_SIZE 20
-#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET 29877
-#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET 29878
-#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET 29879
-#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET 29880
-#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET 29881
-#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET 29882
-#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET 29883
-#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET 29884
-#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET 29885
-#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET 29886
-#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET 29887
-#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET 29888
-#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET 29889
-#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET 29890
-#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET 29891
-#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET 29892
-#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET 29893
-#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET 29894
-#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET 29895
-#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET 29896
-#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET 29897
-#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET 29898
-#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET 29899
-#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET 29900
-#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET 29901
-#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET 29902
-#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET 29903
-#define QM_REG_PQTX2PF_0_RT_OFFSET 29904
-#define QM_REG_PQTX2PF_1_RT_OFFSET 29905
-#define QM_REG_PQTX2PF_2_RT_OFFSET 29906
-#define QM_REG_PQTX2PF_3_RT_OFFSET 29907
-#define QM_REG_PQTX2PF_4_RT_OFFSET 29908
-#define QM_REG_PQTX2PF_5_RT_OFFSET 29909
-#define QM_REG_PQTX2PF_6_RT_OFFSET 29910
-#define QM_REG_PQTX2PF_7_RT_OFFSET 29911
-#define QM_REG_PQTX2PF_8_RT_OFFSET 29912
-#define QM_REG_PQTX2PF_9_RT_OFFSET 29913
-#define QM_REG_PQTX2PF_10_RT_OFFSET 29914
-#define QM_REG_PQTX2PF_11_RT_OFFSET 29915
-#define QM_REG_PQTX2PF_12_RT_OFFSET 29916
-#define QM_REG_PQTX2PF_13_RT_OFFSET 29917
-#define QM_REG_PQTX2PF_14_RT_OFFSET 29918
-#define QM_REG_PQTX2PF_15_RT_OFFSET 29919
-#define QM_REG_PQTX2PF_16_RT_OFFSET 29920
-#define QM_REG_PQTX2PF_17_RT_OFFSET 29921
-#define QM_REG_PQTX2PF_18_RT_OFFSET 29922
-#define QM_REG_PQTX2PF_19_RT_OFFSET 29923
-#define QM_REG_PQTX2PF_20_RT_OFFSET 29924
-#define QM_REG_PQTX2PF_21_RT_OFFSET 29925
-#define QM_REG_PQTX2PF_22_RT_OFFSET 29926
-#define QM_REG_PQTX2PF_23_RT_OFFSET 29927
-#define QM_REG_PQTX2PF_24_RT_OFFSET 29928
-#define QM_REG_PQTX2PF_25_RT_OFFSET 29929
-#define QM_REG_PQTX2PF_26_RT_OFFSET 29930
-#define QM_REG_PQTX2PF_27_RT_OFFSET 29931
-#define QM_REG_PQTX2PF_28_RT_OFFSET 29932
-#define QM_REG_PQTX2PF_29_RT_OFFSET 29933
-#define QM_REG_PQTX2PF_30_RT_OFFSET 29934
-#define QM_REG_PQTX2PF_31_RT_OFFSET 29935
-#define QM_REG_PQTX2PF_32_RT_OFFSET 29936
-#define QM_REG_PQTX2PF_33_RT_OFFSET 29937
-#define QM_REG_PQTX2PF_34_RT_OFFSET 29938
-#define QM_REG_PQTX2PF_35_RT_OFFSET 29939
-#define QM_REG_PQTX2PF_36_RT_OFFSET 29940
-#define QM_REG_PQTX2PF_37_RT_OFFSET 29941
-#define QM_REG_PQTX2PF_38_RT_OFFSET 29942
-#define QM_REG_PQTX2PF_39_RT_OFFSET 29943
-#define QM_REG_PQTX2PF_40_RT_OFFSET 29944
-#define QM_REG_PQTX2PF_41_RT_OFFSET 29945
-#define QM_REG_PQTX2PF_42_RT_OFFSET 29946
-#define QM_REG_PQTX2PF_43_RT_OFFSET 29947
-#define QM_REG_PQTX2PF_44_RT_OFFSET 29948
-#define QM_REG_PQTX2PF_45_RT_OFFSET 29949
-#define QM_REG_PQTX2PF_46_RT_OFFSET 29950
-#define QM_REG_PQTX2PF_47_RT_OFFSET 29951
-#define QM_REG_PQTX2PF_48_RT_OFFSET 29952
-#define QM_REG_PQTX2PF_49_RT_OFFSET 29953
-#define QM_REG_PQTX2PF_50_RT_OFFSET 29954
-#define QM_REG_PQTX2PF_51_RT_OFFSET 29955
-#define QM_REG_PQTX2PF_52_RT_OFFSET 29956
-#define QM_REG_PQTX2PF_53_RT_OFFSET 29957
-#define QM_REG_PQTX2PF_54_RT_OFFSET 29958
-#define QM_REG_PQTX2PF_55_RT_OFFSET 29959
-#define QM_REG_PQTX2PF_56_RT_OFFSET 29960
-#define QM_REG_PQTX2PF_57_RT_OFFSET 29961
-#define QM_REG_PQTX2PF_58_RT_OFFSET 29962
-#define QM_REG_PQTX2PF_59_RT_OFFSET 29963
-#define QM_REG_PQTX2PF_60_RT_OFFSET 29964
-#define QM_REG_PQTX2PF_61_RT_OFFSET 29965
-#define QM_REG_PQTX2PF_62_RT_OFFSET 29966
-#define QM_REG_PQTX2PF_63_RT_OFFSET 29967
-#define QM_REG_PQOTHER2PF_0_RT_OFFSET 29968
-#define QM_REG_PQOTHER2PF_1_RT_OFFSET 29969
-#define QM_REG_PQOTHER2PF_2_RT_OFFSET 29970
-#define QM_REG_PQOTHER2PF_3_RT_OFFSET 29971
-#define QM_REG_PQOTHER2PF_4_RT_OFFSET 29972
-#define QM_REG_PQOTHER2PF_5_RT_OFFSET 29973
-#define QM_REG_PQOTHER2PF_6_RT_OFFSET 29974
-#define QM_REG_PQOTHER2PF_7_RT_OFFSET 29975
-#define QM_REG_PQOTHER2PF_8_RT_OFFSET 29976
-#define QM_REG_PQOTHER2PF_9_RT_OFFSET 29977
-#define QM_REG_PQOTHER2PF_10_RT_OFFSET 29978
-#define QM_REG_PQOTHER2PF_11_RT_OFFSET 29979
-#define QM_REG_PQOTHER2PF_12_RT_OFFSET 29980
-#define QM_REG_PQOTHER2PF_13_RT_OFFSET 29981
-#define QM_REG_PQOTHER2PF_14_RT_OFFSET 29982
-#define QM_REG_PQOTHER2PF_15_RT_OFFSET 29983
-#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET 29984
-#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET 29985
-#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET 29986
-#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET 29987
-#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET 29988
-#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET 29989
-#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET 29990
-#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET 29991
-#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET 29992
-#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET 29993
-#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET 29994
-#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET 29995
-#define QM_REG_RLGLBLINCVAL_RT_OFFSET 29996
+#define QM_REG_AFULLQMBYPTHRPFWFQ_RT_OFFSET 29879
+#define QM_REG_AFULLQMBYPTHRVPWFQ_RT_OFFSET 29880
+#define QM_REG_AFULLQMBYPTHRPFRL_RT_OFFSET 29881
+#define QM_REG_AFULLQMBYPTHRGLBLRL_RT_OFFSET 29882
+#define QM_REG_AFULLOPRTNSTCCRDMASK_RT_OFFSET 29883
+#define QM_REG_WRROTHERPQGRP_0_RT_OFFSET 29884
+#define QM_REG_WRROTHERPQGRP_1_RT_OFFSET 29885
+#define QM_REG_WRROTHERPQGRP_2_RT_OFFSET 29886
+#define QM_REG_WRROTHERPQGRP_3_RT_OFFSET 29887
+#define QM_REG_WRROTHERPQGRP_4_RT_OFFSET 29888
+#define QM_REG_WRROTHERPQGRP_5_RT_OFFSET 29889
+#define QM_REG_WRROTHERPQGRP_6_RT_OFFSET 29890
+#define QM_REG_WRROTHERPQGRP_7_RT_OFFSET 29891
+#define QM_REG_WRROTHERPQGRP_8_RT_OFFSET 29892
+#define QM_REG_WRROTHERPQGRP_9_RT_OFFSET 29893
+#define QM_REG_WRROTHERPQGRP_10_RT_OFFSET 29894
+#define QM_REG_WRROTHERPQGRP_11_RT_OFFSET 29895
+#define QM_REG_WRROTHERPQGRP_12_RT_OFFSET 29896
+#define QM_REG_WRROTHERPQGRP_13_RT_OFFSET 29897
+#define QM_REG_WRROTHERPQGRP_14_RT_OFFSET 29898
+#define QM_REG_WRROTHERPQGRP_15_RT_OFFSET 29899
+#define QM_REG_WRROTHERGRPWEIGHT_0_RT_OFFSET 29900
+#define QM_REG_WRROTHERGRPWEIGHT_1_RT_OFFSET 29901
+#define QM_REG_WRROTHERGRPWEIGHT_2_RT_OFFSET 29902
+#define QM_REG_WRROTHERGRPWEIGHT_3_RT_OFFSET 29903
+#define QM_REG_WRRTXGRPWEIGHT_0_RT_OFFSET 29904
+#define QM_REG_WRRTXGRPWEIGHT_1_RT_OFFSET 29905
+#define QM_REG_PQTX2PF_0_RT_OFFSET 29906
+#define QM_REG_PQTX2PF_1_RT_OFFSET 29907
+#define QM_REG_PQTX2PF_2_RT_OFFSET 29908
+#define QM_REG_PQTX2PF_3_RT_OFFSET 29909
+#define QM_REG_PQTX2PF_4_RT_OFFSET 29910
+#define QM_REG_PQTX2PF_5_RT_OFFSET 29911
+#define QM_REG_PQTX2PF_6_RT_OFFSET 29912
+#define QM_REG_PQTX2PF_7_RT_OFFSET 29913
+#define QM_REG_PQTX2PF_8_RT_OFFSET 29914
+#define QM_REG_PQTX2PF_9_RT_OFFSET 29915
+#define QM_REG_PQTX2PF_10_RT_OFFSET 29916
+#define QM_REG_PQTX2PF_11_RT_OFFSET 29917
+#define QM_REG_PQTX2PF_12_RT_OFFSET 29918
+#define QM_REG_PQTX2PF_13_RT_OFFSET 29919
+#define QM_REG_PQTX2PF_14_RT_OFFSET 29920
+#define QM_REG_PQTX2PF_15_RT_OFFSET 29921
+#define QM_REG_PQTX2PF_16_RT_OFFSET 29922
+#define QM_REG_PQTX2PF_17_RT_OFFSET 29923
+#define QM_REG_PQTX2PF_18_RT_OFFSET 29924
+#define QM_REG_PQTX2PF_19_RT_OFFSET 29925
+#define QM_REG_PQTX2PF_20_RT_OFFSET 29926
+#define QM_REG_PQTX2PF_21_RT_OFFSET 29927
+#define QM_REG_PQTX2PF_22_RT_OFFSET 29928
+#define QM_REG_PQTX2PF_23_RT_OFFSET 29929
+#define QM_REG_PQTX2PF_24_RT_OFFSET 29930
+#define QM_REG_PQTX2PF_25_RT_OFFSET 29931
+#define QM_REG_PQTX2PF_26_RT_OFFSET 29932
+#define QM_REG_PQTX2PF_27_RT_OFFSET 29933
+#define QM_REG_PQTX2PF_28_RT_OFFSET 29934
+#define QM_REG_PQTX2PF_29_RT_OFFSET 29935
+#define QM_REG_PQTX2PF_30_RT_OFFSET 29936
+#define QM_REG_PQTX2PF_31_RT_OFFSET 29937
+#define QM_REG_PQTX2PF_32_RT_OFFSET 29938
+#define QM_REG_PQTX2PF_33_RT_OFFSET 29939
+#define QM_REG_PQTX2PF_34_RT_OFFSET 29940
+#define QM_REG_PQTX2PF_35_RT_OFFSET 29941
+#define QM_REG_PQTX2PF_36_RT_OFFSET 29942
+#define QM_REG_PQTX2PF_37_RT_OFFSET 29943
+#define QM_REG_PQTX2PF_38_RT_OFFSET 29944
+#define QM_REG_PQTX2PF_39_RT_OFFSET 29945
+#define QM_REG_PQTX2PF_40_RT_OFFSET 29946
+#define QM_REG_PQTX2PF_41_RT_OFFSET 29947
+#define QM_REG_PQTX2PF_42_RT_OFFSET 29948
+#define QM_REG_PQTX2PF_43_RT_OFFSET 29949
+#define QM_REG_PQTX2PF_44_RT_OFFSET 29950
+#define QM_REG_PQTX2PF_45_RT_OFFSET 29951
+#define QM_REG_PQTX2PF_46_RT_OFFSET 29952
+#define QM_REG_PQTX2PF_47_RT_OFFSET 29953
+#define QM_REG_PQTX2PF_48_RT_OFFSET 29954
+#define QM_REG_PQTX2PF_49_RT_OFFSET 29955
+#define QM_REG_PQTX2PF_50_RT_OFFSET 29956
+#define QM_REG_PQTX2PF_51_RT_OFFSET 29957
+#define QM_REG_PQTX2PF_52_RT_OFFSET 29958
+#define QM_REG_PQTX2PF_53_RT_OFFSET 29959
+#define QM_REG_PQTX2PF_54_RT_OFFSET 29960
+#define QM_REG_PQTX2PF_55_RT_OFFSET 29961
+#define QM_REG_PQTX2PF_56_RT_OFFSET 29962
+#define QM_REG_PQTX2PF_57_RT_OFFSET 29963
+#define QM_REG_PQTX2PF_58_RT_OFFSET 29964
+#define QM_REG_PQTX2PF_59_RT_OFFSET 29965
+#define QM_REG_PQTX2PF_60_RT_OFFSET 29966
+#define QM_REG_PQTX2PF_61_RT_OFFSET 29967
+#define QM_REG_PQTX2PF_62_RT_OFFSET 29968
+#define QM_REG_PQTX2PF_63_RT_OFFSET 29969
+#define QM_REG_PQOTHER2PF_0_RT_OFFSET 29970
+#define QM_REG_PQOTHER2PF_1_RT_OFFSET 29971
+#define QM_REG_PQOTHER2PF_2_RT_OFFSET 29972
+#define QM_REG_PQOTHER2PF_3_RT_OFFSET 29973
+#define QM_REG_PQOTHER2PF_4_RT_OFFSET 29974
+#define QM_REG_PQOTHER2PF_5_RT_OFFSET 29975
+#define QM_REG_PQOTHER2PF_6_RT_OFFSET 29976
+#define QM_REG_PQOTHER2PF_7_RT_OFFSET 29977
+#define QM_REG_PQOTHER2PF_8_RT_OFFSET 29978
+#define QM_REG_PQOTHER2PF_9_RT_OFFSET 29979
+#define QM_REG_PQOTHER2PF_10_RT_OFFSET 29980
+#define QM_REG_PQOTHER2PF_11_RT_OFFSET 29981
+#define QM_REG_PQOTHER2PF_12_RT_OFFSET 29982
+#define QM_REG_PQOTHER2PF_13_RT_OFFSET 29983
+#define QM_REG_PQOTHER2PF_14_RT_OFFSET 29984
+#define QM_REG_PQOTHER2PF_15_RT_OFFSET 29985
+#define QM_REG_RLGLBLPERIOD_0_RT_OFFSET 29986
+#define QM_REG_RLGLBLPERIOD_1_RT_OFFSET 29987
+#define QM_REG_RLGLBLPERIODTIMER_0_RT_OFFSET 29988
+#define QM_REG_RLGLBLPERIODTIMER_1_RT_OFFSET 29989
+#define QM_REG_RLGLBLPERIODSEL_0_RT_OFFSET 29990
+#define QM_REG_RLGLBLPERIODSEL_1_RT_OFFSET 29991
+#define QM_REG_RLGLBLPERIODSEL_2_RT_OFFSET 29992
+#define QM_REG_RLGLBLPERIODSEL_3_RT_OFFSET 29993
+#define QM_REG_RLGLBLPERIODSEL_4_RT_OFFSET 29994
+#define QM_REG_RLGLBLPERIODSEL_5_RT_OFFSET 29995
+#define QM_REG_RLGLBLPERIODSEL_6_RT_OFFSET 29996
+#define QM_REG_RLGLBLPERIODSEL_7_RT_OFFSET 29997
+#define QM_REG_RLGLBLINCVAL_RT_OFFSET 29998
#define QM_REG_RLGLBLINCVAL_RT_SIZE 256
-#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET 30252
+#define QM_REG_RLGLBLUPPERBOUND_RT_OFFSET 30254
#define QM_REG_RLGLBLUPPERBOUND_RT_SIZE 256
-#define QM_REG_RLGLBLCRD_RT_OFFSET 30508
+#define QM_REG_RLGLBLCRD_RT_OFFSET 30510
#define QM_REG_RLGLBLCRD_RT_SIZE 256
-#define QM_REG_RLGLBLENABLE_RT_OFFSET 30764
-#define QM_REG_RLPFPERIOD_RT_OFFSET 30765
-#define QM_REG_RLPFPERIODTIMER_RT_OFFSET 30766
-#define QM_REG_RLPFINCVAL_RT_OFFSET 30767
+#define QM_REG_RLGLBLENABLE_RT_OFFSET 30766
+#define QM_REG_RLPFPERIOD_RT_OFFSET 30767
+#define QM_REG_RLPFPERIODTIMER_RT_OFFSET 30768
+#define QM_REG_RLPFINCVAL_RT_OFFSET 30769
#define QM_REG_RLPFINCVAL_RT_SIZE 16
-#define QM_REG_RLPFUPPERBOUND_RT_OFFSET 30783
+#define QM_REG_RLPFUPPERBOUND_RT_OFFSET 30785
#define QM_REG_RLPFUPPERBOUND_RT_SIZE 16
-#define QM_REG_RLPFCRD_RT_OFFSET 30799
+#define QM_REG_RLPFCRD_RT_OFFSET 30801
#define QM_REG_RLPFCRD_RT_SIZE 16
-#define QM_REG_RLPFENABLE_RT_OFFSET 30815
-#define QM_REG_RLPFVOQENABLE_RT_OFFSET 30816
-#define QM_REG_WFQPFWEIGHT_RT_OFFSET 30817
+#define QM_REG_RLPFENABLE_RT_OFFSET 30817
+#define QM_REG_RLPFVOQENABLE_RT_OFFSET 30818
+#define QM_REG_WFQPFWEIGHT_RT_OFFSET 30819
#define QM_REG_WFQPFWEIGHT_RT_SIZE 16
-#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET 30833
+#define QM_REG_WFQPFUPPERBOUND_RT_OFFSET 30835
#define QM_REG_WFQPFUPPERBOUND_RT_SIZE 16
-#define QM_REG_WFQPFCRD_RT_OFFSET 30849
+#define QM_REG_WFQPFCRD_RT_OFFSET 30851
#define QM_REG_WFQPFCRD_RT_SIZE 160
-#define QM_REG_WFQPFENABLE_RT_OFFSET 31009
-#define QM_REG_WFQVPENABLE_RT_OFFSET 31010
-#define QM_REG_BASEADDRTXPQ_RT_OFFSET 31011
+#define QM_REG_WFQPFENABLE_RT_OFFSET 31011
+#define QM_REG_WFQVPENABLE_RT_OFFSET 31012
+#define QM_REG_BASEADDRTXPQ_RT_OFFSET 31013
#define QM_REG_BASEADDRTXPQ_RT_SIZE 512
-#define QM_REG_TXPQMAP_RT_OFFSET 31523
+#define QM_REG_TXPQMAP_RT_OFFSET 31525
#define QM_REG_TXPQMAP_RT_SIZE 512
-#define QM_REG_WFQVPWEIGHT_RT_OFFSET 32035
+#define QM_REG_WFQVPWEIGHT_RT_OFFSET 32037
#define QM_REG_WFQVPWEIGHT_RT_SIZE 512
-#define QM_REG_WFQVPCRD_RT_OFFSET 32547
+#define QM_REG_WFQVPCRD_RT_OFFSET 32549
#define QM_REG_WFQVPCRD_RT_SIZE 512
-#define QM_REG_WFQVPMAP_RT_OFFSET 33059
+#define QM_REG_WFQVPMAP_RT_OFFSET 33061
#define QM_REG_WFQVPMAP_RT_SIZE 512
-#define QM_REG_WFQPFCRD_MSB_RT_OFFSET 33571
+#define QM_REG_WFQPFCRD_MSB_RT_OFFSET 33573
#define QM_REG_WFQPFCRD_MSB_RT_SIZE 160
-#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET 33731
-#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET 33732
-#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET 33733
-#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET 33734
-#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET 33735
-#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET 33736
-#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET 33737
-#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET 33738
+#define NIG_REG_TAG_ETHERTYPE_0_RT_OFFSET 33733
+#define NIG_REG_OUTER_TAG_VALUE_LIST0_RT_OFFSET 33734
+#define NIG_REG_OUTER_TAG_VALUE_LIST1_RT_OFFSET 33735
+#define NIG_REG_OUTER_TAG_VALUE_LIST2_RT_OFFSET 33736
+#define NIG_REG_OUTER_TAG_VALUE_LIST3_RT_OFFSET 33737
+#define NIG_REG_OUTER_TAG_VALUE_MASK_RT_OFFSET 33738
+#define NIG_REG_LLH_FUNC_TAGMAC_CLS_TYPE_RT_OFFSET 33739
+#define NIG_REG_LLH_FUNC_TAG_EN_RT_OFFSET 33740
#define NIG_REG_LLH_FUNC_TAG_EN_RT_SIZE 4
-#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET 33742
+#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_OFFSET 33744
#define NIG_REG_LLH_FUNC_TAG_HDR_SEL_RT_SIZE 4
-#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET 33746
+#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_OFFSET 33748
#define NIG_REG_LLH_FUNC_TAG_VALUE_RT_SIZE 4
-#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET 33750
-#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET 33751
+#define NIG_REG_LLH_FUNC_NO_TAG_RT_OFFSET 33752
+#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_OFFSET 33753
#define NIG_REG_LLH_FUNC_FILTER_VALUE_RT_SIZE 32
-#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET 33783
+#define NIG_REG_LLH_FUNC_FILTER_EN_RT_OFFSET 33785
#define NIG_REG_LLH_FUNC_FILTER_EN_RT_SIZE 16
-#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET 33799
+#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_OFFSET 33801
#define NIG_REG_LLH_FUNC_FILTER_MODE_RT_SIZE 16
-#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET 33815
+#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET 33817
#define NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE 16
-#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET 33831
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET 33833
#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE 16
-#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET 33847
-#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET 33848
-#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET 33849
-#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET 33850
-#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET 33851
-#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET 33852
-#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET 33853
-#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET 33854
-#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET 33855
-#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET 33856
-#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET 33857
-#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET 33858
-#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET 33859
-#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET 33860
-#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET 33861
-#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET 33862
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET 33863
-#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET 33864
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET 33865
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET 33866
-#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET 33867
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET 33868
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET 33869
-#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET 33870
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET 33871
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET 33872
-#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET 33873
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET 33874
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET 33875
-#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET 33876
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET 33877
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET 33878
-#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET 33879
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET 33880
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET 33881
-#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET 33882
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET 33883
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET 33884
-#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET 33885
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET 33886
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET 33887
-#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET 33888
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET 33889
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET 33890
-#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET 33891
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET 33892
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET 33893
-#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET 33894
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET 33895
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET 33896
-#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET 33897
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET 33898
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET 33899
-#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET 33900
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET 33901
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET 33902
-#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET 33903
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET 33904
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET 33905
-#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET 33906
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET 33907
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET 33908
-#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET 33909
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET 33910
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET 33911
-#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET 33912
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET 33913
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET 33914
-#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET 33915
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET 33916
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET 33917
-#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET 33918
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET 33919
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET 33920
-#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET 33921
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET 33922
-#define XCM_REG_CON_PHY_Q3_RT_OFFSET 33923
+#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET 33849
+#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET 33850
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET 33851
+#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET 33852
+#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET 33853
+#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET 33854
+#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET 33855
+#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET 33856
+#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET 33857
+#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET 33858
+#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET 33859
+#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET 33860
+#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET 33861
+#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET 33862
+#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET 33863
+#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET 33864
+#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET 33865
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET 33866
+#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET 33867
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET 33868
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET 33869
+#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET 33870
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET 33871
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET 33872
+#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET 33873
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET 33874
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET 33875
+#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET 33876
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET 33877
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET 33878
+#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET 33879
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET 33880
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET 33881
+#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET 33882
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET 33883
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET 33884
+#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET 33885
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET 33886
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET 33887
+#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET 33888
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET 33889
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET 33890
+#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET 33891
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET 33892
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET 33893
+#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET 33894
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET 33895
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET 33896
+#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET 33897
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET 33898
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET 33899
+#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET 33900
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET 33901
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET 33902
+#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET 33903
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET 33904
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET 33905
+#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET 33906
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET 33907
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET 33908
+#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET 33909
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET 33910
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET 33911
+#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET 33912
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET 33913
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET 33914
+#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET 33915
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET 33916
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET 33917
+#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET 33918
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET 33919
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET 33920
+#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET 33921
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET 33922
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET 33923
+#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET 33924
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET 33925
+#define XCM_REG_CON_PHY_Q3_RT_OFFSET 33926
-#define RUNTIME_ARRAY_SIZE 33924
+#define RUNTIME_ARRAY_SIZE 33927
#endif /* __RT_DEFS_H__ */
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index 58df3f5..b3736a8 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -21,6 +21,7 @@
#include "ecore_int.h"
#include "ecore_hw.h"
#include "ecore_dcbx.h"
+#include "ecore_sriov.h"
enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
struct ecore_spq_entry **pp_ent,
@@ -32,6 +33,9 @@ enum _ecore_status_t ecore_sp_init_request(struct ecore_hwfn *p_hwfn,
struct ecore_spq_entry *p_ent = OSAL_NULL;
enum _ecore_status_t rc = ECORE_NOTIMPL;
+ if (!pp_ent)
+ return ECORE_INVAL;
+
/* Get an SPQ entry */
rc = ecore_spq_get_entry(p_hwfn, pp_ent);
if (rc != ECORE_SUCCESS)
@@ -95,6 +99,8 @@ static enum tunnel_clss ecore_tunn_get_clss_type(u8 type)
return TUNNEL_CLSS_INNER_MAC_VLAN;
case ECORE_TUNN_CLSS_INNER_MAC_VNI:
return TUNNEL_CLSS_INNER_MAC_VNI;
+ case ECORE_TUNN_CLSS_MAC_VLAN_DUAL_STAGE:
+ return TUNNEL_CLSS_MAC_VLAN_DUAL_STAGE;
default:
return TUNNEL_CLSS_MAC_VLAN;
}
@@ -351,7 +357,6 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
p_ramrod->event_ring_sb_id = OSAL_CPU_TO_LE16(sb);
p_ramrod->event_ring_sb_index = sb_index;
p_ramrod->path_id = ECORE_PATH_ID(p_hwfn);
- p_ramrod->outer_tag = p_hwfn->hw_info.ovlan;
/* For easier debugging */
p_ramrod->dont_log_ramrods = 0;
@@ -370,6 +375,7 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
"Unsupported MF mode, init as DEFAULT\n");
p_ramrod->mf_mode = MF_NPAR;
}
+ p_ramrod->outer_tag = p_hwfn->hw_info.ovlan;
/* Place EQ address in RAMROD */
DMA_REGPAIR_LE(p_ramrod->event_ring_pbl_addr,
@@ -395,8 +401,17 @@ enum _ecore_status_t ecore_sp_pf_start(struct ecore_hwfn *p_hwfn,
p_ramrod->personality = PERSONALITY_ETH;
}
- p_ramrod->base_vf_id = (u8)p_hwfn->hw_info.first_vf_in_pf;
- p_ramrod->num_vfs = (u8)p_hwfn->p_dev->sriov_info.total_vfs;
+ if (p_hwfn->p_dev->p_iov_info) {
+ struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info;
+
+ p_ramrod->base_vf_id = (u8)p_iov->first_vf_in_pf;
+ p_ramrod->num_vfs = (u8)p_iov->total_vfs;
+ }
+ /* @@@TBD - update also the "ROCE_VER_KEY" entries when the FW RoCE HSI
+ * version is available.
+ */
+ p_ramrod->hsi_fp_ver.major_ver_arr[ETH_VER_KEY] = ETH_HSI_VER_MAJOR;
+ p_ramrod->hsi_fp_ver.minor_ver_arr[ETH_VER_KEY] = ETH_HSI_VER_MINOR;
DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
"Setting event_ring_sb [id %04x index %02x], outer_tag [%d]\n",
@@ -437,6 +452,49 @@ enum _ecore_status_t ecore_sp_pf_update(struct ecore_hwfn *p_hwfn)
return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
}
+enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
+ struct ecore_rl_update_params *params)
+{
+ struct ecore_spq_entry *p_ent = OSAL_NULL;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
+ struct rl_update_ramrod_data *rl_update;
+ struct ecore_sp_init_data init_data;
+
+ /* Get SPQ entry */
+ OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+ init_data.cid = ecore_spq_get_cid(p_hwfn);
+ init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+ init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+ rc = ecore_sp_init_request(p_hwfn, &p_ent,
+ COMMON_RAMROD_RL_UPDATE, PROTOCOLID_COMMON,
+ &init_data);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ rl_update = &p_ent->ramrod.rl_update;
+
+ rl_update->qcn_update_param_flg = params->qcn_update_param_flg;
+ rl_update->dcqcn_update_param_flg = params->dcqcn_update_param_flg;
+ rl_update->rl_init_flg = params->rl_init_flg;
+ rl_update->rl_start_flg = params->rl_start_flg;
+ rl_update->rl_stop_flg = params->rl_stop_flg;
+ rl_update->rl_id_first = params->rl_id_first;
+ rl_update->rl_id_last = params->rl_id_last;
+ rl_update->rl_dc_qcn_flg = params->rl_dc_qcn_flg;
+ rl_update->rl_bc_rate = OSAL_CPU_TO_LE32(params->rl_bc_rate);
+ rl_update->rl_max_rate = OSAL_CPU_TO_LE16(params->rl_max_rate);
+ rl_update->rl_r_ai = OSAL_CPU_TO_LE16(params->rl_r_ai);
+ rl_update->rl_r_hai = OSAL_CPU_TO_LE16(params->rl_r_hai);
+ rl_update->dcqcn_g = OSAL_CPU_TO_LE16(params->dcqcn_g);
+ rl_update->dcqcn_k_us = OSAL_CPU_TO_LE32(params->dcqcn_k_us);
+ rl_update->dcqcn_timeuot_us = OSAL_CPU_TO_LE32(
+ params->dcqcn_timeuot_us);
+ rl_update->qcn_timeuot_us = OSAL_CPU_TO_LE32(params->qcn_timeuot_us);
+
+ return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
/* Set pf update ramrod command params */
enum _ecore_status_t
ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
@@ -465,19 +523,18 @@ ecore_sp_pf_update_tunn_cfg(struct ecore_hwfn *p_hwfn,
&p_ent->ramrod.pf_update.tunnel_config);
rc = ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+ if (rc != ECORE_SUCCESS)
+ return rc;
- if ((rc == ECORE_SUCCESS) && p_tunn) {
- if (p_tunn->update_vxlan_udp_port)
- ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
- p_tunn->vxlan_udp_port);
- if (p_tunn->update_geneve_udp_port)
- ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
- p_tunn->geneve_udp_port);
+ if (p_tunn->update_vxlan_udp_port)
+ ecore_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+ p_tunn->vxlan_udp_port);
+ if (p_tunn->update_geneve_udp_port)
+ ecore_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+ p_tunn->geneve_udp_port);
- ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
- p_tunn->tunn_mode);
- p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
- }
+ ecore_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
+ p_hwfn->p_dev->tunn_mode = p_tunn->tunn_mode;
return rc;
}
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index 22c7462..66c9a69 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -134,4 +134,34 @@ enum _ecore_status_t ecore_sp_pf_stop(struct ecore_hwfn *p_hwfn);
enum _ecore_status_t ecore_sp_heartbeat_ramrod(struct ecore_hwfn *p_hwfn);
+struct ecore_rl_update_params {
+ u8 qcn_update_param_flg;
+ u8 dcqcn_update_param_flg;
+ u8 rl_init_flg;
+ u8 rl_start_flg;
+ u8 rl_stop_flg;
+ u8 rl_id_first;
+ u8 rl_id_last;
+ u8 rl_dc_qcn_flg; /* If set, RL will be used for DCQCN */
+ u32 rl_bc_rate; /* Byte Counter Limit */
+ u16 rl_max_rate; /* Maximum rate in 1.6 Mbps resolution */
+ u16 rl_r_ai; /* Active increase rate */
+ u16 rl_r_hai; /* Hyper active increase rate */
+ u16 dcqcn_g; /* DCQCN Alpha update gain in 1/64K resolution */
+ u32 dcqcn_k_us; /* DCQCN Alpha update interval */
+ u32 dcqcn_timeuot_us;
+ u32 qcn_timeuot_us;
+};
+
+/**
+ * @brief ecore_sp_rl_update - Update rate limiters
+ *
+ * @param p_hwfn
+ * @param params
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
+ struct ecore_rl_update_params *params);
+
#endif /*__ECORE_SP_COMMANDS_H__*/
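A hedged usage sketch for the new rate-limiter ramrod; the values below are examples, not recommendations:

/* Example caller: initialize and start rate limiters 0..3 with a max
 * rate of 100 (in the 1.6 Mbps resolution noted above).
 */
static enum _ecore_status_t example_start_rls(struct ecore_hwfn *p_hwfn)
{
	struct ecore_rl_update_params params;

	OSAL_MEMSET(&params, 0, sizeof(params));
	params.rl_init_flg = 1;
	params.rl_start_flg = 1;
	params.rl_id_first = 0;
	params.rl_id_last = 3;
	params.rl_max_rate = 100;

	return ecore_sp_rl_update(p_hwfn, &params);
}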
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 28a658e..0d744dd 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -27,7 +27,11 @@
***************************************************************************/
#define SPQ_HIGH_PRI_RESERVE_DEFAULT (1)
-#define SPQ_BLOCK_SLEEP_LENGTH (1000)
+
+#define SPQ_BLOCK_DELAY_MAX_ITER (10)
+#define SPQ_BLOCK_DELAY_US (10)
+#define SPQ_BLOCK_SLEEP_MAX_ITER (1000)
+#define SPQ_BLOCK_SLEEP_MS (5)
/***************************************************************************
* Blocking Imp. (BLOCK/EBLOCK mode)
@@ -48,53 +52,76 @@ static void ecore_spq_blocking_cb(struct ecore_hwfn *p_hwfn,
OSAL_SMP_WMB(p_hwfn->p_dev);
}
-static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn,
+static enum _ecore_status_t __ecore_spq_block(struct ecore_hwfn *p_hwfn,
struct ecore_spq_entry *p_ent,
- u8 *p_fw_ret)
+ u8 *p_fw_ret,
+ bool sleep_between_iter)
{
- int sleep_count = SPQ_BLOCK_SLEEP_LENGTH;
struct ecore_spq_comp_done *comp_done;
- enum _ecore_status_t rc;
+ u32 iter_cnt;
comp_done = (struct ecore_spq_comp_done *)p_ent->comp_cb.cookie;
- while (sleep_count) {
+ iter_cnt = sleep_between_iter ? SPQ_BLOCK_SLEEP_MAX_ITER
+ : SPQ_BLOCK_DELAY_MAX_ITER;
+
+ while (iter_cnt--) {
OSAL_POLL_MODE_DPC(p_hwfn);
- /* validate we receive completion update */
OSAL_SMP_RMB(p_hwfn->p_dev);
if (comp_done->done == 1) {
if (p_fw_ret)
*p_fw_ret = comp_done->fw_return_code;
return ECORE_SUCCESS;
}
- OSAL_MSLEEP(5);
- sleep_count--;
+
+ if (sleep_between_iter)
+ OSAL_MSLEEP(SPQ_BLOCK_SLEEP_MS);
+ else
+ OSAL_UDELAY(SPQ_BLOCK_DELAY_US);
}
+ return ECORE_TIMEOUT;
+}
+
+static enum _ecore_status_t ecore_spq_block(struct ecore_hwfn *p_hwfn,
+ struct ecore_spq_entry *p_ent,
+ u8 *p_fw_ret, bool skip_quick_poll)
+{
+ struct ecore_spq_comp_done *comp_done;
+ enum _ecore_status_t rc;
+
+ /* A relatively short polling period w/o sleeping, to allow the FW to
+ * complete the ramrod and thus possibly avoid the subsequent sleeps.
+ */
+ if (!skip_quick_poll) {
+ rc = __ecore_spq_block(p_hwfn, p_ent, p_fw_ret, false);
+ if (rc == ECORE_SUCCESS)
+ return ECORE_SUCCESS;
+ }
+
+ /* Move to polling with a sleeping period between iterations */
+ rc = __ecore_spq_block(p_hwfn, p_ent, p_fw_ret, true);
+ if (rc == ECORE_SUCCESS)
+ return ECORE_SUCCESS;
+
DP_INFO(p_hwfn, "Ramrod is stuck, requesting MCP drain\n");
rc = ecore_mcp_drain(p_hwfn, p_hwfn->p_main_ptt);
- if (rc != ECORE_SUCCESS)
+ if (rc != ECORE_SUCCESS) {
DP_NOTICE(p_hwfn, true, "MCP drain failed\n");
+ goto err;
+ }
/* Retry after drain */
- sleep_count = SPQ_BLOCK_SLEEP_LENGTH;
- while (sleep_count) {
- /* validate we receive completion update */
- OSAL_SMP_RMB(p_hwfn->p_dev);
- if (comp_done->done == 1) {
- if (p_fw_ret)
- *p_fw_ret = comp_done->fw_return_code;
+ rc = __ecore_spq_block(p_hwfn, p_ent, p_fw_ret, true);
+ if (rc == ECORE_SUCCESS)
return ECORE_SUCCESS;
- }
- OSAL_MSLEEP(5);
- sleep_count--;
- }
+ comp_done = (struct ecore_spq_comp_done *)p_ent->comp_cb.cookie;
if (comp_done->done == 1) {
if (p_fw_ret)
*p_fw_ret = comp_done->fw_return_code;
return ECORE_SUCCESS;
}
-
+err:
DP_NOTICE(p_hwfn, true,
"Ramrod is stuck [CID %08x cmd %02x proto %02x echo %04x]\n",
OSAL_LE32_TO_CPU(p_ent->elem.hdr.cid),
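The restructured blocking path polls in two phases - a short busy-wait first, then sleeping iterations - before resorting to an MCP drain. A self-contained sketch of the same pattern, using POSIX sleeps in place of the OSAL wrappers:

#include <unistd.h>

/* Generic two-phase completion poll mirroring ecore_spq_block():
 * a cheap busy-wait phase, then a sleeping phase, then a caller
 * supplied drain as a last resort.
 */
static int example_wait_done(volatile int *done, int (*drain)(void))
{
	int i;

	for (i = 0; i < 10; i++) {	/* quick phase: 10 x 10us */
		if (*done)
			return 0;
		usleep(10);
	}
	for (i = 0; i < 1000; i++) {	/* slow phase: 1000 x 5ms */
		if (*done)
			return 0;
		usleep(5000);
	}
	if (drain && drain() == 0 && *done)
		return 0;
	return -1;			/* timed out */
}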
@@ -187,10 +214,8 @@ static void ecore_spq_hw_initialize(struct ecore_hwfn *p_hwfn,
p_cxt->xstorm_st_context.spq_base_hi =
DMA_HI_LE(p_spq->chain.p_phys_addr);
- p_cxt->xstorm_st_context.consolid_base_addr.lo =
- DMA_LO_LE(p_hwfn->p_consq->chain.p_phys_addr);
- p_cxt->xstorm_st_context.consolid_base_addr.hi =
- DMA_HI_LE(p_hwfn->p_consq->chain.p_phys_addr);
+ DMA_REGPAIR_LE(p_cxt->xstorm_st_context.consolid_base_addr,
+ p_hwfn->p_consq->chain.p_phys_addr);
}
static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
@@ -218,19 +243,15 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
SET_FIELD(db.params, CORE_DB_DATA_AGG_VAL_SEL,
DQ_XCM_CORE_SPQ_PROD_CMD);
db.agg_flags = DQ_XCM_CORE_DQ_CF_CMD;
-
- /* validate producer is up to-date */
- OSAL_RMB(p_hwfn->p_dev);
-
db.spq_prod = OSAL_CPU_TO_LE16(ecore_chain_get_prod_idx(p_chain));
- /* do not reorder */
- OSAL_BARRIER(p_hwfn->p_dev);
+ /* make sure the SPQE is updated before the doorbell */
+ OSAL_WMB(p_hwfn->p_dev);
DOORBELL(p_hwfn, DB_ADDR(p_spq->cid, DQ_DEMS_LEGACY), *(u32 *)&db);
/* make sure doorbell is rang */
- OSAL_MMIOWB(p_hwfn->p_dev);
+ OSAL_WMB(p_hwfn->p_dev);
DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
"Doorbelled [0x%08x, CID 0x%08x] with Flags: %02x"
@@ -305,10 +326,11 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
}
DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
- "op %x prot %x res0 %x echo %x "
- "fwret %x flags %x\n", p_eqe->opcode,
+ "op %x prot %x res0 %x echo %x fwret %x flags %x\n",
+ p_eqe->opcode, /* Event Opcode */
p_eqe->protocol_id, /* Event Protocol ID */
p_eqe->reserved0, /* Reserved */
+ /* Echo value from ramrod data on the host */
OSAL_LE16_TO_CPU(p_eqe->echo),
p_eqe->fw_return_code, /* FW return code for SP
* ramrods
@@ -338,7 +360,7 @@ struct ecore_eq *ecore_eq_alloc(struct ecore_hwfn *p_hwfn, u16 num_elem)
struct ecore_eq *p_eq;
/* Allocate EQ struct */
- p_eq = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(struct ecore_eq));
+ p_eq = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_eq));
if (!p_eq) {
DP_NOTICE(p_hwfn, true,
"Failed to allocate `struct ecore_eq'\n");
@@ -436,8 +458,7 @@ void ecore_spq_setup(struct ecore_hwfn *p_hwfn)
capacity = ecore_chain_get_capacity(&p_spq->chain);
for (i = 0; i < capacity; i++) {
- p_virt->elem.data_ptr.hi = DMA_HI_LE(p_phys);
- p_virt->elem.data_ptr.lo = DMA_LO_LE(p_phys);
+ DMA_REGPAIR_LE(p_virt->elem.data_ptr, p_phys);
OSAL_LIST_PUSH_TAIL(&p_virt->list, &p_spq->free_pool);
@@ -537,18 +558,18 @@ ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent)
{
struct ecore_spq *p_spq = p_hwfn->p_spq;
struct ecore_spq_entry *p_ent = OSAL_NULL;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
OSAL_SPIN_LOCK(&p_spq->lock);
if (OSAL_LIST_IS_EMPTY(&p_spq->free_pool)) {
- p_ent = OSAL_ZALLOC(p_hwfn->p_dev, GFP_ATOMIC,
- sizeof(struct ecore_spq_entry));
+ p_ent = OSAL_ZALLOC(p_hwfn->p_dev, GFP_ATOMIC, sizeof(*p_ent));
if (!p_ent) {
- OSAL_SPIN_UNLOCK(&p_spq->lock);
DP_NOTICE(p_hwfn, true,
- "Failed to allocate an SPQ entry"
- " for a pending ramrod\n");
- return ECORE_NOMEM;
+ "Failed to allocate an SPQ entry for a pending"
+ " ramrod\n");
+ rc = ECORE_NOMEM;
+ goto out_unlock;
}
p_ent->queue = &p_spq->unlimited_pending;
} else {
@@ -560,9 +581,9 @@ ecore_spq_get_entry(struct ecore_hwfn *p_hwfn, struct ecore_spq_entry **pp_ent)
*pp_ent = p_ent;
+out_unlock:
OSAL_SPIN_UNLOCK(&p_spq->lock);
-
- return ECORE_SUCCESS;
+ return rc;
}
/* Locked variant; Should be called while the SPQ lock is taken */
@@ -607,6 +628,7 @@ ecore_spq_add_entry(struct ecore_hwfn *p_hwfn,
p_spq->unlimited_pending_count++;
return ECORE_SUCCESS;
+
} else {
struct ecore_spq_entry *p_en2;
@@ -621,14 +643,10 @@ ecore_spq_add_entry(struct ecore_hwfn *p_hwfn,
*/
p_ent->elem.data_ptr = p_en2->elem.data_ptr;
- /* Setting the cookie to the comp_done of the
- * new element.
- */
- if (p_ent->comp_cb.cookie == &p_ent->comp_done)
- p_ent->comp_cb.cookie = &p_en2->comp_done;
-
*p_en2 = *p_ent;
+ /* EBLOCK is responsible for freeing the allocated p_ent */
+ if (p_ent->comp_mode != ECORE_SPQ_MODE_EBLOCK)
OSAL_FREE(p_hwfn->p_dev, p_ent);
p_ent = p_en2;
@@ -682,8 +700,13 @@ static enum _ecore_status_t ecore_spq_post_list(struct ecore_hwfn *p_hwfn,
!OSAL_LIST_IS_EMPTY(head)) {
struct ecore_spq_entry *p_ent =
OSAL_LIST_FIRST_ENTRY(head, struct ecore_spq_entry, list);
+ if (p_ent != OSAL_NULL) {
+#if defined(_NTDDK_)
+#pragma warning(suppress : 6011 28182)
+#endif
OSAL_LIST_REMOVE_ENTRY(&p_ent->list, head);
- OSAL_LIST_PUSH_TAIL(&p_ent->list, &p_spq->completion_pending);
+ OSAL_LIST_PUSH_TAIL(&p_ent->list,
+ &p_spq->completion_pending);
p_spq->comp_sent_count++;
rc = ecore_spq_hw_post(p_hwfn, p_spq, p_ent);
@@ -694,13 +717,13 @@ static enum _ecore_status_t ecore_spq_post_list(struct ecore_hwfn *p_hwfn,
return rc;
}
}
+ }
return ECORE_SUCCESS;
}
static enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn)
{
- enum _ecore_status_t rc = ECORE_NOTIMPL;
struct ecore_spq *p_spq = p_hwfn->p_spq;
struct ecore_spq_entry *p_ent = OSAL_NULL;
@@ -713,17 +736,16 @@ static enum _ecore_status_t ecore_spq_pend_post(struct ecore_hwfn *p_hwfn)
if (!p_ent)
return ECORE_INVAL;
+#if defined(_NTDDK_)
+#pragma warning(suppress : 6011)
+#endif
OSAL_LIST_REMOVE_ENTRY(&p_ent->list, &p_spq->unlimited_pending);
ecore_spq_add_entry(p_hwfn, p_ent, p_ent->priority);
}
- rc = ecore_spq_post_list(p_hwfn,
+ return ecore_spq_post_list(p_hwfn,
&p_spq->pending, SPQ_HIGH_PRI_RESERVE_DEFAULT);
- if (rc)
- return rc;
-
- return ECORE_SUCCESS;
}
enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
@@ -785,7 +807,21 @@ enum _ecore_status_t ecore_spq_post(struct ecore_hwfn *p_hwfn,
* access p_ent here to see whether it's successful or not.
* Thus, after gaining the answer perform the cleanup here.
*/
- rc = ecore_spq_block(p_hwfn, p_ent, fw_return_code);
+ rc = ecore_spq_block(p_hwfn, p_ent, fw_return_code,
+ p_ent->queue == &p_spq->unlimited_pending);
+
+ if (p_ent->queue == &p_spq->unlimited_pending) {
+ /* This is an allocated p_ent which does not need to
+ * return to pool.
+ */
+ OSAL_FREE(p_hwfn->p_dev, p_ent);
+
+ /* TBD: handle error flow and remove p_ent from
+ * completion pending
+ */
+ return rc;
+ }
+
if (rc)
goto spq_post_fail2;
@@ -885,10 +921,13 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
found->comp_cb.function(p_hwfn, found->comp_cb.cookie, p_data,
fw_return_code);
- if (found->comp_mode != ECORE_SPQ_MODE_EBLOCK) {
- /* EBLOCK is responsible for freeing its own entry */
+ if ((found->comp_mode != ECORE_SPQ_MODE_EBLOCK) ||
+ (found->queue == &p_spq->unlimited_pending))
+ /* EBLOCK is responsible for returning its own entry into the
+ * free list, unless it originally added the entry into the
+ * unlimited pending list.
+ */
ecore_spq_return_entry(p_hwfn, found);
- }
/* Attempt to post pending requests */
OSAL_SPIN_LOCK(&p_spq->lock);
@@ -904,7 +943,7 @@ struct ecore_consq *ecore_consq_alloc(struct ecore_hwfn *p_hwfn)
/* Allocate ConsQ struct */
p_consq =
- OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(struct ecore_consq));
+ OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_consq));
if (!p_consq) {
DP_NOTICE(p_hwfn, true,
"Failed to allocate `struct ecore_consq'\n");
diff --git a/drivers/net/qede/base/ecore_spq.h b/drivers/net/qede/base/ecore_spq.h
index 490b7d9..717ede3 100644
--- a/drivers/net/qede/base/ecore_spq.h
+++ b/drivers/net/qede/base/ecore_spq.h
@@ -18,6 +18,7 @@
union ramrod_data {
struct pf_start_ramrod_data pf_start;
struct pf_update_ramrod_data pf_update;
+ struct rl_update_ramrod_data rl_update;
struct rx_queue_start_ramrod_data rx_queue_start;
struct rx_queue_update_ramrod_data rx_queue_update;
struct rx_queue_stop_ramrod_data rx_queue_stop;
@@ -101,8 +102,8 @@ struct ecore_spq {
/* Bitmap for handling out-of-order completions */
#define SPQ_RING_SIZE \
(CORE_SPQE_PAGE_SIZE_BYTES / sizeof(struct slow_path_element))
-#define SPQ_COMP_BMAP_SIZE \
-(SPQ_RING_SIZE / (sizeof(unsigned long) * 8 /* BITS_PER_LONG */))
+/* BITS_PER_LONG */
+#define SPQ_COMP_BMAP_SIZE (SPQ_RING_SIZE / (sizeof(unsigned long) * 8))
unsigned long p_comp_bitmap[SPQ_COMP_BMAP_SIZE];
u8 comp_bitmap_idx;
#define SPQ_COMP_BMAP_SET_BIT(p_spq, idx) \
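
[The reformatted macro above sizes the completion bitmap at one bit per
ring element, packed into unsigned longs. A self-contained sketch of the
word/bit split that the SET/CLEAR macros rely on - RING_SIZE is a
placeholder value, not taken from the hardware definitions:

    #include <limits.h>             /* CHAR_BIT */
    #include <stdbool.h>

    #define RING_SIZE     128       /* placeholder ring size */
    #define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
    #define BMAP_SIZE     (RING_SIZE / BITS_PER_LONG)

    static unsigned long comp_bitmap[BMAP_SIZE];

    void bmap_set(unsigned int idx)
    {
        comp_bitmap[idx / BITS_PER_LONG] |= 1UL << (idx % BITS_PER_LONG);
    }

    bool bmap_test(unsigned int idx)
    {
        return comp_bitmap[idx / BITS_PER_LONG] &
               (1UL << (idx % BITS_PER_LONG));
    }
]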
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index d8d1aac..eb3a1e2 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -25,8 +25,8 @@
#include "ecore_cxt.h"
#include "ecore_vf.h"
#include "ecore_init_fw_funcs.h"
+#include "ecore_sp_commands.h"
-/* TEMPORARY until we implement print_enums... */
const char *ecore_channel_tlvs_string[] = {
"CHANNEL_TLV_NONE", /* ends tlv sequence */
"CHANNEL_TLV_ACQUIRE",
@@ -54,6 +54,178 @@ const char *ecore_channel_tlvs_string[] = {
"CHANNEL_TLV_MAX"
};
+/* IOV ramrods */
+static enum _ecore_status_t ecore_sp_vf_start(struct ecore_hwfn *p_hwfn,
+ struct ecore_vf_info *p_vf)
+{
+ struct vf_start_ramrod_data *p_ramrod = OSAL_NULL;
+ struct ecore_spq_entry *p_ent = OSAL_NULL;
+ struct ecore_sp_init_data init_data;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
+ u8 fp_minor;
+
+ /* Get SPQ entry */
+ OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+ init_data.cid = ecore_spq_get_cid(p_hwfn);
+ init_data.opaque_fid = p_vf->opaque_fid;
+ init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+ rc = ecore_sp_init_request(p_hwfn, &p_ent,
+ COMMON_RAMROD_VF_START,
+ PROTOCOLID_COMMON, &init_data);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ p_ramrod = &p_ent->ramrod.vf_start;
+
+ p_ramrod->vf_id = GET_FIELD(p_vf->concrete_fid, PXP_CONCRETE_FID_VFID);
+ p_ramrod->opaque_fid = OSAL_CPU_TO_LE16(p_vf->opaque_fid);
+
+ switch (p_hwfn->hw_info.personality) {
+ case ECORE_PCI_ETH:
+ p_ramrod->personality = PERSONALITY_ETH;
+ break;
+ case ECORE_PCI_ETH_ROCE:
+ p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
+ break;
+ default:
+ DP_NOTICE(p_hwfn, true, "Unknown VF personality %d\n",
+ p_hwfn->hw_info.personality);
+ return ECORE_INVAL;
+ }
+
+ fp_minor = p_vf->acquire.vfdev_info.eth_fp_hsi_minor;
+ if (fp_minor > ETH_HSI_VER_MINOR &&
+ fp_minor != ETH_HSI_VER_NO_PKT_LEN_TUNN) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "VF [%d] - Requested fp hsi %02x.%02x which is"
+ " slightly newer than PF's %02x.%02x; Configuring"
+ " PFs version\n",
+ p_vf->abs_vf_id,
+ ETH_HSI_VER_MAJOR, fp_minor,
+ ETH_HSI_VER_MAJOR, ETH_HSI_VER_MINOR);
+ fp_minor = ETH_HSI_VER_MINOR;
+ }
+
+ p_ramrod->hsi_fp_ver.major_ver_arr[ETH_VER_KEY] = ETH_HSI_VER_MAJOR;
+ p_ramrod->hsi_fp_ver.minor_ver_arr[ETH_VER_KEY] = fp_minor;
+
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "VF[%d] - Starting using HSI %02x.%02x\n",
+ p_vf->abs_vf_id, ETH_HSI_VER_MAJOR, fp_minor);
+
+ return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+static enum _ecore_status_t ecore_sp_vf_stop(struct ecore_hwfn *p_hwfn,
+ u32 concrete_vfid,
+ u16 opaque_vfid)
+{
+ struct vf_stop_ramrod_data *p_ramrod = OSAL_NULL;
+ struct ecore_spq_entry *p_ent = OSAL_NULL;
+ struct ecore_sp_init_data init_data;
+ enum _ecore_status_t rc = ECORE_NOTIMPL;
+
+ /* Get SPQ entry */
+ OSAL_MEMSET(&init_data, 0, sizeof(init_data));
+ init_data.cid = ecore_spq_get_cid(p_hwfn);
+ init_data.opaque_fid = opaque_vfid;
+ init_data.comp_mode = ECORE_SPQ_MODE_EBLOCK;
+
+ rc = ecore_sp_init_request(p_hwfn, &p_ent,
+ COMMON_RAMROD_VF_STOP,
+ PROTOCOLID_COMMON, &init_data);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ p_ramrod = &p_ent->ramrod.vf_stop;
+
+ p_ramrod->vf_id = GET_FIELD(concrete_vfid, PXP_CONCRETE_FID_VFID);
+
+ return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
+}
+
+bool ecore_iov_is_valid_vfid(struct ecore_hwfn *p_hwfn, int rel_vf_id,
+ bool b_enabled_only)
+{
+ if (!p_hwfn->pf_iov_info) {
+ DP_NOTICE(p_hwfn->p_dev, true, "No iov info\n");
+ return false;
+ }
+
+ if ((rel_vf_id >= p_hwfn->p_dev->p_iov_info->total_vfs) ||
+ (rel_vf_id < 0))
+ return false;
+
+ if ((!p_hwfn->pf_iov_info->vfs_array[rel_vf_id].b_init) &&
+ b_enabled_only)
+ return false;
+
+ return true;
+}
+
+struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
+ u16 relative_vf_id,
+ bool b_enabled_only)
+{
+ struct ecore_vf_info *vf = OSAL_NULL;
+
+ if (!p_hwfn->pf_iov_info) {
+ DP_NOTICE(p_hwfn->p_dev, true, "No iov info\n");
+ return OSAL_NULL;
+ }
+
+ if (ecore_iov_is_valid_vfid(p_hwfn, relative_vf_id, b_enabled_only))
+ vf = &p_hwfn->pf_iov_info->vfs_array[relative_vf_id];
+ else
+ DP_ERR(p_hwfn, "ecore_iov_get_vf_info: VF[%d] is not enabled\n",
+ relative_vf_id);
+
+ return vf;
+}
+
+static bool ecore_iov_validate_rxq(struct ecore_hwfn *p_hwfn,
+ struct ecore_vf_info *p_vf,
+ u16 rx_qid)
+{
+ if (rx_qid >= p_vf->num_rxqs)
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "VF[0x%02x] - can't touch Rx queue[%04x];"
+ " Only 0x%04x are allocated\n",
+ p_vf->abs_vf_id, rx_qid, p_vf->num_rxqs);
+ return rx_qid < p_vf->num_rxqs;
+}
+
+static bool ecore_iov_validate_txq(struct ecore_hwfn *p_hwfn,
+ struct ecore_vf_info *p_vf,
+ u16 tx_qid)
+{
+ if (tx_qid >= p_vf->num_txqs)
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "VF[0x%02x] - can't touch Tx queue[%04x];"
+ " Only 0x%04x are allocated\n",
+ p_vf->abs_vf_id, tx_qid, p_vf->num_txqs);
+ return tx_qid < p_vf->num_txqs;
+}
+
+static bool ecore_iov_validate_sb(struct ecore_hwfn *p_hwfn,
+ struct ecore_vf_info *p_vf,
+ u16 sb_idx)
+{
+ int i;
+
+ for (i = 0; i < p_vf->num_sbs; i++)
+ if (p_vf->igu_sbs[i] == sb_idx)
+ return true;
+
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "VF[0%02x] - tried using sb_idx %04x which doesn't exist as"
+ " one of its 0x%02x SBs\n",
+ p_vf->abs_vf_id, sb_idx, p_vf->num_sbs);
+
+ return false;
+}
+
/* TODO - this is linux crc32; Need a way to ifdef it out for linux */
u32 ecore_crc32(u32 crc, u8 *ptr, u32 length)
{
@@ -72,9 +244,9 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
struct ecore_bulletin_content *p_bulletin;
+ int crc_size = sizeof(p_bulletin->crc);
struct ecore_dmae_params params;
struct ecore_vf_info *p_vf;
- int crc_size = sizeof(p_bulletin->crc);
p_vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
if (!p_vf)
@@ -106,7 +278,7 @@ enum _ecore_status_t ecore_iov_post_vf_bulletin(struct ecore_hwfn *p_hwfn,
static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev)
{
- struct ecore_hw_sriov_info *iov = &p_dev->sriov_info;
+ struct ecore_hw_sriov_info *iov = p_dev->p_iov_info;
int pos = iov->pos;
DP_VERBOSE(p_dev, ECORE_MSG_IOV, "sriov ext pos %d\n", pos);
@@ -148,6 +320,7 @@ static enum _ecore_status_t ecore_iov_pci_cfg_info(struct ecore_dev *p_dev)
DP_VERBOSE(p_dev, ECORE_MSG_IOV, "IOV info[%d]: nres %d, cap 0x%x,"
"ctrl 0x%x, total %d, initial %d, num vfs %d, offset %d,"
" stride %d, page size 0x%x\n", 0,
+ /* @@@TBD MichalK - function id */
iov->nres, iov->cap, iov->ctrl,
iov->total_vfs, iov->initial_vfs, iov->nr_virtfn,
iov->offset, iov->stride, iov->pgsz);
@@ -200,16 +373,14 @@ static void ecore_iov_clear_vf_igu_blocks(struct ecore_hwfn *p_hwfn,
static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
{
- u16 num_vfs = p_hwfn->p_dev->sriov_info.total_vfs;
- union pfvf_tlvs *p_reply_virt_addr;
- union vfpf_tlvs *p_req_virt_addr;
+ struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info;
+ struct ecore_pf_iov *p_iov_info = p_hwfn->pf_iov_info;
struct ecore_bulletin_content *p_bulletin_virt;
- struct ecore_pf_iov *p_iov_info;
dma_addr_t req_p, rply_p, bulletin_p;
+ union pfvf_tlvs *p_reply_virt_addr;
+ union vfpf_tlvs *p_req_virt_addr;
u8 idx = 0;
- p_iov_info = p_hwfn->pf_iov_info;
-
OSAL_MEMSET(p_iov_info->vfs_array, 0, sizeof(p_iov_info->vfs_array));
p_req_virt_addr = p_iov_info->mbx_msg_virt_addr;
@@ -226,7 +397,7 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
p_iov_info->base_vport_id = 1; /* @@@TBD resource allocation */
- for (idx = 0; idx < num_vfs; idx++) {
+ for (idx = 0; idx < p_iov->total_vfs; idx++) {
struct ecore_vf_info *vf = &p_iov_info->vfs_array[idx];
u32 concrete;
@@ -240,6 +411,7 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
vf->vf_mbx.sw_mbx.mbx_state = VF_PF_WAIT_FOR_START_REQUEST;
#endif
vf->state = VF_STOPPED;
+ vf->b_init = false;
vf->bulletin.phys = idx *
sizeof(struct ecore_bulletin_content) + bulletin_p;
@@ -247,7 +419,7 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
vf->bulletin.size = sizeof(struct ecore_bulletin_content);
vf->relative_vf_id = idx;
- vf->abs_vf_id = idx + p_hwfn->hw_info.first_vf_in_pf;
+ vf->abs_vf_id = idx + p_iov->first_vf_in_pf;
concrete = ecore_vfid_to_concrete(p_hwfn, vf->abs_vf_id);
vf->concrete_fid = concrete;
/* TODO - need to devise a better way of getting opaque */
@@ -255,6 +427,9 @@ static void ecore_iov_setup_vfdb(struct ecore_hwfn *p_hwfn)
(vf->abs_vf_id << 8);
/* @@TBD MichalK - add base vport_id of VFs to equation */
vf->vport_id = p_iov_info->base_vport_id + idx;
+
+ vf->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
+ vf->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
}
}
@@ -264,7 +439,7 @@ static enum _ecore_status_t ecore_iov_allocate_vfdb(struct ecore_hwfn *p_hwfn)
void **p_v_addr;
u16 num_vfs = 0;
- num_vfs = p_hwfn->p_dev->sriov_info.total_vfs;
+ num_vfs = p_hwfn->p_dev->p_iov_info->total_vfs;
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
"ecore_iov_allocate_vfdb for %d VFs\n", num_vfs);
@@ -297,16 +472,15 @@ static enum _ecore_status_t ecore_iov_allocate_vfdb(struct ecore_hwfn *p_hwfn)
return ECORE_NOMEM;
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "PF's Requests mailbox [%p virt 0x%" PRIx64 " phys], "
- "Response mailbox [%p virt 0x%" PRIx64 " phys] Bulletins"
- " [%p virt 0x%" PRIx64 " phys]\n",
+ "PF's Requests mailbox [%p virt 0x%lx phys], "
+ "Response mailbox [%p virt 0x%lx phys] Bulletinsi"
+ " [%p virt 0x%lx phys]\n",
p_iov_info->mbx_msg_virt_addr,
- (u64)p_iov_info->mbx_msg_phys_addr,
+ (unsigned long)p_iov_info->mbx_msg_phys_addr,
p_iov_info->mbx_reply_virt_addr,
- (u64)p_iov_info->mbx_reply_phys_addr,
- p_iov_info->p_bulletins, (u64)p_iov_info->bulletins_phys);
-
- /* @@@TBD MichalK - statistics / RSS */
+ (unsigned long)p_iov_info->mbx_reply_phys_addr,
+ p_iov_info->p_bulletins,
+ (unsigned long)p_iov_info->bulletins_phys);
return ECORE_SUCCESS;
}
@@ -332,38 +506,33 @@ static void ecore_iov_free_vfdb(struct ecore_hwfn *p_hwfn)
p_iov_info->p_bulletins,
p_iov_info->bulletins_phys,
p_iov_info->bulletins_size);
-
- /* @@@TBD MichalK - statistics / RSS */
}
enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn *p_hwfn)
{
- enum _ecore_status_t rc = ECORE_SUCCESS;
struct ecore_pf_iov *p_sriov;
if (!IS_PF_SRIOV(p_hwfn)) {
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
"No SR-IOV - no need for IOV db\n");
- return rc;
+ return ECORE_SUCCESS;
}
p_sriov = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_sriov));
if (!p_sriov) {
DP_NOTICE(p_hwfn, true,
- "Failed to allocate `struct ecore_sriov'");
+ "Failed to allocate `struct ecore_sriov'\n");
return ECORE_NOMEM;
}
p_hwfn->pf_iov_info = p_sriov;
- rc = ecore_iov_allocate_vfdb(p_hwfn);
-
- return rc;
+ return ecore_iov_allocate_vfdb(p_hwfn);
}
void ecore_iov_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
{
- if (!IS_PF_SRIOV(p_hwfn) || !p_hwfn->pf_iov_info)
+ if (!IS_PF_SRIOV(p_hwfn) || !IS_PF_SRIOV_ALLOC(p_hwfn))
return;
ecore_iov_setup_vfdb(p_hwfn);
@@ -372,39 +541,59 @@ void ecore_iov_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
void ecore_iov_free(struct ecore_hwfn *p_hwfn)
{
- if (p_hwfn->pf_iov_info) {
+ if (IS_PF_SRIOV_ALLOC(p_hwfn)) {
ecore_iov_free_vfdb(p_hwfn);
OSAL_FREE(p_hwfn->p_dev, p_hwfn->pf_iov_info);
}
}
-enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
+void ecore_iov_free_hw_info(struct ecore_dev *p_dev)
+{
+ OSAL_FREE(p_dev, p_dev->p_iov_info);
+ p_dev->p_iov_info = OSAL_NULL;
+}
+
+enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn)
{
+ struct ecore_dev *p_dev = p_hwfn->p_dev;
+ int pos;
enum _ecore_status_t rc;
- /* @@@ TBD get this information from shmem / pci cfg */
if (IS_VF(p_hwfn->p_dev))
return ECORE_SUCCESS;
- /* First hwfn should learn the PCI configuration */
- if (IS_LEAD_HWFN(p_hwfn)) {
- struct ecore_dev *p_dev = p_hwfn->p_dev;
- int *pos = &p_hwfn->p_dev->sriov_info.pos;
-
- *pos = OSAL_PCI_FIND_EXT_CAPABILITY(p_hwfn->p_dev,
+ /* Learn the PCI configuration */
+ pos = OSAL_PCI_FIND_EXT_CAPABILITY(p_hwfn->p_dev,
PCI_EXT_CAP_ID_SRIOV);
- if (!*pos) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "No PCIe IOV support\n");
+ if (!pos) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "No PCIe IOV support\n");
return ECORE_SUCCESS;
}
+ /* Allocate a new struct for IOV information */
+ /* TODO - can change to VALLOC when it's available */
+ p_dev->p_iov_info = OSAL_ZALLOC(p_dev, GFP_KERNEL,
+ sizeof(*p_dev->p_iov_info));
+ if (!p_dev->p_iov_info) {
+ DP_NOTICE(p_hwfn, true,
+ "Can't support IOV due to lack of memory\n");
+ return ECORE_NOMEM;
+ }
+ p_dev->p_iov_info->pos = pos;
+
rc = ecore_iov_pci_cfg_info(p_dev);
if (rc)
return rc;
- } else if (!p_hwfn->p_dev->sriov_info.pos) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV, "No PCIe IOV support\n");
+
+ /* We want PF IOV to be synonymous with the existence of p_iov_info;
+ * In case the capability is published but there are no VFs, simply
+ * de-allocate the struct.
+ */
+ if (!p_dev->p_iov_info->total_vfs) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "IOV capabilities, but no VFs are published\n");
+ OSAL_FREE(p_dev, p_dev->p_iov_info);
+ p_dev->p_iov_info = OSAL_NULL;
return ECORE_SUCCESS;
}
@@ -412,61 +601,66 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn,
* VFs start at offset 16 relative to PF0, and 2nd engine VFs begin
* after the first engine's VFs.
*/
- p_hwfn->hw_info.first_vf_in_pf = p_hwfn->p_dev->sriov_info.offset +
+ p_dev->p_iov_info->first_vf_in_pf = p_hwfn->p_dev->p_iov_info->offset +
p_hwfn->abs_pf_id - 16;
if (ECORE_PATH_ID(p_hwfn))
- p_hwfn->hw_info.first_vf_in_pf -= MAX_NUM_VFS_BB;
+ p_dev->p_iov_info->first_vf_in_pf -= MAX_NUM_VFS_BB;
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "First VF in hwfn 0x%08x\n", p_hwfn->hw_info.first_vf_in_pf);
+ "First VF in hwfn 0x%08x\n",
+ p_dev->p_iov_info->first_vf_in_pf);
return ECORE_SUCCESS;
}
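
[The first_vf_in_pf computation above encodes two hardware rules: VFs
are numbered starting at offset 16 relative to PF0, and second-engine
VFs come after all of engine 0's. A small worked example - the
MAX_NUM_VFS_BB value here is illustrative, not taken from this patch:

    #include <stdio.h>

    #define MAX_NUM_VFS_BB 120      /* illustrative engine-0 VF count */

    int first_vf_in_pf(int sriov_offset, int abs_pf_id, int path_id)
    {
        int first = sriov_offset + abs_pf_id - 16;

        if (path_id)                /* 2nd engine: skip engine 0's VF range */
            first -= MAX_NUM_VFS_BB;
        return first;
    }

    int main(void)
    {
        /* offset 16, PF0, engine 0 -> VFs begin at absolute index 0 */
        printf("%d\n", first_vf_in_pf(16, 0, 0));
        return 0;
    }
]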
-struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
- u16 relative_vf_id,
- bool b_enabled_only)
+bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid)
{
- struct ecore_vf_info *vf = OSAL_NULL;
-
- if (!p_hwfn->pf_iov_info) {
- DP_NOTICE(p_hwfn->p_dev, true, "No iov info\n");
- return OSAL_NULL;
- }
+ /* Check PF supports sriov */
+ if (IS_VF(p_hwfn->p_dev) || !IS_ECORE_SRIOV(p_hwfn->p_dev) ||
+ !IS_PF_SRIOV_ALLOC(p_hwfn))
+ return false;
- if (ecore_iov_is_valid_vfid(p_hwfn, relative_vf_id, b_enabled_only))
- vf = &p_hwfn->pf_iov_info->vfs_array[relative_vf_id];
- else
- DP_ERR(p_hwfn, "ecore_iov_get_vf_info: VF[%d] is not enabled\n",
- relative_vf_id);
+ /* Check VF validity */
+ if (!ecore_iov_is_valid_vfid(p_hwfn, vfid, true))
+ return false;
- return vf;
+ return true;
}
-void ecore_iov_set_vf_to_disable(struct ecore_hwfn *p_hwfn,
+void ecore_iov_set_vf_to_disable(struct ecore_dev *p_dev,
u16 rel_vf_id, u8 to_disable)
{
struct ecore_vf_info *vf;
+ int i;
+
+ for_each_hwfn(p_dev, i) {
+ struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
if (!vf)
- return;
+ continue;
vf->to_disable = to_disable;
}
+}
-void ecore_iov_set_vfs_to_disable(struct ecore_hwfn *p_hwfn, u8 to_disable)
+void ecore_iov_set_vfs_to_disable(struct ecore_dev *p_dev,
+ u8 to_disable)
{
u16 i;
- for (i = 0; i < p_hwfn->p_dev->sriov_info.total_vfs; i++)
- ecore_iov_set_vf_to_disable(p_hwfn, i, to_disable);
+ if (!IS_ECORE_SRIOV(p_dev))
+ return;
+
+ for (i = 0; i < p_dev->p_iov_info->total_vfs; i++)
+ ecore_iov_set_vf_to_disable(p_dev, i, to_disable);
}
#ifndef LINUX_REMOVE
/* @@@TBD Consider taking outside of ecore... */
enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
- u16 vf_id, void *ctx)
+ u16 vf_id,
+ void *ctx)
{
enum _ecore_status_t rc = ECORE_SUCCESS;
struct ecore_vf_info *vf = ecore_iov_get_vf_info(p_hwfn, vf_id, true);
@@ -483,29 +677,9 @@ enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn *p_hwfn,
}
#endif
-/**
- * VF enable primitives
- *
- * when pretend is required the caller is reponsible
- * for calling pretend prioir to calling these routines
- */
-
-/* clears vf error in all semi blocks
- * Assumption: called under VF pretend...
- */
-static OSAL_INLINE void ecore_iov_vf_semi_clear_err(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- ecore_wr(p_hwfn, p_ptt, TSEM_REG_VF_ERROR, 1);
- ecore_wr(p_hwfn, p_ptt, USEM_REG_VF_ERROR, 1);
- ecore_wr(p_hwfn, p_ptt, MSEM_REG_VF_ERROR, 1);
- ecore_wr(p_hwfn, p_ptt, XSEM_REG_VF_ERROR, 1);
- ecore_wr(p_hwfn, p_ptt, YSEM_REG_VF_ERROR, 1);
- ecore_wr(p_hwfn, p_ptt, PSEM_REG_VF_ERROR, 1);
-}
-
static void ecore_iov_vf_pglue_clear_err(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u8 abs_vfid)
+ struct ecore_ptt *p_ptt,
+ u8 abs_vfid)
{
ecore_wr(p_hwfn, p_ptt,
PGLUE_B_REG_WAS_ERROR_VF_31_0_CLR + (abs_vfid >> 5) * 4,
@@ -517,30 +691,20 @@ static void ecore_iov_vf_igu_reset(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *vf)
{
int i;
- u16 igu_sb_id;
/* Set VF masks and configuration - pretend */
ecore_fid_pretend(p_hwfn, p_ptt, (u16)vf->concrete_fid);
ecore_wr(p_hwfn, p_ptt, IGU_REG_STATISTIC_NUM_VF_MSG_SENT, 0);
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "value in VF_CONFIGURATION of vf %d after write %x\n",
- vf->abs_vf_id,
- ecore_rd(p_hwfn, p_ptt, IGU_REG_VF_CONFIGURATION));
-
/* unpretend */
ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid);
- /* iterate ove all queues, clear sb consumer */
- for (i = 0; i < vf->num_sbs; i++) {
- igu_sb_id = vf->igu_sbs[i];
- /* Set then clear... */
- ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, igu_sb_id, 1,
- vf->opaque_fid);
- ecore_int_igu_cleanup_sb(p_hwfn, p_ptt, igu_sb_id, 0,
- vf->opaque_fid);
- }
+ /* iterate over all queues, clear sb consumer */
+ for (i = 0; i < vf->num_sbs; i++)
+ ecore_int_igu_init_pure_rt_single(p_hwfn, p_ptt,
+ vf->igu_sbs[i],
+ vf->opaque_fid, true);
}
static void ecore_iov_vf_igu_set_int(struct ecore_hwfn *p_hwfn,
@@ -581,9 +745,11 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
ecore_iov_vf_pglue_clear_err(p_hwfn, p_ptt,
ECORE_VF_ABS_ID(p_hwfn, vf));
+ ecore_iov_vf_igu_reset(p_hwfn, p_ptt, vf);
+
rc = ecore_mcp_config_vf_msix(p_hwfn, p_ptt,
vf->abs_vf_id, vf->num_sbs);
- if (rc)
+ if (rc != ECORE_SUCCESS)
return rc;
ecore_fid_pretend(p_hwfn, p_ptt, (u16)vf->concrete_fid);
@@ -597,18 +763,6 @@ ecore_iov_enable_vf_access(struct ecore_hwfn *p_hwfn,
/* unpretend */
ecore_fid_pretend(p_hwfn, p_ptt, (u16)p_hwfn->hw_info.concrete_fid);
- if (vf->state != VF_STOPPED) {
- DP_NOTICE(p_hwfn, true, "VF[%02x] is already started\n",
- vf->abs_vf_id);
- return ECORE_INVAL;
- }
-
- /* Start VF */
- rc = ecore_sp_vf_start(p_hwfn, vf->concrete_fid, vf->opaque_fid);
- if (rc != ECORE_SUCCESS)
- DP_NOTICE(p_hwfn, true, "Failed to start VF[%02x]\n",
- vf->abs_vf_id);
-
vf->state = VF_FREE;
return rc;
@@ -631,8 +785,7 @@ static void ecore_iov_config_perm_table(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf, u8 enable)
{
- u32 reg_addr;
- u32 val;
+ u32 reg_addr, val;
u16 qzone_id = 0;
int qid;
@@ -650,13 +803,13 @@ static void ecore_iov_enable_vf_traffic(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
- /* Reset vf in IGU interrupts are still disabled */
+ /* Reset vf in IGU - interrupts are still disabled */
ecore_iov_vf_igu_reset(p_hwfn, p_ptt, vf);
- ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 1 /* enable */);
+ ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 1);
/* Permission Table */
- ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, true /* enable */);
+ ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, true);
}
static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
@@ -664,11 +817,11 @@ static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *vf,
u16 num_rx_queues)
{
- int igu_id = 0;
- int qid = 0;
+ struct ecore_igu_block *igu_blocks;
+ int qid = 0, igu_id = 0;
u32 val = 0;
- struct ecore_igu_block *igu_blocks =
- p_hwfn->hw_info.p_igu_info->igu_map.igu_blocks;
+
+ igu_blocks = p_hwfn->hw_info.p_igu_info->igu_map.igu_blocks;
if (num_rx_queues > p_hwfn->hw_info.p_igu_info->free_blks)
num_rx_queues = p_hwfn->hw_info.p_igu_info->free_blks;
@@ -752,24 +905,24 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u16 rel_vf_id, u16 num_rx_queues)
{
- enum _ecore_status_t rc = ECORE_SUCCESS;
- struct ecore_vf_info *vf = OSAL_NULL;
u8 num_of_vf_available_chains = 0;
+ struct ecore_vf_info *vf = OSAL_NULL;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
u32 cids;
u8 i;
- if (ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, rel_vf_id)) {
- DP_NOTICE(p_hwfn, true, "VF[%d] is already active.\n",
- rel_vf_id);
- return ECORE_INVAL;
- }
-
vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
if (!vf) {
DP_ERR(p_hwfn, "ecore_iov_init_hw_for_vf : vf is OSAL_NULL\n");
return ECORE_UNKNOWN_ERROR;
}
+ if (vf->b_init) {
+ DP_NOTICE(p_hwfn, true, "VF[%d] is already active.\n",
+ rel_vf_id);
+ return ECORE_INVAL;
+ }
+
/* Limit number of queues according to number of CIDs */
ecore_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &cids);
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -817,22 +970,62 @@ enum _ecore_status_t ecore_iov_init_hw_for_vf(struct ecore_hwfn *p_hwfn,
rc = ecore_iov_enable_vf_access(p_hwfn, p_ptt, vf);
if (rc == ECORE_SUCCESS) {
- struct ecore_hw_sriov_info *p_iov = &p_hwfn->p_dev->sriov_info;
- u16 vf_id = vf->relative_vf_id;
+ vf->b_init = true;
+ p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] |=
+ (1ULL << (vf->relative_vf_id % 64));
- p_iov->num_vfs++;
- p_iov->active_vfs[vf_id / 64] |= (1ULL << (vf_id % 64));
+ if (IS_LEAD_HWFN(p_hwfn))
+ p_hwfn->p_dev->p_iov_info->num_vfs++;
}
return rc;
}
+void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
+ u16 vfid,
+ struct ecore_mcp_link_params *params,
+ struct ecore_mcp_link_state *link,
+ struct ecore_mcp_link_capabilities *p_caps)
+{
+ struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
+ struct ecore_bulletin_content *p_bulletin;
+
+ if (!p_vf)
+ return;
+
+ p_bulletin = p_vf->bulletin.p_virt;
+ p_bulletin->req_autoneg = params->speed.autoneg;
+ p_bulletin->req_adv_speed = params->speed.advertised_speeds;
+ p_bulletin->req_forced_speed = params->speed.forced_speed;
+ p_bulletin->req_autoneg_pause = params->pause.autoneg;
+ p_bulletin->req_forced_rx = params->pause.forced_rx;
+ p_bulletin->req_forced_tx = params->pause.forced_tx;
+ p_bulletin->req_loopback = params->loopback_mode;
+
+ p_bulletin->link_up = link->link_up;
+ p_bulletin->speed = link->speed;
+ p_bulletin->full_duplex = link->full_duplex;
+ p_bulletin->autoneg = link->an;
+ p_bulletin->autoneg_complete = link->an_complete;
+ p_bulletin->parallel_detection = link->parallel_detection;
+ p_bulletin->pfc_enabled = link->pfc_enabled;
+ p_bulletin->partner_adv_speed = link->partner_adv_speed;
+ p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
+ p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
+ p_bulletin->partner_adv_pause = link->partner_adv_pause;
+ p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
+
+ p_bulletin->capability_speed = p_caps->speed_capabilities;
+}
+
enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u16 rel_vf_id)
{
+ struct ecore_mcp_link_capabilities caps;
+ struct ecore_mcp_link_params params;
+ struct ecore_mcp_link_state link;
struct ecore_vf_info *vf = OSAL_NULL;
- enum _ecore_status_t rc = ECORE_SUCCESS;
vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, true);
if (!vf) {
@@ -840,38 +1033,45 @@ enum _ecore_status_t ecore_iov_release_hw_for_vf(struct ecore_hwfn *p_hwfn,
return ECORE_UNKNOWN_ERROR;
}
- if (vf->state != VF_STOPPED) {
- /* Stopping the VF */
- rc = ecore_sp_vf_stop(p_hwfn, vf->concrete_fid, vf->opaque_fid);
+ if (vf->bulletin.p_virt)
+ OSAL_MEMSET(vf->bulletin.p_virt, 0,
+ sizeof(*vf->bulletin.p_virt));
- if (rc != ECORE_SUCCESS) {
- DP_ERR(p_hwfn, "ecore_sp_vf_stop returned error %d\n",
- rc);
- return rc;
- }
+ OSAL_MEMSET(&vf->p_vf_info, 0, sizeof(vf->p_vf_info));
- vf->state = VF_STOPPED;
- }
+ /* Get the link configuration back in bulletin so
+ * that when VFs are re-enabled they get the actual
+ * link configuration.
+ */
+ OSAL_MEMCPY(&params, ecore_mcp_get_link_params(p_hwfn), sizeof(params));
+ OSAL_MEMCPY(&link, ecore_mcp_get_link_state(p_hwfn), sizeof(link));
+ OSAL_MEMCPY(&caps, ecore_mcp_get_link_capabilities(p_hwfn),
+ sizeof(caps));
+ ecore_iov_set_link(p_hwfn, rel_vf_id, &params, &link, &caps);
+
+ /* Forget the VF's acquisition message */
+ OSAL_MEMSET(&vf->acquire, 0, sizeof(vf->acquire));
/* disabling interrupts and resetting permission table was done during
* vf-close, however, we could get here without going through vf_close
*/
/* Disable Interrupts for VF */
- ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0 /* disable */);
+ ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0);
/* Reset Permission table */
- ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, 0 /* disable */);
+ ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, 0);
vf->num_rxqs = 0;
vf->num_txqs = 0;
ecore_iov_free_vf_igu_sbs(p_hwfn, p_ptt, vf);
- if (ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, rel_vf_id)) {
- struct ecore_hw_sriov_info *p_iov = &p_hwfn->p_dev->sriov_info;
- u16 vf_id = vf->relative_vf_id;
+ if (vf->b_init) {
+ vf->b_init = false;
+ p_hwfn->pf_iov_info->active_vfs[vf->relative_vf_id / 64] &=
+ ~(1ULL << (vf->relative_vf_id % 64));
- p_iov->num_vfs--;
- p_iov->active_vfs[vf_id / 64] &= ~(1ULL << (vf_id % 64));
+ if (IS_LEAD_HWFN(p_hwfn))
+ p_hwfn->p_dev->p_iov_info->num_vfs--;
}
return ECORE_SUCCESS;
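
[Both the set (in init_hw_for_vf) and the clear above must split the VF
id the same way: /64 selects the 64-bit word, %64 selects the bit - the
clear previously shifted by the word index, which would touch the wrong
bit for any VF id above 63. A standalone illustration of the pair:

    #include <stdint.h>

    static uint64_t active_vfs[4];      /* up to 256 VFs, 64 per word */

    void vf_set_active(unsigned int vf_id)
    {
        active_vfs[vf_id / 64] |= 1ULL << (vf_id % 64);
    }

    void vf_clear_active(unsigned int vf_id)
    {
        /* the shift must use %64, not /64 - mixing them up clears
         * the wrong bit
         */
        active_vfs[vf_id / 64] &= ~(1ULL << (vf_id % 64));
    }
]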
@@ -885,10 +1085,6 @@ static bool ecore_iov_tlv_supported(u16 tlvtype)
static void ecore_iov_lock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *vf, u16 tlv)
{
- /* we don't lock the channel for unsupported tlvs */
- if (!ecore_iov_tlv_supported(tlv))
- return;
-
/* lock the channel */
/* mutex_lock(&vf->op_mutex); @@@TBD MichalK - add lock... */
@@ -896,35 +1092,35 @@ static void ecore_iov_lock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
/* vf->op_current = tlv; @@@TBD MichalK */
/* log the lock */
+ if (ecore_iov_tlv_supported(tlv))
DP_VERBOSE(p_hwfn,
ECORE_MSG_IOV,
"VF[%d]: vf pf channel locked by %s\n",
- vf->abs_vf_id, ecore_channel_tlvs_string[tlv]);
+ vf->abs_vf_id,
+ ecore_channel_tlvs_string[tlv]);
+ else
+ DP_VERBOSE(p_hwfn,
+ ECORE_MSG_IOV,
+ "VF[%d]: vf pf channel locked by %04x\n",
+ vf->abs_vf_id, tlv);
}
static void ecore_iov_unlock_vf_pf_channel(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *vf,
u16 expected_tlv)
{
- /* we don't unlock the channel for unsupported tlvs */
- if (!ecore_iov_tlv_supported(expected_tlv))
- return;
-
- /* WARN(expected_tlv != vf->op_current,
- * "lock mismatch: expected %s found %s",
- * channel_tlvs_string[expected_tlv],
- * channel_tlvs_string[vf->op_current]);
- * @@@TBD MichalK
- */
-
- /* lock the channel */
- /* mutex_unlock(&vf->op_mutex); @@@TBD MichalK add the lock */
-
/* log the unlock */
+ if (ecore_iov_tlv_supported(expected_tlv))
DP_VERBOSE(p_hwfn,
ECORE_MSG_IOV,
"VF[%d]: vf pf channel unlocked by %s\n",
- vf->abs_vf_id, ecore_channel_tlvs_string[expected_tlv]);
+ vf->abs_vf_id,
+ ecore_channel_tlvs_string[expected_tlv]);
+ else
+ DP_VERBOSE(p_hwfn,
+ ECORE_MSG_IOV,
+ "VF[%d]: vf pf channel unlocked by %04x\n",
+ vf->abs_vf_id, expected_tlv);
/* record the locking op */
/* vf->op_current = CHANNEL_TLV_NONE; */
@@ -996,15 +1192,15 @@ static void ecore_iov_send_response(struct ecore_hwfn *p_hwfn,
mbx->reply_virt->default_resp.hdr.status = status;
+ ecore_dp_tlv_list(p_hwfn, mbx->reply_virt);
+
#ifdef CONFIG_ECORE_SW_CHANNEL
mbx->sw_mbx.response_size =
length + sizeof(struct channel_list_end_tlv);
-#endif
-
- ecore_dp_tlv_list(p_hwfn, mbx->reply_virt);
- if (!p_hwfn->p_dev->sriov_info.b_hw_channel)
+ if (!p_hwfn->p_dev->b_hw_channel)
return;
+#endif
eng_vf_id = p_vf->abs_vf_id;
@@ -1062,7 +1258,7 @@ static u16 ecore_iov_prep_vp_update_resp_tlvs(struct ecore_hwfn *p_hwfn,
u16 size, total_len, i;
OSAL_MEMSET(p_mbx->reply_virt, 0, sizeof(union pfvf_tlvs));
- p_mbx->offset = (u8 *)(p_mbx->reply_virt);
+ p_mbx->offset = (u8 *)p_mbx->reply_virt;
size = sizeof(struct pfvf_def_resp_tlv);
total_len = size;
@@ -1102,23 +1298,37 @@ static void ecore_iov_prepare_resp(struct ecore_hwfn *p_hwfn,
{
struct ecore_iov_vf_mbx *mbx = &vf_info->vf_mbx;
- mbx->offset = (u8 *)(mbx->reply_virt);
+ mbx->offset = (u8 *)mbx->reply_virt;
ecore_add_tlv(p_hwfn, &mbx->offset, type, length);
ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
sizeof(struct channel_list_end_tlv));
ecore_iov_send_response(p_hwfn, p_ptt, vf_info, length, status);
+
+ OSAL_IOV_PF_RESP_TYPE(p_hwfn, vf_info->relative_vf_id, status);
+}
+
+struct ecore_public_vf_info
+*ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn,
+ u16 relative_vf_id,
+ bool b_enabled_only)
+{
+ struct ecore_vf_info *vf = OSAL_NULL;
+
+ vf = ecore_iov_get_vf_info(p_hwfn, relative_vf_id, b_enabled_only);
+ if (!vf)
+ return OSAL_NULL;
+
+ return &vf->p_vf_info;
}
static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *p_vf)
{
+ u32 i;
p_vf->vf_bulletin = 0;
p_vf->vport_instance = 0;
- p_vf->num_mac_filters = 0;
- p_vf->num_vlan_filters = 0;
- p_vf->num_mc_filters = 0;
p_vf->configured_features = 0;
/* If VF previously requested less resources, go back to default */
@@ -1127,39 +1337,169 @@ static void ecore_iov_vf_cleanup(struct ecore_hwfn *p_hwfn,
p_vf->num_active_rxqs = 0;
+ for (i = 0; i < ECORE_MAX_VF_CHAINS_PER_PF; i++)
+ p_vf->vf_queues[i].rxq_active = 0;
+
OSAL_MEMSET(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
+ OSAL_MEMSET(&p_vf->acquire, 0, sizeof(p_vf->acquire));
OSAL_IOV_VF_CLEANUP(p_hwfn, p_vf->relative_vf_id);
}
+static u8 ecore_iov_vf_mbx_acquire_resc(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_vf_info *p_vf,
+ struct vf_pf_resc_request *p_req,
+ struct pf_vf_resc *p_resp)
+{
+ int i;
+
+ /* Queue related information */
+ p_resp->num_rxqs = p_vf->num_rxqs;
+ p_resp->num_txqs = p_vf->num_txqs;
+ p_resp->num_sbs = p_vf->num_sbs;
+
+ for (i = 0; i < p_resp->num_sbs; i++) {
+ p_resp->hw_sbs[i].hw_sb_id = p_vf->igu_sbs[i];
+ /* TODO - what's this sb_qid field? Is it deprecated?
+ * or is there an ecore_client that looks at this?
+ */
+ p_resp->hw_sbs[i].sb_qid = 0;
+ }
+
+ /* These fields are filled for backward compatibility.
+ * Unused by modern VFs.
+ */
+ for (i = 0; i < p_resp->num_rxqs; i++) {
+ ecore_fw_l2_queue(p_hwfn, p_vf->vf_queues[i].fw_rx_qid,
+ (u16 *)&p_resp->hw_qid[i]);
+ p_resp->cid[i] = p_vf->vf_queues[i].fw_cid;
+ }
+
+ /* Filter related information */
+ p_resp->num_mac_filters = OSAL_MIN_T(u8, p_vf->num_mac_filters,
+ p_req->num_mac_filters);
+ p_resp->num_vlan_filters = OSAL_MIN_T(u8, p_vf->num_vlan_filters,
+ p_req->num_vlan_filters);
+
+ /* This isn't really needed/enforced, but some legacy VFs might depend
+ * on the correct filling of this field.
+ */
+ p_resp->num_mc_filters = ECORE_MAX_MC_ADDRS;
+
+ /* Validate sufficient resources for VF */
+ if (p_resp->num_rxqs < p_req->num_rxqs ||
+ p_resp->num_txqs < p_req->num_txqs ||
+ p_resp->num_sbs < p_req->num_sbs ||
+ p_resp->num_mac_filters < p_req->num_mac_filters ||
+ p_resp->num_vlan_filters < p_req->num_vlan_filters ||
+ p_resp->num_mc_filters < p_req->num_mc_filters) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "VF[%d] - Insufficient resources: rxq [%02x/%02x]"
+ " txq [%02x/%02x] sbs [%02x/%02x] mac [%02x/%02x]"
+ " vlan [%02x/%02x] mc [%02x/%02x]\n",
+ p_vf->abs_vf_id,
+ p_req->num_rxqs, p_resp->num_rxqs,
+ p_req->num_txqs, p_resp->num_txqs,
+ p_req->num_sbs, p_resp->num_sbs,
+ p_req->num_mac_filters, p_resp->num_mac_filters,
+ p_req->num_vlan_filters, p_resp->num_vlan_filters,
+ p_req->num_mc_filters, p_resp->num_mc_filters);
+
+ /* Some legacy OSes are incapable of correctly handling this
+ * failure.
+ */
+ if ((p_vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+ ETH_HSI_VER_NO_PKT_LEN_TUNN) &&
+ (p_vf->acquire.vfdev_info.os_type ==
+ VFPF_ACQUIRE_OS_WINDOWS))
+ return PFVF_STATUS_SUCCESS;
+
+ return PFVF_STATUS_NO_RESOURCE;
+ }
+
+ return PFVF_STATUS_SUCCESS;
+}
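
[The helper above grants min(requested, available) per resource type and
fails the acquire when anything falls short, with a carve-out for legacy
Windows VFs. A reduced sketch of the min-and-verify negotiation - struct
and field names are illustrative:

    #include <stdbool.h>

    struct resc { unsigned int rxqs, txqs, macs, vlans; };

    static unsigned int min_u(unsigned int a, unsigned int b)
    {
        return a < b ? a : b;
    }

    /* true when the response fully satisfies the request */
    bool negotiate(const struct resc *avail, const struct resc *req,
                   struct resc *resp)
    {
        resp->rxqs  = avail->rxqs;      /* queues were sized at VF init */
        resp->txqs  = avail->txqs;
        resp->macs  = min_u(avail->macs, req->macs);
        resp->vlans = min_u(avail->vlans, req->vlans);

        return resp->rxqs >= req->rxqs && resp->txqs >= req->txqs &&
               resp->macs >= req->macs && resp->vlans >= req->vlans;
    }
]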
+
+static void ecore_iov_vf_mbx_acquire_stats(struct ecore_hwfn *p_hwfn,
+ struct pfvf_stats_info *p_stats)
+{
+ p_stats->mstats.address = PXP_VF_BAR0_START_MSDM_ZONE_B +
+ OFFSETOF(struct mstorm_vf_zone,
+ non_trigger.eth_queue_stat);
+ p_stats->mstats.len = sizeof(struct eth_mstorm_per_queue_stat);
+ p_stats->ustats.address = PXP_VF_BAR0_START_USDM_ZONE_B +
+ OFFSETOF(struct ustorm_vf_zone,
+ non_trigger.eth_queue_stat);
+ p_stats->ustats.len = sizeof(struct eth_ustorm_per_queue_stat);
+ p_stats->pstats.address = PXP_VF_BAR0_START_PSDM_ZONE_B +
+ OFFSETOF(struct pstorm_vf_zone,
+ non_trigger.eth_queue_stat);
+ p_stats->pstats.len = sizeof(struct eth_pstorm_per_queue_stat);
+ p_stats->tstats.address = 0;
+ p_stats->tstats.len = 0;
+}
+
static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
- struct vfpf_acquire_tlv *req = &mbx->req_virt->acquire;
struct pfvf_acquire_resp_tlv *resp = &mbx->reply_virt->acquire_resp;
- struct pf_vf_resc *resc = &resp->resc;
struct pf_vf_pfdev_info *pfdev_info = &resp->pfdev_info;
- u16 length;
- u8 i, vfpf_status = PFVF_STATUS_SUCCESS;
+ struct vfpf_acquire_tlv *req = &mbx->req_virt->acquire;
+ u8 vfpf_status = PFVF_STATUS_NOT_SUPPORTED;
+ struct pf_vf_resc *resc = &resp->resc;
+ enum _ecore_status_t rc;
+
+ OSAL_MEMSET(resp, 0, sizeof(*resp));
+
+ /* Write the PF version so that the VF knows which version is
+ * supported - it might be overridden later. This guarantees that
+ * a VF can recognize a legacy PF by the lack of versions in the reply.
+ */
+ pfdev_info->major_fp_hsi = ETH_HSI_VER_MAJOR;
+ pfdev_info->minor_fp_hsi = ETH_HSI_VER_MINOR;
/* Validate FW compatibility */
- if (req->vfdev_info.fw_major != FW_MAJOR_VERSION ||
- req->vfdev_info.fw_minor != FW_MINOR_VERSION ||
- req->vfdev_info.fw_revision != FW_REVISION_VERSION ||
- req->vfdev_info.fw_engineering != FW_ENGINEERING_VERSION) {
+ if (req->vfdev_info.eth_fp_hsi_major != ETH_HSI_VER_MAJOR) {
+ if (req->vfdev_info.capabilities &
+ VFPF_ACQUIRE_CAP_PRE_FP_HSI) {
+ struct vf_pf_vfdev_info *p_vfdev = &req->vfdev_info;
+
+ /* This legacy support would need to be removed once
+ * the major has changed.
+ */
+ OSAL_BUILD_BUG_ON(ETH_HSI_VER_MAJOR != 3);
+
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "VF[%d] is pre-fastpath HSI\n",
+ vf->abs_vf_id);
+ p_vfdev->eth_fp_hsi_major = ETH_HSI_VER_MAJOR;
+ p_vfdev->eth_fp_hsi_minor = ETH_HSI_VER_NO_PKT_LEN_TUNN;
+ } else {
DP_INFO(p_hwfn,
- "VF[%d] is running an incompatible driver [VF needs"
- " FW %02x:%02x:%02x:%02x but Hypervisor is"
- " using %02x:%02x:%02x:%02x]\n",
- vf->abs_vf_id, req->vfdev_info.fw_major,
- req->vfdev_info.fw_minor, req->vfdev_info.fw_revision,
- req->vfdev_info.fw_engineering, FW_MAJOR_VERSION,
- FW_MINOR_VERSION, FW_REVISION_VERSION,
- FW_ENGINEERING_VERSION);
- vfpf_status = PFVF_STATUS_NOT_SUPPORTED;
+ "VF[%d] needs fastpath HSI %02x.%02x, which is"
+ " incompatible with loaded FW's faspath"
+ " HSI %02x.%02x\n",
+ vf->abs_vf_id,
+ req->vfdev_info.eth_fp_hsi_major,
+ req->vfdev_info.eth_fp_hsi_minor,
+ ETH_HSI_VER_MAJOR, ETH_HSI_VER_MINOR);
+
+ goto out;
+ }
+ }
+
+ /* On 100g PFs, prevent old VFs from loading */
+ if ((p_hwfn->p_dev->num_hwfns > 1) &&
+ !(req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_100G)) {
+ DP_INFO(p_hwfn,
+ "VF[%d] is running an old driver that doesn't support"
+ " 100g\n",
+ vf->abs_vf_id);
goto out;
}
+
#ifndef __EXTRACT__LINUX__
if (OSAL_IOV_VF_ACQUIRE(p_hwfn, vf->relative_vf_id) != ECORE_SUCCESS) {
vfpf_status = PFVF_STATUS_NOT_SUPPORTED;
@@ -1167,13 +1507,10 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn,
}
#endif
- OSAL_MEMSET(resp, 0, sizeof(*resp));
+ /* Store the acquire message */
+ OSAL_MEMCPY(&vf->acquire, req, sizeof(vf->acquire));
- /* Fill in vf info stuff : @@@TBD MichalK Hard Coded for now... */
vf->opaque_fid = req->vfdev_info.opaque_fid;
- vf->num_mac_filters = 1;
- vf->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
- vf->num_mc_filters = ECORE_MAX_MC_ADDRS;
vf->vf_bulletin = req->bulletin_addr;
vf->bulletin.size = (vf->bulletin.size < req->bulletin_size) ?
@@ -1183,28 +1520,13 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn,
pfdev_info->chip_num = p_hwfn->p_dev->chip_num;
pfdev_info->db_size = 0; /* @@@ TBD MichalK Vf Doorbells */
pfdev_info->indices_per_sb = PIS_PER_SB;
- pfdev_info->capabilities = PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED;
-
- pfdev_info->stats_info.mstats.address =
- PXP_VF_BAR0_START_MSDM_ZONE_B +
- OFFSETOF(struct mstorm_vf_zone, non_trigger.eth_queue_stat);
- pfdev_info->stats_info.mstats.len =
- sizeof(struct eth_mstorm_per_queue_stat);
-
- pfdev_info->stats_info.ustats.address =
- PXP_VF_BAR0_START_USDM_ZONE_B +
- OFFSETOF(struct ustorm_vf_zone, non_trigger.eth_queue_stat);
- pfdev_info->stats_info.ustats.len =
- sizeof(struct eth_ustorm_per_queue_stat);
- pfdev_info->stats_info.pstats.address =
- PXP_VF_BAR0_START_PSDM_ZONE_B +
- OFFSETOF(struct pstorm_vf_zone, non_trigger.eth_queue_stat);
- pfdev_info->stats_info.pstats.len =
- sizeof(struct eth_pstorm_per_queue_stat);
+ pfdev_info->capabilities = PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED |
+ PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE;
+ if (p_hwfn->p_dev->num_hwfns > 1)
+ pfdev_info->capabilities |= PFVF_ACQUIRE_CAP_100G;
- pfdev_info->stats_info.tstats.address = 0;
- pfdev_info->stats_info.tstats.len = 0;
+ ecore_iov_vf_mbx_acquire_stats(p_hwfn, &pfdev_info->stats_info);
OSAL_MEMCPY(pfdev_info->port_mac, p_hwfn->hw_info.hw_mac_addr,
ETH_ALEN);
@@ -1213,35 +1535,36 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn,
pfdev_info->fw_minor = FW_MINOR_VERSION;
pfdev_info->fw_rev = FW_REVISION_VERSION;
pfdev_info->fw_eng = FW_ENGINEERING_VERSION;
+
+ /* Incorrect when legacy, but it doesn't matter since legacy VFs
+ * don't read this field.
+ */
+ pfdev_info->minor_fp_hsi = OSAL_MIN_T(u8, ETH_HSI_VER_MINOR,
+ req->vfdev_info.eth_fp_hsi_minor);
pfdev_info->os_type = OSAL_IOV_GET_OS_TYPE();
- ecore_mcp_get_mfw_ver(p_hwfn->p_dev, p_ptt, &pfdev_info->mfw_ver,
+ ecore_mcp_get_mfw_ver(p_hwfn, p_ptt, &pfdev_info->mfw_ver,
OSAL_NULL);
pfdev_info->dev_type = p_hwfn->p_dev->type;
pfdev_info->chip_rev = p_hwfn->p_dev->chip_rev;
- /* Fill in resc : @@@TBD MichalK Hard Coded for now... */
- resc->num_rxqs = vf->num_rxqs;
- resc->num_txqs = vf->num_txqs;
- resc->num_sbs = vf->num_sbs;
- for (i = 0; i < resc->num_sbs; i++) {
- resc->hw_sbs[i].hw_sb_id = vf->igu_sbs[i];
- resc->hw_sbs[i].sb_qid = 0;
- }
+ /* Fill resources available to VF; Make sure there are enough to
+ * satisfy the VF's request.
+ */
+ vfpf_status = ecore_iov_vf_mbx_acquire_resc(p_hwfn, p_ptt, vf,
+ &req->resc_request, resc);
+ if (vfpf_status != PFVF_STATUS_SUCCESS)
+ goto out;
- for (i = 0; i < resc->num_rxqs; i++) {
- ecore_fw_l2_queue(p_hwfn, vf->vf_queues[i].fw_rx_qid,
- (u16 *)&resc->hw_qid[i]);
- resc->cid[i] = vf->vf_queues[i].fw_cid;
+ /* Start the VF in FW */
+ rc = ecore_sp_vf_start(p_hwfn, vf);
+ if (rc != ECORE_SUCCESS) {
+ DP_NOTICE(p_hwfn, true, "Failed to start VF[%02x]\n",
+ vf->abs_vf_id);
+ vfpf_status = PFVF_STATUS_FAILURE;
+ goto out;
}
- resc->num_mac_filters = OSAL_MIN_T(u8, vf->num_mac_filters,
- req->resc_request.num_mac_filters);
- resc->num_vlan_filters = OSAL_MIN_T(u8, vf->num_vlan_filters,
- req->resc_request.num_vlan_filters);
- resc->num_mc_filters = OSAL_MIN_T(u8, vf->num_mc_filters,
- req->resc_request.num_mc_filters);
-
/* Fill agreed size of bulletin board in response, and post
* an initial image to the bulletin board.
*/
@@ -1250,25 +1573,22 @@ static void ecore_iov_vf_mbx_acquire(struct ecore_hwfn *p_hwfn,
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
"VF[%d] ACQUIRE_RESPONSE: pfdev_info- chip_num=0x%x,"
- " db_size=%d, idx_per_sb=%d, pf_cap=0x%" PRIx64 "\n"
+ " db_size=%d, idx_per_sb=%d, pf_cap=0x%lx\n"
"resources- n_rxq-%d, n_txq-%d, n_sbs-%d, n_macs-%d,"
" n_vlans-%d, n_mcs-%d\n",
vf->abs_vf_id, resp->pfdev_info.chip_num,
resp->pfdev_info.db_size, resp->pfdev_info.indices_per_sb,
- resp->pfdev_info.capabilities, resc->num_rxqs,
+ (unsigned long)resp->pfdev_info.capabilities, resc->num_rxqs,
resc->num_txqs, resc->num_sbs, resc->num_mac_filters,
resc->num_vlan_filters, resc->num_mc_filters);
vf->state = VF_ACQUIRED;
- /* Prepare Response */
- length = sizeof(struct pfvf_acquire_resp_tlv);
-
out:
+ /* Prepare Response */
ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_ACQUIRE,
- length, vfpf_status);
-
- /* @@@TBD Bulletin */
+ sizeof(struct pfvf_acquire_resp_tlv),
+ vfpf_status);
}
static enum _ecore_status_t
@@ -1310,8 +1630,8 @@ static enum _ecore_status_t
ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *p_vf)
{
- enum _ecore_status_t rc = ECORE_SUCCESS;
struct ecore_filter_ucast filter;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
int i;
OSAL_MEMSET(&filter, 0, sizeof(filter));
@@ -1322,11 +1642,13 @@ ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn,
/* Reconfigure vlans */
for (i = 0; i < ECORE_ETH_VF_NUM_VLAN_FILTERS + 1; i++) {
- if (p_vf->shadow_config.vlans[i].used) {
+ if (!p_vf->shadow_config.vlans[i].used)
+ continue;
+
filter.type = ECORE_FILTER_VLAN;
filter.vlan = p_vf->shadow_config.vlans[i].vid;
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "Reconfig VLAN [0x%04x] for VF [%04x]\n",
+ "Reconfiguring VLAN [0x%04x] for VF [%04x]\n",
filter.vlan, p_vf->relative_vf_id);
rc = ecore_sp_eth_filter_ucast(p_hwfn,
p_vf->opaque_fid,
@@ -1341,7 +1663,6 @@ ecore_iov_reconfigure_unicast_vlan(struct ecore_hwfn *p_hwfn,
break;
}
}
- }
return rc;
}
@@ -1457,7 +1778,7 @@ static int ecore_iov_configure_vport_forced(struct ecore_hwfn *p_hwfn,
if (rc) {
DP_NOTICE(p_hwfn, true,
"Failed to send Rx update"
- " queue[0x%04x]\n",
+ " fo queue[0x%04x]\n",
qid);
return rc;
}
@@ -1482,14 +1803,14 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
- struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
- struct vfpf_vport_start_tlv *start = &mbx->req_virt->start_vport;
struct ecore_sp_vport_start_params params = { 0 };
+ struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct vfpf_vport_start_tlv *start;
u8 status = PFVF_STATUS_SUCCESS;
struct ecore_vf_info *vf_info;
- enum _ecore_status_t rc;
u64 *p_bitmap;
int sb_id;
+ enum _ecore_status_t rc;
vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vf->relative_vf_id, true);
if (!vf_info) {
@@ -1500,6 +1821,7 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
}
vf->state = VF_ENABLED;
+ start = &mbx->req_virt->start_vport;
/* Initialize Status block in CAU */
for (sb_id = 0; sb_id < vf->num_sbs; sb_id++) {
@@ -1513,7 +1835,7 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
ecore_int_cau_conf_sb(p_hwfn, p_ptt,
start->sb_addr[sb_id],
vf->igu_sbs[sb_id],
- vf->abs_vf_id, 1 /* VF Valid */);
+ vf->abs_vf_id, 1);
}
ecore_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
@@ -1539,7 +1861,7 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
#ifndef ASIC_ONLY
if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
DP_NOTICE(p_hwfn, false,
- "FPGA: Don't confi VF for Tx-switching [no pVFC]\n");
+ "FPGA: Don't config VF for Tx-switching [no pVFC]\n");
params.tx_switching = false;
}
#endif
@@ -1551,6 +1873,7 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
params.vport_id = vf->vport_id;
params.max_buffers_per_cqe = start->max_buffers_per_cqe;
params.mtu = vf->mtu;
+ params.check_mac = true;
rc = ecore_sp_eth_vport_start(p_hwfn, &params);
if (rc != ECORE_SUCCESS) {
@@ -1596,19 +1919,76 @@ static void ecore_iov_vf_mbx_stop_vport(struct ecore_hwfn *p_hwfn,
sizeof(struct pfvf_def_resp_tlv), status);
}
+static void ecore_iov_vf_mbx_start_rxq_resp(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_vf_info *vf,
+ u8 status, bool b_legacy)
+{
+ struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct pfvf_start_queue_resp_tlv *p_tlv;
+ struct vfpf_start_rxq_tlv *req;
+ u16 length;
+
+ mbx->offset = (u8 *)mbx->reply_virt;
+
+ /* Taking a bigger struct instead of adding a TLV to the list was a
+ * mistake, but one which we're now stuck with, as some older
+ * clients assume the size of the previous response.
+ */
+ if (!b_legacy)
+ length = sizeof(*p_tlv);
+ else
+ length = sizeof(struct pfvf_def_resp_tlv);
+
+ p_tlv = ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_START_RXQ,
+ length);
+ ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+ sizeof(struct channel_list_end_tlv));
+
+ /* Update the TLV with the response */
+ if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) {
+ req = &mbx->req_virt->start_rxq;
+ p_tlv->offset = PXP_VF_BAR0_START_MSDM_ZONE_B +
+ OFFSETOF(struct mstorm_vf_zone,
+ non_trigger.eth_rx_queue_producers) +
+ sizeof(struct eth_rx_prod_data) * req->rx_qid;
+ }
+
+ ecore_iov_send_response(p_hwfn, p_ptt, vf, length, status);
+}
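
[Since legacy clients size the reply by the old layout, the PF picks the
response length per VF generation, as the function above does. Stripped
to its core - struct names are illustrative:

    #include <stddef.h>
    #include <stdbool.h>

    struct def_resp   { unsigned char tlv_hdr[8]; };
    struct queue_resp { struct def_resp hdr; unsigned int prod_offset; };

    size_t resp_length(bool b_legacy)
    {
        /* legacy VFs assume the size of the original, smaller response */
        return b_legacy ? sizeof(struct def_resp)
                        : sizeof(struct queue_resp);
    }
]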
+
static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
- struct vfpf_start_rxq_tlv *req = &mbx->req_virt->start_rxq;
- u16 length = sizeof(struct pfvf_def_resp_tlv);
- u8 status = PFVF_STATUS_SUCCESS;
+ u8 status = PFVF_STATUS_NO_RESOURCE;
+ struct vfpf_start_rxq_tlv *req;
+ bool b_legacy_vf = false;
enum _ecore_status_t rc;
+ req = &mbx->req_virt->start_rxq;
+
+ if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid) ||
+ !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
+ goto out;
+
+ /* Legacy VFs have their Producers in a different location, which they
+ * calculate on their own and clean the producer prior to this.
+ */
+ if (vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+ ETH_HSI_VER_NO_PKT_LEN_TUNN)
+ b_legacy_vf = true;
+ else
+ REG_WR(p_hwfn,
+ GTT_BAR0_MAP_REG_MSDM_RAM +
+ MSTORM_ETH_VF_PRODS_OFFSET(vf->abs_vf_id, req->rx_qid),
+ 0);
+
rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn, vf->opaque_fid,
vf->vf_queues[req->rx_qid].fw_cid,
vf->vf_queues[req->rx_qid].fw_rx_qid,
+ (u8)req->rx_qid,
vf->vport_id,
vf->abs_vf_id + 0x10,
req->hw_sb,
@@ -1616,17 +1996,61 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
req->bd_max_bytes,
req->rxq_addr,
req->cqe_pbl_addr,
- req->cqe_pbl_size);
+ req->cqe_pbl_size,
+ b_legacy_vf);
if (rc) {
status = PFVF_STATUS_FAILURE;
} else {
+ status = PFVF_STATUS_SUCCESS;
vf->vf_queues[req->rx_qid].rxq_active = true;
vf->num_active_rxqs++;
}
- ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_START_RXQ,
- length, status);
+out:
+ ecore_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf,
+ status, b_legacy_vf);
+}
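
[Modern VFs receive their Rx producer address from the PF: the MSTORM
zone base plus a fixed per-queue stride, which the PF also zeroes via
REG_WR before starting the queue; legacy VFs compute and clean it
themselves. A hedged sketch of the address math - both constants are
placeholders, not the device's actual register map:

    #include <stdint.h>

    #define MSDM_ZONE_B_BASE 0x100000u  /* placeholder BAR0 zone base */
    #define RX_PROD_STRIDE   8u         /* placeholder per-queue stride */

    /* producer offset returned in the start-rxq response (modern VFs) */
    uint32_t rx_prod_offset(uint16_t rx_qid)
    {
        return MSDM_ZONE_B_BASE + RX_PROD_STRIDE * (uint32_t)rx_qid;
    }
]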
+
+static void ecore_iov_vf_mbx_start_txq_resp(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_vf_info *p_vf,
+ u8 status)
+{
+ struct ecore_iov_vf_mbx *mbx = &p_vf->vf_mbx;
+ struct pfvf_start_queue_resp_tlv *p_tlv;
+ bool b_legacy = false;
+ u16 length;
+
+ mbx->offset = (u8 *)mbx->reply_virt;
+
+ /* Taking a bigger struct instead of adding a TLV to the list was a
+ * mistake, but one which we're now stuck with, as some older
+ * clients assume the size of the previous response.
+ */
+ if (p_vf->acquire.vfdev_info.eth_fp_hsi_minor ==
+ ETH_HSI_VER_NO_PKT_LEN_TUNN)
+ b_legacy = true;
+
+ if (!b_legacy)
+ length = sizeof(*p_tlv);
+ else
+ length = sizeof(struct pfvf_def_resp_tlv);
+
+ p_tlv = ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_START_TXQ,
+ length);
+ ecore_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+ sizeof(struct channel_list_end_tlv));
+
+ /* Update the TLV with the response */
+ if ((status == PFVF_STATUS_SUCCESS) && !b_legacy) {
+ u16 qid = mbx->req_virt->start_txq.tx_qid;
+
+ p_tlv->offset = DB_ADDR_VF(p_vf->vf_queues[qid].fw_cid,
+ DQ_DEMS_LEGACY);
+ }
+
+ ecore_iov_send_response(p_hwfn, p_ptt, p_vf, length, status);
}
static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
@@ -1634,10 +2058,9 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *vf)
{
struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
- struct vfpf_start_txq_tlv *req = &mbx->req_virt->start_txq;
- u16 length = sizeof(struct pfvf_def_resp_tlv);
+ u8 status = PFVF_STATUS_NO_RESOURCE;
union ecore_qm_pq_params pq_params;
- u8 status = PFVF_STATUS_SUCCESS;
+ struct vfpf_start_txq_tlv *req;
enum _ecore_status_t rc;
/* Prepare the parameters which would choose the right PQ */
@@ -1645,7 +2068,14 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
pq_params.eth.is_vf = 1;
pq_params.eth.vf_id = vf->relative_vf_id;
- rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
+ req = &mbx->req_virt->start_txq;
+
+ if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid) ||
+ !ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
+ goto out;
+
+ rc = ecore_sp_eth_txq_start_ramrod(
+ p_hwfn,
vf->opaque_fid,
vf->vf_queues[req->tx_qid].fw_tx_qid,
vf->vf_queues[req->tx_qid].fw_cid,
@@ -1654,15 +2084,18 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
req->hw_sb,
req->sb_index,
req->pbl_addr,
- req->pbl_size, &pq_params);
+ req->pbl_size,
+ &pq_params);
if (rc)
status = PFVF_STATUS_FAILURE;
- else
+ else {
+ status = PFVF_STATUS_SUCCESS;
vf->vf_queues[req->tx_qid].txq_active = true;
+ }
- ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_START_TXQ,
- length, status);
+out:
+ ecore_iov_vf_mbx_start_txq_resp(p_hwfn, p_ptt, vf, status);
}
static enum _ecore_status_t ecore_iov_vf_stop_rxqs(struct ecore_hwfn *p_hwfn,
@@ -1722,16 +2155,17 @@ static void ecore_iov_vf_mbx_stop_rxqs(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
- struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
- struct vfpf_stop_rxqs_tlv *req = &mbx->req_virt->stop_rxqs;
u16 length = sizeof(struct pfvf_def_resp_tlv);
+ struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
u8 status = PFVF_STATUS_SUCCESS;
+ struct vfpf_stop_rxqs_tlv *req;
enum _ecore_status_t rc;
/* We give the option of starting from qid != 0, in this case we
* need to make sure that qid + num_qs doesn't exceed the actual
* amount of queues that exist.
*/
+ req = &mbx->req_virt->stop_rxqs;
rc = ecore_iov_vf_stop_rxqs(p_hwfn, vf, req->rx_qid,
req->num_rxqs, req->cqe_completion);
if (rc)
@@ -1745,16 +2179,17 @@ static void ecore_iov_vf_mbx_stop_txqs(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
- struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
- struct vfpf_stop_txqs_tlv *req = &mbx->req_virt->stop_txqs;
u16 length = sizeof(struct pfvf_def_resp_tlv);
+ struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
u8 status = PFVF_STATUS_SUCCESS;
+ struct vfpf_stop_txqs_tlv *req;
enum _ecore_status_t rc;
/* We give the option of starting from qid != 0, in this case we
* need to make sure that qid + num_qs doesn't exceed the actual
* amount of queues that exist.
*/
+ req = &mbx->req_virt->stop_txqs;
rc = ecore_iov_vf_stop_txqs(p_hwfn, vf, req->tx_qid, req->num_txqs);
if (rc)
status = PFVF_STATUS_FAILURE;
@@ -1767,16 +2202,17 @@ static void ecore_iov_vf_mbx_update_rxqs(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
- struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
- struct vfpf_update_rxq_tlv *req = &mbx->req_virt->update_rxq;
u16 length = sizeof(struct pfvf_def_resp_tlv);
+ struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct vfpf_update_rxq_tlv *req;
u8 status = PFVF_STATUS_SUCCESS;
u8 complete_event_flg;
u8 complete_cqe_flg;
- enum _ecore_status_t rc;
u16 qid;
+ enum _ecore_status_t rc;
u8 i;
+ req = &mbx->req_virt->update_rxq;
complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
@@ -1851,14 +2287,15 @@ ecore_iov_vp_update_act_param(struct ecore_hwfn *p_hwfn,
p_act_tlv = (struct vfpf_vport_update_activate_tlv *)
ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
- if (p_act_tlv) {
+ if (!p_act_tlv)
+ return;
+
p_data->update_vport_active_rx_flg = p_act_tlv->update_rx;
p_data->vport_active_rx_flg = p_act_tlv->active_rx;
p_data->update_vport_active_tx_flg = p_act_tlv->update_tx;
p_data->vport_active_tx_flg = p_act_tlv->active_tx;
*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACTIVATE;
}
-}
static void
ecore_iov_vp_update_vlan_param(struct ecore_hwfn *p_hwfn,
@@ -1895,21 +2332,22 @@ ecore_iov_vp_update_tx_switch(struct ecore_hwfn *p_hwfn,
p_tx_switch_tlv = (struct vfpf_vport_update_tx_switch_tlv *)
ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+ if (!p_tx_switch_tlv)
+ return;
#ifndef ASIC_ONLY
if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
DP_NOTICE(p_hwfn, false,
- "FPGA: Ignore tx-switching configuration originating from VFs\n");
+ "FPGA: Ignore tx-switching configuration originating"
+ " from VFs\n");
return;
}
#endif
- if (p_tx_switch_tlv) {
p_data->update_tx_switching_flg = 1;
p_data->tx_switching_flg = p_tx_switch_tlv->tx_switching;
*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_TX_SWITCH;
}
-}
static void
ecore_iov_vp_update_mcast_bin_param(struct ecore_hwfn *p_hwfn,
@@ -1922,39 +2360,36 @@ ecore_iov_vp_update_mcast_bin_param(struct ecore_hwfn *p_hwfn,
p_mcast_tlv = (struct vfpf_vport_update_mcast_bin_tlv *)
ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+ if (!p_mcast_tlv)
+ return;
- if (p_mcast_tlv) {
p_data->update_approx_mcast_flg = 1;
OSAL_MEMCPY(p_data->bins, p_mcast_tlv->bins,
sizeof(unsigned long) *
ETH_MULTICAST_MAC_BINS_IN_REGS);
*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_MCAST;
}
-}
static void
ecore_iov_vp_update_accept_flag(struct ecore_hwfn *p_hwfn,
struct ecore_sp_vport_update_params *p_data,
struct ecore_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
{
+ struct ecore_filter_accept_flags *p_flags = &p_data->accept_flags;
struct vfpf_vport_update_accept_param_tlv *p_accept_tlv;
u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
p_accept_tlv = (struct vfpf_vport_update_accept_param_tlv *)
ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+ if (!p_accept_tlv)
+ return;
- if (p_accept_tlv) {
- p_data->accept_flags.update_rx_mode_config =
- p_accept_tlv->update_rx_mode;
- p_data->accept_flags.rx_accept_filter =
- p_accept_tlv->rx_accept_filter;
- p_data->accept_flags.update_tx_mode_config =
- p_accept_tlv->update_tx_mode;
- p_data->accept_flags.tx_accept_filter =
- p_accept_tlv->tx_accept_filter;
+ p_flags->update_rx_mode_config = p_accept_tlv->update_rx_mode;
+ p_flags->rx_accept_filter = p_accept_tlv->rx_accept_filter;
+ p_flags->update_tx_mode_config = p_accept_tlv->update_tx_mode;
+ p_flags->tx_accept_filter = p_accept_tlv->tx_accept_filter;
*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACCEPT_PARAM;
}
-}
static void
ecore_iov_vp_update_accept_any_vlan(struct ecore_hwfn *p_hwfn,
@@ -1967,14 +2402,14 @@ ecore_iov_vp_update_accept_any_vlan(struct ecore_hwfn *p_hwfn,
p_accept_any_vlan = (struct vfpf_vport_update_accept_any_vlan_tlv *)
ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+ if (!p_accept_any_vlan)
+ return;
- if (p_accept_any_vlan) {
p_data->accept_any_vlan = p_accept_any_vlan->accept_any_vlan;
p_data->update_accept_any_vlan_flg =
p_accept_any_vlan->update_accept_any_vlan_flg;
*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_ACCEPT_ANY_VLAN;
}
-}
static void
ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
@@ -1985,12 +2420,16 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
{
struct vfpf_vport_update_rss_tlv *p_rss_tlv;
u16 tlv = CHANNEL_TLV_VPORT_UPDATE_RSS;
- u16 table_size;
u16 i, q_idx, max_q_idx;
+ u16 table_size;
p_rss_tlv = (struct vfpf_vport_update_rss_tlv *)
ecore_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
- if (p_rss_tlv) {
+ if (!p_rss_tlv) {
+ p_data->rss_params = OSAL_NULL;
+ return;
+ }
+
OSAL_MEMSET(p_rss, 0, sizeof(struct ecore_rss_params));
p_rss->update_rss_config =
@@ -2003,7 +2442,8 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
!!(p_rss_tlv->update_rss_flags &
VFPF_UPDATE_RSS_IND_TABLE_FLAG);
p_rss->update_rss_key =
- !!(p_rss_tlv->update_rss_flags & VFPF_UPDATE_RSS_KEY_FLAG);
+ !!(p_rss_tlv->update_rss_flags &
+ VFPF_UPDATE_RSS_KEY_FLAG);
p_rss->rss_enable = p_rss_tlv->rss_enable;
p_rss->rss_eng_id = vf->relative_vf_id + 1;
@@ -2014,39 +2454,31 @@ ecore_iov_vp_update_rss_param(struct ecore_hwfn *p_hwfn,
OSAL_MEMCPY(p_rss->rss_key, p_rss_tlv->rss_key,
sizeof(p_rss->rss_key));
- table_size = OSAL_MIN_T(u16,
- OSAL_ARRAY_SIZE(p_rss->rss_ind_table),
+ table_size = OSAL_MIN_T(u16, OSAL_ARRAY_SIZE(p_rss->rss_ind_table),
(1 << p_rss_tlv->rss_table_size_log));
max_q_idx = OSAL_ARRAY_SIZE(vf->vf_queues);
for (i = 0; i < table_size; i++) {
+ u16 index = vf->vf_queues[0].fw_rx_qid;
+
q_idx = p_rss->rss_ind_table[i];
- if (q_idx >= max_q_idx) {
+ if (q_idx >= max_q_idx)
DP_NOTICE(p_hwfn, true,
- "rss_ind_table[%d] = %d, rxq is out of range\n",
+ "rss_ind_table[%d] = %d,"
+ " rxq is out of range\n",
i, q_idx);
- /* TBD: fail the request mark VF as malicious */
- p_rss->rss_ind_table[i] =
- vf->vf_queues[0].fw_rx_qid;
- } else if (!vf->vf_queues[q_idx].rxq_active) {
+ else if (!vf->vf_queues[q_idx].rxq_active)
DP_NOTICE(p_hwfn, true,
"rss_ind_table[%d] = %d, rxq is not active\n",
i, q_idx);
- /* TBD: fail the request mark VF as malicious */
- p_rss->rss_ind_table[i] =
- vf->vf_queues[0].fw_rx_qid;
- } else {
- p_rss->rss_ind_table[i] =
- vf->vf_queues[q_idx].fw_rx_qid;
- }
+ else
+ index = vf->vf_queues[q_idx].fw_rx_qid;
+ p_rss->rss_ind_table[i] = index;
}
p_data->rss_params = p_rss;
*tlvs_mask |= 1 << ECORE_IOV_VP_UPDATE_RSS;
- } else {
- p_data->rss_params = OSAL_NULL;
- }
}
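The rework above keeps the indirection table usable even when a VF supplies bad entries: out-of-range or inactive queues are logged and remapped to queue 0's fw_rx_qid rather than failing the request. A standalone sketch of that sanitization (the array size and queue_active state are illustrative only):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_VF_QUEUES 16

static bool queue_active[NUM_VF_QUEUES]; /* illustrative state */

static void sanitize_ind_table(uint16_t *tbl, uint16_t table_size,
			       uint16_t fallback_qid)
{
	uint16_t i;

	for (i = 0; i < table_size; i++) {
		uint16_t q = tbl[i];

		/* Replace anything invalid with a known-good queue */
		if (q >= NUM_VF_QUEUES || !queue_active[q]) {
			fprintf(stderr, "ind_table[%u] = %u invalid\n",
				i, q);
			tbl[i] = fallback_qid;
		}
	}
}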
static void
@@ -2105,11 +2537,21 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
struct ecore_sp_vport_update_params params;
struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
struct ecore_sge_tpa_params sge_tpa_params;
+ u16 tlvs_mask = 0, tlvs_accepted = 0;
struct ecore_rss_params rss_params;
u8 status = PFVF_STATUS_SUCCESS;
- enum _ecore_status_t rc;
- u16 tlvs_mask = 0, tlvs_accepted;
u16 length;
+ enum _ecore_status_t rc;
+
+ /* Validate the VF can send such a request */
+ if (!vf->vport_instance) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "No VPORT instance available for VF[%d],"
+ " failing vport update\n",
+ vf->abs_vf_id);
+ status = PFVF_STATUS_FAILURE;
+ goto out;
+ }
OSAL_MEMSET(&params, 0, sizeof(params));
params.opaque_fid = vf->opaque_fid;
@@ -2137,7 +2579,7 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
*/
tlvs_accepted = tlvs_mask;
-#ifndef __EXTRACT__LINUX__
+#ifndef LINUX_REMOVE
if (OSAL_IOV_VF_VPORT_UPDATE(p_hwfn, vf->relative_vf_id,
&params, &tlvs_accepted) !=
ECORE_SUCCESS) {
@@ -2150,7 +2592,8 @@ static void ecore_iov_vf_mbx_vport_update(struct ecore_hwfn *p_hwfn,
if (!tlvs_accepted) {
if (tlvs_mask)
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "Upper-layer prevents said VF configuration\n");
+ "Upper-layer prevents said VF"
+ " configuration\n");
else
DP_NOTICE(p_hwfn, true,
"No feature tlvs found for vport update\n");
@@ -2171,16 +2614,12 @@ out:
}
static enum _ecore_status_t
-ecore_iov_vf_update_unicast_shadow(struct ecore_hwfn *p_hwfn,
+ecore_iov_vf_update_vlan_shadow(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *p_vf,
struct ecore_filter_ucast *p_params)
{
int i;
- /* TODO - do we need a MAC shadow registery? */
- if (p_params->type == ECORE_FILTER_MAC)
- return ECORE_SUCCESS;
-
/* First remove entries and then add new ones */
if (p_params->opcode == ECORE_FILTER_REMOVE) {
for (i = 0; i < ECORE_ETH_VF_NUM_VLAN_FILTERS + 1; i++)
@@ -2192,7 +2631,8 @@ ecore_iov_vf_update_unicast_shadow(struct ecore_hwfn *p_hwfn,
}
if (i == ECORE_ETH_VF_NUM_VLAN_FILTERS + 1) {
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VF [%d] - Tries to remove a non-existing vlan\n",
+ "VF [%d] - Tries to remove a non-existing"
+ " vlan\n",
p_vf->relative_vf_id);
return ECORE_INVAL;
}
@@ -2210,16 +2650,19 @@ ecore_iov_vf_update_unicast_shadow(struct ecore_hwfn *p_hwfn,
if (p_params->opcode == ECORE_FILTER_ADD ||
p_params->opcode == ECORE_FILTER_REPLACE) {
- for (i = 0; i < ECORE_ETH_VF_NUM_VLAN_FILTERS + 1; i++)
- if (!p_vf->shadow_config.vlans[i].used) {
+ for (i = 0; i < ECORE_ETH_VF_NUM_VLAN_FILTERS + 1; i++) {
+ if (p_vf->shadow_config.vlans[i].used)
+ continue;
+
p_vf->shadow_config.vlans[i].used = true;
- p_vf->shadow_config.vlans[i].vid =
- p_params->vlan;
+ p_vf->shadow_config.vlans[i].vid = p_params->vlan;
break;
}
+
if (i == ECORE_ETH_VF_NUM_VLAN_FILTERS + 1) {
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VF [%d] - Tries to configure more than %d vlan filters\n",
+ "VF [%d] - Tries to configure more than %d"
+ " vlan filters\n",
p_vf->relative_vf_id,
ECORE_ETH_VF_NUM_VLAN_FILTERS + 1);
return ECORE_INVAL;
@@ -2229,19 +2672,104 @@ ecore_iov_vf_update_unicast_shadow(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
+static enum _ecore_status_t
+ecore_iov_vf_update_mac_shadow(struct ecore_hwfn *p_hwfn,
+ struct ecore_vf_info *p_vf,
+ struct ecore_filter_ucast *p_params)
+{
+ char empty_mac[ETH_ALEN];
+ int i;
+
+ OSAL_MEM_ZERO(empty_mac, ETH_ALEN);
+
+ /* If we're in forced-mode, we don't allow any change */
+ /* TODO - this would change if we were ever to implement logic for
+ * removing a forced MAC altogether [in which case, like for vlans,
+ * we should be able to re-trace previous configuration].
+ */
+ if (p_vf->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED))
+ return ECORE_SUCCESS;
+
+ /* First remove entries and then add new ones */
+ if (p_params->opcode == ECORE_FILTER_REMOVE) {
+ for (i = 0; i < ECORE_ETH_VF_NUM_MAC_FILTERS; i++) {
+ if (!OSAL_MEMCMP(p_vf->shadow_config.macs[i],
+ p_params->mac, ETH_ALEN)) {
+ OSAL_MEM_ZERO(p_vf->shadow_config.macs[i],
+ ETH_ALEN);
+ break;
+ }
+ }
+
+ if (i == ECORE_ETH_VF_NUM_MAC_FILTERS) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "MAC isn't configured\n");
+ return ECORE_INVAL;
+ }
+ } else if (p_params->opcode == ECORE_FILTER_REPLACE ||
+ p_params->opcode == ECORE_FILTER_FLUSH) {
+ for (i = 0; i < ECORE_ETH_VF_NUM_MAC_FILTERS; i++)
+ OSAL_MEM_ZERO(p_vf->shadow_config.macs[i], ETH_ALEN);
+ }
+
+ /* List the new MAC address */
+ if (p_params->opcode != ECORE_FILTER_ADD &&
+ p_params->opcode != ECORE_FILTER_REPLACE)
+ return ECORE_SUCCESS;
+
+ for (i = 0; i < ECORE_ETH_VF_NUM_MAC_FILTERS; i++) {
+ if (!OSAL_MEMCMP(p_vf->shadow_config.macs[i],
+ empty_mac, ETH_ALEN)) {
+ OSAL_MEMCPY(p_vf->shadow_config.macs[i],
+ p_params->mac, ETH_ALEN);
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "Added MAC at %d entry in shadow\n", i);
+ break;
+ }
+ }
+
+ if (i == ECORE_ETH_VF_NUM_MAC_FILTERS) {
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "No available place for MAC\n");
+ return ECORE_INVAL;
+ }
+
+ return ECORE_SUCCESS;
+}
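The early return above hinges on the bulletin's valid_bitmap: while the MAC_ADDR_FORCED feature bit is set, shadow updates from the VF are accepted as silent no-ops so the forced address stays authoritative. A standalone sketch of that feature-bit test (the bit position and struct layout are illustrative):

#include <stdbool.h>
#include <stdint.h>

#define MAC_ADDR_FORCED_BIT 0 /* illustrative bit position */

struct bulletin { uint64_t valid_bitmap; };

static bool mac_is_forced(const struct bulletin *b)
{
	return !!(b->valid_bitmap & (1ULL << MAC_ADDR_FORCED_BIT));
}

/* While forced, a shadow update is answered with success but not
 * applied, mirroring the ECORE_SUCCESS early return above.
 */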
+
+static enum _ecore_status_t
+ecore_iov_vf_update_unicast_shadow(struct ecore_hwfn *p_hwfn,
+ struct ecore_vf_info *p_vf,
+ struct ecore_filter_ucast *p_params)
+{
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+
+ if (p_params->type == ECORE_FILTER_MAC) {
+ rc = ecore_iov_vf_update_mac_shadow(p_hwfn, p_vf, p_params);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+ }
+
+ if (p_params->type == ECORE_FILTER_VLAN)
+ rc = ecore_iov_vf_update_vlan_shadow(p_hwfn, p_vf, p_params);
+
+ return rc;
+}
+
static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
- struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
- struct vfpf_ucast_filter_tlv *req = &mbx->req_virt->ucast_filter;
struct ecore_bulletin_content *p_bulletin = vf->bulletin.p_virt;
- struct ecore_filter_ucast params;
+ struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct vfpf_ucast_filter_tlv *req;
u8 status = PFVF_STATUS_SUCCESS;
+ struct ecore_filter_ucast params;
enum _ecore_status_t rc;
/* Prepare the unicast filter params */
OSAL_MEMSET(&params, 0, sizeof(struct ecore_filter_ucast));
+ req = &mbx->req_virt->ucast_filter;
params.opcode = (enum ecore_filter_opcode)req->opcode;
params.type = (enum ecore_filter_ucast_type)req->type;
@@ -2254,7 +2782,8 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
params.vlan = req->vlan;
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VF[%d]: opcode 0x%02x type 0x%02x [%s %s] [vport 0x%02x] MAC %02x:%02x:%02x:%02x:%02x:%02x, vlan 0x%04x\n",
+ "VF[%d]: opcode 0x%02x type 0x%02x [%s %s] [vport 0x%02x]"
+ " MAC %02x:%02x:%02x:%02x:%02x:%02x, vlan 0x%04x\n",
vf->abs_vf_id, params.opcode, params.type,
params.is_rx_filter ? "RX" : "",
params.is_tx_filter ? "TX" : "",
@@ -2264,7 +2793,8 @@ static void ecore_iov_vf_mbx_ucast_filter(struct ecore_hwfn *p_hwfn,
if (!vf->vport_instance) {
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "No VPORT instance available for VF[%d], failing ucast MAC configuration\n",
+ "No VPORT instance available for VF[%d],"
+ " failing ucast MAC configuration\n",
vf->abs_vf_id);
status = PFVF_STATUS_FAILURE;
goto out;
@@ -2343,10 +2873,10 @@ static void ecore_iov_vf_mbx_close(struct ecore_hwfn *p_hwfn,
u8 status = PFVF_STATUS_SUCCESS;
/* Disable Interrupts for VF */
- ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0 /* disable */);
+ ecore_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0);
/* Reset Permission table */
- ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, 0 /* disable */);
+ ecore_iov_config_perm_table(p_hwfn, p_ptt, vf, 0);
ecore_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_CLOSE,
length, status);
@@ -2357,11 +2887,27 @@ static void ecore_iov_vf_mbx_release(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *p_vf)
{
u16 length = sizeof(struct pfvf_def_resp_tlv);
+ u8 status = PFVF_STATUS_SUCCESS;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
ecore_iov_vf_cleanup(p_hwfn, p_vf);
+ if (p_vf->state != VF_STOPPED && p_vf->state != VF_FREE) {
+ /* Stopping the VF */
+ rc = ecore_sp_vf_stop(p_hwfn, p_vf->concrete_fid,
+ p_vf->opaque_fid);
+
+ if (rc != ECORE_SUCCESS) {
+ DP_ERR(p_hwfn, "ecore_sp_vf_stop returned error %d\n",
+ rc);
+ status = PFVF_STATUS_FAILURE;
+ }
+
+ p_vf->state = VF_STOPPED;
+ }
+
ecore_iov_prepare_resp(p_hwfn, p_ptt, p_vf, CHANNEL_TLV_RELEASE,
- length, PFVF_STATUS_SUCCESS);
+ length, status);
}
static enum _ecore_status_t
@@ -2439,61 +2985,6 @@ ecore_iov_vf_flr_poll_pbf(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-static enum _ecore_status_t
-ecore_iov_vf_flr_poll_prs(struct ecore_hwfn *p_hwfn,
- struct ecore_vf_info *p_vf, struct ecore_ptt *p_ptt)
-{
- u16 tc_cons[NUM_OF_TCS], tc_lb_cons[NUM_OF_TCS];
- u16 prod[NUM_OF_TCS];
- int i, cnt;
-
- /* Read initial consumers & producers */
- for (i = 0; i < NUM_OF_TCS; i++) {
- tc_cons[i] = (u16)ecore_rd(p_hwfn, p_ptt,
- PRS_REG_MSG_CT_MAIN_0 + i * 0x4);
- tc_lb_cons[i] = (u16)ecore_rd(p_hwfn, p_ptt,
- PRS_REG_MSG_CT_LB_0 + i * 0x4);
- prod[i] = (u16)ecore_rd(p_hwfn, p_ptt,
- BRB_REG_PER_TC_COUNTERS +
- p_hwfn->port_id * 0x20 + i * 0x4);
- }
-
- /* Wait for consumers to pass the producers */
- i = 0;
- for (cnt = 0; cnt < 50; cnt++) {
- for (; i < NUM_OF_TCS; i++) {
- u16 cons;
-
- cons = (u16)ecore_rd(p_hwfn, p_ptt,
- PRS_REG_MSG_CT_MAIN_0 + i * 0x4);
- if (prod[i] - tc_cons[i] > cons - tc_cons[i])
- break;
-
- cons = (u16)ecore_rd(p_hwfn, p_ptt,
- PRS_REG_MSG_CT_LB_0 + i * 0x4);
- if (prod[i] - tc_lb_cons[i] > cons - tc_lb_cons[i])
- break;
- }
-
- if (i == NUM_OF_TCS)
- break;
-
- /* 16-bit counters; Delay instead of sleep... */
- OSAL_UDELAY(10);
- }
-
- /* This is only optional polling for BB, since registers are only
- * 16-bit wide and guarantee is not good enough. Don't fail things
- * if polling didn't return the expected results.
- */
- if (cnt == 50)
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VF[%d] - prs polling failed on TC %d\n",
- p_vf->abs_vf_id, i);
-
- return ECORE_SUCCESS;
-}
-
static enum _ecore_status_t ecore_iov_vf_flr_poll(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *p_vf,
struct ecore_ptt *p_ptt)
@@ -2510,10 +3001,6 @@ static enum _ecore_status_t ecore_iov_vf_flr_poll(struct ecore_hwfn *p_hwfn,
if (rc)
return rc;
- rc = ecore_iov_vf_flr_poll_prs(p_hwfn, p_vf, p_ptt);
- if (rc)
- return rc;
-
return ECORE_SUCCESS;
}
@@ -2522,8 +3009,8 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u16 rel_vf_id, u32 *ack_vfs)
{
- enum _ecore_status_t rc = ECORE_SUCCESS;
struct ecore_vf_info *p_vf;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
p_vf = ecore_iov_get_vf_info(p_hwfn, rel_vf_id, false);
if (!p_vf)
@@ -2541,7 +3028,7 @@ ecore_iov_execute_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
ecore_iov_vf_cleanup(p_hwfn, p_vf);
/* If VF isn't active, no need for anything but SW */
- if (!ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, p_vf->relative_vf_id))
+ if (!p_vf->b_init)
goto cleanup;
/* TODO - what to do in case of failure? */
@@ -2591,7 +3078,13 @@ enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
OSAL_MEMSET(ack_vfs, 0, sizeof(u32) * (VF_MAX_STATIC / 32));
- for (i = 0; i < p_hwfn->p_dev->sriov_info.total_vfs; i++)
+ /* Since BRB <-> PRS interface can't be tested as part of the flr
+ * polling due to HW limitations, simply sleep a bit. And since
+ * there's no need to wait per-vf, do it before looping.
+ */
+ OSAL_MSLEEP(100);
+
+ for (i = 0; i < p_hwfn->p_dev->p_iov_info->total_vfs; i++)
ecore_iov_execute_vf_flr_cleanup(p_hwfn, p_ptt, i, ack_vfs);
rc = ecore_mcp_ack_vf_flr(p_hwfn, p_ptt, ack_vfs);
@@ -2607,6 +3100,9 @@ ecore_iov_single_vf_flr_cleanup(struct ecore_hwfn *p_hwfn,
OSAL_MEMSET(ack_vfs, 0, sizeof(u32) * (VF_MAX_STATIC / 32));
+ /* Wait instead of polling the BRB <-> PRS interface */
+ OSAL_MSLEEP(100);
+
ecore_iov_execute_vf_flr_cleanup(p_hwfn, p_ptt, rel_vf_id, ack_vfs);
rc = ecore_mcp_ack_vf_flr(p_hwfn, p_ptt, ack_vfs);
@@ -2623,8 +3119,13 @@ int ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
"[%08x,...,%08x]: %08x\n",
i * 32, (i + 1) * 32 - 1, p_disabled_vfs[i]);
+ if (!p_hwfn->p_dev->p_iov_info) {
+ DP_NOTICE(p_hwfn, true, "VF flr but no IOV\n");
+ return 0;
+ }
+
/* Mark VFs */
- for (i = 0; i < p_hwfn->p_dev->sriov_info.total_vfs; i++) {
+ for (i = 0; i < p_hwfn->p_dev->p_iov_info->total_vfs; i++) {
struct ecore_vf_info *p_vf;
u8 vfid;
@@ -2656,43 +3157,6 @@ int ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *p_disabled_vfs)
return found;
}
-void ecore_iov_set_link(struct ecore_hwfn *p_hwfn,
- u16 vfid,
- struct ecore_mcp_link_params *params,
- struct ecore_mcp_link_state *link,
- struct ecore_mcp_link_capabilities *p_caps)
-{
- struct ecore_vf_info *p_vf = ecore_iov_get_vf_info(p_hwfn, vfid, false);
- struct ecore_bulletin_content *p_bulletin;
-
- if (!p_vf)
- return;
-
- p_bulletin = p_vf->bulletin.p_virt;
- p_bulletin->req_autoneg = params->speed.autoneg;
- p_bulletin->req_adv_speed = params->speed.advertised_speeds;
- p_bulletin->req_forced_speed = params->speed.forced_speed;
- p_bulletin->req_autoneg_pause = params->pause.autoneg;
- p_bulletin->req_forced_rx = params->pause.forced_rx;
- p_bulletin->req_forced_tx = params->pause.forced_tx;
- p_bulletin->req_loopback = params->loopback_mode;
-
- p_bulletin->link_up = link->link_up;
- p_bulletin->speed = link->speed;
- p_bulletin->full_duplex = link->full_duplex;
- p_bulletin->autoneg = link->an;
- p_bulletin->autoneg_complete = link->an_complete;
- p_bulletin->parallel_detection = link->parallel_detection;
- p_bulletin->pfc_enabled = link->pfc_enabled;
- p_bulletin->partner_adv_speed = link->partner_adv_speed;
- p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
- p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
- p_bulletin->partner_adv_pause = link->partner_adv_pause;
- p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
-
- p_bulletin->capability_speed = p_caps->speed_capabilities;
-}
-
void ecore_iov_get_link(struct ecore_hwfn *p_hwfn,
u16 vfid,
struct ecore_mcp_link_params *p_params,
@@ -2720,7 +3184,6 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
{
struct ecore_iov_vf_mbx *mbx;
struct ecore_vf_info *p_vf;
- int i;
p_vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
if (!p_vf)
@@ -2731,18 +3194,22 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
/* ecore_iov_process_mbx_request */
DP_VERBOSE(p_hwfn,
ECORE_MSG_IOV,
- "ecore_iov_process_mbx_req vfid %d\n", p_vf->abs_vf_id);
+ "VF[%02x]: Processing mailbox message\n", p_vf->abs_vf_id);
mbx->first_tlv = mbx->req_virt->first_tlv;
- /* check if tlv type is known */
- if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
+ OSAL_IOV_VF_MSG_TYPE(p_hwfn,
+ p_vf->relative_vf_id,
+ mbx->first_tlv.tl.type);
+
/* Lock the per vf op mutex and note the locker's identity.
* The unlock will take place in mbx response.
*/
ecore_iov_lock_vf_pf_channel(p_hwfn,
p_vf, mbx->first_tlv.tl.type);
+ /* check if tlv type is known */
+ if (ecore_iov_tlv_supported(mbx->first_tlv.tl.type)) {
/* switch on the opcode */
switch (mbx->first_tlv.tl.type) {
case CHANNEL_TLV_ACQUIRE:
@@ -2785,10 +3252,6 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
ecore_iov_vf_mbx_release(p_hwfn, p_ptt, p_vf);
break;
}
-
- ecore_iov_unlock_vf_pf_channel(p_hwfn,
- p_vf, mbx->first_tlv.tl.type);
-
} else {
/* unknown TLV - this may belong to a VF driver from the future
* - a version written after this PF driver was written, which
@@ -2796,54 +3259,78 @@ void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
* support them. Or this may be because someone wrote a crappy
* VF driver and is sending garbage over the channel.
*/
- DP_ERR(p_hwfn,
- "unknown TLV. type %d length %d. first 20 bytes of mailbox buffer:\n",
- mbx->first_tlv.tl.type, mbx->first_tlv.tl.length);
-
- for (i = 0; i < 20; i++) {
- DP_VERBOSE(p_hwfn,
- ECORE_MSG_IOV,
- "%x ",
- mbx->req_virt->tlv_buf_size.tlv_buffer[i]);
- }
-
- /* test whether we can respond to the VF (do we have an address
- * for it?)
+ DP_NOTICE(p_hwfn, false,
+ "VF[%02x]: unknown TLV. type %04x length %04x"
+ " padding %08x reply address %lu\n",
+ p_vf->abs_vf_id,
+ mbx->first_tlv.tl.type,
+ mbx->first_tlv.tl.length,
+ mbx->first_tlv.padding,
+ (unsigned long)mbx->first_tlv.reply_address);
+
+ /* Try replying in case reply address matches the acquisition's
+ * posted address.
*/
- if (p_vf->state == VF_ACQUIRED)
- DP_ERR(p_hwfn, "UNKNOWN TLV Not supported yet\n");
+ if (p_vf->acquire.first_tlv.reply_address &&
+ (mbx->first_tlv.reply_address ==
+ p_vf->acquire.first_tlv.reply_address))
+ ecore_iov_prepare_resp(p_hwfn, p_ptt, p_vf,
+ mbx->first_tlv.tl.type,
+ sizeof(struct pfvf_def_resp_tlv),
+ PFVF_STATUS_NOT_SUPPORTED);
+ else
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "VF[%02x]: Can't respond to TLV -"
+ " no valid reply address\n",
+ p_vf->abs_vf_id);
}
+ ecore_iov_unlock_vf_pf_channel(p_hwfn, p_vf,
+ mbx->first_tlv.tl.type);
+
#ifdef CONFIG_ECORE_SW_CHANNEL
mbx->sw_mbx.mbx_state = VF_PF_RESPONSE_READY;
mbx->sw_mbx.response_offset = 0;
#endif
}
+void ecore_iov_pf_add_pending_events(struct ecore_hwfn *p_hwfn, u8 vfid)
+{
+ u64 add_bit = 1ULL << (vfid % 64);
+
+ /* TODO - add locking mechanisms [no atomics in ecore, so we can't
+ * add the lock inside the ecore_pf_iov struct].
+ */
+ p_hwfn->pf_iov_info->pending_events[vfid / 64] |= add_bit;
+}
+
+void ecore_iov_pf_get_and_clear_pending_events(struct ecore_hwfn *p_hwfn,
+ u64 *events)
+{
+ u64 *p_pending_events = p_hwfn->pf_iov_info->pending_events;
+
+ /* TODO - Take a lock */
+ OSAL_MEMCPY(events, p_pending_events,
+ sizeof(u64) * ECORE_VF_ARRAY_LENGTH);
+ OSAL_MEMSET(p_pending_events, 0,
+ sizeof(u64) * ECORE_VF_ARRAY_LENGTH);
+}
+
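The two helpers above form a simple producer/consumer bitmap: the EQE path sets bit vfid in a u64 array, and the processing path snapshots and clears the whole array in one pass (locking is still a TODO in the patch). A standalone sketch of the same pattern:

#include <stdint.h>
#include <string.h>

#define VF_ARRAY_LEN 4 /* room for 256 VFs; size is illustrative */

static uint64_t pending[VF_ARRAY_LEN];

static void add_pending(uint8_t vfid)
{
	pending[vfid / 64] |= 1ULL << (vfid % 64);
}

static void get_and_clear_pending(uint64_t *snapshot)
{
	/* Copy-then-zero; a real implementation would hold a lock
	 * (or use atomics) across these two steps.
	 */
	memcpy(snapshot, pending, sizeof(pending));
	memset(pending, 0, sizeof(pending));
}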
static enum _ecore_status_t ecore_sriov_vfpf_msg(struct ecore_hwfn *p_hwfn,
- __le16 vfid,
+ u16 abs_vfid,
struct regpair *vf_msg)
{
+ u8 min = (u8)p_hwfn->p_dev->p_iov_info->first_vf_in_pf;
struct ecore_vf_info *p_vf;
- u8 min, max;
- if (!p_hwfn->pf_iov_info || !p_hwfn->pf_iov_info->vfs_array) {
+ if (!ecore_iov_pf_sanity_check(p_hwfn, (int)abs_vfid - min)) {
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "Got a message from VF while PF is not initialized for IOV support\n");
+ "Got a message from VF [abs 0x%08x] that cannot be"
+ " handled by PF\n",
+ abs_vfid);
return ECORE_SUCCESS;
}
-
- /* Find the VF record - message comes with realtive [engine] vfid */
- min = (u8)p_hwfn->hw_info.first_vf_in_pf;
- max = min + p_hwfn->p_dev->sriov_info.total_vfs;
- /* @@@TBD - for BE machines, should echo field be reversed? */
- if ((u8)vfid < min || (u8)vfid >= max) {
- DP_INFO(p_hwfn,
- "Got a message from VF with relative id 0x%08x, but PF's range is [0x%02x,...,0x%02x)\n",
- (u8)vfid, min, max);
- return ECORE_INVAL;
- }
- p_vf = &p_hwfn->pf_iov_info->vfs_array[(u8)vfid - min];
+ p_vf = &p_hwfn->pf_iov_info->vfs_array[(u8)abs_vfid - min];
/* List the physical address of the request so that handler
* could later on copy the message from it.
@@ -2860,7 +3347,7 @@ enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
{
switch (opcode) {
case COMMON_EVENT_VF_PF_CHANNEL:
- return ecore_sriov_vfpf_msg(p_hwfn, echo,
+ return ecore_sriov_vfpf_msg(p_hwfn, OSAL_LE16_TO_CPU(echo),
&data->vf_pf_channel.msg_addr);
case COMMON_EVENT_VF_FLR:
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
@@ -2879,51 +3366,20 @@ bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
(1ULL << (rel_vf_id % 64)));
}
-bool ecore_iov_is_valid_vfid(struct ecore_hwfn *p_hwfn, int rel_vf_id,
- bool b_enabled_only)
+u16 ecore_iov_get_next_active_vf(struct ecore_hwfn *p_hwfn, u16 rel_vf_id)
{
- if (!p_hwfn->pf_iov_info) {
- DP_NOTICE(p_hwfn->p_dev, true, "No iov info\n");
- return false;
- }
-
- return b_enabled_only ? ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, rel_vf_id) :
- (rel_vf_id < p_hwfn->p_dev->sriov_info.total_vfs);
-}
-
-struct ecore_public_vf_info *ecore_iov_get_public_vf_info(struct ecore_hwfn
- *p_hwfn,
- u16 relative_vf_id,
- bool b_enabled_only)
-{
- struct ecore_vf_info *vf = OSAL_NULL;
-
- vf = ecore_iov_get_vf_info(p_hwfn, relative_vf_id, b_enabled_only);
- if (!vf)
- return OSAL_NULL;
-
- return &vf->p_vf_info;
-}
-
-void ecore_iov_pf_add_pending_events(struct ecore_hwfn *p_hwfn, u8 vfid)
-{
- u64 add_bit = 1ULL << (vfid % 64);
+ struct ecore_hw_sriov_info *p_iov = p_hwfn->p_dev->p_iov_info;
+ u16 i;
- /* TODO - add locking mechanisms [no atomics in ecore, so we can't
- * add the lock inside the ecore_pf_iov struct].
- */
- p_hwfn->pf_iov_info->pending_events[vfid / 64] |= add_bit;
-}
+ if (!p_iov)
+ goto out;
-void ecore_iov_pf_get_and_clear_pending_events(struct ecore_hwfn *p_hwfn,
- u64 *events)
-{
- u64 *p_pending_events = p_hwfn->pf_iov_info->pending_events;
+ for (i = rel_vf_id; i < p_iov->total_vfs; i++)
+ if (ecore_iov_is_valid_vfid(p_hwfn, i, true))
+ return i;
- /* TODO - Take a lock */
- OSAL_MEMCPY(events, p_pending_events,
- sizeof(u64) * ECORE_VF_ARRAY_LENGTH);
- OSAL_MEMSET(p_pending_events, 0, sizeof(u64) * ECORE_VF_ARRAY_LENGTH);
+out:
+ return MAX_NUM_VFS;
}
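Callers would typically wrap this in a for-each loop; a sketch of how such iteration could look (the macro name is an assumption for illustration, not necessarily what the headers define):

/* Hypothetical iteration helper built on the function above */
#define for_each_active_vf(_p_hwfn, _i)                                \
	for ((_i) = ecore_iov_get_next_active_vf(_p_hwfn, 0);          \
	     (_i) < MAX_NUM_VFS;                                       \
	     (_i) = ecore_iov_get_next_active_vf(_p_hwfn, (_i) + 1))

/* Usage sketch:
 *	u16 i;
 *	for_each_active_vf(p_hwfn, i)
 *		handle_vf(p_hwfn, i);
 */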
enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn *p_hwfn,
@@ -3004,29 +3460,6 @@ enum _ecore_status_t ecore_iov_bulletin_set_mac(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
- u16 pvid, int vfid)
-{
- struct ecore_vf_info *vf_info;
- u64 feature;
-
- vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
- if (!vf_info) {
- DP_NOTICE(p_hwfn->p_dev, true,
- "Can not set forced MAC, invalid vfid [%d]\n", vfid);
- return;
- }
-
- feature = 1 << VLAN_ADDR_FORCED;
- vf_info->bulletin.p_virt->pvid = pvid;
- if (pvid)
- vf_info->bulletin.p_virt->valid_bitmap |= feature;
- else
- vf_info->bulletin.p_virt->valid_bitmap &= ~feature;
-
- ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
-}
-
enum _ecore_status_t
ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
bool b_untagged_only, int vfid)
@@ -3046,7 +3479,8 @@ ecore_iov_bulletin_set_forced_untagged_default(struct ecore_hwfn *p_hwfn,
*/
if (vf_info->state == VF_ENABLED) {
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "Can't support untagged change for vfid[%d] - VF is already active\n",
+ "Can't support untagged change for vfid[%d] -"
+ " VF is already active\n",
vfid);
return ECORE_INVAL;
}
@@ -3088,6 +3522,30 @@ void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn, int vfid,
*p_vort_id = vf_info->vport_id;
}
+void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn *p_hwfn,
+ u16 pvid, int vfid)
+{
+ struct ecore_vf_info *vf_info;
+ u64 feature;
+
+ vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+ if (!vf_info) {
+ DP_NOTICE(p_hwfn->p_dev, true,
+ "Can not set forced MAC, invalid vfid [%d]\n",
+ vfid);
+ return;
+ }
+
+ feature = 1 << VLAN_ADDR_FORCED;
+ vf_info->bulletin.p_virt->pvid = pvid;
+ if (pvid)
+ vf_info->bulletin.p_virt->valid_bitmap |= feature;
+ else
+ vf_info->bulletin.p_virt->valid_bitmap &= ~feature;
+
+ ecore_iov_configure_vport_forced(p_hwfn, vf_info, feature);
+}
+
bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn *p_hwfn, int vfid)
{
struct ecore_vf_info *p_vf_info;
@@ -3104,6 +3562,8 @@ bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn, int vfid)
struct ecore_vf_info *p_vf_info;
p_vf_info = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+ if (!p_vf_info)
+ return true;
return p_vf_info->state == VF_STOPPED;
}
@@ -3119,21 +3579,11 @@ bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn, int vfid)
return vf_info->spoof_chk;
}
-bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn, int vfid)
-{
- if (IS_VF(p_hwfn->p_dev) || !IS_ECORE_SRIOV(p_hwfn->p_dev) ||
- !IS_PF_SRIOV_ALLOC(p_hwfn) ||
- !ECORE_IS_VF_ACTIVE(p_hwfn->p_dev, vfid))
- return false;
- else
- return true;
-}
-
enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn *p_hwfn,
int vfid, bool val)
{
- enum _ecore_status_t rc = ECORE_INVAL;
struct ecore_vf_info *vf;
+ enum _ecore_status_t rc = ECORE_INVAL;
if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) {
DP_NOTICE(p_hwfn, true,
@@ -3263,8 +3713,8 @@ enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
int vfid, int val)
{
struct ecore_vf_info *vf;
- enum _ecore_status_t rc;
u8 abs_vp_id = 0;
+ enum _ecore_status_t rc;
vf = ecore_iov_get_vf_info(p_hwfn, (u16)vfid, true);
@@ -3275,16 +3725,13 @@ enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
if (rc != ECORE_SUCCESS)
return rc;
- rc = ecore_init_vport_rl(p_hwfn, p_ptt, abs_vp_id, (u32)val);
-
- return rc;
+ return ecore_init_vport_rl(p_hwfn, p_ptt, abs_vp_id, (u32)val);
}
enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
int vfid, u32 rate)
{
struct ecore_vf_info *vf;
- enum _ecore_status_t rc;
u8 vport_id;
int i;
@@ -3293,7 +3740,8 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) {
DP_NOTICE(p_hwfn, true,
- "SR-IOV sanity check failed, can't set min rate\n");
+ "SR-IOV sanity check failed,"
+ " can't set min rate\n");
return ECORE_INVAL;
}
}
@@ -3301,9 +3749,7 @@ enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
vf = ecore_iov_get_vf_info(ECORE_LEADING_HWFN(p_dev), (u16)vfid, true);
vport_id = vf->vport_id;
- rc = ecore_configure_vport_wfq(p_dev, vport_id, rate);
-
- return rc;
+ return ecore_configure_vport_wfq(p_dev, vport_id, rate);
}
enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_sriov.h b/drivers/net/qede/base/ecore_sriov.h
index 7eca169..ed6ddc4 100644
--- a/drivers/net/qede/base/ecore_sriov.h
+++ b/drivers/net/qede/base/ecore_sriov.h
@@ -14,8 +14,6 @@
#include "ecore_iov_api.h"
#include "ecore_hsi_common.h"
-#define ECORE_ETH_VF_NUM_VLAN_FILTERS 2
-
#define ECORE_ETH_MAX_VF_NUM_VLAN_FILTERS \
(MAX_NUM_VFS * ECORE_ETH_VF_NUM_VLAN_FILTERS)
@@ -33,25 +31,6 @@ struct ecore_vf_mbx_msg {
union pfvf_tlvs resp;
};
-/* This data is held in the ecore_hwfn structure for VFs only. */
-struct ecore_vf_iov {
- union vfpf_tlvs *vf2pf_request;
- dma_addr_t vf2pf_request_phys;
- union pfvf_tlvs *pf2vf_reply;
- dma_addr_t pf2vf_reply_phys;
-
- /* Should be taken whenever the mailbox buffers are accessed */
- osal_mutex_t mutex;
- u8 *offset;
-
- /* Bulletin Board */
- struct ecore_bulletin bulletin;
- struct ecore_bulletin_content bulletin_shadow;
-
- /* we set aside a copy of the acquire response */
- struct pfvf_acquire_resp_tlv acquire_resp;
-};
-
/* This mailbox is maintained per VF in its PF
* contains all information required for sending / receiving
* a message
@@ -91,15 +70,6 @@ struct ecore_vf_q_info {
u8 txq_active;
};
-enum int_mod {
- VPORT_INT_MOD_UNDEFINED = 0,
- VPORT_INT_MOD_ADAPTIVE = 1,
- VPORT_INT_MOD_OFF = 2,
- VPORT_INT_MOD_LOW = 100,
- VPORT_INT_MOD_MEDIUM = 200,
- VPORT_INT_MOD_HIGH = 300
-};
-
enum vf_state {
VF_FREE = 0, /* VF ready to be acquired holds no resc */
VF_ACQUIRED = 1, /* VF, acquired, but not initalized */
@@ -117,6 +87,8 @@ struct ecore_vf_shadow_config {
/* Shadow copy of all guest vlans */
struct ecore_vf_vlan_shadow vlans[ECORE_ETH_VF_NUM_VLAN_FILTERS + 1];
+ /* Shadow copy of all configured MACs; Empty if forcing MACs */
+ u8 macs[ECORE_ETH_VF_NUM_MAC_FILTERS][ETH_ALEN];
u8 inner_vlan_removal;
};
@@ -124,11 +96,15 @@ struct ecore_vf_shadow_config {
struct ecore_vf_info {
struct ecore_iov_vf_mbx vf_mbx;
enum vf_state state;
+ bool b_init;
u8 to_disable;
struct ecore_bulletin bulletin;
dma_addr_t vf_bulletin;
+ /* PF saves a copy of the last VF acquire message */
+ struct vfpf_acquire_tlv acquire;
+
u32 concrete_fid;
u16 opaque_fid;
u16 mtu;
@@ -148,7 +124,6 @@ struct ecore_vf_info {
u8 num_mac_filters;
u8 num_vlan_filters;
- u8 num_mc_filters;
struct ecore_vf_q_info vf_queues[ECORE_MAX_VF_CHAINS_PER_PF];
u16 igu_sbs[ECORE_MAX_VF_CHAINS_PER_PF];
@@ -181,6 +156,13 @@ struct ecore_pf_iov {
u64 pending_flr[ECORE_VF_ARRAY_LENGTH];
u16 base_vport_id;
+#ifndef REMOVE_DBG
+ /* This doesn't serve anything functionally, but it makes Windows
+ * debugging of IOV related issues easier.
+ */
+ u64 active_vfs[ECORE_VF_ARRAY_LENGTH];
+#endif
+
/* Allocate message address continuosuly and split to each VF */
void *mbx_msg_virt_addr;
dma_addr_t mbx_msg_phys_addr;
@@ -196,16 +178,13 @@ struct ecore_pf_iov {
#ifdef CONFIG_ECORE_SRIOV
/**
* @brief Read sriov related information and allocated resources
- * reads from configuraiton space, shmem, and allocates the VF
- * database in the PF.
+ * reads from configuration space, shmem, etc.
*
* @param p_hwfn
- * @param p_ptt
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
+enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn);
/**
* @brief ecore_add_tlv - place a given tlv on the tlv buffer at next offset
@@ -218,7 +197,9 @@ enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn *p_hwfn,
* @return pointer to the newly placed tlv
*/
void *ecore_add_tlv(struct ecore_hwfn *p_hwfn,
- u8 **offset, u16 type, u16 length);
+ u8 **offset,
+ u16 type,
+ u16 length);
/**
* @brief list the types and lengths of the tlvs on the buffer
@@ -226,7 +207,8 @@ void *ecore_add_tlv(struct ecore_hwfn *p_hwfn,
* @param p_hwfn
* @param tlvs_list
*/
-void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn, void *tlvs_list);
+void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn,
+ void *tlvs_list);
/**
* @brief ecore_iov_alloc - allocate sriov related resources
@@ -243,7 +225,8 @@ enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn *p_hwfn);
* @param p_hwfn
* @param p_ptt
*/
-void ecore_iov_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
+void ecore_iov_setup(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
/**
* @brief ecore_iov_free - free sriov related resources
@@ -253,6 +236,13 @@ void ecore_iov_setup(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt);
void ecore_iov_free(struct ecore_hwfn *p_hwfn);
/**
+ * @brief free sriov related memory that was allocated during hw_prepare
+ *
+ * @param p_dev
+ */
+void ecore_iov_free_hw_info(struct ecore_dev *p_dev);
+
+/**
* @brief ecore_sriov_eqe_event - handle async sriov event arrived on eqe.
*
* @param p_hwfn
@@ -274,7 +264,9 @@ enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn *p_hwfn,
*
* @return calculated crc over buffer [with respect to seed].
*/
-u32 ecore_crc32(u32 crc, u8 *ptr, u32 length);
+u32 ecore_crc32(u32 crc,
+ u8 *ptr,
+ u32 length);
/**
* @brief Mark structs of vfs that have been FLR-ed.
@@ -284,7 +276,8 @@ u32 ecore_crc32(u32 crc, u8 *ptr, u32 length);
*
* @return 1 iff one of the PF's vfs got FLRed. 0 otherwise.
*/
-int ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn, u32 *disabled_vfs);
+int ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn,
+ u32 *disabled_vfs);
/**
* @brief Search extended TLVs in request/reply buffer.
@@ -312,79 +305,5 @@ void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
u16 relative_vf_id,
bool b_enabled_only);
-#else
-static OSAL_INLINE enum _ecore_status_t ecore_iov_hw_info(struct ecore_hwfn
- *p_hwfn,
- struct ecore_ptt
- *p_ptt)
-{
- return ECORE_SUCCESS;
-}
-
-static OSAL_INLINE void *ecore_add_tlv(struct ecore_hwfn *p_hwfn, u8 **offset,
- u16 type, u16 length)
-{
- return OSAL_NULL;
-}
-
-static OSAL_INLINE void ecore_dp_tlv_list(struct ecore_hwfn *p_hwfn,
- void *tlvs_list)
-{
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_iov_alloc(struct ecore_hwfn
- *p_hwfn)
-{
- return ECORE_SUCCESS;
-}
-
-static OSAL_INLINE void ecore_iov_setup(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
-}
-
-static OSAL_INLINE void ecore_iov_free(struct ecore_hwfn *p_hwfn)
-{
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_sriov_eqe_event(struct ecore_hwfn
- *p_hwfn,
- u8 opcode,
- __le16 echo,
- union
- event_ring_data
- * data)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE u32 ecore_crc32(u32 crc, u8 *ptr, u32 length)
-{
- return 0;
-}
-
-static OSAL_INLINE int ecore_iov_mark_vf_flr(struct ecore_hwfn *p_hwfn,
- u32 *disabled_vfs)
-{
- return 0;
-}
-
-static OSAL_INLINE void *ecore_iov_search_list_tlvs(struct ecore_hwfn *p_hwfn,
- void *p_tlvs_list,
- u16 req_type)
-{
- return OSAL_NULL;
-}
-
-static OSAL_INLINE struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn
- *p_hwfn,
- u16
- relative_vf_id,
- bool
- b_enabled_only)
-{
- return OSAL_NULL;
-}
-
#endif
#endif /* __ECORE_SRIOV_H__ */
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index a03a2ce..634e2bb 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -53,15 +53,27 @@ static void *ecore_vf_pf_prep(struct ecore_hwfn *p_hwfn, u16 type, u16 length)
return p_tlv;
}
+static void ecore_vf_pf_req_end(struct ecore_hwfn *p_hwfn,
+ enum _ecore_status_t req_status)
+{
+ union pfvf_tlvs *resp = p_hwfn->vf_iov_info->pf2vf_reply;
+
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "VF request status = 0x%x, PF reply status = 0x%x\n",
+ req_status, resp->default_resp.hdr.status);
+
+ OSAL_MUTEX_RELEASE(&p_hwfn->vf_iov_info->mutex);
+}
+
static int ecore_send_msg2pf(struct ecore_hwfn *p_hwfn,
u8 *done, u32 resp_size)
{
- struct ustorm_vf_zone *zone_data = (struct ustorm_vf_zone *)
- ((u8 *)PXP_VF_BAR0_START_USDM_ZONE_B);
union vfpf_tlvs *p_req = p_hwfn->vf_iov_info->vf2pf_request;
struct ustorm_trigger_vf_zone trigger;
+ struct ustorm_vf_zone *zone_data;
int rc = ECORE_SUCCESS, time = 100;
- u8 pf_id;
+
+ zone_data = (struct ustorm_vf_zone *)PXP_VF_BAR0_START_USDM_ZONE_B;
/* output tlvs list */
ecore_dp_tlv_list(p_hwfn, p_req);
@@ -69,25 +81,25 @@ static int ecore_send_msg2pf(struct ecore_hwfn *p_hwfn,
/* need to add the END TLV to the message size */
resp_size += sizeof(struct channel_list_end_tlv);
- if (!p_hwfn->p_dev->sriov_info.b_hw_channel) {
+ if (!p_hwfn->p_dev->b_hw_channel) {
rc = OSAL_VF_SEND_MSG2PF(p_hwfn->p_dev,
done,
p_req,
p_hwfn->vf_iov_info->pf2vf_reply,
sizeof(union vfpf_tlvs), resp_size);
/* TODO - no prints about message ? */
- goto exit;
+ return rc;
}
/* Send TLVs over HW channel */
OSAL_MEMSET(&trigger, 0, sizeof(struct ustorm_trigger_vf_zone));
trigger.vf_pf_msg_valid = 1;
- /* TODO - FW should remove this requirement */
- pf_id = GET_FIELD(p_hwfn->hw_info.concrete_fid, PXP_CONCRETE_FID_PFID);
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VF -> PF [%02x] message: [%08x, %08x] --> %p, %08x --> %p\n",
- pf_id,
+ "VF -> PF [%02x] message: [%08x, %08x] --> %p,"
+ " %08x --> %p\n",
+ GET_FIELD(p_hwfn->hw_info.concrete_fid,
+ PXP_CONCRETE_FID_PFID),
U64_HI(p_hwfn->vf_iov_info->vf2pf_request_phys),
U64_LO(p_hwfn->vf_iov_info->vf2pf_request_phys),
&zone_data->non_trigger.vf_pf_msg_addr,
@@ -109,6 +121,9 @@ static int ecore_send_msg2pf(struct ecore_hwfn *p_hwfn,
REG_WR(p_hwfn, (osal_uintptr_t)&zone_data->trigger,
*((u32 *)&trigger));
+ /* When PF would be done with the response, it would write back to the
+ * `done' address. Poll until then.
+ */
while ((!*done) && time) {
OSAL_MSLEEP(25);
time--;
@@ -119,22 +134,41 @@ static int ecore_send_msg2pf(struct ecore_hwfn *p_hwfn,
"VF <-- PF Timeout [Type %d]\n",
p_req->first_tlv.tl.type);
rc = ECORE_TIMEOUT;
- goto exit;
+ return rc;
} else {
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
"PF response: %d [Type %d]\n",
*done, p_req->first_tlv.tl.type);
}
-exit:
- OSAL_MUTEX_RELEASE(&p_hwfn->vf_iov_info->mutex);
-
return rc;
}
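The polling comment added above describes the whole HW-channel handshake: the VF writes its message address, rings the trigger doorbell, then spins on a `done' byte that the PF writes back. A standalone sketch of such a bounded poll (delay_ms is a stand-in for OSAL_MSLEEP):

#include <stdint.h>

/* Stand-in for OSAL_MSLEEP; a real implementation would sleep */
static void delay_ms(unsigned int ms) { (void)ms; }

/* Bounded poll on a DMA-visible completion byte; returns 0 on
 * success, -1 on timeout.
 */
static int poll_done(volatile uint8_t *done, int tries, unsigned int ms)
{
	while (!*done && tries--)
		delay_ms(ms);

	return *done ? 0 : -1;
}

/* Usage mirroring the function above: poll_done(done, 100, 25)
 * gives the PF roughly 2.5 seconds to post its response.
 */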
#define VF_ACQUIRE_THRESH 3
-#define VF_ACQUIRE_MAC_FILTERS 1
-#define VF_ACQUIRE_MC_FILTERS 10
+static void ecore_vf_pf_acquire_reduce_resc(struct ecore_hwfn *p_hwfn,
+ struct vf_pf_resc_request *p_req,
+ struct pf_vf_resc *p_resp)
+{
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "PF unwilling to fullill resource request: rxq [%02x/%02x]"
+ " txq [%02x/%02x] sbs [%02x/%02x] mac [%02x/%02x]"
+ " vlan [%02x/%02x] mc [%02x/%02x]."
+ " Try PF recommended amount\n",
+ p_req->num_rxqs, p_resp->num_rxqs,
+ p_req->num_rxqs, p_resp->num_txqs,
+ p_req->num_sbs, p_resp->num_sbs,
+ p_req->num_mac_filters, p_resp->num_mac_filters,
+ p_req->num_vlan_filters, p_resp->num_vlan_filters,
+ p_req->num_mc_filters, p_resp->num_mc_filters);
+
+ /* humble our request */
+ p_req->num_txqs = p_resp->num_txqs;
+ p_req->num_rxqs = p_resp->num_rxqs;
+ p_req->num_sbs = p_resp->num_sbs;
+ p_req->num_mac_filters = p_resp->num_mac_filters;
+ p_req->num_vlan_filters = p_resp->num_vlan_filters;
+ p_req->num_mc_filters = p_resp->num_mc_filters;
+}
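Together with the acquire loop that follows, this helper forms a classic negotiate-down protocol: ask for the maximum, and if the PF answers NO_RESOURCE, adopt its counter-offer and retry up to VF_ACQUIRE_THRESH times. A standalone sketch of that control flow (the request/response structs and send_acquire stub are simplified stand-ins):

#include <stdint.h>

struct resc { uint8_t rxqs, txqs, sbs; };

enum resp { RESP_SUCCESS, RESP_NO_RESOURCE, RESP_ERROR };

/* Stub stand-in; the real exchange goes over the PF/VF mailbox */
static enum resp send_acquire(struct resc *req, struct resc *offer)
{
	*offer = *req;
	return RESP_SUCCESS;
}

static int acquire_with_retries(struct resc *req, int max_tries)
{
	struct resc offer;
	int tries = 0;

	while (tries++ < max_tries) {
		switch (send_acquire(req, &offer)) {
		case RESP_SUCCESS:
			return 0;
		case RESP_NO_RESOURCE:
			*req = offer; /* humble the request and retry */
			break;
		default:
			return -1;
		}
	}
	return -1;
}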
static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
{
@@ -142,26 +176,24 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
struct pfvf_acquire_resp_tlv *resp = &p_iov->pf2vf_reply->acquire_resp;
struct pf_vf_pfdev_info *pfdev_info = &resp->pfdev_info;
struct ecore_vf_acquire_sw_info vf_sw_info;
- struct vfpf_acquire_tlv *req;
- int rc = 0, attempts = 0;
+ struct vf_pf_resc_request *p_resc;
bool resources_acquired = false;
-
- /* @@@ TBD: MichalK take this from somewhere else... */
- u8 rx_count = 1, tx_count = 1, num_sbs = 1;
- u8 num_mac = VF_ACQUIRE_MAC_FILTERS, num_mc = VF_ACQUIRE_MC_FILTERS;
+ struct vfpf_acquire_tlv *req;
+ int attempts = 0;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
/* clear mailbox and prep first tlv */
req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_ACQUIRE, sizeof(*req));
+ p_resc = &req->resc_request;
/* @@@ TBD: PF may not be ready bnx2x_get_vf_id... */
req->vfdev_info.opaque_fid = p_hwfn->hw_info.opaque_fid;
- req->resc_request.num_rxqs = rx_count;
- req->resc_request.num_txqs = tx_count;
- req->resc_request.num_sbs = num_sbs;
- req->resc_request.num_mac_filters = num_mac;
- req->resc_request.num_mc_filters = num_mc;
- req->resc_request.num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
+ p_resc->num_rxqs = ECORE_MAX_VF_CHAINS_PER_PF;
+ p_resc->num_txqs = ECORE_MAX_VF_CHAINS_PER_PF;
+ p_resc->num_sbs = ECORE_MAX_VF_CHAINS_PER_PF;
+ p_resc->num_mac_filters = ECORE_ETH_VF_NUM_MAC_FILTERS;
+ p_resc->num_vlan_filters = ECORE_ETH_VF_NUM_VLAN_FILTERS;
OSAL_MEMSET(&vf_sw_info, 0, sizeof(vf_sw_info));
OSAL_VF_FILL_ACQUIRE_RESC_REQ(p_hwfn, &req->resc_request, &vf_sw_info);
@@ -172,9 +204,11 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
req->vfdev_info.fw_minor = FW_MINOR_VERSION;
req->vfdev_info.fw_revision = FW_REVISION_VERSION;
req->vfdev_info.fw_engineering = FW_ENGINEERING_VERSION;
+ req->vfdev_info.eth_fp_hsi_major = ETH_HSI_VER_MAJOR;
+ req->vfdev_info.eth_fp_hsi_minor = ETH_HSI_VER_MINOR;
- if (vf_sw_info.override_fw_version)
- req->vfdev_info.capabilties |= VFPF_ACQUIRE_CAP_OVERRIDE_FW_VER;
+ /* Fill capability field with any non-deprecated config we support */
+ req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_100G;
/* pf 2 vf bulletin board address */
req->bulletin_addr = p_iov->bulletin.phys;
@@ -189,6 +223,10 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
"attempting to acquire resources\n");
+ /* Clear response buffer, as this might be a re-send */
+ OSAL_MEMSET(p_iov->pf2vf_reply, 0,
+ sizeof(union pfvf_tlvs));
+
/* send acquire request */
rc = ecore_send_msg2pf(p_hwfn,
&resp->hdr.status, sizeof(*resp));
@@ -203,46 +241,83 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
attempts++;
- /* PF agrees to allocate our resources */
if (resp->hdr.status == PFVF_STATUS_SUCCESS) {
+ /* PF agrees to allocate our resources */
+ if (!(resp->pfdev_info.capabilities &
+ PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE)) {
+ /* It's possible a legacy PF mistakenly accepted;
+ * but we don't care - simply mark it as
+ * legacy and continue.
+ */
+ req->vfdev_info.capabilities |=
+ VFPF_ACQUIRE_CAP_PRE_FP_HSI;
+ }
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
"resources acquired\n");
resources_acquired = true;
} /* PF refuses to allocate our resources */
- else if (resp->hdr.status ==
- PFVF_STATUS_NO_RESOURCE &&
+ else if (resp->hdr.status == PFVF_STATUS_NO_RESOURCE &&
attempts < VF_ACQUIRE_THRESH) {
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "PF unwilling to fullfill resource request. Try PF recommended amount\n");
-
- /* humble our request */
- req->resc_request.num_txqs = resp->resc.num_txqs;
- req->resc_request.num_rxqs = resp->resc.num_rxqs;
- req->resc_request.num_sbs = resp->resc.num_sbs;
- req->resc_request.num_mac_filters =
- resp->resc.num_mac_filters;
- req->resc_request.num_vlan_filters =
- resp->resc.num_vlan_filters;
- req->resc_request.num_mc_filters =
- resp->resc.num_mc_filters;
-
- /* Clear response buffer */
- OSAL_MEMSET(p_iov->pf2vf_reply, 0,
- sizeof(union pfvf_tlvs));
+ ecore_vf_pf_acquire_reduce_resc(p_hwfn, p_resc,
+ &resp->resc);
+
+ } else if (resp->hdr.status == PFVF_STATUS_NOT_SUPPORTED) {
+ if (pfdev_info->major_fp_hsi &&
+ (pfdev_info->major_fp_hsi != ETH_HSI_VER_MAJOR)) {
+ DP_NOTICE(p_hwfn, false,
+ "PF uses an incompatible fastpath HSI"
+ " %02x.%02x [VF requires %02x.%02x]."
+ " Please change to a VF driver using"
+ " %02x.xx.\n",
+ pfdev_info->major_fp_hsi,
+ pfdev_info->minor_fp_hsi,
+ ETH_HSI_VER_MAJOR, ETH_HSI_VER_MINOR,
+ pfdev_info->major_fp_hsi);
+ rc = ECORE_INVAL;
+ goto exit;
+ }
+
+ if (!pfdev_info->major_fp_hsi) {
+ if (req->vfdev_info.capabilities &
+ VFPF_ACQUIRE_CAP_PRE_FP_HSI) {
+ DP_NOTICE(p_hwfn, false,
+ "PF uses very old drivers."
+ " Please change to a VF"
+ " driver using no later than"
+ " 8.8.x.x.\n");
+ rc = ECORE_INVAL;
+ goto exit;
+ } else {
+ DP_INFO(p_hwfn,
+ "PF is old - try re-acquire to"
+ " see if it supports FW-version"
+ " override\n");
+ req->vfdev_info.capabilities |=
+ VFPF_ACQUIRE_CAP_PRE_FP_HSI;
+ }
+ }
} else {
DP_ERR(p_hwfn,
- "PF returned error %d to VF acquisition request\n",
+ "PF returned err %d to VF acquisition request\n",
resp->hdr.status);
- return ECORE_AGAIN;
+ rc = ECORE_AGAIN;
+ goto exit;
}
}
+ /* Mark the PF as legacy, if needed */
+ if (req->vfdev_info.capabilities &
+ VFPF_ACQUIRE_CAP_PRE_FP_HSI)
+ p_iov->b_pre_fp_hsi = true;
+
rc = OSAL_VF_UPDATE_ACQUIRE_RESC_RESP(p_hwfn, &resp->resc);
if (rc) {
DP_NOTICE(p_hwfn, true,
- "VF_UPDATE_ACQUIRE_RESC_RESP Failed: status = 0x%x.\n",
+ "VF_UPDATE_ACQUIRE_RESC_RESP Failed:"
+ " status = 0x%x.\n",
rc);
- return ECORE_AGAIN;
+ rc = ECORE_AGAIN;
+ goto exit;
}
/* Update bulletin board size with response from PF */
@@ -256,129 +331,127 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
ECORE_IS_BB(p_hwfn->p_dev) ? "BB" : "AH",
CHIP_REV_IS_A0(p_hwfn->p_dev) ? 0 : 1);
- /* @@@TBD MichalK: Fw ver... */
- /* strlcpy(p_hwfn->fw_ver, p_hwfn->acquire_resp.pfdev_info.fw_ver,
- * sizeof(p_hwfn->fw_ver));
- */
-
p_hwfn->p_dev->chip_num = pfdev_info->chip_num & 0xffff;
- return 0;
+ /* Learn of the possibility of CMT */
+ if (IS_LEAD_HWFN(p_hwfn)) {
+ if (resp->pfdev_info.capabilities & PFVF_ACQUIRE_CAP_100G) {
+ DP_NOTICE(p_hwfn, false, "100g VF\n");
+ p_hwfn->p_dev->num_hwfns = 2;
+ }
}
-enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_dev *p_dev)
-{
- enum _ecore_status_t rc = ECORE_NOMEM;
- struct ecore_vf_iov *p_sriov;
- struct ecore_hwfn *p_hwfn = &p_dev->hwfns[0]; /* @@@TBD CMT */
+ /* @DPDK */
+ if ((~p_iov->b_pre_fp_hsi &
+ ETH_HSI_VER_MINOR) &&
+ (resp->pfdev_info.minor_fp_hsi < ETH_HSI_VER_MINOR))
+ DP_INFO(p_hwfn,
+ "PF is using older fastpath HSI;"
+ " %02x.%02x is configured\n",
+ ETH_HSI_VER_MAJOR,
+ resp->pfdev_info.minor_fp_hsi);
- p_dev->num_hwfns = 1; /* @@@TBD CMT must be fixed... */
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
- p_hwfn->regview = p_dev->regview;
- if (p_hwfn->regview == OSAL_NULL) {
- DP_ERR(p_hwfn,
- "regview should be initialized before"
- " ecore_vf_hw_prepare is called\n");
- return ECORE_INVAL;
+ return rc;
}
+enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn)
+{
+ struct ecore_vf_iov *p_iov;
+ u32 reg;
+
+ /* Set number of hwfns - might be overridden once leading hwfn learns
+ * actual configuration from PF.
+ */
+ if (IS_LEAD_HWFN(p_hwfn))
+ p_hwfn->p_dev->num_hwfns = 1;
+
/* Set the doorbell bar. Assumption: regview is set */
p_hwfn->doorbells = (u8 OSAL_IOMEM *)p_hwfn->regview +
PXP_VF_BAR0_START_DQ;
- p_hwfn->hw_info.opaque_fid = (u16)REG_RD(p_hwfn,
- PXP_VF_BAR0_ME_OPAQUE_ADDRESS);
+ reg = PXP_VF_BAR0_ME_OPAQUE_ADDRESS;
+ p_hwfn->hw_info.opaque_fid = (u16)REG_RD(p_hwfn, reg);
- p_hwfn->hw_info.concrete_fid = REG_RD(p_hwfn,
- PXP_VF_BAR0_ME_CONCRETE_ADDRESS);
+ reg = PXP_VF_BAR0_ME_CONCRETE_ADDRESS;
+ p_hwfn->hw_info.concrete_fid = REG_RD(p_hwfn, reg);
/* Allocate vf sriov info */
- p_sriov = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_sriov));
- if (!p_sriov) {
+ p_iov = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL, sizeof(*p_iov));
+ if (!p_iov) {
DP_NOTICE(p_hwfn, true,
"Failed to allocate `struct ecore_sriov'\n");
return ECORE_NOMEM;
}
- OSAL_MEMSET(p_sriov, 0, sizeof(*p_sriov));
+ OSAL_MEMSET(p_iov, 0, sizeof(*p_iov));
/* Allocate vf2pf msg */
- p_sriov->vf2pf_request = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
- &p_sriov->
+ p_iov->vf2pf_request = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+ &p_iov->
vf2pf_request_phys,
sizeof(union
vfpf_tlvs));
- if (!p_sriov->vf2pf_request) {
+ if (!p_iov->vf2pf_request) {
DP_NOTICE(p_hwfn, true,
"Failed to allocate `vf2pf_request' DMA memory\n");
- goto free_p_sriov;
+ goto free_p_iov;
}
- p_sriov->pf2vf_reply = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
- &p_sriov->
+ p_iov->pf2vf_reply = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+ &p_iov->
pf2vf_reply_phys,
sizeof(union pfvf_tlvs));
- if (!p_sriov->pf2vf_reply) {
+ if (!p_iov->pf2vf_reply) {
DP_NOTICE(p_hwfn, true,
"Failed to allocate `pf2vf_reply' DMA memory\n");
goto free_vf2pf_request;
}
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VF's Request mailbox [%p virt 0x%" PRIx64 " phys], "
- "Response mailbox [%p virt 0x%" PRIx64 " phys]\n",
- p_sriov->vf2pf_request,
- (u64)p_sriov->vf2pf_request_phys,
- p_sriov->pf2vf_reply, (u64)p_sriov->pf2vf_reply_phys);
+ "VF's Request mailbox [%p virt 0x%lx phys], "
+ "Response mailbox [%p virt 0x%lx phys]\n",
+ p_iov->vf2pf_request,
+ (unsigned long)p_iov->vf2pf_request_phys,
+ p_iov->pf2vf_reply,
+ (unsigned long)p_iov->pf2vf_reply_phys);
/* Allocate Bulletin board */
- p_sriov->bulletin.size = sizeof(struct ecore_bulletin_content);
- p_sriov->bulletin.p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
- &p_sriov->bulletin.
+ p_iov->bulletin.size = sizeof(struct ecore_bulletin_content);
+ p_iov->bulletin.p_virt = OSAL_DMA_ALLOC_COHERENT(p_hwfn->p_dev,
+ &p_iov->bulletin.
phys,
- p_sriov->bulletin.
+ p_iov->bulletin.
size);
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VF's bulletin Board [%p virt 0x%" PRIx64 " phys 0x%08x bytes]\n",
- p_sriov->bulletin.p_virt, (u64)p_sriov->bulletin.phys,
- p_sriov->bulletin.size);
+ "VF's bulletin Board [%p virt 0x%lx phys 0x%08x bytes]\n",
+ p_iov->bulletin.p_virt, (unsigned long)p_iov->bulletin.phys,
+ p_iov->bulletin.size);
- OSAL_MUTEX_ALLOC(p_hwfn, &p_sriov->mutex);
- OSAL_MUTEX_INIT(&p_sriov->mutex);
+ OSAL_MUTEX_ALLOC(p_hwfn, &p_iov->mutex);
+ OSAL_MUTEX_INIT(&p_iov->mutex);
- p_hwfn->vf_iov_info = p_sriov;
+ p_hwfn->vf_iov_info = p_iov;
p_hwfn->hw_info.personality = ECORE_PCI_ETH;
- /* First VF needs to query for information from PF */
- if (!p_hwfn->my_id)
- rc = ecore_vf_pf_acquire(p_hwfn);
-
- return rc;
+ return ecore_vf_pf_acquire(p_hwfn);
free_vf2pf_request:
- OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, p_sriov->vf2pf_request,
- p_sriov->vf2pf_request_phys,
+ OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, p_iov->vf2pf_request,
+ p_iov->vf2pf_request_phys,
sizeof(union vfpf_tlvs));
-free_p_sriov:
- OSAL_FREE(p_hwfn->p_dev, p_sriov);
-
- return rc;
-}
-
-enum _ecore_status_t ecore_vf_pf_init(struct ecore_hwfn *p_hwfn)
-{
- p_hwfn->b_int_enabled = 1;
+free_p_iov:
+ OSAL_FREE(p_hwfn->p_dev, p_iov);
- return 0;
+ return ECORE_NOMEM;
}
-/* TEMP TEMP until in HSI */
#define TSTORM_QZONE_START PXP_VF_BAR0_START_SDM_ZONE_A
#define MSTORM_QZONE_START(dev) (TSTORM_QZONE_START + \
(TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
-#define USTORM_QZONE_START(dev) (MSTORM_QZONE_START + \
- (MSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
u8 rx_qid,
@@ -391,51 +464,73 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
void OSAL_IOMEM **pp_prod)
{
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_start_queue_resp_tlv *resp;
struct vfpf_start_rxq_tlv *req;
- struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
int rc;
- u8 hw_qid;
- u64 init_prod_val = 0;
/* clear mailbox and prep first tlv */
req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_RXQ, sizeof(*req));
- /* @@@TBD MichalK TPA */
-
req->rx_qid = rx_qid;
req->cqe_pbl_addr = cqe_pbl_addr;
req->cqe_pbl_size = cqe_pbl_size;
req->rxq_addr = bd_chain_phys_addr;
req->hw_sb = sb;
req->sb_index = sb_index;
- req->hc_rate = 0; /* @@@TBD MichalK -> host coalescing! */
req->bd_max_bytes = bd_max_bytes;
- req->stat_id = -1; /* No stats at the moment */
-
- /* add list termination tlv */
- ecore_add_tlv(p_hwfn, &p_iov->offset,
- CHANNEL_TLV_LIST_END,
- sizeof(struct channel_list_end_tlv));
+ req->stat_id = -1; /* Keep initialized, for future compatibility */
- if (pp_prod) {
- hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
+ /* If PF is legacy, we'll need to calculate producers ourselves
+ * as well as clean them.
+ */
+ if (pp_prod && p_iov->b_pre_fp_hsi) {
+ u8 hw_qid = p_iov->acquire_resp.resc.hw_qid[rx_qid];
+ u32 init_prod_val = 0;
*pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview +
MSTORM_QZONE_START(p_hwfn->p_dev) +
- (hw_qid) * MSTORM_QZONE_SIZE +
- OFFSETOF(struct mstorm_eth_queue_zone, rx_producers);
+ (hw_qid) * MSTORM_QZONE_SIZE;
/* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
- __internal_ram_wr(p_hwfn, *pp_prod, sizeof(u64),
+ __internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
(u32 *)(&init_prod_val));
}
+ /* add list termination tlv */
+ ecore_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END,
+ sizeof(struct channel_list_end_tlv));
+
+ resp = &p_iov->pf2vf_reply->queue_start;
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
- return rc;
+ goto exit;
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_INVAL;
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_INVAL;
+ goto exit;
+ }
+
+ /* Learn the address of the producer from the response */
+ if (pp_prod && !p_iov->b_pre_fp_hsi) {
+ u32 init_prod_val = 0;
+
+ *pp_prod = (u8 OSAL_IOMEM *)p_hwfn->regview + resp->offset;
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "Rxq[0x%02x]: producer at %p [offset 0x%08x]\n",
+ rx_qid, *pp_prod, resp->offset);
+
+ /* Init the rcq, rx bd and rx sge (if valid) producers to 0.
+ * It was actually the PF's responsibility, but since some
+ * old PFs might fail to do so, we do this as well.
+ */
+ OSAL_BUILD_BUG_ON(ETH_HSI_VER_MAJOR != 3);
+ __internal_ram_wr(p_hwfn, *pp_prod, sizeof(u32),
+ (u32 *)&init_prod_val);
+ }
+
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
return rc;
}
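
For clarity, the two producer-address paths in the hunk above reduce to the
following minimal sketch. The helper name is illustrative; the macros and
fields are the ones used in the patch, and regview is the VF's mapped
register BAR:

	static u8 OSAL_IOMEM *rxq_prod_addr(struct ecore_hwfn *p_hwfn,
					    struct ecore_vf_iov *p_iov,
					    u8 hw_qid, u32 resp_offset)
	{
		if (p_iov->b_pre_fp_hsi)
			/* Legacy PF: derive from the Mstorm queue zone */
			return (u8 OSAL_IOMEM *)p_hwfn->regview +
			       MSTORM_QZONE_START(p_hwfn->p_dev) +
			       hw_qid * MSTORM_QZONE_SIZE;

		/* Modern PF: the offset arrives in the queue_start reply */
		return (u8 OSAL_IOMEM *)p_hwfn->regview + resp_offset;
	}

In both cases the VF then zeroes the producers itself, since an old PF
cannot be trusted to have done so.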
@@ -445,17 +540,12 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
{
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
struct vfpf_stop_rxqs_tlv *req;
- struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+ struct pfvf_def_resp_tlv *resp;
int rc;
/* clear mailbox and prep first tlv */
req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_RXQS, sizeof(*req));
- /* @@@TBD MichalK TPA */
-
- /* @@@TBD MichalK - relevant ???
- * flags VFPF_QUEUE_FLG_OV VFPF_QUEUE_FLG_VLAN
- */
req->rx_qid = rx_qid;
req->num_rxqs = 1;
req->cqe_completion = cqe_completion;
@@ -465,12 +555,18 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
CHANNEL_TLV_LIST_END,
sizeof(struct channel_list_end_tlv));
+ resp = &p_iov->pf2vf_reply->default_resp;
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
- return rc;
+ goto exit;
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_INVAL;
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_INVAL;
+ goto exit;
+ }
+
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
return rc;
}
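
Every channel request in this file now follows the same shape:
ecore_vf_pf_prep() claims the mailbox (per the mutex comment on struct
ecore_vf_iov below), all failure paths jump to one exit label, and
ecore_vf_pf_req_end() runs on every path. A minimal sketch of that shape -
CHANNEL_TLV_FOO is a placeholder, and since the body of
ecore_vf_pf_req_end() is not part of this hunk, its lock-release role here
is an assumption:

	req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_FOO, sizeof(*req));
	/* ... fill req; terminate with CHANNEL_TLV_LIST_END ... */

	rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
	if (rc)
		goto exit;

	if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
		rc = ECORE_INVAL;
		goto exit;
	}

	/* success-only work goes here */

exit:
	/* Always runs; assumed to release the mailbox taken by _prep() */
	ecore_vf_pf_req_end(p_hwfn, rc);
	return rc;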
@@ -484,15 +580,13 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
void OSAL_IOMEM **pp_doorbell)
{
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_start_queue_resp_tlv *resp;
struct vfpf_start_txq_tlv *req;
- struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
int rc;
/* clear mailbox and prep first tlv */
req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_TXQ, sizeof(*req));
- /* @@@TBD MichalK TPA */
-
req->tx_qid = tx_queue_id;
/* Tx */
@@ -500,28 +594,44 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
req->pbl_size = pbl_size;
req->hw_sb = sb;
req->sb_index = sb_index;
- req->hc_rate = 0; /* @@@TBD MichalK -> host coalescing! */
- req->flags = 0; /* @@@TBD MichalK -> flags... */
/* add list termination tlv */
ecore_add_tlv(p_hwfn, &p_iov->offset,
CHANNEL_TLV_LIST_END,
sizeof(struct channel_list_end_tlv));
+ resp = &p_iov->pf2vf_reply->queue_start;
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
- return rc;
+ goto exit;
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_INVAL;
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_INVAL;
+ goto exit;
+ }
if (pp_doorbell) {
- u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];
+ /* Modern PFs provide the actual offsets, while legacy
+ * PFs provided only the queue id.
+ */
+ if (!p_iov->b_pre_fp_hsi) {
+ *pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
+ resp->offset;
+ } else {
+ u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];
*pp_doorbell = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
}
+ DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
+ "Txq[0x%02x]: doorbell at %p [offset 0x%08x]\n",
+ tx_queue_id, *pp_doorbell, resp->offset);
+ }
+
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
+
return rc;
}
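
The Tx side mirrors the Rx producer logic; condensed, the doorbell selection
above is (db stands in for *pp_doorbell, all other names as in the patch):

	u8 OSAL_IOMEM *db;

	if (!p_iov->b_pre_fp_hsi) {
		/* Modern PF: offset arrives in the queue_start reply */
		db = (u8 OSAL_IOMEM *)p_hwfn->doorbells + resp->offset;
	} else {
		/* Legacy PF: compute from the CID in the acquire reply */
		u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];

		db = (u8 OSAL_IOMEM *)p_hwfn->doorbells +
		     DB_ADDR_VF(cid, DQ_DEMS_LEGACY);
	}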
@@ -529,17 +639,12 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
{
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
struct vfpf_stop_txqs_tlv *req;
- struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+ struct pfvf_def_resp_tlv *resp;
int rc;
/* clear mailbox and prep first tlv */
req = ecore_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_TXQS, sizeof(*req));
- /* @@@TBD MichalK TPA */
-
- /* @@@TBD MichalK - relevant ??? flags
- * VFPF_QUEUE_FLG_OV VFPF_QUEUE_FLG_VLAN
- */
req->tx_qid = tx_qid;
req->num_txqs = 1;
@@ -548,12 +653,18 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn, u16 tx_qid)
CHANNEL_TLV_LIST_END,
sizeof(struct channel_list_end_tlv));
+ resp = &p_iov->pf2vf_reply->default_resp;
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
- return rc;
+ goto exit;
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_INVAL;
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_INVAL;
+ goto exit;
+ }
+
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
return rc;
}
@@ -586,10 +697,15 @@ enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
- return rc;
+ goto exit;
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_INVAL;
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_INVAL;
+ goto exit;
+ }
+
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
return rc;
}
@@ -602,7 +718,7 @@ ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn, u8 vport_id,
{
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
struct vfpf_vport_start_tlv *req;
- struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+ struct pfvf_def_resp_tlv *resp;
int rc, i;
/* clear mailbox and prep first tlv */
@@ -625,12 +741,18 @@ ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn, u8 vport_id,
CHANNEL_TLV_LIST_END,
sizeof(struct channel_list_end_tlv));
+ resp = &p_iov->pf2vf_reply->default_resp;
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
- return rc;
+ goto exit;
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_INVAL;
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_INVAL;
+ goto exit;
+ }
+
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
return rc;
}
@@ -652,14 +774,56 @@ enum _ecore_status_t ecore_vf_pf_vport_stop(struct ecore_hwfn *p_hwfn)
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
- return rc;
+ goto exit;
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_INVAL;
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_INVAL;
+ goto exit;
+ }
+
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
return rc;
}
+static bool
+ecore_vf_handle_vp_update_is_needed(struct ecore_hwfn *p_hwfn,
+ struct ecore_sp_vport_update_params *p_data,
+ u16 tlv)
+{
+ switch (tlv) {
+ case CHANNEL_TLV_VPORT_UPDATE_ACTIVATE:
+ return !!(p_data->update_vport_active_rx_flg ||
+ p_data->update_vport_active_tx_flg);
+ case CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH:
+#ifndef ASIC_ONLY
+ /* FPGA doesn't have PVFC and so can't support tx-switching */
+ return !!(p_data->update_tx_switching_flg &&
+ !CHIP_REV_IS_FPGA(p_hwfn->p_dev));
+#else
+ return !!p_data->update_tx_switching_flg;
+#endif
+ case CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP:
+ return !!p_data->update_inner_vlan_removal_flg;
+ case CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN:
+ return !!p_data->update_accept_any_vlan_flg;
+ case CHANNEL_TLV_VPORT_UPDATE_MCAST:
+ return !!p_data->update_approx_mcast_flg;
+ case CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM:
+ return !!(p_data->accept_flags.update_rx_mode_config ||
+ p_data->accept_flags.update_tx_mode_config);
+ case CHANNEL_TLV_VPORT_UPDATE_RSS:
+ return !!p_data->rss_params;
+ case CHANNEL_TLV_VPORT_UPDATE_SGE_TPA:
+ return !!p_data->sge_tpa_params;
+ default:
+ DP_INFO(p_hwfn, "Unexpected vport-update TLV[%d] %s\n",
+ tlv, ecore_channel_tlvs_string[tlv]);
+ return false;
+ }
+}
+
static void
ecore_vf_handle_vp_update_tlvs_resp(struct ecore_hwfn *p_hwfn,
struct ecore_sp_vport_update_params *p_data)
@@ -668,96 +832,20 @@ ecore_vf_handle_vp_update_tlvs_resp(struct ecore_hwfn *p_hwfn,
struct pfvf_def_resp_tlv *p_resp;
u16 tlv;
- if (p_data->update_vport_active_rx_flg ||
- p_data->update_vport_active_tx_flg) {
- tlv = CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
- p_resp = (struct pfvf_def_resp_tlv *)
- ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
- if (p_resp && p_resp->hdr.status)
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VP update activate tlv configured\n");
- else
- DP_NOTICE(p_hwfn, true,
- "VP update activate tlv config failed\n");
- }
-
- if (p_data->update_tx_switching_flg) {
- tlv = CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
- p_resp = (struct pfvf_def_resp_tlv *)
- ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
- if (p_resp && p_resp->hdr.status)
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VP update tx switch tlv configured\n");
-#ifndef ASIC_ONLY
- else if (CHIP_REV_IS_FPGA(p_hwfn->p_dev))
- DP_NOTICE(p_hwfn, false,
- "FPGA: Skip checking whether PF"
- " replied to Tx-switching request\n");
-#endif
- else
- DP_NOTICE(p_hwfn, true,
- "VP update tx switch tlv config failed\n");
- }
-
- if (p_data->update_inner_vlan_removal_flg) {
- tlv = CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP;
- p_resp = (struct pfvf_def_resp_tlv *)
- ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
- if (p_resp && p_resp->hdr.status)
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VP update vlan strip tlv configured\n");
- else
- DP_NOTICE(p_hwfn, true,
- "VP update vlan strip tlv config failed\n");
- }
-
- if (p_data->update_approx_mcast_flg) {
- tlv = CHANNEL_TLV_VPORT_UPDATE_MCAST;
- p_resp = (struct pfvf_def_resp_tlv *)
- ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
- if (p_resp && p_resp->hdr.status)
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VP update mcast tlv configured\n");
- else
- DP_NOTICE(p_hwfn, true,
- "VP update mcast tlv config failed\n");
- }
-
- if (p_data->accept_flags.update_rx_mode_config ||
- p_data->accept_flags.update_tx_mode_config) {
- tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
- p_resp = (struct pfvf_def_resp_tlv *)
- ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
- if (p_resp && p_resp->hdr.status)
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VP update accept_mode tlv configured\n");
- else
- DP_NOTICE(p_hwfn, true,
- "VP update accept_mode tlv config failed\n");
- }
-
- if (p_data->rss_params) {
- tlv = CHANNEL_TLV_VPORT_UPDATE_RSS;
- p_resp = (struct pfvf_def_resp_tlv *)
- ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
- if (p_resp && p_resp->hdr.status)
- DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VP update rss tlv configured\n");
- else
- DP_NOTICE(p_hwfn, true,
- "VP update rss tlv config failed\n");
- }
+ for (tlv = CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
+ tlv < CHANNEL_TLV_VPORT_UPDATE_MAX;
+ tlv++) {
+ if (!ecore_vf_handle_vp_update_is_needed(p_hwfn, p_data, tlv))
+ continue;
- if (p_data->sge_tpa_params) {
- tlv = CHANNEL_TLV_VPORT_UPDATE_SGE_TPA;
p_resp = (struct pfvf_def_resp_tlv *)
ecore_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply, tlv);
if (p_resp && p_resp->hdr.status)
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
- "VP update sge tpa tlv configured\n");
- else
- DP_NOTICE(p_hwfn, true,
- "VP update sge tpa tlv config failed\n");
+ "TLV[%d] type %s Configuration %s\n",
+ tlv, ecore_channel_tlvs_string[tlv],
+ (p_resp && p_resp->hdr.status) ? "succeeded"
+ : "failed");
}
}
@@ -765,15 +853,7 @@ enum _ecore_status_t
ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
struct ecore_sp_vport_update_params *p_params)
{
- struct vfpf_vport_update_accept_any_vlan_tlv *p_any_vlan_tlv;
- struct vfpf_vport_update_accept_param_tlv *p_accept_tlv;
- struct vfpf_vport_update_tx_switch_tlv *p_tx_switch_tlv;
- struct vfpf_vport_update_mcast_bin_tlv *p_mcast_tlv;
- struct vfpf_vport_update_vlan_strip_tlv *p_vlan_tlv;
- struct vfpf_vport_update_sge_tpa_tlv *p_sge_tpa_tlv;
- struct vfpf_vport_update_activate_tlv *p_act_tlv;
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
- struct vfpf_vport_update_rss_tlv *p_rss_tlv;
struct vfpf_vport_update_tlv *req;
struct pfvf_def_resp_tlv *resp;
u8 update_rx, update_tx;
@@ -792,6 +872,8 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
/* Prepare extended tlvs */
if (update_rx || update_tx) {
+ struct vfpf_vport_update_activate_tlv *p_act_tlv;
+
size = sizeof(struct vfpf_vport_update_activate_tlv);
p_act_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
CHANNEL_TLV_VPORT_UPDATE_ACTIVATE,
@@ -810,6 +892,8 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
}
if (p_params->update_inner_vlan_removal_flg) {
+ struct vfpf_vport_update_vlan_strip_tlv *p_vlan_tlv;
+
size = sizeof(struct vfpf_vport_update_vlan_strip_tlv);
p_vlan_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP,
@@ -820,6 +904,8 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
}
if (p_params->update_tx_switching_flg) {
+ struct vfpf_vport_update_tx_switch_tlv *p_tx_switch_tlv;
+
size = sizeof(struct vfpf_vport_update_tx_switch_tlv);
tlv = CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
p_tx_switch_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -830,6 +916,8 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
}
if (p_params->update_approx_mcast_flg) {
+ struct vfpf_vport_update_mcast_bin_tlv *p_mcast_tlv;
+
size = sizeof(struct vfpf_vport_update_mcast_bin_tlv);
p_mcast_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
CHANNEL_TLV_VPORT_UPDATE_MCAST,
@@ -845,6 +933,8 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
update_tx = p_params->accept_flags.update_tx_mode_config;
if (update_rx || update_tx) {
+ struct vfpf_vport_update_accept_param_tlv *p_accept_tlv;
+
tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
size = sizeof(struct vfpf_vport_update_accept_param_tlv);
p_accept_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset, tlv, size);
@@ -865,6 +955,7 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
if (p_params->rss_params) {
struct ecore_rss_params *rss_params = p_params->rss_params;
+ struct vfpf_vport_update_rss_tlv *p_rss_tlv;
size = sizeof(struct vfpf_vport_update_rss_tlv);
p_rss_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -893,6 +984,8 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
}
if (p_params->update_accept_any_vlan_flg) {
+ struct vfpf_vport_update_accept_any_vlan_tlv *p_any_vlan_tlv;
+
size = sizeof(struct vfpf_vport_update_accept_any_vlan_tlv);
tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN;
p_any_vlan_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
@@ -905,9 +998,10 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
}
if (p_params->sge_tpa_params) {
- struct ecore_sge_tpa_params *sge_tpa_params =
- p_params->sge_tpa_params;
+ struct ecore_sge_tpa_params *sge_tpa_params;
+ struct vfpf_vport_update_sge_tpa_tlv *p_sge_tpa_tlv;
+ sge_tpa_params = p_params->sge_tpa_params;
size = sizeof(struct vfpf_vport_update_sge_tpa_tlv);
p_sge_tpa_tlv = ecore_add_tlv(p_hwfn, &p_iov->offset,
CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
@@ -953,21 +1047,26 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, resp_size);
if (rc)
- return rc;
+ goto exit;
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_INVAL;
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_INVAL;
+ goto exit;
+ }
ecore_vf_handle_vp_update_tlvs_resp(p_hwfn, p_params);
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
+
return rc;
}
enum _ecore_status_t ecore_vf_pf_reset(struct ecore_hwfn *p_hwfn)
{
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_def_resp_tlv *resp;
struct vfpf_first_tlv *req;
- struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
int rc;
/* clear mailbox and prep first tlv */
@@ -978,23 +1077,29 @@ enum _ecore_status_t ecore_vf_pf_reset(struct ecore_hwfn *p_hwfn)
CHANNEL_TLV_LIST_END,
sizeof(struct channel_list_end_tlv));
+ resp = &p_iov->pf2vf_reply->default_resp;
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
- return rc;
+ goto exit;
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_AGAIN;
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_AGAIN;
+ goto exit;
+ }
p_hwfn->b_int_enabled = 0;
- return ECORE_SUCCESS;
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
+
+ return rc;
}
enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn)
{
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_def_resp_tlv *resp;
struct vfpf_first_tlv *req;
- struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
u32 size;
int rc;
@@ -1006,16 +1111,15 @@ enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn)
CHANNEL_TLV_LIST_END,
sizeof(struct channel_list_end_tlv));
+ resp = &p_iov->pf2vf_reply->default_resp;
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc == ECORE_SUCCESS && resp->hdr.status != PFVF_STATUS_SUCCESS)
rc = ECORE_AGAIN;
- p_hwfn->b_int_enabled = 0;
+ ecore_vf_pf_req_end(p_hwfn, rc);
- /* TODO - might need to revise this for 100g */
- if (IS_LEAD_HWFN(p_hwfn))
- OSAL_MUTEX_DEALLOC(&p_iov->mutex);
+ p_hwfn->b_int_enabled = 0;
if (p_iov->vf2pf_request)
OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev,
@@ -1068,7 +1172,7 @@ enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn,
{
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
struct vfpf_ucast_filter_tlv *req;
- struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+ struct pfvf_def_resp_tlv *resp;
int rc;
/* Sanitize */
@@ -1090,14 +1194,20 @@ enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn,
CHANNEL_TLV_LIST_END,
sizeof(struct channel_list_end_tlv));
+ resp = &p_iov->pf2vf_reply->default_resp;
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
- return rc;
+ goto exit;
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_AGAIN;
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_AGAIN;
+ goto exit;
+ }
- return ECORE_SUCCESS;
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
+
+ return rc;
}
enum _ecore_status_t ecore_vf_pf_int_cleanup(struct ecore_hwfn *p_hwfn)
@@ -1117,21 +1227,40 @@ enum _ecore_status_t ecore_vf_pf_int_cleanup(struct ecore_hwfn *p_hwfn)
rc = ecore_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
if (rc)
+ goto exit;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS) {
+ rc = ECORE_INVAL;
+ goto exit;
+ }
+
+exit:
+ ecore_vf_pf_req_end(p_hwfn, rc);
+
return rc;
+}
- if (resp->hdr.status != PFVF_STATUS_SUCCESS)
- return ECORE_INVAL;
+u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
+ u16 sb_id)
+{
+ struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
- return ECORE_SUCCESS;
+ if (!p_iov) {
+ DP_NOTICE(p_hwfn, true, "vf_sriov_info isn't initialized\n");
+ return 0;
+ }
+
+ return p_iov->acquire_resp.resc.hw_sbs[sb_id].hw_sb_id;
}
enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn *p_hwfn,
u8 *p_change)
{
- struct ecore_bulletin_content shadow;
struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
- u32 crc, crc_size = sizeof(p_iov->bulletin.p_virt->crc);
+ struct ecore_bulletin_content shadow;
+ u32 crc, crc_size;
+ crc_size = sizeof(p_iov->bulletin.p_virt->crc);
*p_change = 0;
/* Need to guarantee PF is not in the middle of writing it */
@@ -1158,18 +1287,6 @@ enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
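
The bulletin is written asynchronously by the PF, so the VF snapshots it and
validates the CRC before trusting the contents; the CRC field itself (the
first crc_size bytes) is excluded from the computation. A hedged sketch of
the check - crc32() stands in for whatever CRC helper the OSAL layer
actually provides:

	struct ecore_bulletin_content shadow;
	u32 crc_size = sizeof(p_iov->bulletin.p_virt->crc);

	/* Snapshot first - the PF may be mid-update */
	OSAL_MEMCPY(&shadow, p_iov->bulletin.p_virt, p_iov->bulletin.size);

	/* The CRC covers everything after the crc field itself */
	if (crc32(0, (u8 *)&shadow + crc_size,
		  p_iov->bulletin.size - crc_size) != shadow.crc)
		return ECORE_AGAIN;	/* torn read; caller retries later */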
-u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id)
-{
- struct ecore_vf_iov *p_iov = p_hwfn->vf_iov_info;
-
- if (!p_iov) {
- DP_NOTICE(p_hwfn, true, "vf_sriov_info isn't initialized\n");
- return 0;
- }
-
- return p_iov->acquire_resp.resc.hw_sbs[sb_id].hw_sb_id;
-}
-
void __ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
struct ecore_mcp_link_params *p_params,
struct ecore_bulletin_content *p_bulletin)
@@ -1317,6 +1434,11 @@ bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid)
return true;
}
+bool ecore_vf_get_pre_fp_hsi(struct ecore_hwfn *p_hwfn)
+{
+ return p_hwfn->vf_iov_info->b_pre_fp_hsi;
+}
+
void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
u16 *fw_major, u16 *fw_minor, u16 *fw_rev,
u16 *fw_eng)
diff --git a/drivers/net/qede/base/ecore_vf.h b/drivers/net/qede/base/ecore_vf.h
index 7600710..6077d60 100644
--- a/drivers/net/qede/base/ecore_vf.h
+++ b/drivers/net/qede/base/ecore_vf.h
@@ -14,31 +14,42 @@
#include "ecore_l2_api.h"
#include "ecore_vfpf_if.h"
+/* This data is held in the ecore_hwfn structure for VFs only. */
+struct ecore_vf_iov {
+ union vfpf_tlvs *vf2pf_request;
+ dma_addr_t vf2pf_request_phys;
+ union pfvf_tlvs *pf2vf_reply;
+ dma_addr_t pf2vf_reply_phys;
+
+ /* Should be taken whenever the mailbox buffers are accessed */
+ osal_mutex_t mutex;
+ u8 *offset;
+
+ /* Bulletin Board */
+ struct ecore_bulletin bulletin;
+ struct ecore_bulletin_content bulletin_shadow;
+
+ /* we set aside a copy of the acquire response */
+ struct pfvf_acquire_resp_tlv acquire_resp;
+
+ /* In case the PF predates the fp-hsi version comparison, this has
+ * to be propagated, as it affects the fastpath.
+ */
+ bool b_pre_fp_hsi;
+};
+
#ifdef CONFIG_ECORE_SRIOV
/**
- *
* @brief hw preparation for VF
* sends ACQUIRE message
*
- * @param p_dev
- *
- * @return enum _ecore_status_t
- */
-enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_dev *p_dev);
-
-/**
- *
- * @brief VF init in hw (equivalent to hw_init in PF)
- * mark interrupts as enabled
- *
* @param p_hwfn
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_vf_pf_init(struct ecore_hwfn *p_hwfn);
+enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn);
/**
- *
* @brief VF - start the RX Queue by sending a message to the PF
*
* @param p_hwfn
@@ -51,7 +62,7 @@ enum _ecore_status_t ecore_vf_pf_init(struct ecore_hwfn *p_hwfn);
* @param cqe_pbl_addr - physical address of pbl
* @param cqe_pbl_size - pbl size
* @param pp_prod - pointer to the producer to be
- * used in fasthwfn
+ * used in fastpath
*
* @return enum _ecore_status_t
*/
@@ -66,7 +77,6 @@ enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn *p_hwfn,
void OSAL_IOMEM **pp_prod);
/**
- *
* @brief VF - start the TX queue by sending a message to the
* PF.
*
@@ -89,7 +99,6 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
void OSAL_IOMEM **pp_doorbell);
/**
- *
* @brief VF - stop the RX queue by sending a message to the PF
*
* @param p_hwfn
@@ -99,10 +108,10 @@ enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn *p_hwfn,
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
- u16 rx_qid, bool cqe_completion);
+ u16 rx_qid,
+ bool cqe_completion);
/**
- *
* @brief VF - stop the TX queue by sending a message to the PF
*
* @param p_hwfn
@@ -113,6 +122,7 @@ enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
u16 tx_qid);
+#ifndef LINUX_REMOVE
/**
* @brief VF - update the RX queue by sending a message to the
* PF
@@ -126,14 +136,15 @@ enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn *p_hwfn,
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_vf_pf_rxqs_update(struct ecore_hwfn *p_hwfn,
+enum _ecore_status_t ecore_vf_pf_rxqs_update(
+ struct ecore_hwfn *p_hwfn,
u16 rx_queue_id,
u8 num_rxqs,
u8 comp_cqe_flg,
u8 comp_event_flg);
+#endif
/**
- *
* @brief VF - send a vport update command
*
* @param p_hwfn
@@ -146,7 +157,6 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
struct ecore_sp_vport_update_params *p_params);
/**
- *
* @brief VF - send a close message to PF
*
* @param p_hwfn
@@ -156,7 +166,6 @@ ecore_vf_pf_vport_update(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_vf_pf_reset(struct ecore_hwfn *p_hwfn);
/**
- *
* @brief VF - free vf`s memories
*
* @param p_hwfn
@@ -166,7 +175,6 @@ enum _ecore_status_t ecore_vf_pf_reset(struct ecore_hwfn *p_hwfn);
enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn);
/**
- *
* @brief ecore_vf_get_igu_sb_id - Get the IGU SB ID for a given
* sb_id. For VFs igu sbs don't have to be contiguous
*
@@ -175,7 +183,9 @@ enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn *p_hwfn);
*
* @return INLINE u16
*/
-u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
+u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
+ u16 sb_id);
+
/**
* @brief ecore_vf_pf_vport_start - perform vport start for VF.
@@ -190,7 +200,8 @@ u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn, u16 sb_id);
*
* @return enum _ecore_status
*/
-enum _ecore_status_t ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn,
+enum _ecore_status_t ecore_vf_pf_vport_start(
+ struct ecore_hwfn *p_hwfn,
u8 vport_id,
u16 mtu,
u8 inner_vlan_removal,
@@ -207,9 +218,9 @@ enum _ecore_status_t ecore_vf_pf_vport_start(struct ecore_hwfn *p_hwfn,
*/
enum _ecore_status_t ecore_vf_pf_vport_stop(struct ecore_hwfn *p_hwfn);
-enum _ecore_status_t ecore_vf_pf_filter_ucast(struct ecore_hwfn *p_hwfn,
- struct ecore_filter_ucast
- *p_param);
+enum _ecore_status_t ecore_vf_pf_filter_ucast(
+ struct ecore_hwfn *p_hwfn,
+ struct ecore_filter_ucast *p_param);
void ecore_vf_pf_filter_mcast(struct ecore_hwfn *p_hwfn,
struct ecore_filter_mcast *p_filter_cmd);
@@ -256,160 +267,5 @@ void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
struct ecore_mcp_link_capabilities *p_link_caps,
struct ecore_bulletin_content *p_bulletin);
-#else
-static OSAL_INLINE enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_dev
- *p_dev)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_init(struct ecore_hwfn
- *p_hwfn)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_rxq_start(struct ecore_hwfn
- *p_hwfn,
- u8 rx_queue_id,
- u16 sb,
- u8 sb_index,
- u16 bd_max_bytes,
- dma_addr_t
- bd_chain_phys_adr,
- dma_addr_t
- cqe_pbl_addr,
- u16 cqe_pbl_size,
- void OSAL_IOMEM *
- *pp_prod)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_txq_start(struct ecore_hwfn
- *p_hwfn,
- u16 tx_queue_id,
- u16 sb,
- u8 sb_index,
- dma_addr_t
- pbl_addr,
- u16 pbl_size,
- void OSAL_IOMEM *
- *pp_doorbell)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_rxq_stop(struct ecore_hwfn
- *p_hwfn,
- u16 rx_qid,
- bool
- cqe_completion)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_txq_stop(struct ecore_hwfn
- *p_hwfn,
- u16 tx_qid)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_rxqs_update(struct
- ecore_hwfn
- * p_hwfn,
- u16 rx_queue_id,
- u8 num_rxqs,
- u8 comp_cqe_flg,
- u8
- comp_event_flg)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_vport_update(
- struct ecore_hwfn *p_hwfn,
- struct ecore_sp_vport_update_params *p_params)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_reset(struct ecore_hwfn
- *p_hwfn)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_release(struct ecore_hwfn
- *p_hwfn)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE u16 ecore_vf_get_igu_sb_id(struct ecore_hwfn *p_hwfn,
- u16 sb_id)
-{
- return 0;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_vport_start(
- struct ecore_hwfn *p_hwfn, u8 vport_id, u16 mtu,
- u8 inner_vlan_removal, enum ecore_tpa_mode tpa_mode,
- u8 max_buffers_per_cqe, u8 only_untagged)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_vport_stop(
- struct ecore_hwfn *p_hwfn)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_filter_ucast(
- struct ecore_hwfn *p_hwfn, struct ecore_filter_ucast *p_param)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE void ecore_vf_pf_filter_mcast(struct ecore_hwfn *p_hwfn,
- struct ecore_filter_mcast
- *p_filter_cmd)
-{
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_vf_pf_int_cleanup(struct
- ecore_hwfn
- * p_hwfn)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE void __ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
- struct ecore_mcp_link_params
- *p_params,
- struct ecore_bulletin_content
- *p_bulletin)
-{
-}
-
-static OSAL_INLINE void __ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
- struct ecore_mcp_link_state
- *p_link,
- struct ecore_bulletin_content
- *p_bulletin)
-{
-}
-
-static OSAL_INLINE void __ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
- struct
- ecore_mcp_link_capabilities
- * p_link_caps,
- struct ecore_bulletin_content
- *p_bulletin)
-{
-}
#endif
-
#endif /* __ECORE_VF_H__ */
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index a2b4ba5..c73244f 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -57,7 +57,8 @@ void ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
* @param p_hwfn
* @param num_rxqs - allocated RX queues
*/
-void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs);
+void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn,
+ u8 *num_rxqs);
/**
* @brief Get port mac address for VF
@@ -65,7 +66,8 @@ void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn, u8 *num_rxqs);
* @param p_hwfn
* @param port_mac - destination location for port mac
*/
-void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn, u8 *port_mac);
+void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn,
+ u8 *port_mac);
/**
* @brief Get number of VLAN filters allocated for VF by ecore
@@ -80,7 +82,7 @@ void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn,
* @brief Get number of MAC filters allocated for VF by ecore
*
* @param p_hwfn
- * @param num_mac - allocated MAC filters
+ * @param num_mac_filters - allocated MAC filters
*/
void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn,
u32 *num_mac_filters);
@@ -95,6 +97,7 @@ void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn,
*/
bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac);
+#ifndef LINUX_REMOVE
/**
* @brief Copy forced MAC address from bulletin board
*
@@ -120,8 +123,20 @@ bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn *hwfn, u8 *dst_mac,
bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid);
/**
- * @brief Set firmware version information in dev_info from VFs acquire response
- * tlv
+ * @brief Check if the VF is based on a PF whose driver is a pre-fp-hsi
+ * version; this affects the fastpath implementation of the driver.
+ *
+ * @param p_hwfn
+ *
+ * @return bool - true iff PF is pre-fp-hsi version.
+ */
+bool ecore_vf_get_pre_fp_hsi(struct ecore_hwfn *p_hwfn);
+
+#endif
+
+/**
+ * @brief Set firmware version information in dev_info from the VF's
+ * acquire response tlv
*
* @param p_hwfn
* @param fw_major
@@ -131,70 +146,8 @@ bool ecore_vf_bulletin_get_forced_vlan(struct ecore_hwfn *hwfn, u16 *dst_pvid);
*/
void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
u16 *fw_major,
- u16 *fw_minor, u16 *fw_rev, u16 *fw_eng);
-#else
-static OSAL_INLINE enum _ecore_status_t ecore_vf_read_bulletin(struct ecore_hwfn
- *p_hwfn,
- u8 *p_change)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE void ecore_vf_get_link_params(struct ecore_hwfn *p_hwfn,
- struct ecore_mcp_link_params
- *params)
-{
-}
-
-static OSAL_INLINE void ecore_vf_get_link_state(struct ecore_hwfn *p_hwfn,
- struct ecore_mcp_link_state
- *link)
-{
-}
-
-static OSAL_INLINE void ecore_vf_get_link_caps(struct ecore_hwfn *p_hwfn,
- struct
- ecore_mcp_link_capabilities
- * p_link_caps)
-{
-}
-
-static OSAL_INLINE void ecore_vf_get_num_rxqs(struct ecore_hwfn *p_hwfn,
- u8 *num_rxqs)
-{
-}
-
-static OSAL_INLINE void ecore_vf_get_port_mac(struct ecore_hwfn *p_hwfn,
- u8 *port_mac)
-{
-}
-
-static OSAL_INLINE void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn,
- u8 *num_vlan_filters)
-{
-}
-
-static OSAL_INLINE void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn,
- u32 *num_mac)
-{
-}
-
-static OSAL_INLINE bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac)
-{
- return false;
-}
-
-static OSAL_INLINE bool ecore_vf_bulletin_get_forced_mac(struct ecore_hwfn
- *hwfn, u8 *dst_mac,
- u8 *p_is_forced)
-{
- return false;
-}
-
-static OSAL_INLINE void ecore_vf_get_fw_version(struct ecore_hwfn *p_hwfn,
- u16 *fw_major, u16 *fw_minor,
- u16 *fw_rev, u16 *fw_eng)
-{
-}
+ u16 *fw_minor,
+ u16 *fw_rev,
+ u16 *fw_eng);
#endif
#endif
diff --git a/drivers/net/qede/base/ecore_vfpf_if.h b/drivers/net/qede/base/ecore_vfpf_if.h
index e98a2a7..149d092 100644
--- a/drivers/net/qede/base/ecore_vfpf_if.h
+++ b/drivers/net/qede/base/ecore_vfpf_if.h
@@ -9,11 +9,9 @@
#ifndef __ECORE_VF_PF_IF_H__
#define __ECORE_VF_PF_IF_H__
+/* @@@ TBD MichalK this should be HSI? */
#define T_ETH_INDIRECTION_TABLE_SIZE 128
-#define T_ETH_RSS_KEY_SIZE 10
-#ifndef aligned_u64
-#define aligned_u64 u64
-#endif
+#define T_ETH_RSS_KEY_SIZE 10 /* @@@ TBD this should be HSI? */
/***********************************************
*
@@ -44,29 +42,6 @@ struct hw_sb_info {
*
**/
#define TLV_BUFFER_SIZE 1024
-#define TLV_ALIGN sizeof(u64)
-#define PF_VF_BULLETIN_SIZE 512
-
-#define VFPF_RX_MASK_ACCEPT_NONE 0x00000000
-#define VFPF_RX_MASK_ACCEPT_MATCHED_UNICAST 0x00000001
-#define VFPF_RX_MASK_ACCEPT_MATCHED_MULTICAST 0x00000002
-#define VFPF_RX_MASK_ACCEPT_ALL_UNICAST 0x00000004
-#define VFPF_RX_MASK_ACCEPT_ALL_MULTICAST 0x00000008
-#define VFPF_RX_MASK_ACCEPT_BROADCAST 0x00000010
-/* TODO: #define VFPF_RX_MASK_ACCEPT_ANY_VLAN 0x00000020 */
-
-#define BULLETIN_CONTENT_SIZE (sizeof(struct pf_vf_bulletin_content))
-#define BULLETIN_ATTEMPTS 5 /* crc failures before throwing towel */
-#define BULLETIN_CRC_SEED 0
-
-enum {
- PFVF_STATUS_WAITING = 0,
- PFVF_STATUS_SUCCESS,
- PFVF_STATUS_FAILURE,
- PFVF_STATUS_NOT_SUPPORTED,
- PFVF_STATUS_NO_RESOURCE,
- PFVF_STATUS_FORCED,
-};
/* vf pf channel tlvs */
/* general tlv header (used for both vf->pf request and pf->vf response) */
@@ -81,7 +56,7 @@ struct channel_tlv {
struct vfpf_first_tlv {
struct channel_tlv tl;
u32 padding;
- aligned_u64 reply_address;
+ u64 reply_address;
};
/* header of pf->vf tlvs, carries the status of handling the request */
@@ -107,8 +82,17 @@ struct vfpf_acquire_tlv {
struct vfpf_first_tlv first_tlv;
struct vf_pf_vfdev_info {
-#define VFPF_ACQUIRE_CAP_OVERRIDE_FW_VER (1 << 0)
- aligned_u64 capabilties;
+#ifndef LINUX_REMOVE
+ /* The first bit was used on the 8.7.x and 8.8.x versions, which had
+ * different FWs but the same fastpath HSI. As this predates the
+ * fastpath versioning, the bit allowed overriding FW matching and
+ * let those versions interact.
+ */
+#endif
+/* VF pre-FP hsi version */
+#define VFPF_ACQUIRE_CAP_PRE_FP_HSI (1 << 0)
+#define VFPF_ACQUIRE_CAP_100G (1 << 1) /* VF can support 100g */
+ u64 capabilities;
u8 fw_major;
u8 fw_minor;
u8 fw_revision;
@@ -116,12 +100,14 @@ struct vfpf_acquire_tlv {
u32 driver_version;
u16 opaque_fid; /* ME register value */
u8 os_type; /* VFPF_ACQUIRE_OS_* value */
- u8 padding[5];
+ u8 eth_fp_hsi_major;
+ u8 eth_fp_hsi_minor;
+ u8 padding[3];
} vfdev_info;
struct vf_pf_resc_request resc_request;
- aligned_u64 bulletin_addr;
+ u64 bulletin_addr;
u32 bulletin_size;
u32 padding;
};
@@ -168,14 +154,27 @@ struct pfvf_acquire_resp_tlv {
u16 fw_rev;
u16 fw_eng;
- aligned_u64 capabilities;
+ u64 capabilities;
#define PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED (1 << 0)
+#define PFVF_ACQUIRE_CAP_100G (1 << 1) /* If set, 100g PF */
+/* There are old PF versions where the PF might mistakenly override the sanity
+ * mechanism [version-based] and allow a VF that can't be supported to pass
+ * the acquisition phase.
+ * To overcome this, PFs now indicate that they're past that point, and
+ * new VFs will fail to probe on older PFs that fail to do so.
+ */
+#ifndef LINUX_REMOVE
+/* Said bug was in quest/serpens; can't be certain no official release
+ * included the bug, since the fix arrived very late in those programs.
+ */
+#endif
+#define PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE (1 << 2)
u16 db_size;
u8 indices_per_sb;
u8 os_type;
- /* Thesee should match the PF's ecore_dev values */
+ /* These should match the PF's ecore_dev values */
u16 chip_rev;
u8 dev_type;
@@ -184,7 +183,14 @@ struct pfvf_acquire_resp_tlv {
struct pfvf_stats_info stats_info;
u8 port_mac[ETH_ALEN];
- u8 padding2[2];
+
+ /* It's possible the PF had to configure an older fastpath HSI
+ * [in case the VF is newer than the PF]. This is communicated back
+ * to the VF. It can also be used in case of error, due to
+ * non-matching versions, to shed light on the failure in the VF.
+ */
+ u8 major_fp_hsi;
+ u8 minor_fp_hsi;
} pfdev_info;
struct pf_vf_resc {
@@ -211,16 +217,10 @@ struct pfvf_acquire_resp_tlv {
u32 padding;
};
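
How a VF consumes these new pfdev_info fields is outside this hunk - the
real logic lives in ecore_vf_pf_acquire() and its exact conditions may
differ - but the intent can be sketched:

	/* Reject PFs old enough to wrongly wave through unsupported VFs */
	if (!(resp->pfdev_info.capabilities &
	      PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE))
		return ECORE_INVAL;

	/* The PF fell back to an older fastpath HSI for this VF; remember
	 * it, since the fastpath must behave differently [b_pre_fp_hsi].
	 */
	if (resp->pfdev_info.minor_fp_hsi < ETH_HSI_VER_MINOR)
		p_iov->b_pre_fp_hsi = true;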
-/* Init VF */
-struct vfpf_init_tlv {
- struct vfpf_first_tlv first_tlv;
- aligned_u64 stats_addr;
-
- u16 rx_mask;
- u16 tx_mask;
- u8 drop_ttl0_flg;
- u8 padding[3];
-
+struct pfvf_start_queue_resp_tlv {
+ struct pfvf_tlv hdr;
+ u32 offset; /* offset to consumer/producer of queue */
+ u8 padding[4];
};
/* Setup Queue */
@@ -228,9 +228,9 @@ struct vfpf_start_rxq_tlv {
struct vfpf_first_tlv first_tlv;
/* physical addresses */
- aligned_u64 rxq_addr;
- aligned_u64 deprecated_sge_addr;
- aligned_u64 cqe_pbl_addr;
+ u64 rxq_addr;
+ u64 deprecated_sge_addr;
+ u64 cqe_pbl_addr;
u16 cqe_pbl_size;
u16 hw_sb;
@@ -248,7 +248,7 @@ struct vfpf_start_txq_tlv {
struct vfpf_first_tlv first_tlv;
/* physical addresses */
- aligned_u64 pbl_addr;
+ u64 pbl_addr;
u16 pbl_size;
u16 stat_id;
u16 tx_qid;
@@ -282,7 +282,7 @@ struct vfpf_stop_txqs_tlv {
struct vfpf_update_rxq_tlv {
struct vfpf_first_tlv first_tlv;
- aligned_u64 deprecated_sge_addr[PFVF_MAX_QUEUES_PER_VF];
+ u64 deprecated_sge_addr[PFVF_MAX_QUEUES_PER_VF];
u16 rx_qid;
u8 num_rxqs;
@@ -311,7 +311,7 @@ struct vfpf_q_mac_vlan_filter {
struct vfpf_vport_start_tlv {
struct vfpf_first_tlv first_tlv;
- aligned_u64 sb_addr[PFVF_MAX_SBS_PER_VF];
+ u64 sb_addr[PFVF_MAX_SBS_PER_VF];
u32 tpa_mode;
u16 dep1;
@@ -351,7 +351,7 @@ struct vfpf_vport_update_mcast_bin_tlv {
struct channel_tlv tl;
u8 padding[4];
- aligned_u64 bins[8];
+ u64 bins[8];
};
struct vfpf_vport_update_accept_param_tlv {
@@ -423,7 +423,6 @@ struct tlv_buffer_size {
union vfpf_tlvs {
struct vfpf_first_tlv first_tlv;
struct vfpf_acquire_tlv acquire;
- struct vfpf_init_tlv init;
struct vfpf_start_rxq_tlv start_rxq;
struct vfpf_start_txq_tlv start_txq;
struct vfpf_stop_rxqs_tlv stop_rxqs;
@@ -432,15 +431,14 @@ union vfpf_tlvs {
struct vfpf_vport_start_tlv start_vport;
struct vfpf_vport_update_tlv vport_update;
struct vfpf_ucast_filter_tlv ucast_filter;
- struct channel_list_end_tlv list_end;
struct tlv_buffer_size tlv_buf_size;
};
union pfvf_tlvs {
struct pfvf_def_resp_tlv default_resp;
struct pfvf_acquire_resp_tlv acquire_resp;
- struct channel_list_end_tlv list_end;
struct tlv_buffer_size tlv_buf_size;
+ struct pfvf_start_queue_resp_tlv queue_start;
};
/* This is a structure which is allocated in the VF, which the PF may update
@@ -469,20 +467,19 @@ enum ecore_bulletin_bit {
};
struct ecore_bulletin_content {
- u32 crc; /* crc of structure to ensure is not in
- * mid-update
- */
+ /* crc of structure to ensure is not in mid-update */
+ u32 crc;
+
u32 version;
- aligned_u64 valid_bitmap; /* bitmap indicating wich fields
- * hold valid values
- */
+ /* bitmap indicating which fields hold valid values */
+ u64 valid_bitmap;
- u8 mac[ETH_ALEN]; /* used for MAC_ADDR or MAC_ADDR_FORCED */
+ /* used for MAC_ADDR or MAC_ADDR_FORCED */
+ u8 mac[ETH_ALEN];
- u8 default_only_untagged; /* If valid, 1 => only untagged Rx
- * if no vlan filter is configured.
- */
+ /* If valid, 1 => only untagged Rx if no vlan is configured */
+ u8 default_only_untagged;
u8 padding;
/* The following is a 'copy' of ecore_mcp_link_state,
@@ -529,7 +526,6 @@ struct ecore_bulletin {
u32 size;
};
-#ifndef print_enum
enum {
/*!!!!! Make sure to update STRINGS structure accordingly !!!!!*/
@@ -556,35 +552,15 @@ enum {
CHANNEL_TLV_VPORT_UPDATE_RSS,
CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
- CHANNEL_TLV_MAX
+ CHANNEL_TLV_MAX,
+
+ /* Required for iterating over vport-update TLVs.
+ * Will break if the vport-update TLVs are not sequential.
+ */
+ CHANNEL_TLV_VPORT_UPDATE_MAX = CHANNEL_TLV_VPORT_UPDATE_SGE_TPA + 1,
+
/*!!!!! Make sure to update STRINGS structure accordingly !!!!!*/
};
extern const char *ecore_channel_tlvs_string[];
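
The new CHANNEL_TLV_VPORT_UPDATE_MAX sentinel is what enables the
table-driven response check in ecore_vf_handle_vp_update_tlvs_resp() earlier
in this patch, and it is only valid while the vport-update TLV values stay
contiguous, as the comment warns. The iteration idiom (process() is a
placeholder):

	u16 tlv;

	for (tlv = CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
	     tlv < CHANNEL_TLV_VPORT_UPDATE_MAX; tlv++)
		process(tlv, ecore_channel_tlvs_string[tlv]);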
-#else
-print_enum(channel_tlvs, CHANNEL_TLV_NONE, /* ends tlv sequence */
- CHANNEL_TLV_ACQUIRE,
- CHANNEL_TLV_VPORT_START,
- CHANNEL_TLV_VPORT_UPDATE,
- CHANNEL_TLV_VPORT_TEARDOWN,
- CHANNEL_TLV_SETUP_RXQ,
- CHANNEL_TLV_SETUP_TXQ,
- CHANNEL_TLV_STOP_RXQS,
- CHANNEL_TLV_STOP_TXQS,
- CHANNEL_TLV_UPDATE_RXQ,
- CHANNEL_TLV_INT_CLEANUP,
- CHANNEL_TLV_CLOSE,
- CHANNEL_TLV_RELEASE,
- CHANNEL_TLV_LIST_END,
- CHANNEL_TLV_UCAST_FILTER,
- CHANNEL_TLV_VPORT_UPDATE_ACTIVATE,
- CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH,
- CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP,
- CHANNEL_TLV_VPORT_UPDATE_MCAST,
- CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM,
- CHANNEL_TLV_VPORT_UPDATE_RSS,
- CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
- CHANNEL_TLV_VPORT_UPDATE_SGE_TPA, CHANNEL_TLV_MAX);
-#endif
-
#endif /* __ECORE_VF_PF_IF_H__ */
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index cc310e3..137b374 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -11,6 +11,21 @@
/********************/
/* ETH FW CONSTANTS */
/********************/
+
+/* FP HSI version. FP HSI is compatible if (fwVer.major == drvVer.major &&
+ * fwVer.minor >= drvVer.minor)
+ */
+/* ETH FP HSI Major version */
+#define ETH_HSI_VER_MAJOR 3
+/* ETH FP HSI Minor version */
+#define ETH_HSI_VER_MINOR 10
+
+/* Alias for 8.7.x.x/8.8.x.x ETH FP HSI MINOR version. In this version driver
+ * is not required to set pkt_len field in eth_tx_1st_bd struct, and tunneling
+ * offload is not supported.
+ */
+#define ETH_HSI_VER_NO_PKT_LEN_TUNN 5
+
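
Spelled out as code, the compatibility rule from the comment above reads as
follows (helper name and parameter types are illustrative, not part of the
patch):

	static bool eth_fp_hsi_compatible(u8 fw_major, u8 fw_minor)
	{
		/* Same major is required; the FW minor may run ahead of the
		 * driver's minor, but never behind it.
		 */
		return fw_major == ETH_HSI_VER_MAJOR &&
		       fw_minor >= ETH_HSI_VER_MINOR;
	}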
#define ETH_CACHE_LINE_SIZE 64
#define ETH_RX_CQE_GAP 32
#define ETH_MAX_RAMROD_PER_CON 8
@@ -21,19 +36,40 @@
#define ETH_TX_MIN_BDS_PER_NON_LSO_PKT 1
#define ETH_TX_MAX_BDS_PER_NON_LSO_PACKET 18
+#define ETH_TX_MAX_BDS_PER_LSO_PACKET 255
#define ETH_TX_MAX_LSO_HDR_NBD 4
#define ETH_TX_MIN_BDS_PER_LSO_PKT 3
#define ETH_TX_MIN_BDS_PER_TUNN_IPV6_WITH_EXT_PKT 3
#define ETH_TX_MIN_BDS_PER_IPV6_WITH_EXT_PKT 2
#define ETH_TX_MIN_BDS_PER_PKT_W_LOOPBACK_MODE 2
-#define ETH_TX_MAX_NON_LSO_PKT_LEN (9700 - (4 + 12 + 8))
+/* (QM_REG_TASKBYTECRDCOST_0, QM_VOQ_BYTE_CRD_TASK_COST) -
+ * (VLAN-TAG + CRC + IPG + PREAMBLE)
+ */
+#define ETH_TX_MAX_NON_LSO_PKT_LEN (9700 - (4 + 4 + 12 + 8))
#define ETH_TX_MAX_LSO_HDR_BYTES 510
-#define ETH_TX_LSO_WINDOW_BDS_NUM 18
+/* Number of BDs to consider for LSO sliding window restriction is
+ * (ETH_TX_LSO_WINDOW_BDS_NUM - hdr_nbd)
+ */
+#define ETH_TX_LSO_WINDOW_BDS_NUM (18 - 1)
+/* Minimum data length (in bytes) in LSO sliding window */
#define ETH_TX_LSO_WINDOW_MIN_LEN 9700
-#define ETH_TX_MAX_LSO_PAYLOAD_LEN 0xFFFF
-
+/* Maximum LSO packet TCP payload length (in bytes) */
+#define ETH_TX_MAX_LSO_PAYLOAD_LEN 0xFE000
+/* Number of same-as-last resources in tx switching */
+#define ETH_TX_NUM_SAME_AS_LAST_ENTRIES 320
+/* Value for a connection for which same as last feature is disabled */
+#define ETH_TX_INACTIVE_SAME_AS_LAST 0xFFFF
+
+/* Maximum number of statistics counters */
#define ETH_NUM_STATISTIC_COUNTERS MAX_NUM_VPORTS
-
+/* Maximum number of statistics counters when doubled VF zone used */
+#define ETH_NUM_STATISTIC_COUNTERS_DOUBLE_VF_ZONE \
+ (ETH_NUM_STATISTIC_COUNTERS - MAX_NUM_VFS / 2)
+/* Maximum number of statistics counters when quad VF zone used */
+#define ETH_NUM_STATISTIC_COUNTERS_QUAD_VF_ZONE \
+ (ETH_NUM_STATISTIC_COUNTERS - 3 * MAX_NUM_VFS / 4)
+
+/* Maximum number of buffers, used for RX packet placement */
#define ETH_RX_MAX_BUFF_PER_PKT 5
/* num of MAC/VLAN filters */
@@ -62,6 +98,12 @@
#define ETH_TPA_CQE_CONT_LEN_LIST_SIZE 6
#define ETH_TPA_CQE_END_LEN_LIST_SIZE 4
+/* Control frame check constants */
+/* Number of etherType values configured by driver for control frame check */
+#define ETH_CTL_FRAME_ETH_TYPE_NUM 4
+
+
+
/*
* Destination port mode
*/
@@ -108,19 +150,21 @@ struct eth_tx_1st_bd_flags {
* The parsing information data for the first tx bd of a given packet.
*/
struct eth_tx_data_1st_bd {
- __le16 vlan /* VLAN to insert to packet (if needed). */;
- /* Number of BDs in packet. Should be at least 2 in non-LSO
- * packet and at least 3 in LSO (or Tunnel with IPv6+ext) packet.
+ __le16 vlan /* VLAN tag to insert to packet (if needed). */;
+/* Number of BDs in packet. Should be at least 2 in non-LSO packet and at least
+ * 3 in LSO (or Tunnel with IPv6+ext) packet.
*/
u8 nbds;
struct eth_tx_1st_bd_flags bd_flags;
__le16 bitfields;
-#define ETH_TX_DATA_1ST_BD_TUNN_CFG_OVERRIDE_MASK 0x1
-#define ETH_TX_DATA_1ST_BD_TUNN_CFG_OVERRIDE_SHIFT 0
+/* Indicates a tunneled packet. Must be set for encapsulated packet. */
+#define ETH_TX_DATA_1ST_BD_TUNN_FLAG_MASK 0x1
+#define ETH_TX_DATA_1ST_BD_TUNN_FLAG_SHIFT 0
#define ETH_TX_DATA_1ST_BD_RESERVED0_MASK 0x1
#define ETH_TX_DATA_1ST_BD_RESERVED0_SHIFT 1
-#define ETH_TX_DATA_1ST_BD_FW_USE_ONLY_MASK 0x3FFF
-#define ETH_TX_DATA_1ST_BD_FW_USE_ONLY_SHIFT 2
+/* Total packet length - must be filled for non-LSO packets. */
+#define ETH_TX_DATA_1ST_BD_PKT_LEN_MASK 0x3FFF
+#define ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT 2
};
/*
@@ -171,25 +215,51 @@ struct eth_edpm_fw_data {
* FW debug.
*/
struct eth_fast_path_cqe_fw_debug {
- u8 reserved0 /* FW reserved. */;
- u8 reserved1 /* FW reserved. */;
__le16 reserved2 /* FW reserved. */;
};
-struct tunnel_parsing_flags {
+
+/*
+ * tunneling parsing flags
+ */
+struct eth_tunnel_parsing_flags {
u8 flags;
-#define TUNNEL_PARSING_FLAGS_TYPE_MASK 0x3
-#define TUNNEL_PARSING_FLAGS_TYPE_SHIFT 0
-#define TUNNEL_PARSING_FLAGS_TENNANT_ID_EXIST_MASK 0x1
-#define TUNNEL_PARSING_FLAGS_TENNANT_ID_EXIST_SHIFT 2
-#define TUNNEL_PARSING_FLAGS_NEXT_PROTOCOL_MASK 0x3
-#define TUNNEL_PARSING_FLAGS_NEXT_PROTOCOL_SHIFT 3
-#define TUNNEL_PARSING_FLAGS_FIRSTHDRIPMATCH_MASK 0x1
-#define TUNNEL_PARSING_FLAGS_FIRSTHDRIPMATCH_SHIFT 5
-#define TUNNEL_PARSING_FLAGS_IPV4_FRAGMENT_MASK 0x1
-#define TUNNEL_PARSING_FLAGS_IPV4_FRAGMENT_SHIFT 6
-#define TUNNEL_PARSING_FLAGS_IPV4_OPTIONS_MASK 0x1
-#define TUNNEL_PARSING_FLAGS_IPV4_OPTIONS_SHIFT 7
+/* 0 - no tunneling, 1 - GENEVE, 2 - GRE, 3 - VXLAN
+ * (use enum eth_rx_tunn_type)
+ */
+#define ETH_TUNNEL_PARSING_FLAGS_TYPE_MASK 0x3
+#define ETH_TUNNEL_PARSING_FLAGS_TYPE_SHIFT 0
+/* If it's not an encapsulated packet, put 0x0. If it's an encapsulated
+ * packet but the tenant-id doesn't exist, put 0x0. Else put 0x1.
+ */
+#define ETH_TUNNEL_PARSING_FLAGS_TENNANT_ID_EXIST_MASK 0x1
+#define ETH_TUNNEL_PARSING_FLAGS_TENNANT_ID_EXIST_SHIFT 2
+/* Type of the next header above the tunneling: 0 - unknown, 1 - L2, 2 - IPv4,
+ * 3 - IPv6 (use enum tunnel_next_protocol)
+ */
+#define ETH_TUNNEL_PARSING_FLAGS_NEXT_PROTOCOL_MASK 0x3
+#define ETH_TUNNEL_PARSING_FLAGS_NEXT_PROTOCOL_SHIFT 3
+/* The result of comparing the DA-ip of the tunnel header. */
+#define ETH_TUNNEL_PARSING_FLAGS_FIRSTHDRIPMATCH_MASK 0x1
+#define ETH_TUNNEL_PARSING_FLAGS_FIRSTHDRIPMATCH_SHIFT 5
+#define ETH_TUNNEL_PARSING_FLAGS_IPV4_FRAGMENT_MASK 0x1
+#define ETH_TUNNEL_PARSING_FLAGS_IPV4_FRAGMENT_SHIFT 6
+#define ETH_TUNNEL_PARSING_FLAGS_IPV4_OPTIONS_MASK 0x1
+#define ETH_TUNNEL_PARSING_FLAGS_IPV4_OPTIONS_SHIFT 7
+};
+
+/*
+ * PMD flow control bits
+ */
+struct eth_pmd_flow_flags {
+ u8 flags;
+#define ETH_PMD_FLOW_FLAGS_VALID_MASK 0x1 /* CQE valid bit */
+#define ETH_PMD_FLOW_FLAGS_VALID_SHIFT 0
+#define ETH_PMD_FLOW_FLAGS_TOGGLE_MASK 0x1 /* CQE ring toggle bit */
+#define ETH_PMD_FLOW_FLAGS_TOGGLE_SHIFT 1
+#define ETH_PMD_FLOW_FLAGS_RESERVED_MASK 0x3F
+#define ETH_PMD_FLOW_FLAGS_RESERVED_SHIFT 2
};
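
These two bits carry the PMD's CQE ownership handshake: a completion is
consumed only when the valid bit is set and the toggle bit matches the
ring's current phase (presumably flipping on each ring wrap). A sketch of
extracting them with the masks above - helper names are illustrative:

	static inline u8 eth_pmd_cqe_valid(u8 flags)
	{
		return (flags >> ETH_PMD_FLOW_FLAGS_VALID_SHIFT) &
		       ETH_PMD_FLOW_FLAGS_VALID_MASK;
	}

	static inline u8 eth_pmd_cqe_toggle(u8 flags)
	{
		return (flags >> ETH_PMD_FLOW_FLAGS_TOGGLE_SHIFT) &
		       ETH_PMD_FLOW_FLAGS_TOGGLE_MASK;
	}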
/*
@@ -205,25 +275,19 @@ struct eth_fast_path_rx_reg_cqe {
#define ETH_FAST_PATH_RX_REG_CQE_RESERVED0_MASK 0x1
#define ETH_FAST_PATH_RX_REG_CQE_RESERVED0_SHIFT 7
__le16 pkt_len /* Total packet length (from the parser) */;
- struct parsing_and_err_flags pars_flags
- /* Parsing and error flags from the parser */;
+/* Parsing and error flags from the parser */
+ struct parsing_and_err_flags pars_flags;
__le16 vlan_tag /* 802.1q VLAN tag */;
__le32 rss_hash /* RSS hash result */;
__le16 len_on_first_bd /* Number of bytes placed on first BD */;
u8 placement_offset /* Offset of placement from BD start */;
- struct tunnel_parsing_flags tunnel_pars_flags /* Tunnel Parsing Flags */
- ;
+/* Tunnel Parsing Flags */
+ struct eth_tunnel_parsing_flags tunnel_pars_flags;
u8 bd_num /* Number of BDs, used for packet */;
- u8 reserved[7];
+ u8 reserved[9];
struct eth_fast_path_cqe_fw_debug fw_debug /* FW reserved. */;
u8 reserved1[3];
- u8 flags;
-#define ETH_FAST_PATH_RX_REG_CQE_VALID_MASK 0x1
-#define ETH_FAST_PATH_RX_REG_CQE_VALID_SHIFT 0
-#define ETH_FAST_PATH_RX_REG_CQE_VALID_TOGGLE_MASK 0x1
-#define ETH_FAST_PATH_RX_REG_CQE_VALID_TOGGLE_SHIFT 1
-#define ETH_FAST_PATH_RX_REG_CQE_RESERVED2_MASK 0x3F
-#define ETH_FAST_PATH_RX_REG_CQE_RESERVED2_SHIFT 2
+ struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
};
/*
@@ -234,9 +298,11 @@ struct eth_fast_path_rx_tpa_cont_cqe {
u8 tpa_agg_index /* TPA aggregation index */;
__le16 len_list[ETH_TPA_CQE_CONT_LEN_LIST_SIZE]
/* List of the segment sizes */;
- u8 reserved[5];
+ u8 reserved;
u8 reserved1 /* FW reserved. */;
__le16 reserved2[ETH_TPA_CQE_CONT_LEN_LIST_SIZE] /* FW reserved. */;
+ u8 reserved3[3];
+ struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
};
/*
@@ -253,9 +319,10 @@ struct eth_fast_path_rx_tpa_end_cqe {
__le32 ts_delta /* TCP timestamp delta */;
__le16 len_list[ETH_TPA_CQE_END_LEN_LIST_SIZE]
/* List of the segment sizes */;
- u8 reserved1[3];
- u8 reserved2 /* FW reserved. */;
__le16 reserved3[ETH_TPA_CQE_END_LEN_LIST_SIZE] /* FW reserved. */;
+ __le16 reserved1;
+ u8 reserved2 /* FW reserved. */;
+ struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
};
/*
@@ -277,13 +344,15 @@ struct eth_fast_path_rx_tpa_start_cqe {
__le32 rss_hash /* RSS hash result */;
__le16 len_on_first_bd /* Number of bytes placed on first BD */;
u8 placement_offset /* Offset of placement from BD start */;
- struct tunnel_parsing_flags tunnel_pars_flags /* Tunnel Parsing Flags */
- ;
+/* Tunnel Parsing Flags */
+ struct eth_tunnel_parsing_flags tunnel_pars_flags;
u8 tpa_agg_index /* TPA aggregation index */;
u8 header_len /* Packet L2+L3+L4 header length */;
__le16 ext_bd_len_list[ETH_TPA_CQE_START_LEN_LIST_SIZE]
/* Additional BDs length list. */;
struct eth_fast_path_cqe_fw_debug fw_debug /* FW reserved. */;
+ u8 reserved;
+ struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
};
/*
@@ -315,13 +384,7 @@ struct eth_slow_path_rx_cqe {
u8 reserved[25];
__le16 echo;
u8 reserved1;
- u8 flags;
-#define ETH_SLOW_PATH_RX_CQE_VALID_MASK 0x1
-#define ETH_SLOW_PATH_RX_CQE_VALID_SHIFT 0
-#define ETH_SLOW_PATH_RX_CQE_VALID_TOGGLE_MASK 0x1
-#define ETH_SLOW_PATH_RX_CQE_VALID_TOGGLE_SHIFT 1
-#define ETH_SLOW_PATH_RX_CQE_RESERVED2_MASK 0x3F
-#define ETH_SLOW_PATH_RX_CQE_RESERVED2_SHIFT 2
+ struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
};
/*
@@ -361,6 +424,17 @@ struct eth_rx_pmd_cqe {
};
/*
+ * Eth RX Tunnel Type
+ */
+enum eth_rx_tunn_type {
+ ETH_RX_NO_TUNN /* No Tunnel. */,
+ ETH_RX_TUNN_GENEVE /* GENEVE Tunnel. */,
+ ETH_RX_TUNN_GRE /* GRE Tunnel. */,
+ ETH_RX_TUNN_VXLAN /* VXLAN Tunnel. */,
+ MAX_ETH_RX_TUNN_TYPE
+};
+
+/*
* Aggregation end reason.
*/
enum eth_tpa_end_reason {
@@ -377,16 +451,7 @@ enum eth_tpa_end_reason {
MAX_ETH_TPA_END_REASON
};
-/*
- * Eth Tunnel Type
- */
-enum eth_tunn_type {
- ETH_TUNN_GENEVE /* GENEVE Tunnel. */,
- ETH_TUNN_TTAG /* T-Tag Tunnel. */,
- ETH_TUNN_GRE /* GRE Tunnel. */,
- ETH_TUNN_VXLAN /* VXLAN Tunnel. */,
- MAX_ETH_TUNN_TYPE
-};
+
/*
* The first tx bd of a given packet
@@ -466,20 +531,24 @@ union eth_tx_bd_types {
};
/*
- * Mstorm Queue Zone
+ * Eth Tx Tunnel Type
*/
-struct mstorm_eth_queue_zone {
- struct eth_rx_prod_data rx_producers;
- __le32 reserved[2];
+enum eth_tx_tunn_type {
+ ETH_TX_TUNN_GENEVE /* GENEVE Tunnel. */,
+ ETH_TX_TUNN_TTAG /* T-Tag Tunnel. */,
+ ETH_TX_TUNN_GRE /* GRE Tunnel. */,
+ ETH_TX_TUNN_VXLAN /* VXLAN Tunnel. */,
+ MAX_ETH_TX_TUNN_TYPE
};
+
/*
* Xstorm Queue Zone
*/
-struct ystorm_eth_queue_zone {
+struct xstorm_eth_queue_zone {
struct coalescing_timeset int_coalescing_timeset
/* Tx interrupt coalescing TimeSet */;
- __le16 reserved[3];
+ u8 reserved[7];
};
/*
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index bdd51d5..2fc01d0 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -42,15 +42,22 @@ typedef u32 offsize_t; /* In DWORDS !!! */
#define SECTION_SIZE(_offsize) \
(((_offsize & OFFSIZE_SIZE_MASK) >> OFFSIZE_SIZE_SHIFT) << 2)
+/* SECTION_ADDR returns the GRC addr of a section, given offsize and index
+ * within section
+ */
#define SECTION_ADDR(_offsize, idx) \
-(MCP_REG_SCRATCH + SECTION_OFFSET(_offsize) + (SECTION_SIZE(_offsize) * idx))
+ (MCP_REG_SCRATCH + \
+ SECTION_OFFSET(_offsize) + (SECTION_SIZE(_offsize) * idx))
+/* SECTION_OFFSIZE_ADDR returns the GRC addr of the offsize address. Use
+ * offsetof, since the OFFSETUP collides with the firmware definition
+ */
#define SECTION_OFFSIZE_ADDR(_pub_base, _section) \
(_pub_base + offsetof(struct mcp_public_data, sections[_section]))
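
A worked example of the two macros, assuming the offset occupies the low 16
bits of the offsize word (mirroring the SECTION_SIZE definition above); the
concrete value is illustrative only:

	/* offsize = 0x00200040:
	 *   size   = 0x0020 dwords -> SECTION_SIZE   = 0x20 << 2 = 0x80
	 *   offset = 0x0040 dwords -> SECTION_OFFSET = 0x40 << 2 = 0x100
	 * hence SECTION_ADDR(0x00200040, 1)
	 *   = MCP_REG_SCRATCH + 0x100 + (0x80 * 1)
	 *   = MCP_REG_SCRATCH + 0x180
	 */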
-
/* PHY configuration */
struct pmm_phy_cfg {
- u32 speed; /* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
+/* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
+ u32 speed;
#define PMM_SPEED_AUTONEG 0
#define PMM_SPEED_SMARTLINQ 0x8
@@ -93,12 +100,15 @@ struct pmm_stats {
u64 r255; /* 0x02 (Offset 0x10 ) RX 128 to 255 byte frame counter*/
u64 r511; /* 0x03 (Offset 0x18 ) RX 256 to 511 byte frame counter*/
u64 r1023; /* 0x04 (Offset 0x20 ) RX 512 to 1023 byte frame counter*/
- u64 r1518; /* 0x05 (Offset 0x28 ) RX 1024 to 1518 byte frame counter */
- u64 r1522; /* 0x06 (Offset 0x30 ) RX 1519 to 1522 byte VLAN-tagged */
+/* 0x05 (Offset 0x28 ) RX 1024 to 1518 byte frame counter */
+ u64 r1518;
+/* 0x06 (Offset 0x30 ) RX 1519 to 1522 byte VLAN-tagged frame counter */
+ u64 r1522;
u64 r2047; /* 0x07 (Offset 0x38 ) RX 1519 to 2047 byte frame counter*/
u64 r4095; /* 0x08 (Offset 0x40 ) RX 2048 to 4095 byte frame counter*/
u64 r9216; /* 0x09 (Offset 0x48 ) RX 4096 to 9216 byte frame counter*/
- u64 r16383; /* 0x0A (Offset 0x50 ) RX 9217 to 16383 byte frame ctr */
+/* 0x0A (Offset 0x50 ) RX 9217 to 16383 byte frame counter */
+ u64 r16383;
u64 rfcs; /* 0x0F (Offset 0x58 ) RX FCS error frame counter*/
u64 rxcf; /* 0x10 (Offset 0x60 ) RX control frame counter*/
u64 rxpf; /* 0x11 (Offset 0x68 ) RX pause frame counter*/
@@ -114,25 +124,36 @@ struct pmm_stats {
u64 t255; /* 0x42 (Offset 0xb8 ) TX 128 to 255 byte frame counter*/
u64 t511; /* 0x43 (Offset 0xc0 ) TX 256 to 511 byte frame counter*/
u64 t1023; /* 0x44 (Offset 0xc8 ) TX 512 to 1023 byte frame counter*/
- u64 t1518; /* 0x45 (Offset 0xd0 ) TX 1024 to 1518 byte frame counter */
- u64 t2047; /* 0x47 (Offset 0xd8 ) TX 1519 to 2047 byte frame counter */
- u64 t4095; /* 0x48 (Offset 0xe0 ) TX 2048 to 4095 byte frame counter */
- u64 t9216; /* 0x49 (Offset 0xe8 ) TX 4096 to 9216 byte frame counter */
- u64 t16383; /* 0x4A (Offset 0xf0 ) TX 9217 to 16383 byte frame ctr */
+/* 0x45 (Offset 0xd0 ) TX 1024 to 1518 byte frame counter */
+ u64 t1518;
+/* 0x47 (Offset 0xd8 ) TX 1519 to 2047 byte frame counter */
+ u64 t2047;
+/* 0x48 (Offset 0xe0 ) TX 2048 to 4095 byte frame counter */
+ u64 t4095;
+/* 0x49 (Offset 0xe8 ) TX 4096 to 9216 byte frame counter */
+ u64 t9216;
+/* 0x4A (Offset 0xf0 ) TX 9217 to 16383 byte frame counter */
+ u64 t16383;
u64 txpf; /* 0x50 (Offset 0xf8 ) TX pause frame counter */
u64 txpp; /* 0x51 (Offset 0x100) TX PFC frame counter */
- u64 tlpiec; /* 0x6C (Offset 0x108) Transmit Logical Type LLFC */
+/* 0x6C (Offset 0x108) Transmit Logical Type LLFC message counter */
+ u64 tlpiec;
u64 tncl; /* 0x6E (Offset 0x110) Transmit Total Collision Counter */
u64 rbyte; /* 0x3d (Offset 0x118) RX byte counter */
u64 rxuca; /* 0x0c (Offset 0x120) RX UC frame counter */
u64 rxmca; /* 0x0d (Offset 0x128) RX MC frame counter */
u64 rxbca; /* 0x0e (Offset 0x130) RX BC frame counter */
- u64 rxpok; /* 0x22 (Offset 0x138) RX good frame */
+/* 0x22 (Offset 0x138) RX good frame (good CRC, not oversized, no ERROR) */
+ u64 rxpok;
u64 tbyte; /* 0x6f (Offset 0x140) TX byte counter */
u64 txuca; /* 0x4d (Offset 0x148) TX UC frame counter */
u64 txmca; /* 0x4e (Offset 0x150) TX MC frame counter */
u64 txbca; /* 0x4f (Offset 0x158) TX BC frame counter */
u64 txcf; /* 0x54 (Offset 0x160) TX control frame counter */
+/* HSI - Cannot add more stats to this struct. If more are needed, a new
+ * struct must be opened.
+ */
+
};
struct brb_stats {
@@ -145,25 +166,26 @@ struct port_stats {
struct pmm_stats pmm;
};
-/*-----+-----------------------------------------------------------------------
- * Chip | Number and | Ports in| Ports in|2 PHY-s |# of ports|# of engines
- * | rate of physical | team #1 | team #2 |are used|per path | (paths)
- * | ports | | | | |
- *======+==================+=========+=========+========+======================
- * BB | 1x100G | This is special mode, where there are 2 HW func
+/*----+------------------------------------------------------------------------
+ * C | Number and | Ports in| Ports in|2 PHY-s |# of ports|# of engines
+ * h | rate of | team #1 | team #2 |are used|per path | (paths)
+ * i | physical | | | | | enabled
+ * p | ports | | | | |
+ *====+============+=========+=========+========+==========+===================
+ * BB | 1x100G | This is special mode, where there are actually 2 HW func
* BB | 2x10/20Gbps| 0,1 | NA | No | 1 | 1
* BB | 2x40 Gbps | 0,1 | NA | Yes | 1 | 1
* BB | 2x50Gbps | 0,1 | NA | No | 1 | 1
- * BB | 4x10Gbps | 0,2 | 1,3 | No | 1/2 | 1,2
- * BB | 4x10Gbps | 0,1 | 2,3 | No | 1/2 | 1,2
- * BB | 4x10Gbps | 0,3 | 1,2 | No | 1/2 | 1,2
+ * BB | 4x10Gbps | 0,2 | 1,3 | No | 1/2 | 1,2 (2 is optional)
+ * BB | 4x10Gbps | 0,1 | 2,3 | No | 1/2 | 1,2 (2 is optional)
+ * BB | 4x10Gbps | 0,3 | 1,2 | No | 1/2 | 1,2 (2 is optional)
* BB | 4x10Gbps | 0,1,2,3 | NA | No | 1 | 1
* AH | 2x10/20Gbps| 0,1 | NA | NA | 1 | NA
* AH | 4x10Gbps | 0,1 | 2,3 | NA | 2 | NA
* AH | 4x10Gbps | 0,2 | 1,3 | NA | 2 | NA
* AH | 4x10Gbps | 0,3 | 1,2 | NA | 2 | NA
* AH | 4x10Gbps | 0,1,2,3 | NA | NA | 1 | NA
- *======+==================+=========+=========+========+=======================
+ *====+============+=========+=========+========+==========+===================
*/
#define CMT_TEAM0 0
@@ -225,7 +247,6 @@ struct lldp_status_params_s {
u32 status; /* TBD */
/* Holds remote Chassis ID TLV header, subtype and 9B of payload.
*/
- u32 local_port_id[LLDP_PORT_ID_STAT_LEN];
u32 peer_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
/* Holds remote Port ID TLV header, subtype and 9B of payload.
*/
@@ -245,10 +266,26 @@ struct dcbx_ets_feature {
#define DCBX_ETS_CBS_SHIFT 3
#define DCBX_ETS_MAX_TCS_MASK 0x000000f0
#define DCBX_ETS_MAX_TCS_SHIFT 4
+#define DCBX_ISCSI_OOO_TC_MASK 0x00000f00
+#define DCBX_ISCSI_OOO_TC_SHIFT 8
+/* Entries in the tc table are organized such that the leftmost entry is
+ * prio 0 and the rightmost is prio 7
+ */
+
u32 pri_tc_tbl[1];
+#define DCBX_ISCSI_OOO_TC (4)
+
+#define NIG_ETS_ISCSI_OOO_CLIENT_OFFSET (DCBX_ISCSI_OOO_TC + 1)
#define DCBX_CEE_STRICT_PRIORITY 0xf
-#define DCBX_CEE_STRICT_PRIORITY_TC 0x7
+/* Entries in the tc table are organized such that the leftmost entry is
+ * prio 0 and the rightmost is prio 7
+ */
+
u32 tc_bw_tbl[2];
+/* Entries in the tc table are organized such that the leftmost entry is
+ * prio 0 and the rightmost is prio 7
+ */
+
u32 tc_tsa_tbl[2];
#define DCBX_ETS_TSA_STRICT 0
#define DCBX_ETS_TSA_CBS 1
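(As a reading aid: given the "leftmost is prio 0" layout documented above, a
driver-side lookup of the TC assigned to a priority could look like the
sketch below. The 4-bits-per-priority nibble packing and the helper name are
assumptions, not part of the HSI.)

static u8 dcbx_get_prio_tc(const struct dcbx_ets_feature *ets, u8 prio)
{
	/* prio 0 occupies the most significant nibble ("leftmost") */
	u32 shift = (7 - prio) * 4;

	return (u8)((ets->pri_tc_tbl[0] >> shift) & 0xf);
}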
@@ -271,6 +308,14 @@ struct dcbx_app_priority_entry {
#define DCBX_APP_SF_SHIFT 8
#define DCBX_APP_SF_ETHTYPE 0
#define DCBX_APP_SF_PORT 1
+#define DCBX_APP_SF_IEEE_MASK 0x0000f000
+#define DCBX_APP_SF_IEEE_SHIFT 12
+#define DCBX_APP_SF_IEEE_RESERVED 0
+#define DCBX_APP_SF_IEEE_ETHTYPE 1
+#define DCBX_APP_SF_IEEE_TCP_PORT 2
+#define DCBX_APP_SF_IEEE_UDP_PORT 3
+#define DCBX_APP_SF_IEEE_TCP_UDP_PORT 4
+
#define DCBX_APP_PROTOCOL_ID_MASK 0xffff0000
#define DCBX_APP_PROTOCOL_ID_SHIFT 16
};
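(For readers new to the MASK/SHIFT convention used throughout these headers,
a minimal sketch of extracting the new IEEE selection field and the protocol
ID from an app-priority entry word; the function names are illustrative.)

static u32 dcbx_app_sf_ieee(u32 entry)
{
	return (entry & DCBX_APP_SF_IEEE_MASK) >> DCBX_APP_SF_IEEE_SHIFT;
}

static u32 dcbx_app_proto_id(u32 entry)
{
	return (entry & DCBX_APP_PROTOCOL_ID_MASK) >>
	       DCBX_APP_PROTOCOL_ID_SHIFT;
}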
@@ -287,7 +332,7 @@ struct dcbx_app_priority_feature {
/* Not in use
* #define DCBX_APP_DEFAULT_PRI_MASK 0x00000f00
* #define DCBX_APP_DEFAULT_PRI_SHIFT 8
- */
+ */
#define DCBX_APP_MAX_TCS_MASK 0x0000f000
#define DCBX_APP_MAX_TCS_SHIFT 12
#define DCBX_APP_NUM_ENTRIES_MASK 0x00ff0000
@@ -331,11 +376,12 @@ struct dcbx_features {
struct dcbx_local_params {
u32 config;
-#define DCBX_CONFIG_VERSION_MASK 0x00000003
+#define DCBX_CONFIG_VERSION_MASK 0x00000007
#define DCBX_CONFIG_VERSION_SHIFT 0
#define DCBX_CONFIG_VERSION_DISABLED 0
#define DCBX_CONFIG_VERSION_IEEE 1
#define DCBX_CONFIG_VERSION_CEE 2
+#define DCBX_CONFIG_VERSION_STATIC 4
u32 flags;
struct dcbx_features features;
@@ -345,12 +391,13 @@ struct dcbx_mib {
u32 prefix_seq_num;
u32 flags;
/*
- * #define DCBX_CONFIG_VERSION_MASK 0x00000003
+ * #define DCBX_CONFIG_VERSION_MASK 0x00000007
* #define DCBX_CONFIG_VERSION_SHIFT 0
* #define DCBX_CONFIG_VERSION_DISABLED 0
* #define DCBX_CONFIG_VERSION_IEEE 1
* #define DCBX_CONFIG_VERSION_CEE 2
- */
+ * #define DCBX_CONFIG_VERSION_STATIC 4
+ */
struct dcbx_features features;
u32 suffix_seq_num;
};
@@ -361,6 +408,14 @@ struct lldp_system_tlvs_buffer_s {
u32 data[MAX_SYSTEM_LLDP_TLV_DATA];
};
+struct dcb_dscp_map {
+ u32 flags;
+#define DCB_DSCP_ENABLE_MASK 0x1
+#define DCB_DSCP_ENABLE_SHIFT 0
+#define DCB_DSCP_ENABLE 1
+ u32 dscp_pri_map[8];
+};
+
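(A sketch of a DSCP-to-priority lookup over dscp_pri_map[], assuming 4 bits
per DSCP codepoint, i.e. eight codepoints per u32 and 64 overall; the packing
order is an assumption and should be checked against the MFW definition.)

static u8 dcb_dscp_to_pri(const struct dcb_dscp_map *map, u8 dscp)
{
	u32 word = map->dscp_pri_map[dscp >> 3]; /* 8 entries per u32 */

	return (u8)((word >> ((dscp & 7) * 4)) & 0xf);
}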
/**************************************/
/* */
/* P U B L I C G L O B A L */
@@ -368,7 +423,8 @@ struct lldp_system_tlvs_buffer_s {
/**************************************/
struct public_global {
u32 max_path; /* 32bit is wasteful, but this will be used often */
- u32 max_ports; /* (Global) 32bit is wasty, this will be used often */
+/* (Global) 32bit is wasteful, but this will be used often */
+ u32 max_ports;
#define MODE_1P 1 /* TBD - NEED TO THINK OF A BETTER NAME */
#define MODE_2P 2
#define MODE_3P 3
@@ -380,6 +436,10 @@ struct public_global {
u32 mfw_ver;
u32 running_bundle_id;
s32 external_temperature;
+ u32 mdump_reason;
+#define MDUMP_REASON_INTERNAL_ERROR (1 << 0)
+#define MDUMP_REASON_EXTERNAL_TRIGGER (1 << 1)
+#define MDUMP_REASON_DUMP_AGED (1 << 2)
};
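(mdump_reason is a plain bitmask, so a driver can test it directly; a sketch,
with the helper name and the bool typedef assumed from the OSAL layer.)

static bool mdump_triggered_internally(const struct public_global *glob)
{
	return !!(glob->mdump_reason & MDUMP_REASON_INTERNAL_ERROR);
}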
/**************************************/
@@ -426,8 +486,8 @@ struct public_path {
*/
u32 mcp_vf_disabled[VF_MAX_STATIC / 32]; /* 0x003c */
- u32 process_kill;
/* Reset on mcp reset, and incremented for every process kill event. */
+ u32 process_kill;
#define PROCESS_KILL_COUNTER_MASK 0x0000ffff
#define PROCESS_KILL_COUNTER_SHIFT 0
#define PROCESS_KILL_GLOB_AEU_BIT_MASK 0xffff0000
@@ -473,7 +533,8 @@ struct public_port {
#define MCP_VALIDITY_RESERVED 0x00000007
/* One licensing bit should be set */
-#define MCP_VALIDITY_LIC_KEY_IN_EFFECT_MASK 0x00000038 /* yaniv - tbd */
+/* yaniv - tbd ? license */
+#define MCP_VALIDITY_LIC_KEY_IN_EFFECT_MASK 0x00000038
#define MCP_VALIDITY_LIC_MANUF_KEY_IN_EFFECT 0x00000008
#define MCP_VALIDITY_LIC_UPGRADE_KEY_IN_EFFECT 0x00000010
#define MCP_VALIDITY_LIC_NO_KEY_IN_EFFECT 0x00000020
@@ -525,9 +586,15 @@ struct public_port {
#define LINK_STATUS_MAC_REMOTE_FAULT 0x02000000
#define LINK_STATUS_UNSUPPORTED_SPD_REQ 0x04000000
+#define LINK_STATUS_FEC_MODE_MASK 0x38000000
+#define LINK_STATUS_FEC_MODE_NONE (0 << 27)
+#define LINK_STATUS_FEC_MODE_FIRECODE_CL74 (1 << 27)
+#define LINK_STATUS_FEC_MODE_RS_CL91 (2 << 27)
+
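(The new FEC field decodes with the usual mask convention; a small sketch
that turns the negotiated mode into a printable string. The helper name is
illustrative.)

static const char *fec_mode_str(u32 link_status)
{
	switch (link_status & LINK_STATUS_FEC_MODE_MASK) {
	case LINK_STATUS_FEC_MODE_FIRECODE_CL74:
		return "Fire-code (CL74)";
	case LINK_STATUS_FEC_MODE_RS_CL91:
		return "RS (CL91)";
	default:
		return "none";
	}
}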
u32 link_status1;
u32 ext_phy_fw_version;
- u32 drv_phy_cfg_addr; /* Points to pmm_phy_cfg (For READ-ONLY) */
+/* Points to struct pmm_phy_cfg (For READ-ONLY) */
+ u32 drv_phy_cfg_addr;
u32 port_stx;
@@ -538,11 +605,11 @@ struct public_port {
u32 media_type;
#define MEDIA_UNSPECIFIED 0x0
-#define MEDIA_SFPP_10G_FIBER 0x1
-#define MEDIA_XFP_FIBER 0x2
+#define MEDIA_SFPP_10G_FIBER 0x1 /* Use MEDIA_MODULE_FIBER instead */
+#define MEDIA_XFP_FIBER 0x2 /* Use MEDIA_MODULE_FIBER instead */
#define MEDIA_DA_TWINAX 0x3
#define MEDIA_BASE_T 0x4
-#define MEDIA_SFP_1G_FIBER 0x5
+#define MEDIA_SFP_1G_FIBER 0x5 /* Use MEDIA_MODULE_FIBER instead */
#define MEDIA_MODULE_FIBER 0x6
#define MEDIA_KR 0xf0
#define MEDIA_NOT_PRESENT 0xff
@@ -565,6 +632,7 @@ struct public_port {
u32 link_change_count;
/* LLDP params */
+/* offset: 536 bytes? */
struct lldp_config_params_s lldp_config_params[LLDP_MAX_LLDP_AGENTS];
struct lldp_status_params_s lldp_status_params[LLDP_MAX_LLDP_AGENTS];
struct lldp_system_tlvs_buffer_s system_lldp_tlvs_buf;
@@ -575,6 +643,7 @@ struct public_port {
struct dcbx_mib operational_dcbx_mib;
/* FC_NPIV table offset & size in NVRAM; a value of 0 means not present */
+
u32 fc_npiv_nvram_tbl_addr;
u32 fc_npiv_nvram_tbl_size;
u32 transceiver_data;
@@ -628,6 +697,7 @@ struct public_port {
#define PMM_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_CR 0x34
#define PMM_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_LR 0x35
#define PMM_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_AOC 0x36
+ struct dcb_dscp_map dcb_dscp_map;
};
/**************************************/
@@ -637,11 +707,12 @@ struct public_port {
/**************************************/
struct public_func {
- u32 dpdk_rsvd1[2];
+ u32 iscsi_boot_signature;
+ u32 iscsi_boot_block_offset;
/* MTU size per function is needed for the OV feature */
u32 mtu_size;
-/* 9 entires for the C2S PCP map for each inner VLAN PCP + 1 default */
+ /* 9 entries for the C2S PCP map for each inner VLAN PCP + 1 default */
/* For PCP values 0-3 use the map lower */
/* 0xFF000000 - PCP 0, 0x00FF0000 - PCP 1,
* 0x0000FF00 - PCP 2, 0x000000FF PCP 3
@@ -669,7 +740,10 @@ struct public_func {
#define FUNC_MF_CFG_PROTOCOL_MASK 0x000000f0
#define FUNC_MF_CFG_PROTOCOL_SHIFT 4
#define FUNC_MF_CFG_PROTOCOL_ETHERNET 0x00000000
-#define FUNC_MF_CFG_PROTOCOL_MAX 0x00000000
+#define FUNC_MF_CFG_PROTOCOL_ISCSI 0x00000010
+#define FUNC_MF_CFG_PROTOCOL_FCOE 0x00000020
+#define FUNC_MF_CFG_PROTOCOL_ROCE 0x00000030
+#define FUNC_MF_CFG_PROTOCOL_MAX 0x00000030
/* MINBW, MAXBW */
/* value range - 0..100, increments in 1 % */
@@ -690,7 +764,11 @@ struct public_func {
u32 mac_lower;
#define FUNC_MF_CFG_LOWERMAC_DEFAULT 0xffffffff
- u32 dpdk_rsvd2[4];
+ u32 fcoe_wwn_port_name_upper;
+ u32 fcoe_wwn_port_name_lower;
+
+ u32 fcoe_wwn_node_name_upper;
+ u32 fcoe_wwn_node_name_lower;
u32 ovlan_stag; /* tags */
#define FUNC_MF_CFG_OV_STAG_MASK 0x0000ffff
@@ -763,9 +841,9 @@ struct mcp_file_att {
struct bist_nvm_image_att {
u32 return_code;
- u32 image_type; /* Image type */
- u32 nvm_start_addr; /* NVM address of the image */
- u32 len; /* Include CRC */
+ u32 image_type; /* Image type */
+ u32 nvm_start_addr; /* NVM address of the image */
+ u32 len; /* Include CRC */
};
#define MCP_DRV_VER_STR_SIZE 16
@@ -784,50 +862,82 @@ struct lan_stats_stc {
u32 rserved;
};
+struct fcoe_stats_stc {
+ u64 rx_pkts;
+ u64 tx_pkts;
+ u32 fcs_err;
+ u32 login_failure;
+};
+
+struct iscsi_stats_stc {
+ u64 rx_pdus;
+ u64 tx_pdus;
+ u64 rx_bytes;
+ u64 tx_bytes;
+};
+
+struct rdma_stats_stc {
+ u64 rx_pkts;
+ u64 tx_pkts;
+ u64 rx_bytes;
+ u64 tx_bytes;
+};
+
struct ocbb_data_stc {
u32 ocbb_host_addr;
u32 ocsd_host_addr;
u32 ocsd_req_update_interval;
};
-#define MAX_NUM_OF_SENSORS 7
-#define MFW_SENSOR_LOCATION_INTERNAL 1
-#define MFW_SENSOR_LOCATION_EXTERNAL 2
-#define MFW_SENSOR_LOCATION_SFP 3
-
-#define SENSOR_LOCATION_SHIFT 0
-#define SENSOR_LOCATION_MASK 0x000000ff
-#define THRESHOLD_HIGH_SHIFT 8
-#define THRESHOLD_HIGH_MASK 0x0000ff00
-#define CRITICAL_TEMPERATURE_SHIFT 16
-#define CRITICAL_TEMPERATURE_MASK 0x00ff0000
-#define CURRENT_TEMP_SHIFT 24
-#define CURRENT_TEMP_MASK 0xff000000
+#define MAX_NUM_OF_SENSORS 7
+#define MFW_SENSOR_LOCATION_INTERNAL 1
+#define MFW_SENSOR_LOCATION_EXTERNAL 2
+#define MFW_SENSOR_LOCATION_SFP 3
+
+#define SENSOR_LOCATION_SHIFT 0
+#define SENSOR_LOCATION_MASK 0x000000ff
+#define THRESHOLD_HIGH_SHIFT 8
+#define THRESHOLD_HIGH_MASK 0x0000ff00
+#define CRITICAL_TEMPERATURE_SHIFT 16
+#define CRITICAL_TEMPERATURE_MASK 0x00ff0000
+#define CURRENT_TEMP_SHIFT 24
+#define CURRENT_TEMP_MASK 0xff000000
struct temperature_status_stc {
u32 num_of_sensors;
u32 sensor[MAX_NUM_OF_SENSORS];
};
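(Each sensor[] word packs four byte-wide fields per the masks above;
unpacking one reading could look like this sketch, with an illustrative
helper name.)

static void unpack_sensor(u32 word, u8 *location, u8 *thresh_high,
			  u8 *crit_temp, u8 *cur_temp)
{
	*location = (u8)((word & SENSOR_LOCATION_MASK) >>
			 SENSOR_LOCATION_SHIFT);
	*thresh_high = (u8)((word & THRESHOLD_HIGH_MASK) >>
			    THRESHOLD_HIGH_SHIFT);
	*crit_temp = (u8)((word & CRITICAL_TEMPERATURE_MASK) >>
			  CRITICAL_TEMPERATURE_SHIFT);
	*cur_temp = (u8)((word & CURRENT_TEMP_MASK) >> CURRENT_TEMP_SHIFT);
}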
+/* crash dump configuration header */
+struct mdump_config_stc {
+ u32 version;
+ u32 config;
+ u32 epoc;
+ u32 num_of_logs;
+ u32 valid_logs;
+};
+
enum resource_id_enum {
- RESOURCE_NUM_SB_E = 0,
- RESOURCE_NUM_L2_QUEUE_E = 1,
- RESOURCE_NUM_VPORT_E = 2,
- RESOURCE_NUM_VMQ_E = 3,
- RESOURCE_FACTOR_NUM_RSS_PF_E = 4,
- RESOURCE_FACTOR_RSS_PER_VF_E = 5,
- RESOURCE_NUM_RL_E = 6,
- RESOURCE_NUM_PQ_E = 7,
- RESOURCE_NUM_VF_E = 8,
- RESOURCE_VFC_FILTER_E = 9,
- RESOURCE_ILT_E = 10,
- RESOURCE_CQS_E = 11,
- RESOURCE_GFT_PROFILES_E = 12,
- RESOURCE_NUM_TC_E = 13,
- RESOURCE_NUM_RSS_ENGINES_E = 14,
- RESOURCE_LL2_QUEUE_E = 15,
- RESOURCE_RDMA_STATS_QUEUE_E = 16,
+ RESOURCE_NUM_SB_E = 0,
+ RESOURCE_NUM_L2_QUEUE_E = 1,
+ RESOURCE_NUM_VPORT_E = 2,
+ RESOURCE_NUM_VMQ_E = 3,
+/* Not a real resource; it's a factor used to calculate others */
+ RESOURCE_FACTOR_NUM_RSS_PF_E = 4,
+/* Not a real resource; it's a factor used to calculate others */
+ RESOURCE_FACTOR_RSS_PER_VF_E = 5,
+ RESOURCE_NUM_RL_E = 6,
+ RESOURCE_NUM_PQ_E = 7,
+ RESOURCE_NUM_VF_E = 8,
+ RESOURCE_VFC_FILTER_E = 9,
+ RESOURCE_ILT_E = 10,
+ RESOURCE_CQS_E = 11,
+ RESOURCE_GFT_PROFILES_E = 12,
+ RESOURCE_NUM_TC_E = 13,
+ RESOURCE_NUM_RSS_ENGINES_E = 14,
+ RESOURCE_LL2_QUEUE_E = 15,
+ RESOURCE_RDMA_STATS_QUEUE_E = 16,
RESOURCE_MAX_NUM,
- RESOURCE_NUM_INVALID = 0xFFFFFFFF
+ RESOURCE_NUM_INVALID = 0xFFFFFFFF
};
/* Resource ID is to be filled by the driver in the MB request
@@ -847,6 +957,8 @@ union drv_union_data {
u32 ver_str[MCP_DRV_VER_STR_SIZE_DWORD]; /* LOAD_REQ */
struct mcp_mac wol_mac; /* UNLOAD_DONE */
+/* This configuration should be set by the driver for the LINK_SET command. */
+
struct pmm_phy_cfg drv_phy_cfg;
struct mcp_val64 val64; /* For PHY / AVS commands */
@@ -860,12 +972,14 @@ union drv_union_data {
struct drv_version_stc drv_version;
struct lan_stats_stc lan_stats;
- u32 dpdk_rsvd[3];
+ struct fcoe_stats_stc fcoe_stats;
+	struct iscsi_stats_stc iscsi_stats;
+ struct rdma_stats_stc rdma_stats;
struct ocbb_data_stc ocbb_info;
struct temperature_status_stc temp_info;
struct resource_info resource;
struct bist_nvm_image_att nvm_image_att;
-
+ struct mdump_config_stc mdump_config;
/* ... */
};
@@ -896,9 +1010,14 @@ struct public_drv_mb {
#define DRV_MSG_CODE_NIG_DRAIN 0x30000000
+/* DRV_MB param: driver version support, FW_MB param: MFW version support,
+ * data: struct resource_info
+ */
#define DRV_MSG_GET_RESOURCE_ALLOC_MSG 0x34000000
-#define DRV_MSG_CODE_INITIATE_FLR 0x02000000
+/* Deprecated - don't use */
+#define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED 0x02000000
+#define DRV_MSG_CODE_INITIATE_PF_FLR 0x02010000
#define DRV_MSG_CODE_VF_DISABLED_DONE 0xc0000000
#define DRV_MSG_CODE_CFG_VF_MSIX 0xc0010000
#define DRV_MSG_CODE_NVM_PUT_FILE_BEGIN 0x00010000
@@ -928,9 +1047,16 @@ struct public_drv_mb {
#define DRV_MSG_CODE_GET_STATS 0x00130000
#define DRV_MSG_CODE_STATS_TYPE_LAN 1
+#define DRV_MSG_CODE_STATS_TYPE_FCOE 2
+#define DRV_MSG_CODE_STATS_TYPE_ISCSI 3
+#define DRV_MSG_CODE_STATS_TYPE_RDMA 4
#define DRV_MSG_CODE_OCBB_DATA 0x00180000
#define DRV_MSG_CODE_SET_BW 0x00190000
+#define BW_MAX_MASK 0x000000ff
+#define BW_MAX_SHIFT 0
+#define BW_MIN_MASK 0x0000ff00
+#define BW_MIN_SHIFT 8
#define DRV_MSG_CODE_MASK_PARITIES 0x001a0000
#define DRV_MSG_CODE_INDUCE_FAILURE 0x001b0000
#define DRV_MSG_FAN_FAILURE_TYPE (1 << 0)
@@ -938,17 +1064,73 @@ struct public_drv_mb {
#define DRV_MSG_CODE_GPIO_READ 0x001c0000
#define DRV_MSG_CODE_GPIO_WRITE 0x001d0000
-#define DRV_MSG_CODE_GPIO_INFO 0x00270000
+/* Param: [0:15] - gpio number */
+#define DRV_MSG_CODE_GPIO_INFO 0x00270000
+/* Param: [0:7] - test enum, [8:15] - image index, [16:31] - reserved */
#define DRV_MSG_CODE_BIST_TEST 0x001e0000
-#define DRV_MSG_CODE_GET_TEMPERATURE 0x001f0000
+#define DRV_MSG_CODE_GET_TEMPERATURE 0x001f0000
#define DRV_MSG_CODE_SET_LED_MODE 0x00200000
-#define DRV_MSG_CODE_TIMESTAMP 0x00210000
+/* drv_data[7:0] - epoch in seconds, drv_data[15:8] -
+ * driver version (MAJ MIN BUILD SUB)
+ */
+#define DRV_MSG_CODE_TIMESTAMP 0x00210000
+/* This is an empty mailbox; just return OK */
#define DRV_MSG_CODE_EMPTY_MB 0x00220000
+/* Param[0:4] - resource number (0-31), Param[5:7] - opcode,
+ * param[15:8] - age
+ */
+#define DRV_MSG_CODE_RESOURCE_CMD 0x00230000
+
+/* request resource ownership with default aging */
+#define RESOURCE_OPCODE_REQ 1
+/* request resource ownership without aging */
+#define RESOURCE_OPCODE_REQ_WO_AGING 2
+/* request resource ownership with specific aging timer (in seconds) */
+#define RESOURCE_OPCODE_REQ_W_AGING 3
+#define RESOURCE_OPCODE_RELEASE 4 /* release resource */
+#define RESOURCE_OPCODE_FORCE_RELEASE 5 /* force resource release */
+
+/* resource is free and granted to requester */
+#define RESOURCE_OPCODE_GNT 1
+/* resource is busy, param[7:0] indicates owner as follows: 0-15 = PF0-15,
+ * 16 = MFW, 17 = diag over serial
+ */
+#define RESOURCE_OPCODE_BUSY 2
+/* indicate release request was acknowledged */
+#define RESOURCE_OPCODE_RELEASED 3
+/* indicate release request was previously received by other owner */
+#define RESOURCE_OPCODE_RELEASED_PREVIOUS 4
+/* indicate wrong owner during release */
+#define RESOURCE_OPCODE_WRONG_OWNER 5
+#define RESOURCE_OPCODE_UNKNOWN_CMD 255
+/* dedicate resource 0 for dump */
+#define RESOURCE_DUMP (1 << 0)
+
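(Composing the RESOURCE_CMD parameter from the bit layout documented above:
resource number in param[4:0], opcode in param[7:5], age in param[15:8]. The
helper and the inline masks are illustrative, not defines from this header.)

static u32 mk_resource_cmd_param(u8 resc_num, u8 opcode, u8 age)
{
	return ((u32)resc_num & 0x1f) |
	       (((u32)opcode & 0x7) << 5) |
	       ((u32)age << 8);
}

/* e.g. request the dump resource (resource 0) with default aging:
 * param = mk_resource_cmd_param(0, RESOURCE_OPCODE_REQ, 0);
 */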
+#define DRV_MSG_CODE_GET_MBA_VERSION 0x00240000 /* Get MBA version */
+
+/* Send crash dump commands with param[3:0] - opcode */
+#define DRV_MSG_CODE_MDUMP_CMD 0x00250000
+#define MDUMP_DRV_PARAM_OPCODE_MASK 0x0000000f
+/* acknowledge reception of error indication */
+#define DRV_MSG_CODE_MDUMP_ACK 0x01
+/* set epoch and personality as follows: drv_data[3:0] - epoch,
+ * drv_data[7:4] - personality
+ */
+#define DRV_MSG_CODE_MDUMP_SET_VALUES 0x02
+/* trigger crash dump procedure */
+#define DRV_MSG_CODE_MDUMP_TRIGGER 0x03
+/* Request valid logs and config words */
+#define DRV_MSG_CODE_MDUMP_GET_CONFIG 0x04
+/* Set triggers mask. drv_mb_param should indicate (bitwise) which triggers
+ * are enabled
+ */
+#define DRV_MSG_CODE_MDUMP_SET_ENABLE 0x05
+#define DRV_MSG_CODE_MDUMP_CLEAR_LOGS 0x06 /* Clear all logs */
+
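(Usage sketch: the mailbox command is DRV_MSG_CODE_MDUMP_CMD and the
sub-opcode travels in drv_mb_param[3:0], so triggering a crash dump would be
composed roughly as below; issuing the command through the MCP mailbox
helper is not shown.)

	u32 cmd = DRV_MSG_CODE_MDUMP_CMD;
	u32 param = DRV_MSG_CODE_MDUMP_TRIGGER & MDUMP_DRV_PARAM_OPCODE_MASK;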
-#define DRV_MSG_CODE_GET_MBA_VERSION 0x00240000
-#define DRV_MSG_CODE_MEM_ECC_EVENTS 0x00260000
+#define DRV_MSG_CODE_MEM_ECC_EVENTS 0x00260000 /* Param: None */
#define DRV_MSG_SEQ_NUMBER_MASK 0x0000ffff
@@ -1019,7 +1201,11 @@ struct public_drv_mb {
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_SHIFT 0
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_MASK 0x000000FF
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_NONE (1 << 0)
+#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_ISCSI_IP_ACQUIRED (1 << 1)
+#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_FCOE_FABRIC_LOGIN_SUCCESS (1 << 1)
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_TRARGET_FOUND (1 << 2)
+#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_ISCSI_CHAP_SUCCESS (1 << 3)
+#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_FCOE_LUN_FOUND (1 << 3)
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_LOGGED_INTO_TGT (1 << 4)
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_IMG_DOWNLOADED (1 << 5)
#define DRV_MB_PARAM_OV_UPDATE_BOOT_PROG_OS_HANDOFF (1 << 6)
@@ -1063,28 +1249,27 @@ struct public_drv_mb {
#define DRV_MB_PARAM_GPIO_NUMBER_MASK 0x0000FFFF
#define DRV_MB_PARAM_GPIO_VALUE_SHIFT 16
#define DRV_MB_PARAM_GPIO_VALUE_MASK 0xFFFF0000
-
-#define DRV_MB_PARAM_GPIO_DIRECTION_SHIFT 16
-#define DRV_MB_PARAM_GPIO_DIRECTION_MASK 0x00FF0000
-#define DRV_MB_PARAM_GPIO_CTRL_SHIFT 24
-#define DRV_MB_PARAM_GPIO_CTRL_MASK 0xFF000000
+#define DRV_MB_PARAM_GPIO_DIRECTION_SHIFT 16
+#define DRV_MB_PARAM_GPIO_DIRECTION_MASK 0x00FF0000
+#define DRV_MB_PARAM_GPIO_CTRL_SHIFT 24
+#define DRV_MB_PARAM_GPIO_CTRL_MASK 0xFF000000
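(A sketch of composing the GPIO write parameter: gpio number in param[15:0],
value in param[31:16], per the masks above. The helper name is illustrative.)

static u32 mk_gpio_write_param(u16 gpio, u16 val)
{
	return ((u32)gpio & DRV_MB_PARAM_GPIO_NUMBER_MASK) |
	       (((u32)val << DRV_MB_PARAM_GPIO_VALUE_SHIFT) &
		DRV_MB_PARAM_GPIO_VALUE_MASK);
}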
/* Resource Allocation params - Driver version support*/
-#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK 0xFFFF0000
-#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT 16
-#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK 0x0000FFFF
-#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT 0
-
-#define DRV_MB_PARAM_BIST_UNKNOWN_TEST 0
-#define DRV_MB_PARAM_BIST_REGISTER_TEST 1
-#define DRV_MB_PARAM_BIST_CLOCK_TEST 2
-#define DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES 3
-#define DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX 4
-
-#define DRV_MB_PARAM_BIST_RC_UNKNOWN 0
-#define DRV_MB_PARAM_BIST_RC_PASSED 1
-#define DRV_MB_PARAM_BIST_RC_FAILED 2
-#define DRV_MB_PARAM_BIST_RC_INVALID_PARAMETER 3
+#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK 0xFFFF0000
+#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT 16
+#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK 0x0000FFFF
+#define DRV_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT 0
+
+#define DRV_MB_PARAM_BIST_UNKNOWN_TEST 0
+#define DRV_MB_PARAM_BIST_REGISTER_TEST 1
+#define DRV_MB_PARAM_BIST_CLOCK_TEST 2
+#define DRV_MB_PARAM_BIST_NVM_TEST_NUM_IMAGES 3
+#define DRV_MB_PARAM_BIST_NVM_TEST_IMAGE_BY_INDEX 4
+
+#define DRV_MB_PARAM_BIST_RC_UNKNOWN 0
+#define DRV_MB_PARAM_BIST_RC_PASSED 1
+#define DRV_MB_PARAM_BIST_RC_FAILED 2
+#define DRV_MB_PARAM_BIST_RC_INVALID_PARAMETER 3
#define DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT 0
#define DRV_MB_PARAM_BIST_TEST_INDEX_MASK 0x000000FF
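(Sketch of a BIST request parameter, following the earlier
DRV_MSG_CODE_BIST_TEST comment: test enum in param[7:0], image index in
param[15:8]. The image-index shift of 8 comes from that comment rather than
a named define, so treat it as an assumption.)

static u32 mk_bist_param(u8 test, u8 image_idx)
{
	return (((u32)test << DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT) &
		DRV_MB_PARAM_BIST_TEST_INDEX_MASK) |
	       ((u32)image_idx << 8);
}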
@@ -1117,6 +1302,10 @@ struct public_drv_mb {
#define FW_MSG_CODE_UPDATE_DRIVER_STATE_DONE 0x31000000
#define FW_MSG_CODE_DRV_MSG_CODE_BW_UPDATE_DONE 0x32000000
#define FW_MSG_CODE_DRV_MSG_CODE_MTU_SIZE_DONE 0x33000000
+#define FW_MSG_CODE_RESOURCE_ALLOC_OK 0x34000000
+#define FW_MSG_CODE_RESOURCE_ALLOC_UNKNOWN 0x35000000
+#define FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED 0x36000000
+#define FW_MSG_CODE_RESOURCE_ALLOC_GEN_ERR 0x37000000
#define FW_MSG_CODE_NIG_DRAIN_DONE 0x30000000
#define FW_MSG_CODE_VF_DISABLED_DONE 0xb0000000
#define FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE 0xb0010000
@@ -1169,10 +1358,24 @@ struct public_drv_mb {
#define FW_MSG_CODE_GPIO_CTRL_ERR 0x00020000
#define FW_MSG_CODE_GPIO_INVALID 0x000f0000
#define FW_MSG_CODE_GPIO_INVALID_VALUE 0x00050000
+#define FW_MSG_CODE_BIST_TEST_INVALID 0x000f0000
+
+/* mdump related response codes */
+#define FW_MSG_CODE_MDUMP_NO_IMAGE_FOUND 0x00010000
+#define FW_MSG_CODE_MDUMP_ALLOC_FAILED 0x00020000
+#define FW_MSG_CODE_MDUMP_INVALID_CMD 0x00030000
+#define FW_MSG_CODE_MDUMP_IN_PROGRESS 0x00040000
+#define FW_MSG_CODE_MDUMP_WRITE_FAILED 0x00050000
#define FW_MSG_SEQ_NUMBER_MASK 0x0000ffff
u32 fw_mb_param;
+ /* Resource Allocation params - MFW version support*/
+#define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK 0xFFFF0000
+#define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_SHIFT 16
+#define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_MASK 0x0000FFFF
+#define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MINOR_SHIFT 0
+
u32 drv_pulse_mb;
#define DRV_PULSE_SEQ_MASK 0x00007fff
@@ -1232,6 +1435,7 @@ enum MFW_DRV_MSG_TYPE {
MFW_DRV_MSG_GET_RDMA_STATS,
MFW_DRV_MSG_FAILURE_DETECTED,
MFW_DRV_MSG_TRANSCEIVER_STATE_CHANGE,
+ MFW_DRV_MSG_CRITICAL_ERROR_OCCURRED,
MFW_DRV_MSG_MAX
};
diff --git a/drivers/net/qede/base/nvm_cfg.h b/drivers/net/qede/base/nvm_cfg.h
index fe980d5..8e9c08a 100644
--- a/drivers/net/qede/base/nvm_cfg.h
+++ b/drivers/net/qede/base/nvm_cfg.h
@@ -13,7 +13,7 @@
* Description: NVM config file - Generated file from nvm cfg excel.
* DO NOT MODIFY !!!
*
- * Created: 1/14/2016
+ * Created: 5/9/2016
*
****************************************************************************/
@@ -84,8 +84,11 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_ASPM_SUPPORT_MASK 0x00000018
#define NVM_CFG1_GLOB_ASPM_SUPPORT_OFFSET 3
#define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_ENABLED 0x0
+ #define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_DISABLED 0x1
#define NVM_CFG1_GLOB_ASPM_SUPPORT_L1_DISABLED 0x2
-#define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_MASK 0x00000020
+ #define NVM_CFG1_GLOB_ASPM_SUPPORT_L0S_L1_DISABLED 0x3
+ #define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_MASK \
+ 0x00000020
#define NVM_CFG1_GLOB_RESERVED_MPREVENT_PCIE_L1_MENTRY_OFFSET 5
#define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_MASK 0x000003C0
#define NVM_CFG1_GLOB_PCIE_G2_TX_AMPLITUDE_OFFSET 6
@@ -101,10 +104,9 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_WWN_NODE_PREFIX1_OFFSET 21
#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_MASK 0x60000000
#define NVM_CFG1_GLOB_NCSI_PACKAGE_ID_OFFSET 29
- /* Set the duration, in seconds, fan failure signal should be
- * sampled
- */
-#define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_MASK 0x80000000
+	/* Set the duration, in seconds, the fan failure signal is sampled */
+ #define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_MASK \
+ 0x80000000
#define NVM_CFG1_GLOB_RESERVED_FAN_FAILURE_DURATION_OFFSET 31
u32 mgmt_traffic; /* 0x28 */
#define NVM_CFG1_GLOB_RESERVED60_MASK 0x00000001
@@ -132,27 +134,28 @@ struct nvm_cfg1_glob {
u32 core_cfg; /* 0x2C */
#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_MASK 0x000000FF
#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_OFFSET 0
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_2X40G 0x0
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_2X50G 0x1
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_1X100G 0x2
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_4X10G_F 0x3
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_4X10G_E 0x4
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_4X20G 0x5
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_1X40G 0xB
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_2X25G 0xC
-#define NVM_CFG1_GLOB_NETWORK_PORT_MODE_DE_1X25G 0xD
-#define NVM_CFG1_GLOB_EAGLE_ENFORCE_TX_FIR_CFG_MASK 0x00000100
-#define NVM_CFG1_GLOB_EAGLE_ENFORCE_TX_FIR_CFG_OFFSET 8
-#define NVM_CFG1_GLOB_EAGLE_ENFORCE_TX_FIR_CFG_DISABLED 0x0
-#define NVM_CFG1_GLOB_EAGLE_ENFORCE_TX_FIR_CFG_ENABLED 0x1
-#define NVM_CFG1_GLOB_FALCON_ENFORCE_TX_FIR_CFG_MASK 0x00000200
-#define NVM_CFG1_GLOB_FALCON_ENFORCE_TX_FIR_CFG_OFFSET 9
-#define NVM_CFG1_GLOB_FALCON_ENFORCE_TX_FIR_CFG_DISABLED 0x0
-#define NVM_CFG1_GLOB_FALCON_ENFORCE_TX_FIR_CFG_ENABLED 0x1
-#define NVM_CFG1_GLOB_EAGLE_CORE_ADDR_MASK 0x0003FC00
-#define NVM_CFG1_GLOB_EAGLE_CORE_ADDR_OFFSET 10
-#define NVM_CFG1_GLOB_FALCON_CORE_ADDR_MASK 0x03FC0000
-#define NVM_CFG1_GLOB_FALCON_CORE_ADDR_OFFSET 18
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_2X40G 0x0
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X50G 0x1
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_1X100G 0x2
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X10G_F 0x3
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X10G_E 0x4
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_BB_4X20G 0x5
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X40G 0xB
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_2X25G 0xC
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_1X25G 0xD
+ #define NVM_CFG1_GLOB_NETWORK_PORT_MODE_4X25G 0xE
+ #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_MASK 0x00000100
+ #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_OFFSET 8
+ #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_DISABLED 0x0
+ #define NVM_CFG1_GLOB_MPS10_ENFORCE_TX_FIR_CFG_ENABLED 0x1
+ #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_MASK 0x00000200
+ #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_OFFSET 9
+ #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_DISABLED 0x0
+ #define NVM_CFG1_GLOB_MPS25_ENFORCE_TX_FIR_CFG_ENABLED 0x1
+ #define NVM_CFG1_GLOB_MPS10_CORE_ADDR_MASK 0x0003FC00
+ #define NVM_CFG1_GLOB_MPS10_CORE_ADDR_OFFSET 10
+ #define NVM_CFG1_GLOB_MPS25_CORE_ADDR_MASK 0x03FC0000
+ #define NVM_CFG1_GLOB_MPS25_CORE_ADDR_OFFSET 18
#define NVM_CFG1_GLOB_AVS_MODE_MASK 0x1C000000
#define NVM_CFG1_GLOB_AVS_MODE_OFFSET 26
#define NVM_CFG1_GLOB_AVS_MODE_CLOSE_LOOP 0x0
@@ -209,7 +212,7 @@ struct nvm_cfg1_glob {
/* Maximum advertised pcie link width */
#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_MASK 0x000F0000
#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_OFFSET 16
-#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_16_LANES 0x0
+ #define NVM_CFG1_GLOB_MAX_LINK_WIDTH_BB_16_LANES 0x0
#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_1_LANE 0x1
#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_2_LANES 0x2
#define NVM_CFG1_GLOB_MAX_LINK_WIDTH_4_LANES 0x3
@@ -283,7 +286,7 @@ struct nvm_cfg1_glob {
/* Set max. count for over operational temperature */
#define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_MASK 0xFF000000
#define NVM_CFG1_GLOB_MAX_COUNT_OPER_THRESHOLD_OFFSET 24
- u32 eagle_preemphasis; /* 0x40 */
+ u32 mps10_preemphasis; /* 0x40 */
#define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF
#define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0
#define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00
@@ -292,7 +295,7 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16
#define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000
#define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24
- u32 eagle_driver_current; /* 0x44 */
+ u32 mps10_driver_current; /* 0x44 */
#define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF
#define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0
#define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00
@@ -301,7 +304,7 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_LANE2_AMP_OFFSET 16
#define NVM_CFG1_GLOB_LANE3_AMP_MASK 0xFF000000
#define NVM_CFG1_GLOB_LANE3_AMP_OFFSET 24
- u32 falcon_preemphasis; /* 0x48 */
+ u32 mps25_preemphasis; /* 0x48 */
#define NVM_CFG1_GLOB_LANE0_PREEMP_MASK 0x000000FF
#define NVM_CFG1_GLOB_LANE0_PREEMP_OFFSET 0
#define NVM_CFG1_GLOB_LANE1_PREEMP_MASK 0x0000FF00
@@ -310,7 +313,7 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_LANE2_PREEMP_OFFSET 16
#define NVM_CFG1_GLOB_LANE3_PREEMP_MASK 0xFF000000
#define NVM_CFG1_GLOB_LANE3_PREEMP_OFFSET 24
- u32 falcon_driver_current; /* 0x4C */
+ u32 mps25_driver_current; /* 0x4C */
#define NVM_CFG1_GLOB_LANE0_AMP_MASK 0x000000FF
#define NVM_CFG1_GLOB_LANE0_AMP_OFFSET 0
#define NVM_CFG1_GLOB_LANE1_AMP_MASK 0x0000FF00
@@ -392,12 +395,34 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_BAR2_SIZE_256M 0xD
#define NVM_CFG1_GLOB_BAR2_SIZE_512M 0xE
#define NVM_CFG1_GLOB_BAR2_SIZE_1G 0xF
- /* Set the duration, in seconds, fan failure signal should be
- * sampled
- */
+	/* Set the duration, in seconds, the fan failure signal is sampled */
#define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_MASK 0x0000F000
#define NVM_CFG1_GLOB_FAN_FAILURE_DURATION_OFFSET 12
- u32 eagle_txfir_main; /* 0x5C */
+	/* This field defines the board's total budget for bar2; when
+	 * disabled, the regular bar size is used.
+ */
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_MASK 0x00FF0000
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_OFFSET 16
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_DISABLED 0x0
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_64K 0x1
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_128K 0x2
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_256K 0x3
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_512K 0x4
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_1M 0x5
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_2M 0x6
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_4M 0x7
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_8M 0x8
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_16M 0x9
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_32M 0xA
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_64M 0xB
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_128M 0xC
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_256M 0xD
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_512M 0xE
+ #define NVM_CFG1_GLOB_BAR2_TOTAL_BUDGET_1G 0xF
+ /* Enable/Disable Crash dump triggers */
+ #define NVM_CFG1_GLOB_CRASH_DUMP_TRIGGER_ENABLE_MASK 0xFF000000
+ #define NVM_CFG1_GLOB_CRASH_DUMP_TRIGGER_ENABLE_OFFSET 24
+ u32 mps10_txfir_main; /* 0x5C */
#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF
#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0
#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00
@@ -406,7 +431,7 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16
#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000
#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24
- u32 eagle_txfir_post; /* 0x60 */
+ u32 mps10_txfir_post; /* 0x60 */
#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF
#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0
#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00
@@ -415,7 +440,7 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_LANE2_TXFIR_POST_OFFSET 16
#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_MASK 0xFF000000
#define NVM_CFG1_GLOB_LANE3_TXFIR_POST_OFFSET 24
- u32 falcon_txfir_main; /* 0x64 */
+ u32 mps25_txfir_main; /* 0x64 */
#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_MASK 0x000000FF
#define NVM_CFG1_GLOB_LANE0_TXFIR_MAIN_OFFSET 0
#define NVM_CFG1_GLOB_LANE1_TXFIR_MAIN_MASK 0x0000FF00
@@ -424,7 +449,7 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_LANE2_TXFIR_MAIN_OFFSET 16
#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_MASK 0xFF000000
#define NVM_CFG1_GLOB_LANE3_TXFIR_MAIN_OFFSET 24
- u32 falcon_txfir_post; /* 0x68 */
+ u32 mps25_txfir_post; /* 0x68 */
#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_MASK 0x000000FF
#define NVM_CFG1_GLOB_LANE0_TXFIR_POST_OFFSET 0
#define NVM_CFG1_GLOB_LANE1_TXFIR_POST_MASK 0x0000FF00
@@ -463,6 +488,14 @@ struct nvm_cfg1_glob {
u32 generic_cont1; /* 0x78 */
#define NVM_CFG1_GLOB_AVS_DAC_CODE_MASK 0x000003FF
#define NVM_CFG1_GLOB_AVS_DAC_CODE_OFFSET 0
+ #define NVM_CFG1_GLOB_LANE0_SWAP_MASK 0x00000C00
+ #define NVM_CFG1_GLOB_LANE0_SWAP_OFFSET 10
+ #define NVM_CFG1_GLOB_LANE1_SWAP_MASK 0x00003000
+ #define NVM_CFG1_GLOB_LANE1_SWAP_OFFSET 12
+ #define NVM_CFG1_GLOB_LANE2_SWAP_MASK 0x0000C000
+ #define NVM_CFG1_GLOB_LANE2_SWAP_OFFSET 14
+ #define NVM_CFG1_GLOB_LANE3_SWAP_MASK 0x00030000
+ #define NVM_CFG1_GLOB_LANE3_SWAP_OFFSET 16
u32 mbi_version; /* 0x7C */
#define NVM_CFG1_GLOB_MBI_VERSION_0_MASK 0x000000FF
#define NVM_CFG1_GLOB_MBI_VERSION_0_OFFSET 0
@@ -512,6 +545,10 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_I2C_MUX_SEL_GPIO__GPIO31 0x20
u32 device_capabilities; /* 0x88 */
#define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ETHERNET 0x1
+ #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_FCOE 0x2
+ #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ISCSI 0x4
+ #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_ROCE 0x8
+ #define NVM_CFG1_GLOB_DEVICE_CAPABILITIES_IWARP 0x10
u32 power_dissipated; /* 0x8C */
#define NVM_CFG1_GLOB_POWER_DIS_D0_MASK 0x000000FF
#define NVM_CFG1_GLOB_POWER_DIS_D0_OFFSET 0
@@ -531,7 +568,17 @@ struct nvm_cfg1_glob {
#define NVM_CFG1_GLOB_POWER_CONS_D3_MASK 0xFF000000
#define NVM_CFG1_GLOB_POWER_CONS_D3_OFFSET 24
u32 efi_version; /* 0x94 */
- u32 reserved[42]; /* 0x98 */
+ u32 multi_network_modes_capability; /* 0x98 */
+ #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_4X10G 0x1
+ #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_1X25G 0x2
+ #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X25G 0x4
+ #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_4X25G 0x8
+ #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_1X40G 0x10
+ #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X40G 0x20
+ #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_2X50G 0x40
+ #define NVM_CFG1_GLOB_MULTI_NETWORK_MODES_CAPABILITY_BB_1X100G \
+ 0x80
+ u32 reserved[41]; /* 0x9C */
};
struct nvm_cfg1_path {
@@ -560,6 +607,9 @@ struct nvm_cfg1_port {
#define NVM_CFG1_PORT_LED_MODE_PHY10 0xD
#define NVM_CFG1_PORT_LED_MODE_PHY11 0xE
#define NVM_CFG1_PORT_LED_MODE_PHY12 0xF
+ #define NVM_CFG1_PORT_LED_MODE_BREAKOUT 0x10
+ #define NVM_CFG1_PORT_ROCE_PRIORITY_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_ROCE_PRIORITY_OFFSET 8
#define NVM_CFG1_PORT_DCBX_MODE_MASK 0x000F0000
#define NVM_CFG1_PORT_DCBX_MODE_OFFSET 16
#define NVM_CFG1_PORT_DCBX_MODE_DISABLED 0x0
@@ -569,6 +619,8 @@ struct nvm_cfg1_port {
#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_MASK 0x00F00000
#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_OFFSET 20
#define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ETHERNET 0x1
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_FCOE 0x2
+ #define NVM_CFG1_PORT_DEFAULT_ENABLED_PROTOCOLS_ISCSI 0x4
u32 pcie_cfg; /* 0xC */
#define NVM_CFG1_PORT_RESERVED15_MASK 0x00000007
#define NVM_CFG1_PORT_RESERVED15_OFFSET 0
@@ -589,7 +641,7 @@ struct nvm_cfg1_port {
#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G 0x8
#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G 0x10
#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G 0x20
-#define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_100G 0x40
+ #define NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_MASK 0xFFFF0000
#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_OFFSET 16
#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_1G 0x1
@@ -597,7 +649,7 @@ struct nvm_cfg1_port {
#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_25G 0x8
#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_40G 0x10
#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_50G 0x20
-#define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_100G 0x40
+ #define NVM_CFG1_PORT_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
u32 link_settings; /* 0x18 */
#define NVM_CFG1_PORT_DRV_LINK_SPEED_MASK 0x0000000F
#define NVM_CFG1_PORT_DRV_LINK_SPEED_OFFSET 0
@@ -607,7 +659,7 @@ struct nvm_cfg1_port {
#define NVM_CFG1_PORT_DRV_LINK_SPEED_25G 0x4
#define NVM_CFG1_PORT_DRV_LINK_SPEED_40G 0x5
#define NVM_CFG1_PORT_DRV_LINK_SPEED_50G 0x6
-#define NVM_CFG1_PORT_DRV_LINK_SPEED_100G 0x7
+ #define NVM_CFG1_PORT_DRV_LINK_SPEED_BB_100G 0x7
#define NVM_CFG1_PORT_DRV_LINK_SPEED_SMARTLINQ 0x8
#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_MASK 0x00000070
#define NVM_CFG1_PORT_DRV_FLOW_CONTROL_OFFSET 4
@@ -622,26 +674,29 @@ struct nvm_cfg1_port {
#define NVM_CFG1_PORT_MFW_LINK_SPEED_25G 0x4
#define NVM_CFG1_PORT_MFW_LINK_SPEED_40G 0x5
#define NVM_CFG1_PORT_MFW_LINK_SPEED_50G 0x6
-#define NVM_CFG1_PORT_MFW_LINK_SPEED_100G 0x7
+ #define NVM_CFG1_PORT_MFW_LINK_SPEED_BB_100G 0x7
#define NVM_CFG1_PORT_MFW_LINK_SPEED_SMARTLINQ 0x8
#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_MASK 0x00003800
#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_OFFSET 11
#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_AUTONEG 0x1
#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_RX 0x2
#define NVM_CFG1_PORT_MFW_FLOW_CONTROL_TX 0x4
-#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_MASK 0x00004000
+ #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_MASK \
+ 0x00004000
#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_OFFSET 14
-#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_DISABLED 0x0
-#define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_ENABLED 0x1
+ #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_DISABLED \
+ 0x0
+ #define NVM_CFG1_PORT_OPTIC_MODULE_VENDOR_ENFORCEMENT_ENABLED \
+ 0x1
#define NVM_CFG1_PORT_AN_25G_50G_OUI_MASK 0x00018000
#define NVM_CFG1_PORT_AN_25G_50G_OUI_OFFSET 15
#define NVM_CFG1_PORT_AN_25G_50G_OUI_CONSORTIUM 0x0
#define NVM_CFG1_PORT_AN_25G_50G_OUI_BAM 0x1
#define NVM_CFG1_PORT_FEC_FORCE_MODE_MASK 0x000E0000
#define NVM_CFG1_PORT_FEC_FORCE_MODE_OFFSET 17
-#define NVM_CFG1_PORT_FEC_FORCE_MODE_FEC_FORCE_NONE 0x0
-#define NVM_CFG1_PORT_FEC_FORCE_MODE_FEC_FORCE_FIRECODE 0x1
-#define NVM_CFG1_PORT_FEC_FORCE_MODE_FEC_FORCE_RS 0x2
+ #define NVM_CFG1_PORT_FEC_FORCE_MODE_NONE 0x0
+ #define NVM_CFG1_PORT_FEC_FORCE_MODE_FIRECODE 0x1
+ #define NVM_CFG1_PORT_FEC_FORCE_MODE_RS 0x2
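(All nvm_cfg fields follow the same MASK/OFFSET convention; e.g. pulling the
driver-requested link speed and the forced FEC mode out of link_settings, as
a sketch with an illustrative helper name.)

static void nvm_get_link_cfg(u32 link_settings, u32 *speed, u32 *fec)
{
	*speed = (link_settings & NVM_CFG1_PORT_DRV_LINK_SPEED_MASK) >>
		 NVM_CFG1_PORT_DRV_LINK_SPEED_OFFSET;
	*fec = (link_settings & NVM_CFG1_PORT_FEC_FORCE_MODE_MASK) >>
	       NVM_CFG1_PORT_FEC_FORCE_MODE_OFFSET;
}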
u32 phy_cfg; /* 0x1C */
#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_MASK 0x0000FFFF
#define NVM_CFG1_PORT_OPTIONAL_LINK_MODES_OFFSET 0
@@ -671,9 +726,9 @@ struct nvm_cfg1_port {
#define NVM_CFG1_PORT_AN_MODE_CL73 0x1
#define NVM_CFG1_PORT_AN_MODE_CL37 0x2
#define NVM_CFG1_PORT_AN_MODE_CL73_BAM 0x3
-#define NVM_CFG1_PORT_AN_MODE_CL37_BAM 0x4
-#define NVM_CFG1_PORT_AN_MODE_HPAM 0x5
-#define NVM_CFG1_PORT_AN_MODE_SGMII 0x6
+ #define NVM_CFG1_PORT_AN_MODE_BB_CL37_BAM 0x4
+ #define NVM_CFG1_PORT_AN_MODE_BB_HPAM 0x5
+ #define NVM_CFG1_PORT_AN_MODE_BB_SGMII 0x6
u32 mgmt_traffic; /* 0x20 */
#define NVM_CFG1_PORT_RESERVED61_MASK 0x0000000F
#define NVM_CFG1_PORT_RESERVED61_OFFSET 0
@@ -711,9 +766,10 @@ struct nvm_cfg1_port {
#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_25G 0x4
#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_40G 0x5
#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_50G 0x6
-#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_100G 0x7
+ #define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_BB_100G 0x7
#define NVM_CFG1_PORT_PREBOOT_LINK_SPEED_SMARTLINQ 0x8
-#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_MASK 0x00E00000
+ #define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_MASK \
+ 0x00E00000
#define NVM_CFG1_PORT_RESERVED__M_MBA_BOOT_RETRY_COUNT_OFFSET 21
u32 mba_cfg2; /* 0x2C */
#define NVM_CFG1_PORT_RESERVED65_MASK 0x0000FFFF
@@ -738,7 +794,7 @@ struct nvm_cfg1_port {
#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_25G 0x8
#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_40G 0x10
#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_50G 0x20
-#define NVM_CFG1_PORT_LANE_LED_SPD__SEL_100G 0x40
+ #define NVM_CFG1_PORT_LANE_LED_SPD__SEL_BB_100G 0x40
u32 transceiver_00; /* 0x40 */
/* Define for mapping of transceiver signal module absent */
#define NVM_CFG1_PORT_TRANS_MODULE_ABS_MASK 0x000000FF
@@ -784,6 +840,10 @@ struct nvm_cfg1_port {
u32 device_ids; /* 0x44 */
#define NVM_CFG1_PORT_ETH_DID_SUFFIX_MASK 0x000000FF
#define NVM_CFG1_PORT_ETH_DID_SUFFIX_OFFSET 0
+ #define NVM_CFG1_PORT_FCOE_DID_SUFFIX_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_FCOE_DID_SUFFIX_OFFSET 8
+ #define NVM_CFG1_PORT_ISCSI_DID_SUFFIX_MASK 0x00FF0000
+ #define NVM_CFG1_PORT_ISCSI_DID_SUFFIX_OFFSET 16
#define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_MASK 0xFF000000
#define NVM_CFG1_PORT_RESERVED_DID_SUFFIX_OFFSET 24
u32 board_cfg; /* 0x48 */
@@ -833,7 +893,391 @@ struct nvm_cfg1_port {
#define NVM_CFG1_PORT_TX_DISABLE_GPIO29 0x1E
#define NVM_CFG1_PORT_TX_DISABLE_GPIO30 0x1F
#define NVM_CFG1_PORT_TX_DISABLE_GPIO31 0x20
- u32 reserved[131]; /* 0x4C */
+ u32 mnm_10g_cap; /* 0x4C */
+ #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_MASK \
+ 0x0000FFFF
+ #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
+ #define \
+ NVM_CFG1_PORT_MNM_10G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
+ #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_MASK \
+ 0xFFFF0000
+ #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_OFFSET \
+ 16
+ #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
+ #define \
+ NVM_CFG1_PORT_MNM_10G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
+ u32 mnm_10g_ctrl; /* 0x50 */
+ #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_MASK 0x0000000F
+ #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_BB_100G 0x7
+ #define NVM_CFG1_PORT_MNM_10G_DRV_LINK_SPEED_SMARTLINQ 0x8
+ #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_MASK 0x000000F0
+ #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_OFFSET 4
+ #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_BB_100G 0x7
+ #define NVM_CFG1_PORT_MNM_10G_MFW_LINK_SPEED_SMARTLINQ 0x8
+ /* This field defines the board technology
+ * (backplane, transceiver, external PHY)
+ */
+ #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_OFFSET 8
+ #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_UNDEFINED 0x0
+ #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MODULE 0x1
+ #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_BACKPLANE 0x2
+ #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_EXT_PHY 0x3
+ #define NVM_CFG1_PORT_MNM_10G_PORT_TYPE_MODULE_SLAVE 0x4
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_MASK \
+ 0x00FF0000
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_OFFSET 16
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_BYPASS 0x0
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR 0x2
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR2 0x3
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_KR4 0x4
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XFI 0x8
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_SFI 0x9
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_1000X 0xB
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_SGMII 0xC
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XLAUI 0x11
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_XLPPI 0x12
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_CAUI 0x21
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_CPPI 0x22
+ #define NVM_CFG1_PORT_MNM_10G_SERDES_NET_INTERFACE_25GAUI 0x31
+ #define NVM_CFG1_PORT_MNM_10G_ETH_DID_SUFFIX_MASK 0xFF000000
+ #define NVM_CFG1_PORT_MNM_10G_ETH_DID_SUFFIX_OFFSET 24
+ u32 mnm_10g_misc; /* 0x54 */
+ #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_MASK 0x00000007
+ #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_NONE 0x0
+ #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_FIRECODE 0x1
+ #define NVM_CFG1_PORT_MNM_10G_FEC_FORCE_MODE_RS 0x2
+ u32 mnm_25g_cap; /* 0x58 */
+ #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_MASK \
+ 0x0000FFFF
+ #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
+ #define \
+ NVM_CFG1_PORT_MNM_25G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
+ #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_MASK \
+ 0xFFFF0000
+ #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_OFFSET \
+ 16
+ #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
+ #define \
+ NVM_CFG1_PORT_MNM_25G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
+ u32 mnm_25g_ctrl; /* 0x5C */
+ #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_MASK 0x0000000F
+ #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_BB_100G 0x7
+ #define NVM_CFG1_PORT_MNM_25G_DRV_LINK_SPEED_SMARTLINQ 0x8
+ #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_MASK 0x000000F0
+ #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_OFFSET 4
+ #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_BB_100G 0x7
+ #define NVM_CFG1_PORT_MNM_25G_MFW_LINK_SPEED_SMARTLINQ 0x8
+ /* This field defines the board technology
+ * (backplane, transceiver, external PHY)
+ */
+ #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_OFFSET 8
+ #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_UNDEFINED 0x0
+ #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MODULE 0x1
+ #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_BACKPLANE 0x2
+ #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_EXT_PHY 0x3
+ #define NVM_CFG1_PORT_MNM_25G_PORT_TYPE_MODULE_SLAVE 0x4
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_MASK \
+ 0x00FF0000
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_OFFSET 16
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_BYPASS 0x0
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR 0x2
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR2 0x3
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_KR4 0x4
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XFI 0x8
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_SFI 0x9
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_1000X 0xB
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_SGMII 0xC
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XLAUI 0x11
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_XLPPI 0x12
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_CAUI 0x21
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_CPPI 0x22
+ #define NVM_CFG1_PORT_MNM_25G_SERDES_NET_INTERFACE_25GAUI 0x31
+ #define NVM_CFG1_PORT_MNM_25G_ETH_DID_SUFFIX_MASK 0xFF000000
+ #define NVM_CFG1_PORT_MNM_25G_ETH_DID_SUFFIX_OFFSET 24
+ u32 mnm_25g_misc; /* 0x60 */
+ #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_MASK 0x00000007
+ #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_NONE 0x0
+ #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_FIRECODE 0x1
+ #define NVM_CFG1_PORT_MNM_25G_FEC_FORCE_MODE_RS 0x2
+ u32 mnm_40g_cap; /* 0x64 */
+ #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_MASK \
+ 0x0000FFFF
+ #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
+ #define \
+ NVM_CFG1_PORT_MNM_40G_DRV_SPEED_CAPABILITY_MASK_BB_100G 0x40
+ #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_MASK \
+ 0xFFFF0000
+ #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_OFFSET \
+ 16
+ #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
+ #define \
+ NVM_CFG1_PORT_MNM_40G_MFW_SPEED_CAPABILITY_MASK_BB_100G 0x40
+ u32 mnm_40g_ctrl; /* 0x68 */
+ #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_MASK 0x0000000F
+ #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_BB_100G 0x7
+ #define NVM_CFG1_PORT_MNM_40G_DRV_LINK_SPEED_SMARTLINQ 0x8
+ #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_MASK 0x000000F0
+ #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_OFFSET 4
+ #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_BB_100G 0x7
+ #define NVM_CFG1_PORT_MNM_40G_MFW_LINK_SPEED_SMARTLINQ 0x8
+ /* This field defines the board technology
+ * (backplane, transceiver, external PHY)
+ */
+ #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_OFFSET 8
+ #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_UNDEFINED 0x0
+ #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MODULE 0x1
+ #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_BACKPLANE 0x2
+ #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_EXT_PHY 0x3
+ #define NVM_CFG1_PORT_MNM_40G_PORT_TYPE_MODULE_SLAVE 0x4
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_MASK \
+ 0x00FF0000
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_OFFSET 16
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_BYPASS 0x0
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR 0x2
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR2 0x3
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_KR4 0x4
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XFI 0x8
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_SFI 0x9
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_1000X 0xB
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_SGMII 0xC
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XLAUI 0x11
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_XLPPI 0x12
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_CAUI 0x21
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_CPPI 0x22
+ #define NVM_CFG1_PORT_MNM_40G_SERDES_NET_INTERFACE_25GAUI 0x31
+ #define NVM_CFG1_PORT_MNM_40G_ETH_DID_SUFFIX_MASK 0xFF000000
+ #define NVM_CFG1_PORT_MNM_40G_ETH_DID_SUFFIX_OFFSET 24
+ u32 mnm_40g_misc; /* 0x6C */
+ #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_MASK 0x00000007
+ #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_NONE 0x0
+ #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_FIRECODE 0x1
+ #define NVM_CFG1_PORT_MNM_40G_FEC_FORCE_MODE_RS 0x2
+ u32 mnm_50g_cap; /* 0x70 */
+ #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_MASK \
+ 0x0000FFFF
+ #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_50G 0x20
+ #define \
+ NVM_CFG1_PORT_MNM_50G_DRV_SPEED_CAPABILITY_MASK_BB_100G \
+ 0x40
+ #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_MASK \
+ 0xFFFF0000
+ #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_OFFSET \
+ 16
+ #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_50G 0x20
+ #define \
+ NVM_CFG1_PORT_MNM_50G_MFW_SPEED_CAPABILITY_MASK_BB_100G \
+ 0x40
+ u32 mnm_50g_ctrl; /* 0x74 */
+ #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_MASK 0x0000000F
+ #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_BB_100G 0x7
+ #define NVM_CFG1_PORT_MNM_50G_DRV_LINK_SPEED_SMARTLINQ 0x8
+ #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_MASK 0x000000F0
+ #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_OFFSET 4
+ #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_BB_100G 0x7
+ #define NVM_CFG1_PORT_MNM_50G_MFW_LINK_SPEED_SMARTLINQ 0x8
+ /* This field defines the board technology
+ * (backplane, transceiver, external PHY)
+ */
+ #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_OFFSET 8
+ #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_UNDEFINED 0x0
+ #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MODULE 0x1
+ #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_BACKPLANE 0x2
+ #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_EXT_PHY 0x3
+ #define NVM_CFG1_PORT_MNM_50G_PORT_TYPE_MODULE_SLAVE 0x4
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_MASK \
+ 0x00FF0000
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_OFFSET 16
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_BYPASS 0x0
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR 0x2
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR2 0x3
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_KR4 0x4
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XFI 0x8
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_SFI 0x9
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_1000X 0xB
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_SGMII 0xC
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XLAUI 0x11
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_XLPPI 0x12
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_CAUI 0x21
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_CPPI 0x22
+ #define NVM_CFG1_PORT_MNM_50G_SERDES_NET_INTERFACE_25GAUI 0x31
+ #define NVM_CFG1_PORT_MNM_50G_ETH_DID_SUFFIX_MASK 0xFF000000
+ #define NVM_CFG1_PORT_MNM_50G_ETH_DID_SUFFIX_OFFSET 24
+ u32 mnm_50g_misc; /* 0x78 */
+ #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_MASK 0x00000007
+ #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_NONE 0x0
+ #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_FIRECODE 0x1
+ #define NVM_CFG1_PORT_MNM_50G_FEC_FORCE_MODE_RS 0x2
+ u32 mnm_100g_cap; /* 0x7C */
+ #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_MASK \
+ 0x0000FFFF
+ #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_50G 0x20
+ #define NVM_CFG1_PORT_MNM_100G_DRV_SPEED_CAP_MASK_BB_100G 0x40
+ #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_MASK \
+ 0xFFFF0000
+ #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_OFFSET 16
+ #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_1G 0x1
+ #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_10G 0x2
+ #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_25G 0x8
+ #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_40G 0x10
+ #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_50G 0x20
+ #define NVM_CFG1_PORT_MNM_100G_MFW_SPEED_CAP_MASK_BB_100G 0x40
+ u32 mnm_100g_ctrl; /* 0x80 */
+ #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_MASK 0x0000000F
+ #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_BB_100G 0x7
+ #define NVM_CFG1_PORT_MNM_100G_DRV_LINK_SPEED_SMARTLINQ 0x8
+ #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_MASK 0x000000F0
+ #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_OFFSET 4
+ #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_AUTONEG 0x0
+ #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_1G 0x1
+ #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_10G 0x2
+ #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_25G 0x4
+ #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_40G 0x5
+ #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_50G 0x6
+ #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_BB_100G 0x7
+ #define NVM_CFG1_PORT_MNM_100G_MFW_LINK_SPEED_SMARTLINQ 0x8
+ /* This field defines the board technology
+ * (backplane, transceiver, external PHY)
+ */
+ #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MASK 0x0000FF00
+ #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_OFFSET 8
+ #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_UNDEFINED 0x0
+ #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MODULE 0x1
+ #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_BACKPLANE 0x2
+ #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_EXT_PHY 0x3
+ #define NVM_CFG1_PORT_MNM_100G_PORT_TYPE_MODULE_SLAVE 0x4
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_MASK \
+ 0x00FF0000
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_OFFSET 16
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_BYPASS 0x0
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR 0x2
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR2 0x3
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_KR4 0x4
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XFI 0x8
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_SFI 0x9
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_1000X 0xB
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_SGMII 0xC
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XLAUI 0x11
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_XLPPI 0x12
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_CAUI 0x21
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_CPPI 0x22
+ #define NVM_CFG1_PORT_MNM_100G_SERDES_NET_INTERFACE_25GAUI 0x31
+ #define NVM_CFG1_PORT_MNM_100G_ETH_DID_SUFFIX_MASK 0xFF000000
+ #define NVM_CFG1_PORT_MNM_100G_ETH_DID_SUFFIX_OFFSET 24
+ u32 mnm_100g_misc; /* 0x84 */
+ #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_MASK 0x00000007
+ #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_OFFSET 0
+ #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_NONE 0x0
+ #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_FIRECODE 0x1
+ #define NVM_CFG1_PORT_MNM_100G_FEC_FORCE_MODE_RS 0x2
+ u32 reserved[116]; /* 0x88 */
};
struct nvm_cfg1_func {
@@ -857,12 +1301,17 @@ struct nvm_cfg1_func {
#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_MASK 0x00000007
#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_OFFSET 0
#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_PXE 0x0
+ #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_ISCSI_BOOT 0x3
+ #define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_FCOE_BOOT 0x4
#define NVM_CFG1_FUNC_PREBOOT_BOOT_PROTOCOL_NONE 0x7
#define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_MASK 0x0007FFF8
#define NVM_CFG1_FUNC_VF_PCI_DEVICE_ID_OFFSET 3
#define NVM_CFG1_FUNC_PERSONALITY_MASK 0x00780000
#define NVM_CFG1_FUNC_PERSONALITY_OFFSET 19
#define NVM_CFG1_FUNC_PERSONALITY_ETHERNET 0x0
+ #define NVM_CFG1_FUNC_PERSONALITY_ISCSI 0x1
+ #define NVM_CFG1_FUNC_PERSONALITY_FCOE 0x2
+ #define NVM_CFG1_FUNC_PERSONALITY_ROCE 0x3
#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_MASK 0x7F800000
#define NVM_CFG1_FUNC_BANDWIDTH_WEIGHT_OFFSET 23
#define NVM_CFG1_FUNC_PAUSE_ON_HOST_RING_MASK 0x80000000
@@ -872,8 +1321,25 @@ struct nvm_cfg1_func {
u32 pci_cfg; /* 0x18 */
#define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_MASK 0x0000007F
#define NVM_CFG1_FUNC_NUMBER_OF_VFS_PER_PF_OFFSET 0
-#define NVM_CFG1_FUNC_RESERVESD12_MASK 0x00003F80
-#define NVM_CFG1_FUNC_RESERVESD12_OFFSET 7
+ /* AH VF BAR2 size */
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_MASK 0x00003F80
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_OFFSET 7
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_DISABLED 0x0
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_4K 0x1
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_8K 0x2
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_16K 0x3
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_32K 0x4
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_64K 0x5
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_128K 0x6
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_256K 0x7
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_512K 0x8
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_1M 0x9
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_2M 0xA
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_4M 0xB
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_8M 0xC
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_16M 0xD
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_32M 0xE
+ #define NVM_CFG1_FUNC_VF_PCI_BAR2_SIZE_64M 0xF
#define NVM_CFG1_FUNC_BAR1_SIZE_MASK 0x0003C000
#define NVM_CFG1_FUNC_BAR1_SIZE_OFFSET 14
#define NVM_CFG1_FUNC_BAR1_SIZE_DISABLED 0x0
@@ -894,6 +1360,28 @@ struct nvm_cfg1_func {
#define NVM_CFG1_FUNC_BAR1_SIZE_1G 0xF
#define NVM_CFG1_FUNC_MAX_BANDWIDTH_MASK 0x03FC0000
#define NVM_CFG1_FUNC_MAX_BANDWIDTH_OFFSET 18
+ /* Hide function in npar mode */
+ #define NVM_CFG1_FUNC_FUNCTION_HIDE_MASK 0x04000000
+ #define NVM_CFG1_FUNC_FUNCTION_HIDE_OFFSET 26
+ #define NVM_CFG1_FUNC_FUNCTION_HIDE_DISABLED 0x0
+ #define NVM_CFG1_FUNC_FUNCTION_HIDE_ENABLED 0x1
+ /* AH BAR2 size (per function) */
+ #define NVM_CFG1_FUNC_BAR2_SIZE_MASK 0x78000000
+ #define NVM_CFG1_FUNC_BAR2_SIZE_OFFSET 27
+ #define NVM_CFG1_FUNC_BAR2_SIZE_DISABLED 0x0
+ #define NVM_CFG1_FUNC_BAR2_SIZE_1M 0x5
+ #define NVM_CFG1_FUNC_BAR2_SIZE_2M 0x6
+ #define NVM_CFG1_FUNC_BAR2_SIZE_4M 0x7
+ #define NVM_CFG1_FUNC_BAR2_SIZE_8M 0x8
+ #define NVM_CFG1_FUNC_BAR2_SIZE_16M 0x9
+ #define NVM_CFG1_FUNC_BAR2_SIZE_32M 0xA
+ #define NVM_CFG1_FUNC_BAR2_SIZE_64M 0xB
+ #define NVM_CFG1_FUNC_BAR2_SIZE_128M 0xC
+ #define NVM_CFG1_FUNC_BAR2_SIZE_256M 0xD
+ #define NVM_CFG1_FUNC_BAR2_SIZE_512M 0xE
+ #define NVM_CFG1_FUNC_BAR2_SIZE_1G 0xF
+ struct nvm_cfg_mac_address fcoe_node_wwn_mac_addr; /* 0x1C */
+ struct nvm_cfg_mac_address fcoe_port_wwn_mac_addr; /* 0x24 */
u32 preboot_generic_cfg; /* 0x2C */
#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_MASK 0x0000FFFF
#define NVM_CFG1_FUNC_PREBOOT_VLAN_VALUE_OFFSET 0
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index b6f6487..a00f05d 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -246,6 +246,7 @@ qed_start_txq(struct ecore_dev *edev,
vport_id,
sb,
sb_index,
+ 0 /* tc */,
pbl_addr, pbl_size, pp_doorbell);
if (rc) {
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 2e62371..e4ef4f0 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -20,7 +20,7 @@ static uint8_t npar_tx_switching = 1;
char fw_file[PATH_MAX];
const char *QEDE_DEFAULT_FIRMWARE =
- "/lib/firmware/qed/qed_init_values_zipped-8.7.7.0.bin";
+ "/lib/firmware/qed/qed_init_values_zipped-8.10.9.0.bin";
static void
qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
@@ -44,6 +44,7 @@ qed_probe(struct ecore_dev *edev, struct rte_pci_device *pci_dev,
enum qed_protocol protocol, uint32_t dp_module,
uint8_t dp_level, bool is_vf)
{
+ struct ecore_hw_prepare_params hw_prepare_params;
struct qede_dev *qdev = (struct qede_dev *)edev;
int rc;
@@ -51,11 +52,16 @@ qed_probe(struct ecore_dev *edev, struct rte_pci_device *pci_dev,
qdev->protocol = protocol;
if (is_vf) {
edev->b_is_vf = true;
- edev->sriov_info.b_hw_channel = true;
+ edev->b_hw_channel = true; /* @DPDK */
}
ecore_init_dp(edev, dp_module, dp_level, NULL);
qed_init_pci(edev, pci_dev);
- rc = ecore_hw_prepare(edev, ECORE_PCI_DEFAULT);
+
+ memset(&hw_prepare_params, 0, sizeof(hw_prepare_params));
+ hw_prepare_params.personality = ECORE_PCI_ETH;
+ hw_prepare_params.drv_resc_alloc = false;
+ hw_prepare_params.chk_reg_fifo = false;
+ rc = ecore_hw_prepare(edev, &hw_prepare_params);
if (rc) {
DP_ERR(edev, "hw prepare failed\n");
return rc;
@@ -256,8 +262,9 @@ static int qed_slowpath_start(struct ecore_dev *edev,
/* Start the slowpath */
#ifdef CONFIG_ECORE_BINARY_FW
if (IS_PF(edev))
- data = edev->firmware;
+ data = (const uint8_t *)edev->firmware + sizeof(u32);
#endif
+
allow_npar_tx_switching = npar_tx_switching ? true : false;
#ifdef QED_ENC_SUPPORTED
@@ -346,7 +353,7 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
if (IS_PF(edev)) {
ptt = ecore_ptt_acquire(ECORE_LEADING_HWFN(edev));
if (ptt) {
- ecore_mcp_get_mfw_ver(edev, ptt,
+ ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
&dev_info->mfw_rev, NULL);
ecore_mcp_get_flash_size(ECORE_LEADING_HWFN(edev), ptt,
@@ -361,7 +368,8 @@ qed_fill_dev_info(struct ecore_dev *edev, struct qed_dev_info *dev_info)
ecore_ptt_release(ECORE_LEADING_HWFN(edev), ptt);
}
} else {
- ecore_mcp_get_mfw_ver(edev, ptt, &dev_info->mfw_rev, NULL);
+ ecore_mcp_get_mfw_ver(ECORE_LEADING_HWFN(edev), ptt,
+ &dev_info->mfw_rev, NULL);
}
return 0;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 34eaf8b..90da603 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -44,6 +44,10 @@
(bd)->addr.hi = rte_cpu_to_le_32(U64_HI(maddr)); \
(bd)->addr.lo = rte_cpu_to_le_32(U64_LO(maddr)); \
(bd)->nbytes = rte_cpu_to_le_16(len); \
+ /* FW 8.10.x specific change */ \
+ (bd)->data.bitfields = ((len) & \
+ ETH_TX_DATA_1ST_BD_PKT_LEN_MASK) \
+ << ETH_TX_DATA_1ST_BD_PKT_LEN_SHIFT; \
} while (0)
#define CQE_HAS_VLAN(flags) \
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 12/32] net/qede/base: rename structure and defines
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (10 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 11/32] net/qede/base: update base driver Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 13/32] net/qede/base: comment enhancements Rasesh Mody
` (20 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Renamed the following to match the HSI changes:
- PMM_* to ETH_*
- pmm_* to eth_*
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/base/ecore_l2.c | 90 ++++++++++-----------
drivers/net/qede/base/ecore_mcp.c | 12 +--
drivers/net/qede/base/mcp_public.h | 158 +++++++++++++++++++++----------------
3 files changed, 140 insertions(+), 120 deletions(-)
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index b1190e4..5a38ad2 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -1583,51 +1583,51 @@ static void __ecore_get_vport_port_stats(struct ecore_hwfn *p_hwfn,
OFFSETOF(struct public_port, stats),
sizeof(port_stats));
- p_stats->rx_64_byte_packets += port_stats.pmm.r64;
- p_stats->rx_65_to_127_byte_packets += port_stats.pmm.r127;
- p_stats->rx_128_to_255_byte_packets += port_stats.pmm.r255;
- p_stats->rx_256_to_511_byte_packets += port_stats.pmm.r511;
- p_stats->rx_512_to_1023_byte_packets += port_stats.pmm.r1023;
- p_stats->rx_1024_to_1518_byte_packets += port_stats.pmm.r1518;
- p_stats->rx_1519_to_1522_byte_packets += port_stats.pmm.r1522;
- p_stats->rx_1519_to_2047_byte_packets += port_stats.pmm.r2047;
- p_stats->rx_2048_to_4095_byte_packets += port_stats.pmm.r4095;
- p_stats->rx_4096_to_9216_byte_packets += port_stats.pmm.r9216;
- p_stats->rx_9217_to_16383_byte_packets += port_stats.pmm.r16383;
- p_stats->rx_crc_errors += port_stats.pmm.rfcs;
- p_stats->rx_mac_crtl_frames += port_stats.pmm.rxcf;
- p_stats->rx_pause_frames += port_stats.pmm.rxpf;
- p_stats->rx_pfc_frames += port_stats.pmm.rxpp;
- p_stats->rx_align_errors += port_stats.pmm.raln;
- p_stats->rx_carrier_errors += port_stats.pmm.rfcr;
- p_stats->rx_oversize_packets += port_stats.pmm.rovr;
- p_stats->rx_jabbers += port_stats.pmm.rjbr;
- p_stats->rx_undersize_packets += port_stats.pmm.rund;
- p_stats->rx_fragments += port_stats.pmm.rfrg;
- p_stats->tx_64_byte_packets += port_stats.pmm.t64;
- p_stats->tx_65_to_127_byte_packets += port_stats.pmm.t127;
- p_stats->tx_128_to_255_byte_packets += port_stats.pmm.t255;
- p_stats->tx_256_to_511_byte_packets += port_stats.pmm.t511;
- p_stats->tx_512_to_1023_byte_packets += port_stats.pmm.t1023;
- p_stats->tx_1024_to_1518_byte_packets += port_stats.pmm.t1518;
- p_stats->tx_1519_to_2047_byte_packets += port_stats.pmm.t2047;
- p_stats->tx_2048_to_4095_byte_packets += port_stats.pmm.t4095;
- p_stats->tx_4096_to_9216_byte_packets += port_stats.pmm.t9216;
- p_stats->tx_9217_to_16383_byte_packets += port_stats.pmm.t16383;
- p_stats->tx_pause_frames += port_stats.pmm.txpf;
- p_stats->tx_pfc_frames += port_stats.pmm.txpp;
- p_stats->tx_lpi_entry_count += port_stats.pmm.tlpiec;
- p_stats->tx_total_collisions += port_stats.pmm.tncl;
- p_stats->rx_mac_bytes += port_stats.pmm.rbyte;
- p_stats->rx_mac_uc_packets += port_stats.pmm.rxuca;
- p_stats->rx_mac_mc_packets += port_stats.pmm.rxmca;
- p_stats->rx_mac_bc_packets += port_stats.pmm.rxbca;
- p_stats->rx_mac_frames_ok += port_stats.pmm.rxpok;
- p_stats->tx_mac_bytes += port_stats.pmm.tbyte;
- p_stats->tx_mac_uc_packets += port_stats.pmm.txuca;
- p_stats->tx_mac_mc_packets += port_stats.pmm.txmca;
- p_stats->tx_mac_bc_packets += port_stats.pmm.txbca;
- p_stats->tx_mac_ctrl_frames += port_stats.pmm.txcf;
+ p_stats->rx_64_byte_packets += port_stats.eth.r64;
+ p_stats->rx_65_to_127_byte_packets += port_stats.eth.r127;
+ p_stats->rx_128_to_255_byte_packets += port_stats.eth.r255;
+ p_stats->rx_256_to_511_byte_packets += port_stats.eth.r511;
+ p_stats->rx_512_to_1023_byte_packets += port_stats.eth.r1023;
+ p_stats->rx_1024_to_1518_byte_packets += port_stats.eth.r1518;
+ p_stats->rx_1519_to_1522_byte_packets += port_stats.eth.r1522;
+ p_stats->rx_1519_to_2047_byte_packets += port_stats.eth.r2047;
+ p_stats->rx_2048_to_4095_byte_packets += port_stats.eth.r4095;
+ p_stats->rx_4096_to_9216_byte_packets += port_stats.eth.r9216;
+ p_stats->rx_9217_to_16383_byte_packets += port_stats.eth.r16383;
+ p_stats->rx_crc_errors += port_stats.eth.rfcs;
+ p_stats->rx_mac_crtl_frames += port_stats.eth.rxcf;
+ p_stats->rx_pause_frames += port_stats.eth.rxpf;
+ p_stats->rx_pfc_frames += port_stats.eth.rxpp;
+ p_stats->rx_align_errors += port_stats.eth.raln;
+ p_stats->rx_carrier_errors += port_stats.eth.rfcr;
+ p_stats->rx_oversize_packets += port_stats.eth.rovr;
+ p_stats->rx_jabbers += port_stats.eth.rjbr;
+ p_stats->rx_undersize_packets += port_stats.eth.rund;
+ p_stats->rx_fragments += port_stats.eth.rfrg;
+ p_stats->tx_64_byte_packets += port_stats.eth.t64;
+ p_stats->tx_65_to_127_byte_packets += port_stats.eth.t127;
+ p_stats->tx_128_to_255_byte_packets += port_stats.eth.t255;
+ p_stats->tx_256_to_511_byte_packets += port_stats.eth.t511;
+ p_stats->tx_512_to_1023_byte_packets += port_stats.eth.t1023;
+ p_stats->tx_1024_to_1518_byte_packets += port_stats.eth.t1518;
+ p_stats->tx_1519_to_2047_byte_packets += port_stats.eth.t2047;
+ p_stats->tx_2048_to_4095_byte_packets += port_stats.eth.t4095;
+ p_stats->tx_4096_to_9216_byte_packets += port_stats.eth.t9216;
+ p_stats->tx_9217_to_16383_byte_packets += port_stats.eth.t16383;
+ p_stats->tx_pause_frames += port_stats.eth.txpf;
+ p_stats->tx_pfc_frames += port_stats.eth.txpp;
+ p_stats->tx_lpi_entry_count += port_stats.eth.tlpiec;
+ p_stats->tx_total_collisions += port_stats.eth.tncl;
+ p_stats->rx_mac_bytes += port_stats.eth.rbyte;
+ p_stats->rx_mac_uc_packets += port_stats.eth.rxuca;
+ p_stats->rx_mac_mc_packets += port_stats.eth.rxmca;
+ p_stats->rx_mac_bc_packets += port_stats.eth.rxbca;
+ p_stats->rx_mac_frames_ok += port_stats.eth.rxpok;
+ p_stats->tx_mac_bytes += port_stats.eth.tbyte;
+ p_stats->tx_mac_uc_packets += port_stats.eth.txuca;
+ p_stats->tx_mac_mc_packets += port_stats.eth.txmca;
+ p_stats->tx_mac_bc_packets += port_stats.eth.txbca;
+ p_stats->tx_mac_ctrl_frames += port_stats.eth.txcf;
for (j = 0; j < 8; j++) {
p_stats->brb_truncates += port_stats.brb.brb_truncate[j];
p_stats->brb_discards += port_stats.brb.brb_discard[j];
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 5002ee3..cf67fa1 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -659,9 +659,9 @@ static void ecore_mcp_handle_transceiver_change(struct ecore_hwfn *p_hwfn,
OFFSETOF(struct public_port,
transceiver_data)));
- transceiver_state = GET_FIELD(transceiver_state, PMM_TRANSCEIVER_STATE);
+ transceiver_state = GET_FIELD(transceiver_state, ETH_TRANSCEIVER_STATE);
- if (transceiver_state == PMM_TRANSCEIVER_STATE_PRESENT)
+ if (transceiver_state == ETH_TRANSCEIVER_STATE_PRESENT)
DP_NOTICE(p_hwfn, false, "Transceiver is present.\n");
else
DP_NOTICE(p_hwfn, false, "Transceiver is unplugged.\n");
@@ -813,7 +813,7 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
struct ecore_mcp_link_params *params = &p_hwfn->mcp_info->link_input;
struct ecore_mcp_mb_params mb_params;
union drv_union_data union_data;
- struct pmm_phy_cfg *p_phy_cfg;
+ struct eth_phy_cfg *p_phy_cfg;
enum _ecore_status_t rc = ECORE_SUCCESS;
u32 cmd;
@@ -828,9 +828,9 @@ enum _ecore_status_t ecore_mcp_set_link(struct ecore_hwfn *p_hwfn,
cmd = b_up ? DRV_MSG_CODE_INIT_PHY : DRV_MSG_CODE_LINK_RESET;
if (!params->speed.autoneg)
p_phy_cfg->speed = params->speed.forced_speed;
- p_phy_cfg->pause |= (params->pause.autoneg) ? PMM_PAUSE_AUTONEG : 0;
- p_phy_cfg->pause |= (params->pause.forced_rx) ? PMM_PAUSE_RX : 0;
- p_phy_cfg->pause |= (params->pause.forced_tx) ? PMM_PAUSE_TX : 0;
+ p_phy_cfg->pause |= (params->pause.autoneg) ? ETH_PAUSE_AUTONEG : 0;
+ p_phy_cfg->pause |= (params->pause.forced_rx) ? ETH_PAUSE_RX : 0;
+ p_phy_cfg->pause |= (params->pause.forced_tx) ? ETH_PAUSE_TX : 0;
p_phy_cfg->adv_speed = params->speed.advertised_speeds;
p_phy_cfg->loopback_mode = params->loopback_mode;
p_hwfn->b_drv_link_init = b_up;
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 2fc01d0..d430752 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -55,31 +55,38 @@ typedef u32 offsize_t; /* In DWORDS !!! */
#define SECTION_OFFSIZE_ADDR(_pub_base, _section) \
(_pub_base + offsetof(struct mcp_public_data, sections[_section]))
/* PHY configuration */
-struct pmm_phy_cfg {
+struct eth_phy_cfg {
/* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
u32 speed;
-#define PMM_SPEED_AUTONEG 0
-#define PMM_SPEED_SMARTLINQ 0x8
+#define ETH_SPEED_AUTONEG 0
+#define ETH_SPEED_SMARTLINQ 0x8
u32 pause; /* bitmask */
-#define PMM_PAUSE_NONE 0x0
-#define PMM_PAUSE_AUTONEG 0x1
-#define PMM_PAUSE_RX 0x2
-#define PMM_PAUSE_TX 0x4
+#define ETH_PAUSE_NONE 0x0
+#define ETH_PAUSE_AUTONEG 0x1
+#define ETH_PAUSE_RX 0x2
+#define ETH_PAUSE_TX 0x4
u32 adv_speed; /* Default should be the speed_cap_mask */
u32 loopback_mode;
-#define PMM_LOOPBACK_NONE 0
-#define PMM_LOOPBACK_INT_PHY 1
-#define PMM_LOOPBACK_EXT_PHY 2
-#define PMM_LOOPBACK_EXT 3
-#define PMM_LOOPBACK_MAC 4
-#define PMM_LOOPBACK_CNIG_AH_ONLY_0123 5 /* Port to itself */
-#define PMM_LOOPBACK_CNIG_AH_ONLY_2301 6 /* Port to Port */
+#define ETH_LOOPBACK_NONE (0)
+/* Serdes loopback. In AH, it refers to Near End */
+#define ETH_LOOPBACK_INT_PHY (1)
+#define ETH_LOOPBACK_EXT_PHY (2) /* External PHY Loopback */
+/* External Loopback (Require loopback plug) */
+#define ETH_LOOPBACK_EXT (3)
+#define ETH_LOOPBACK_MAC (4) /* MAC Loopback - not supported */
+#define ETH_LOOPBACK_CNIG_AH_ONLY_0123 (5) /* Port to itself */
+#define ETH_LOOPBACK_CNIG_AH_ONLY_2301 (6) /* Port to Port */
+#define ETH_LOOPBACK_PCS_AH_ONLY (7) /* PCS loopback (TX to RX) */
+/* Loop RX packet from PCS to TX */
+#define ETH_LOOPBACK_REVERSE_MAC_AH_ONLY (8)
+/* Remote Serdes Loopback (RX to TX) */
+#define ETH_LOOPBACK_INT_PHY_FEA_AH_ONLY (9)
/* features */
u32 feature_config_flags;
-
+#define ETH_EEE_MODE_ADV_LPI (1 << 0)
};
struct port_mf_cfg {
@@ -94,7 +101,7 @@ struct port_mf_cfg {
/* DO NOT add new fields in the middle
* MUST be synced with struct pmm_stats_map
*/
-struct pmm_stats {
+struct eth_stats {
u64 r64; /* 0x00 (Offset 0x00 ) RX 64-byte frame counter*/
u64 r127; /* 0x01 (Offset 0x08 ) RX 65 to 127 byte frame counter*/
u64 r255; /* 0x02 (Offset 0x10 ) RX 128 to 255 byte frame counter*/
@@ -163,7 +170,7 @@ struct brb_stats {
struct port_stats {
struct brb_stats brb;
- struct pmm_stats pmm;
+ struct eth_stats eth;
};
/*----+------------------------------------------------------------------------
@@ -593,7 +600,7 @@ struct public_port {
u32 link_status1;
u32 ext_phy_fw_version;
-/* Points to struct pmm_phy_cfg (For READ-ONLY) */
+/* Points to struct eth_phy_cfg (For READ-ONLY) */
u32 drv_phy_cfg_addr;
u32 port_stx;
@@ -647,56 +654,69 @@ struct public_port {
u32 fc_npiv_nvram_tbl_addr;
u32 fc_npiv_nvram_tbl_size;
u32 transceiver_data;
-#define PMM_TRANSCEIVER_STATE_MASK 0x000000FF
-#define PMM_TRANSCEIVER_STATE_SHIFT 0x00000000
-#define PMM_TRANSCEIVER_STATE_UNPLUGGED 0x00000000
-#define PMM_TRANSCEIVER_STATE_PRESENT 0x00000001
-#define PMM_TRANSCEIVER_STATE_VALID 0x00000003
-#define PMM_TRANSCEIVER_STATE_UPDATING 0x00000008
-#define PMM_TRANSCEIVER_TYPE_MASK 0x0000FF00
-#define PMM_TRANSCEIVER_TYPE_SHIFT 0x00000008
-#define PMM_TRANSCEIVER_TYPE_NONE 0x00000000
-#define PMM_TRANSCEIVER_TYPE_UNKNOWN 0x000000FF
-#define PMM_TRANSCEIVER_TYPE_1G_PCC 0x01 /* 1G Passive copper cable */
-#define PMM_TRANSCEIVER_TYPE_1G_ACC 0x02 /* 1G Active copper cable */
-#define PMM_TRANSCEIVER_TYPE_1G_LX 0x03
-#define PMM_TRANSCEIVER_TYPE_1G_SX 0x04
-#define PMM_TRANSCEIVER_TYPE_10G_SR 0x05
-#define PMM_TRANSCEIVER_TYPE_10G_LR 0x06
-#define PMM_TRANSCEIVER_TYPE_10G_LRM 0x07
-#define PMM_TRANSCEIVER_TYPE_10G_ER 0x08
-#define PMM_TRANSCEIVER_TYPE_10G_PCC 0x09 /* 10G Passive copper cable */
-#define PMM_TRANSCEIVER_TYPE_10G_ACC 0x0a /* 10G Active copper cable */
-#define PMM_TRANSCEIVER_TYPE_XLPPI 0x0b
-#define PMM_TRANSCEIVER_TYPE_40G_LR4 0x0c
-#define PMM_TRANSCEIVER_TYPE_40G_SR4 0x0d
-#define PMM_TRANSCEIVER_TYPE_40G_CR4 0x0e
-#define PMM_TRANSCEIVER_TYPE_100G_AOC 0x0f /* Active optical cable */
-#define PMM_TRANSCEIVER_TYPE_100G_SR4 0x10
-#define PMM_TRANSCEIVER_TYPE_100G_LR4 0x11
-#define PMM_TRANSCEIVER_TYPE_100G_ER4 0x12
-#define PMM_TRANSCEIVER_TYPE_100G_ACC 0x13 /* Active copper cable */
-#define PMM_TRANSCEIVER_TYPE_100G_CR4 0x14
-#define PMM_TRANSCEIVER_TYPE_4x10G_SR 0x15
-#define PMM_TRANSCEIVER_TYPE_25G_PCC_S 0x16
-#define PMM_TRANSCEIVER_TYPE_25G_ACC_S 0x17
-#define PMM_TRANSCEIVER_TYPE_25G_PCC_M 0x18
-#define PMM_TRANSCEIVER_TYPE_25G_ACC_M 0x19
-#define PMM_TRANSCEIVER_TYPE_25G_PCC_L 0x1a
-#define PMM_TRANSCEIVER_TYPE_25G_ACC_L 0x1b
-#define PMM_TRANSCEIVER_TYPE_25G_SR 0x1c
-#define PMM_TRANSCEIVER_TYPE_25G_LR 0x1d
-#define PMM_TRANSCEIVER_TYPE_25G_AOC 0x1e
-
-#define PMM_TRANSCEIVER_TYPE_4x10G 0x1d
-#define PMM_TRANSCEIVER_TYPE_4x25G_CR 0x1e
-#define PMM_TRANSCEIVER_TYPE_MULTI_RATE_10G_40GR 0x30
-#define PMM_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_CR 0x31
-#define PMM_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_LR 0x32
-#define PMM_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_SR 0x33
-#define PMM_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_CR 0x34
-#define PMM_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_LR 0x35
-#define PMM_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_AOC 0x36
+#define ETH_TRANSCEIVER_STATE_MASK 0x000000FF
+#define ETH_TRANSCEIVER_STATE_SHIFT 0x00000000
+#define ETH_TRANSCEIVER_STATE_UNPLUGGED 0x00000000
+#define ETH_TRANSCEIVER_STATE_PRESENT 0x00000001
+#define ETH_TRANSCEIVER_STATE_VALID 0x00000003
+#define ETH_TRANSCEIVER_STATE_UPDATING 0x00000008
+#define ETH_TRANSCEIVER_TYPE_MASK 0x0000FF00
+#define ETH_TRANSCEIVER_TYPE_SHIFT 0x00000008
+#define ETH_TRANSCEIVER_TYPE_NONE 0x00000000
+#define ETH_TRANSCEIVER_TYPE_UNKNOWN 0x000000FF
+/* 1G Passive copper cable */
+#define ETH_TRANSCEIVER_TYPE_1G_PCC 0x01
+/* 1G Active copper cable */
+#define ETH_TRANSCEIVER_TYPE_1G_ACC 0x02
+#define ETH_TRANSCEIVER_TYPE_1G_LX 0x03
+#define ETH_TRANSCEIVER_TYPE_1G_SX 0x04
+#define ETH_TRANSCEIVER_TYPE_10G_SR 0x05
+#define ETH_TRANSCEIVER_TYPE_10G_LR 0x06
+#define ETH_TRANSCEIVER_TYPE_10G_LRM 0x07
+#define ETH_TRANSCEIVER_TYPE_10G_ER 0x08
+/* 10G Passive copper cable */
+#define ETH_TRANSCEIVER_TYPE_10G_PCC 0x09
+/* 10G Active copper cable */
+#define ETH_TRANSCEIVER_TYPE_10G_ACC 0x0a
+#define ETH_TRANSCEIVER_TYPE_XLPPI 0x0b
+#define ETH_TRANSCEIVER_TYPE_40G_LR4 0x0c
+#define ETH_TRANSCEIVER_TYPE_40G_SR4 0x0d
+#define ETH_TRANSCEIVER_TYPE_40G_CR4 0x0e
+#define ETH_TRANSCEIVER_TYPE_100G_AOC 0x0f /* Active optical cable */
+#define ETH_TRANSCEIVER_TYPE_100G_SR4 0x10
+#define ETH_TRANSCEIVER_TYPE_100G_LR4 0x11
+#define ETH_TRANSCEIVER_TYPE_100G_ER4 0x12
+#define ETH_TRANSCEIVER_TYPE_100G_ACC 0x13 /* Active copper cable */
+#define ETH_TRANSCEIVER_TYPE_100G_CR4 0x14
+#define ETH_TRANSCEIVER_TYPE_4x10G_SR 0x15
+/* 25G Passive copper cable - short */
+#define ETH_TRANSCEIVER_TYPE_25G_CA_N 0x16
+/* 25G Active copper cable - short */
+#define ETH_TRANSCEIVER_TYPE_25G_ACC_S 0x17
+/* 25G Passive copper cable - medium */
+#define ETH_TRANSCEIVER_TYPE_25G_CA_S 0x18
+/* 25G Active copper cable - medium */
+#define ETH_TRANSCEIVER_TYPE_25G_ACC_M 0x19
+/* 25G Passive copper cable - long */
+#define ETH_TRANSCEIVER_TYPE_25G_CA_L 0x1a
+/* 25G Active copper cable - long */
+#define ETH_TRANSCEIVER_TYPE_25G_ACC_L 0x1b
+#define ETH_TRANSCEIVER_TYPE_25G_SR 0x1c
+#define ETH_TRANSCEIVER_TYPE_25G_LR 0x1d
+#define ETH_TRANSCEIVER_TYPE_25G_AOC 0x1e
+
+#define ETH_TRANSCEIVER_TYPE_4x10G 0x1f
+#define ETH_TRANSCEIVER_TYPE_4x25G_CR 0x20
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_SR 0x30
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_CR 0x31
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_10G_40G_LR 0x32
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_SR 0x33
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_CR 0x34
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_LR 0x35
+#define ETH_TRANSCEIVER_TYPE_MULTI_RATE_40G_100G_AOC 0x36
+ u32 wol_info;
+ u32 wol_pkt_len;
+ u32 wol_pkt_details;
struct dcb_dscp_map dcb_dscp_map;
};
@@ -959,7 +979,7 @@ union drv_union_data {
/* This configuration should be set by the driver for the LINK_SET command. */
- struct pmm_phy_cfg drv_phy_cfg;
+ struct eth_phy_cfg drv_phy_cfg;
struct mcp_val64 val64; /* For PHY / AVS commands */
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 13/32] net/qede/base: comment enhancements
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (11 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 12/32] net/qede/base: rename structure and defines Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 14/32] net/qede/base: add MFW crash dump support Rasesh Mody
` (19 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Comment additions and modifications
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/base/ecore_dev_api.h | 4 +-
drivers/net/qede/base/ecore_gtt_reg_addr.h | 10 +
drivers/net/qede/base/ecore_hsi_common.h | 863 +++++++++++++++++++---------
drivers/net/qede/base/ecore_hsi_eth.h | 877 ++++++++++++++++++++++-------
drivers/net/qede/base/ecore_hw_defs.h | 33 +-
drivers/net/qede/base/ecore_init_ops.h | 6 +-
drivers/net/qede/base/ecore_mcp.h | 31 +-
drivers/net/qede/base/ecore_sp_api.h | 5 +-
drivers/net/qede/base/eth_common.h | 154 +++--
drivers/net/qede/base/mcp_public.h | 99 +++-
10 files changed, 1519 insertions(+), 563 deletions(-)
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index e6924bd..1a810b5 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -98,8 +98,8 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev);
/**
* @brief ecore_hw_stop_fastpath - should be called in case
- * slowpath is still required for the device, but
- * fastpath is not.
+ * slowpath is still required for the device,
+ * but fastpath is not.
*
* @param p_dev
*
diff --git a/drivers/net/qede/base/ecore_gtt_reg_addr.h b/drivers/net/qede/base/ecore_gtt_reg_addr.h
index 0eba1aa..6395b7c 100644
--- a/drivers/net/qede/base/ecore_gtt_reg_addr.h
+++ b/drivers/net/qede/base/ecore_gtt_reg_addr.h
@@ -10,33 +10,43 @@
#define GTT_REG_ADDR_H
/* Win 2 */
+/* Access:RW DataWidth:0x20 Chips: BB_A0 BB_B0 K2 */
#define GTT_BAR0_MAP_REG_IGU_CMD 0x00f000UL
/* Win 3 */
+/* Access:RW DataWidth:0x20 Chips: BB_A0 BB_B0 K2 */
#define GTT_BAR0_MAP_REG_TSDM_RAM 0x010000UL
/* Win 4 */
+/* Access:RW DataWidth:0x20 Chips: BB_A0 BB_B0 K2 */
#define GTT_BAR0_MAP_REG_MSDM_RAM 0x011000UL
/* Win 5 */
+/* Access:RW DataWidth:0x20 Chips: BB_A0 BB_B0 K2 */
#define GTT_BAR0_MAP_REG_MSDM_RAM_1024 0x012000UL
/* Win 6 */
+/* Access:RW DataWidth:0x20 Chips: BB_A0 BB_B0 K2 */
#define GTT_BAR0_MAP_REG_USDM_RAM 0x013000UL
/* Win 7 */
+/* Access:RW DataWidth:0x20 Chips: BB_A0 BB_B0 K2 */
#define GTT_BAR0_MAP_REG_USDM_RAM_1024 0x014000UL
/* Win 8 */
+/* Access:RW DataWidth:0x20 Chips: BB_A0 BB_B0 K2 */
#define GTT_BAR0_MAP_REG_USDM_RAM_2048 0x015000UL
/* Win 9 */
+/* Access:RW DataWidth:0x20 Chips: BB_A0 BB_B0 K2 */
#define GTT_BAR0_MAP_REG_XSDM_RAM 0x016000UL
/* Win 10 */
+/* Access:RW DataWidth:0x20 Chips: BB_A0 BB_B0 K2 */
#define GTT_BAR0_MAP_REG_YSDM_RAM 0x017000UL
/* Win 11 */
+/* Access:RW DataWidth:0x20 Chips: BB_A0 BB_B0 K2 */
#define GTT_BAR0_MAP_REG_PSDM_RAM 0x018000UL
#endif
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 3c4d7c0..179d410 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -13,6 +13,7 @@
/********************************/
#include "common_hsi.h"
+
/*
* opcodes for the event ring
*/
@@ -30,6 +31,7 @@ enum common_event_opcode {
MAX_COMMON_EVENT_OPCODE
};
+
/*
* Common Ramrod Command IDs
*/
@@ -45,6 +47,7 @@ enum common_ramrod_cmd_id {
MAX_COMMON_RAMROD_CMD_ID
};
+
/*
* The core storm context for the Ystorm
*/
@@ -65,8 +68,8 @@ struct pstorm_core_conn_st_ctx {
struct xstorm_core_conn_st_ctx {
__le32 spq_base_lo /* SPQ Ring Base Address low dword */;
__le32 spq_base_hi /* SPQ Ring Base Address high dword */;
- struct regpair consolid_base_addr /* Consolidation Ring Base Address */
- ;
+/* Consolidation Ring Base Address */
+ struct regpair consolid_base_addr;
__le16 spq_cons /* SPQ Ring Consumer */;
__le16 consolid_cons /* Consolidation Ring Consumer */;
__le32 reserved0[55] /* Pad to 15 cycles */;
@@ -76,210 +79,300 @@ struct xstorm_core_conn_ag_ctx {
u8 reserved0 /* cdu_validation */;
u8 core_state /* state */;
u8 flags0;
+/* exist_in_qm0 */
#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+/* exist_in_qm1 */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED1_SHIFT 1
+/* exist_in_qm2 */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED2_SHIFT 2
+/* exist_in_qm3 */
#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_EXIST_IN_QM3_SHIFT 3
+/* bit4 */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED3_SHIFT 4
+/* cf_array_active */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED4_SHIFT 5
+/* bit6 */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED5_SHIFT 6
+/* bit7 */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED6_SHIFT 7
u8 flags1;
+/* bit8 */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED7_SHIFT 0
+/* bit9 */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED8_SHIFT 1
+/* bit10 */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED9_SHIFT 2
+/* bit11 */
#define XSTORM_CORE_CONN_AG_CTX_BIT11_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_BIT11_SHIFT 3
+/* bit12 */
#define XSTORM_CORE_CONN_AG_CTX_BIT12_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_BIT12_SHIFT 4
+/* bit13 */
#define XSTORM_CORE_CONN_AG_CTX_BIT13_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_BIT13_SHIFT 5
+/* bit14 */
#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT 6
+/* bit15 */
#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT 7
u8 flags2;
+/* timer0cf */
#define XSTORM_CORE_CONN_AG_CTX_CF0_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF0_SHIFT 0
+/* timer1cf */
#define XSTORM_CORE_CONN_AG_CTX_CF1_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF1_SHIFT 2
+/* timer2cf */
#define XSTORM_CORE_CONN_AG_CTX_CF2_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF2_SHIFT 4
+/* timer_stop_all */
#define XSTORM_CORE_CONN_AG_CTX_CF3_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF3_SHIFT 6
u8 flags3;
-#define XSTORM_CORE_CONN_AG_CTX_CF4_MASK 0x3
+#define XSTORM_CORE_CONN_AG_CTX_CF4_MASK 0x3 /* cf4 */
#define XSTORM_CORE_CONN_AG_CTX_CF4_SHIFT 0
-#define XSTORM_CORE_CONN_AG_CTX_CF5_MASK 0x3
+#define XSTORM_CORE_CONN_AG_CTX_CF5_MASK 0x3 /* cf5 */
#define XSTORM_CORE_CONN_AG_CTX_CF5_SHIFT 2
-#define XSTORM_CORE_CONN_AG_CTX_CF6_MASK 0x3
+#define XSTORM_CORE_CONN_AG_CTX_CF6_MASK 0x3 /* cf6 */
#define XSTORM_CORE_CONN_AG_CTX_CF6_SHIFT 4
-#define XSTORM_CORE_CONN_AG_CTX_CF7_MASK 0x3
+#define XSTORM_CORE_CONN_AG_CTX_CF7_MASK 0x3 /* cf7 */
#define XSTORM_CORE_CONN_AG_CTX_CF7_SHIFT 6
u8 flags4;
-#define XSTORM_CORE_CONN_AG_CTX_CF8_MASK 0x3
+#define XSTORM_CORE_CONN_AG_CTX_CF8_MASK 0x3 /* cf8 */
#define XSTORM_CORE_CONN_AG_CTX_CF8_SHIFT 0
-#define XSTORM_CORE_CONN_AG_CTX_CF9_MASK 0x3
+#define XSTORM_CORE_CONN_AG_CTX_CF9_MASK 0x3 /* cf9 */
#define XSTORM_CORE_CONN_AG_CTX_CF9_SHIFT 2
+/* cf10 */
#define XSTORM_CORE_CONN_AG_CTX_CF10_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF10_SHIFT 4
+/* cf11 */
#define XSTORM_CORE_CONN_AG_CTX_CF11_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF11_SHIFT 6
u8 flags5;
+/* cf12 */
#define XSTORM_CORE_CONN_AG_CTX_CF12_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF12_SHIFT 0
+/* cf13 */
#define XSTORM_CORE_CONN_AG_CTX_CF13_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF13_SHIFT 2
+/* cf14 */
#define XSTORM_CORE_CONN_AG_CTX_CF14_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF14_SHIFT 4
+/* cf15 */
#define XSTORM_CORE_CONN_AG_CTX_CF15_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF15_SHIFT 6
u8 flags6;
+/* cf16 */
#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_SHIFT 0
+/* cf_array_cf */
#define XSTORM_CORE_CONN_AG_CTX_CF17_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF17_SHIFT 2
+/* cf18 */
#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_SHIFT 4
+/* cf19 */
#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_SHIFT 6
u8 flags7;
+/* cf20 */
#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_SHIFT 0
+/* cf21 */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_RESERVED10_SHIFT 2
+/* cf22 */
#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_SHIFT 4
+/* cf0en */
#define XSTORM_CORE_CONN_AG_CTX_CF0EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT 6
+/* cf1en */
#define XSTORM_CORE_CONN_AG_CTX_CF1EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT 7
u8 flags8;
+/* cf2en */
#define XSTORM_CORE_CONN_AG_CTX_CF2EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT 0
+/* cf3en */
#define XSTORM_CORE_CONN_AG_CTX_CF3EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT 1
+/* cf4en */
#define XSTORM_CORE_CONN_AG_CTX_CF4EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT 2
+/* cf5en */
#define XSTORM_CORE_CONN_AG_CTX_CF5EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT 3
+/* cf6en */
#define XSTORM_CORE_CONN_AG_CTX_CF6EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT 4
+/* cf7en */
#define XSTORM_CORE_CONN_AG_CTX_CF7EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT 5
+/* cf8en */
#define XSTORM_CORE_CONN_AG_CTX_CF8EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT 6
+/* cf9en */
#define XSTORM_CORE_CONN_AG_CTX_CF9EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT 7
u8 flags9;
+/* cf10en */
#define XSTORM_CORE_CONN_AG_CTX_CF10EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT 0
+/* cf11en */
#define XSTORM_CORE_CONN_AG_CTX_CF11EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF11EN_SHIFT 1
+/* cf12en */
#define XSTORM_CORE_CONN_AG_CTX_CF12EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF12EN_SHIFT 2
+/* cf13en */
#define XSTORM_CORE_CONN_AG_CTX_CF13EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF13EN_SHIFT 3
+/* cf14en */
#define XSTORM_CORE_CONN_AG_CTX_CF14EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF14EN_SHIFT 4
+/* cf15en */
#define XSTORM_CORE_CONN_AG_CTX_CF15EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF15EN_SHIFT 5
+/* cf16en */
#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CONSOLID_PROD_CF_EN_SHIFT 6
+/* cf_array_cf_en */
#define XSTORM_CORE_CONN_AG_CTX_CF17EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF17EN_SHIFT 7
u8 flags10;
+/* cf18en */
#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_DQ_CF_EN_SHIFT 0
+/* cf19en */
#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT 1
+/* cf20en */
#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT 2
+/* cf21en */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED11_SHIFT 3
+/* cf22en */
#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_SLOW_PATH_EN_SHIFT 4
+/* cf23en */
#define XSTORM_CORE_CONN_AG_CTX_CF23EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_CF23EN_SHIFT 5
+/* rule0en */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED12_SHIFT 6
+/* rule1en */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED13_SHIFT 7
u8 flags11;
+/* rule2en */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED14_SHIFT 0
+/* rule3en */
#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RESERVED15_SHIFT 1
+/* rule4en */
#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT 2
+/* rule5en */
#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 3
+/* rule6en */
#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 4
+/* rule7en */
#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 5
+/* rule8en */
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED1_SHIFT 6
+/* rule9en */
#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE9EN_SHIFT 7
u8 flags12;
+/* rule10en */
#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE10EN_SHIFT 0
+/* rule11en */
#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE11EN_SHIFT 1
+/* rule12en */
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED2_SHIFT 2
+/* rule13en */
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED3_SHIFT 3
+/* rule14en */
#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE14EN_SHIFT 4
+/* rule15en */
#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE15EN_SHIFT 5
+/* rule16en */
#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE16EN_SHIFT 6
+/* rule17en */
#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE17EN_SHIFT 7
u8 flags13;
+/* rule18en */
#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE18EN_SHIFT 0
+/* rule19en */
#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_RULE19EN_SHIFT 1
+/* rule20en */
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED4_SHIFT 2
+/* rule21en */
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED5_SHIFT 3
+/* rule22en */
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED6_SHIFT 4
+/* rule23en */
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED7_SHIFT 5
+/* rule24en */
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED8_SHIFT 6
+/* rule25en */
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_A0_RESERVED9_SHIFT 7
u8 flags14;
+/* bit16 */
#define XSTORM_CORE_CONN_AG_CTX_BIT16_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_BIT16_SHIFT 0
+/* bit17 */
#define XSTORM_CORE_CONN_AG_CTX_BIT17_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_BIT17_SHIFT 1
+/* bit18 */
#define XSTORM_CORE_CONN_AG_CTX_BIT18_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_BIT18_SHIFT 2
+/* bit19 */
#define XSTORM_CORE_CONN_AG_CTX_BIT19_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_BIT19_SHIFT 3
+/* bit20 */
#define XSTORM_CORE_CONN_AG_CTX_BIT20_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_BIT20_SHIFT 4
+/* bit21 */
#define XSTORM_CORE_CONN_AG_CTX_BIT21_MASK 0x1
#define XSTORM_CORE_CONN_AG_CTX_BIT21_SHIFT 5
+/* cf23 */
#define XSTORM_CORE_CONN_AG_CTX_CF23_MASK 0x3
#define XSTORM_CORE_CONN_AG_CTX_CF23_SHIFT 6
u8 byte2 /* byte2 */;
@@ -339,84 +432,84 @@ struct tstorm_core_conn_ag_ctx {
u8 byte0 /* cdu_validation */;
u8 byte1 /* state */;
u8 flags0;
-#define TSTORM_CORE_CONN_AG_CTX_BIT0_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_BIT0_MASK 0x1 /* exist_in_qm0 */
#define TSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_BIT1_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_BIT1_MASK 0x1 /* exist_in_qm1 */
#define TSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT 1
-#define TSTORM_CORE_CONN_AG_CTX_BIT2_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_BIT2_MASK 0x1 /* bit2 */
#define TSTORM_CORE_CONN_AG_CTX_BIT2_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_BIT3_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_BIT3_MASK 0x1 /* bit3 */
#define TSTORM_CORE_CONN_AG_CTX_BIT3_SHIFT 3
-#define TSTORM_CORE_CONN_AG_CTX_BIT4_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_BIT4_MASK 0x1 /* bit4 */
#define TSTORM_CORE_CONN_AG_CTX_BIT4_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_BIT5_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_BIT5_MASK 0x1 /* bit5 */
#define TSTORM_CORE_CONN_AG_CTX_BIT5_SHIFT 5
-#define TSTORM_CORE_CONN_AG_CTX_CF0_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF0_MASK 0x3 /* timer0cf */
#define TSTORM_CORE_CONN_AG_CTX_CF0_SHIFT 6
u8 flags1;
-#define TSTORM_CORE_CONN_AG_CTX_CF1_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF1_MASK 0x3 /* timer1cf */
#define TSTORM_CORE_CONN_AG_CTX_CF1_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_CF2_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF2_MASK 0x3 /* timer2cf */
#define TSTORM_CORE_CONN_AG_CTX_CF2_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_CF3_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF3_MASK 0x3 /* timer_stop_all */
#define TSTORM_CORE_CONN_AG_CTX_CF3_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_CF4_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF4_MASK 0x3 /* cf4 */
#define TSTORM_CORE_CONN_AG_CTX_CF4_SHIFT 6
u8 flags2;
-#define TSTORM_CORE_CONN_AG_CTX_CF5_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF5_MASK 0x3 /* cf5 */
#define TSTORM_CORE_CONN_AG_CTX_CF5_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_CF6_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF6_MASK 0x3 /* cf6 */
#define TSTORM_CORE_CONN_AG_CTX_CF6_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_CF7_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF7_MASK 0x3 /* cf7 */
#define TSTORM_CORE_CONN_AG_CTX_CF7_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_CF8_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF8_MASK 0x3 /* cf8 */
#define TSTORM_CORE_CONN_AG_CTX_CF8_SHIFT 6
u8 flags3;
-#define TSTORM_CORE_CONN_AG_CTX_CF9_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF9_MASK 0x3 /* cf9 */
#define TSTORM_CORE_CONN_AG_CTX_CF9_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_CF10_MASK 0x3
+#define TSTORM_CORE_CONN_AG_CTX_CF10_MASK 0x3 /* cf10 */
#define TSTORM_CORE_CONN_AG_CTX_CF10_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF0EN_MASK 0x1 /* cf0en */
#define TSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF1EN_MASK 0x1 /* cf1en */
#define TSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT 5
-#define TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF2EN_MASK 0x1 /* cf2en */
#define TSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT 6
-#define TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF3EN_MASK 0x1 /* cf3en */
#define TSTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT 7
u8 flags4;
-#define TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF4EN_MASK 0x1 /* cf4en */
#define TSTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF5EN_MASK 0x1 /* cf5en */
#define TSTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT 1
-#define TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF6EN_MASK 0x1 /* cf6en */
#define TSTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF7EN_MASK 0x1 /* cf7en */
#define TSTORM_CORE_CONN_AG_CTX_CF7EN_SHIFT 3
-#define TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF8EN_MASK 0x1 /* cf8en */
#define TSTORM_CORE_CONN_AG_CTX_CF8EN_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF9EN_MASK 0x1 /* cf9en */
#define TSTORM_CORE_CONN_AG_CTX_CF9EN_SHIFT 5
-#define TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_CF10EN_MASK 0x1 /* cf10en */
#define TSTORM_CORE_CONN_AG_CTX_CF10EN_SHIFT 6
-#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK 0x1 /* rule0en */
#define TSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
u8 flags5;
-#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK 0x1 /* rule1en */
#define TSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK 0x1 /* rule2en */
#define TSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK 0x1 /* rule3en */
#define TSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK 0x1 /* rule4en */
#define TSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_MASK 0x1 /* rule5en */
#define TSTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_MASK 0x1 /* rule6en */
#define TSTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_MASK 0x1 /* rule7en */
#define TSTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK 0x1
+#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_MASK 0x1 /* rule8en */
#define TSTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
__le32 reg0 /* reg0 */;
__le32 reg1 /* reg1 */;
@@ -443,58 +536,58 @@ struct ustorm_core_conn_ag_ctx {
u8 reserved /* cdu_validation */;
u8 byte1 /* state */;
u8 flags0;
-#define USTORM_CORE_CONN_AG_CTX_BIT0_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_BIT0_MASK 0x1 /* exist_in_qm0 */
#define USTORM_CORE_CONN_AG_CTX_BIT0_SHIFT 0
-#define USTORM_CORE_CONN_AG_CTX_BIT1_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_BIT1_MASK 0x1 /* exist_in_qm1 */
#define USTORM_CORE_CONN_AG_CTX_BIT1_SHIFT 1
-#define USTORM_CORE_CONN_AG_CTX_CF0_MASK 0x3
+#define USTORM_CORE_CONN_AG_CTX_CF0_MASK 0x3 /* timer0cf */
#define USTORM_CORE_CONN_AG_CTX_CF0_SHIFT 2
-#define USTORM_CORE_CONN_AG_CTX_CF1_MASK 0x3
+#define USTORM_CORE_CONN_AG_CTX_CF1_MASK 0x3 /* timer1cf */
#define USTORM_CORE_CONN_AG_CTX_CF1_SHIFT 4
-#define USTORM_CORE_CONN_AG_CTX_CF2_MASK 0x3
+#define USTORM_CORE_CONN_AG_CTX_CF2_MASK 0x3 /* timer2cf */
#define USTORM_CORE_CONN_AG_CTX_CF2_SHIFT 6
u8 flags1;
-#define USTORM_CORE_CONN_AG_CTX_CF3_MASK 0x3
+#define USTORM_CORE_CONN_AG_CTX_CF3_MASK 0x3 /* timer_stop_all */
#define USTORM_CORE_CONN_AG_CTX_CF3_SHIFT 0
-#define USTORM_CORE_CONN_AG_CTX_CF4_MASK 0x3
+#define USTORM_CORE_CONN_AG_CTX_CF4_MASK 0x3 /* cf4 */
#define USTORM_CORE_CONN_AG_CTX_CF4_SHIFT 2
-#define USTORM_CORE_CONN_AG_CTX_CF5_MASK 0x3
+#define USTORM_CORE_CONN_AG_CTX_CF5_MASK 0x3 /* cf5 */
#define USTORM_CORE_CONN_AG_CTX_CF5_SHIFT 4
-#define USTORM_CORE_CONN_AG_CTX_CF6_MASK 0x3
+#define USTORM_CORE_CONN_AG_CTX_CF6_MASK 0x3 /* cf6 */
#define USTORM_CORE_CONN_AG_CTX_CF6_SHIFT 6
u8 flags2;
-#define USTORM_CORE_CONN_AG_CTX_CF0EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_CF0EN_MASK 0x1 /* cf0en */
#define USTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT 0
-#define USTORM_CORE_CONN_AG_CTX_CF1EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_CF1EN_MASK 0x1 /* cf1en */
#define USTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT 1
-#define USTORM_CORE_CONN_AG_CTX_CF2EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_CF2EN_MASK 0x1 /* cf2en */
#define USTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT 2
-#define USTORM_CORE_CONN_AG_CTX_CF3EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_CF3EN_MASK 0x1 /* cf3en */
#define USTORM_CORE_CONN_AG_CTX_CF3EN_SHIFT 3
-#define USTORM_CORE_CONN_AG_CTX_CF4EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_CF4EN_MASK 0x1 /* cf4en */
#define USTORM_CORE_CONN_AG_CTX_CF4EN_SHIFT 4
-#define USTORM_CORE_CONN_AG_CTX_CF5EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_CF5EN_MASK 0x1 /* cf5en */
#define USTORM_CORE_CONN_AG_CTX_CF5EN_SHIFT 5
-#define USTORM_CORE_CONN_AG_CTX_CF6EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_CF6EN_MASK 0x1 /* cf6en */
#define USTORM_CORE_CONN_AG_CTX_CF6EN_SHIFT 6
-#define USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_RULE0EN_MASK 0x1 /* rule0en */
#define USTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 7
u8 flags3;
-#define USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_RULE1EN_MASK 0x1 /* rule1en */
#define USTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_RULE2EN_MASK 0x1 /* rule2en */
#define USTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_RULE3EN_MASK 0x1 /* rule3en */
#define USTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_RULE4EN_MASK 0x1 /* rule4en */
#define USTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_RULE5EN_MASK 0x1 /* rule5en */
#define USTORM_CORE_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_RULE6EN_MASK 0x1 /* rule6en */
#define USTORM_CORE_CONN_AG_CTX_RULE6EN_SHIFT 5
-#define USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_RULE7EN_MASK 0x1 /* rule7en */
#define USTORM_CORE_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK 0x1
+#define USTORM_CORE_CONN_AG_CTX_RULE8EN_MASK 0x1 /* rule8en */
#define USTORM_CORE_CONN_AG_CTX_RULE8EN_SHIFT 7
u8 byte2 /* byte2 */;
u8 byte3 /* byte3 */;
@@ -526,27 +619,28 @@ struct ustorm_core_conn_st_ctx {
* core connection context
*/
struct core_conn_context {
- struct ystorm_core_conn_st_ctx ystorm_st_context
- /* ystorm storm context */;
+/* ystorm storm context */
+ struct ystorm_core_conn_st_ctx ystorm_st_context;
struct regpair ystorm_st_padding[2] /* padding */;
- struct pstorm_core_conn_st_ctx pstorm_st_context
- /* pstorm storm context */;
+/* pstorm storm context */
+ struct pstorm_core_conn_st_ctx pstorm_st_context;
struct regpair pstorm_st_padding[2] /* padding */;
- struct xstorm_core_conn_st_ctx xstorm_st_context
- /* xstorm storm context */;
- struct xstorm_core_conn_ag_ctx xstorm_ag_context
- /* xstorm aggregative context */;
- struct tstorm_core_conn_ag_ctx tstorm_ag_context
- /* tstorm aggregative context */;
- struct ustorm_core_conn_ag_ctx ustorm_ag_context
- /* ustorm aggregative context */;
- struct mstorm_core_conn_st_ctx mstorm_st_context
- /* mstorm storm context */;
- struct ustorm_core_conn_st_ctx ustorm_st_context
- /* ustorm storm context */;
+/* xstorm storm context */
+ struct xstorm_core_conn_st_ctx xstorm_st_context;
+/* xstorm aggregative context */
+ struct xstorm_core_conn_ag_ctx xstorm_ag_context;
+/* tstorm aggregative context */
+ struct tstorm_core_conn_ag_ctx tstorm_ag_context;
+/* ustorm aggregative context */
+ struct ustorm_core_conn_ag_ctx ustorm_ag_context;
+/* mstorm storm context */
+ struct mstorm_core_conn_st_ctx mstorm_st_context;
+/* ustorm storm context */
+ struct ustorm_core_conn_st_ctx ustorm_st_context;
struct regpair ustorm_st_padding[2] /* padding */;
};
+
/*
* How ll2 should deal with packet upon errors
*/
@@ -557,6 +651,7 @@ enum core_error_handle {
MAX_CORE_ERROR_HANDLE
};
+
/*
* opcodes for the event ring
*/
@@ -568,17 +663,19 @@ enum core_event_opcode {
MAX_CORE_EVENT_OPCODE
};
+
/*
* The L4 pseudo checksum mode for Core
*/
enum core_l4_pseudo_checksum_mode {
- CORE_L4_PSEUDO_CSUM_CORRECT_LENGTH
- ,
- CORE_L4_PSEUDO_CSUM_ZERO_LENGTH
- /* Pseudo Checksum on packet is calculated with zero length. */,
+/* Pseudo Checksum on packet is calculated with the correct packet length. */
+ CORE_L4_PSEUDO_CSUM_CORRECT_LENGTH,
+/* Pseudo Checksum on packet is calculated with zero length. */
+ CORE_L4_PSEUDO_CSUM_ZERO_LENGTH,
MAX_CORE_L4_PSEUDO_CHECKSUM_MODE
};
+
/*
* Light-L2 RX Producers in Tstorm RAM
*/
@@ -589,24 +686,26 @@ struct core_ll2_port_stats {
struct regpair gsi_crcchksm_error;
};
+
/*
* Ethernet TX Per Queue Stats
*/
struct core_ll2_pstorm_per_queue_stat {
- struct regpair sent_ucast_bytes
- /* number of total bytes sent without errors */;
- struct regpair sent_mcast_bytes
- /* number of total bytes sent without errors */;
- struct regpair sent_bcast_bytes
- /* number of total bytes sent without errors */;
- struct regpair sent_ucast_pkts
- /* number of total packets sent without errors */;
- struct regpair sent_mcast_pkts
- /* number of total packets sent without errors */;
- struct regpair sent_bcast_pkts
- /* number of total packets sent without errors */;
+/* number of total bytes sent without errors */
+ struct regpair sent_ucast_bytes;
+/* number of total bytes sent without errors */
+ struct regpair sent_mcast_bytes;
+/* number of total bytes sent without errors */
+ struct regpair sent_bcast_bytes;
+/* number of total packets sent without errors */
+ struct regpair sent_ucast_pkts;
+/* number of total packets sent without errors */
+ struct regpair sent_mcast_pkts;
+/* number of total packets sent without errors */
+ struct regpair sent_bcast_pkts;
};
+
/*
* Light-L2 RX Producers in Tstorm RAM
*/
@@ -616,13 +715,15 @@ struct core_ll2_rx_prod {
__le32 reserved;
};
+
struct core_ll2_tstorm_per_queue_stat {
- struct regpair packet_too_big_discard
- /* Number of packets discarded because they are bigger than MTU */;
- struct regpair no_buff_discard
- /* Number of packets discarded due to lack of host buffers */;
+/* Number of packets discarded because they are bigger than MTU */
+ struct regpair packet_too_big_discard;
+/* Number of packets discarded due to lack of host buffers */
+ struct regpair no_buff_discard;
};
+
struct core_ll2_ustorm_per_queue_stat {
struct regpair rcv_ucast_bytes;
struct regpair rcv_mcast_bytes;
@@ -632,6 +733,7 @@ struct core_ll2_ustorm_per_queue_stat {
struct regpair rcv_bcast_pkts;
};
+
/*
* Core Ramrod Command IDs (light L2)
*/
@@ -644,6 +746,7 @@ enum core_ramrod_cmd_id {
MAX_CORE_RAMROD_CMD_ID
};
+
/*
* Core RX CQE Type for Light L2
*/
@@ -653,19 +756,23 @@ enum core_roce_flavor_type {
MAX_CORE_ROCE_FLAVOR_TYPE
};
+
/*
* Specifies how ll2 should deal with packets errors: packet_too_big and no_buff
*/
struct core_rx_action_on_error {
u8 error_type;
+/* how ll2 should handle a packet_too_big error (use enum core_error_handle) */
#define CORE_RX_ACTION_ON_ERROR_PACKET_TOO_BIG_MASK 0x3
#define CORE_RX_ACTION_ON_ERROR_PACKET_TOO_BIG_SHIFT 0
+/* how ll2 should handle a no_buff error (use enum core_error_handle) */
#define CORE_RX_ACTION_ON_ERROR_NO_BUFF_MASK 0x3
#define CORE_RX_ACTION_ON_ERROR_NO_BUFF_SHIFT 2
#define CORE_RX_ACTION_ON_ERROR_RESERVED_MASK 0xF
#define CORE_RX_ACTION_ON_ERROR_RESERVED_SHIFT 4
};
+
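
All of these HSI structures pack sub-byte fields behind paired
<NAME>_MASK/<NAME>_SHIFT defines. A minimal sketch of generic set/get
helpers built on that convention; the HSI_SET_FIELD/HSI_GET_FIELD names
are illustrative, not taken from the header (the base driver ships its
own equivalents):

/* Illustrative helpers (not from the header) for the MASK/SHIFT pairs. */
#define HSI_SET_FIELD(var, name, val) \
	((var) = ((var) & ~((name ## _MASK) << (name ## _SHIFT))) | \
		 (((val) & (name ## _MASK)) << (name ## _SHIFT)))
#define HSI_GET_FIELD(var, name) \
	(((var) >> (name ## _SHIFT)) & (name ## _MASK))

/* Example: extract the packet_too_big action from error_type, e.g.
 * u8 act = HSI_GET_FIELD(p->error_type,
 *			   CORE_RX_ACTION_ON_ERROR_PACKET_TOO_BIG);
 */
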
/*
* Core RX BD for Light L2
*/
@@ -674,6 +781,7 @@ struct core_rx_bd {
__le16 reserved[4];
};
+
/*
* Core RX CM offload BD for Light L2
*/
@@ -688,10 +796,12 @@ struct core_rx_bd_with_buff_len {
*/
union core_rx_bd_union {
struct core_rx_bd rx_bd /* Core Rx Bd static buffer size */;
- struct core_rx_bd_with_buff_len rx_bd_with_len
- /* Core Rx Bd with dynamic buffer length */;
+/* Core Rx Bd with dynamic buffer length */
+ struct core_rx_bd_with_buff_len rx_bd_with_len;
};
+
+
/*
* Opaque Data for Light L2 RX CQE .
*/
@@ -699,6 +809,7 @@ struct core_rx_cqe_opaque_data {
__le32 data[2] /* Opaque CQE Data */;
};
+
/*
* Core RX CQE Type for Light L2
*/
@@ -710,15 +821,16 @@ enum core_rx_cqe_type {
MAX_CORE_RX_CQE_TYPE
};
+
/*
* Core RX CQE for Light L2 .
*/
struct core_rx_fast_path_cqe {
u8 type /* CQE type */;
- u8 placement_offset
- /* Offset (in bytes) of the packet from start of the buffer */;
- struct parsing_and_err_flags parse_flags
- /* Parsing and error flags from the parser */;
+/* Offset (in bytes) of the packet from start of the buffer */
+ u8 placement_offset;
+/* Parsing and error flags from the parser */
+ struct parsing_and_err_flags parse_flags;
__le16 packet_length /* Total packet length (from the parser) */;
__le16 vlan /* 802.1q VLAN tag */;
struct core_rx_cqe_opaque_data opaque_data /* Opaque Data */;
@@ -731,8 +843,8 @@ struct core_rx_fast_path_cqe {
struct core_rx_gsi_offload_cqe {
u8 type /* CQE type */;
u8 data_length_error /* set if gsi data is bigger than buff */;
- struct parsing_and_err_flags parse_flags
- /* Parsing and error flags from the parser */;
+/* Parsing and error flags from the parser */
+ struct parsing_and_err_flags parse_flags;
__le16 data_length /* Total packet length (from the parser) */;
__le16 vlan /* 802.1q VLAN tag */;
__le32 src_mac_addrhi /* hi 4 bytes source mac address */;
@@ -760,6 +872,10 @@ union core_rx_cqe_union {
struct core_rx_slow_path_cqe rx_cqe_sp /* Slow path CQE */;
};
+
+
+
+
/*
* Ramrod data for rx queue start ramrod
*/
@@ -773,18 +889,28 @@ struct core_rx_start_ramrod_data {
u8 complete_event_flg /* post completion to the event ring if set */;
u8 drop_ttl0_flg /* drop packet with ttl0 if set */;
__le16 num_of_pbl_pages /* Num of pages in CQE PBL */;
- u8 inner_vlan_removal_en
- /* if set, 802.1q tags will be removed and copied to CQE */;
+/* if set, 802.1q tags will be removed and copied to CQE */
+ u8 inner_vlan_removal_en;
u8 queue_id /* Light L2 RX Queue ID */;
u8 main_func_queue /* Is this the main queue for the PF */;
+/* Duplicate broadcast packets to LL2 main queue in mf_si mode. Valid if
+ * main_func_queue is set.
+ */
u8 mf_si_bcast_accept_all;
+/* Duplicate multicast packets to LL2 main queue in mf_si mode. Valid if
+ * main_func_queue is set.
+ */
u8 mf_si_mcast_accept_all;
+/* Specifies how ll2 should deal with packets errors: packet_too_big and
+ * no_buff
+ */
struct core_rx_action_on_error action_on_error;
- u8 gsi_offload_flag
- /* set when in GSI offload mode on ROCE connection */;
+/* set when in GSI offload mode on ROCE connection */
+ u8 gsi_offload_flag;
u8 reserved[7];
};
+
/*
* Ramrod data for rx queue stop ramrod
*/
@@ -796,25 +922,38 @@ struct core_rx_stop_ramrod_data {
__le16 reserved2[2];
};
+
/*
* Flags for Core TX BD
*/
struct core_tx_bd_flags {
u8 as_bitfield;
+/* Do not allow additional VLAN manipulations on this packet (DCB) */
#define CORE_TX_BD_FLAGS_FORCE_VLAN_MODE_MASK 0x1
#define CORE_TX_BD_FLAGS_FORCE_VLAN_MODE_SHIFT 0
+/* Insert VLAN into packet */
#define CORE_TX_BD_FLAGS_VLAN_INSERTION_MASK 0x1
#define CORE_TX_BD_FLAGS_VLAN_INSERTION_SHIFT 1
+/* This is the first BD of the packet (for debug) */
#define CORE_TX_BD_FLAGS_START_BD_MASK 0x1
#define CORE_TX_BD_FLAGS_START_BD_SHIFT 2
+/* Calculate the IP checksum for the packet */
#define CORE_TX_BD_FLAGS_IP_CSUM_MASK 0x1
#define CORE_TX_BD_FLAGS_IP_CSUM_SHIFT 3
+/* Calculate the L4 checksum for the packet */
#define CORE_TX_BD_FLAGS_L4_CSUM_MASK 0x1
#define CORE_TX_BD_FLAGS_L4_CSUM_SHIFT 4
+/* Packet is IPv6 with extensions */
#define CORE_TX_BD_FLAGS_IPV6_EXT_MASK 0x1
#define CORE_TX_BD_FLAGS_IPV6_EXT_SHIFT 5
+/* If IPv6+ext, and if l4_csum is 1, then this field indicates L4 protocol:
+ * 0-TCP, 1-UDP
+ */
#define CORE_TX_BD_FLAGS_L4_PROTOCOL_MASK 0x1
#define CORE_TX_BD_FLAGS_L4_PROTOCOL_SHIFT 6
+/* The pseudo checksum mode to place in the L4 checksum field. Required only
+ * when IPv6+ext and l4_csum is set. (use enum core_l4_pseudo_checksum_mode)
+ */
#define CORE_TX_BD_FLAGS_L4_PSEUDO_CSUM_MODE_MASK 0x1
#define CORE_TX_BD_FLAGS_L4_PSEUDO_CSUM_MODE_SHIFT 7
};
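
A minimal sketch of composing core_tx_bd_flags for the first BD of a
checksummed packet, using only the shift defines above; the helper name
is illustrative:

static u8 core_tx_first_bd_flags(void)
{
	u8 flags = 0;

	flags |= 1 << CORE_TX_BD_FLAGS_START_BD_SHIFT;	/* first BD */
	flags |= 1 << CORE_TX_BD_FLAGS_IP_CSUM_SHIFT;	/* IP csum offload */
	flags |= 1 << CORE_TX_BD_FLAGS_L4_CSUM_SHIFT;	/* L4 csum offload */
	return flags;
}
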
@@ -853,6 +992,8 @@ struct core_tx_bd {
#define CORE_TX_BD_RESERVED1_SHIFT 15
};
+
+
/*
* Light L2 TX Destination
*/
@@ -862,6 +1003,7 @@ enum core_tx_dest {
MAX_CORE_TX_DEST
};
+
/*
* Ramrod data for tx queue start ramrod
*/
@@ -875,11 +1017,12 @@ struct core_tx_start_ramrod_data {
u8 conn_type /* connection type that loaded ll2 */;
__le16 pbl_size /* Number of BD pages pointed by PBL */;
__le16 qm_pq_id /* QM PQ ID */;
- u8 gsi_offload_flag
- /* set when in GSI offload mode on ROCE connection */;
+/* set when in GSI offload mode on ROCE connection */
+ u8 gsi_offload_flag;
u8 resrved[3];
};
+
/*
* Ramrod data for tx queue stop ramrod
*/
@@ -887,6 +1030,7 @@ struct core_tx_stop_ramrod_data {
__le32 reserved0[2];
};
+
/*
* Enum flag for what type of dcb data to update
*/
@@ -899,6 +1043,7 @@ enum dcb_dhcp_update_flag {
MAX_DCB_DHCP_UPDATE_FLAG
};
+
struct eth_mstorm_per_pf_stat {
struct regpair gre_discard_pkts /* Dropped GRE RX packets */;
struct regpair vxlan_discard_pkts /* Dropped VXLAN RX packets */;
@@ -906,17 +1051,27 @@ struct eth_mstorm_per_pf_stat {
struct regpair lb_discard_pkts /* Dropped Tx switched packets */;
};
+
struct eth_mstorm_per_queue_stat {
+/* Number of packets discarded because TTL=0 (in IPv4) or hopLimit=0 (IPv6) */
struct regpair ttl0_discard;
+/* Number of packets discarded because they are bigger than MTU */
struct regpair packet_too_big_discard;
+/* Number of packets discarded due to lack of host buffers (BDs/SGEs/CQEs) */
struct regpair no_buff_discard;
+/* Number of packets discarded because of no active Rx connection */
struct regpair not_active_discard;
+/* number of coalesced packets in all TPA aggregations */
struct regpair tpa_coalesced_pkts;
+/* total number of TPA aggregations */
struct regpair tpa_coalesced_events;
+/* number of aggregations that ended abnormally */
struct regpair tpa_aborts_num;
+/* total TCP payload length in all TPA aggregations */
struct regpair tpa_coalesced_bytes;
};
+
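
The regpair counters above are two little-endian 32-bit halves. A sketch
of folding one into a host u64, assuming the { lo, hi } regpair layout
from common_hsi.h and a kernel-style le32_to_cpu():

static inline u64 regpair_to_u64(const struct regpair *rp)
{
	/* assumes struct regpair { __le32 lo; __le32 hi; } */
	return ((u64)le32_to_cpu(rp->hi) << 32) | le32_to_cpu(rp->lo);
}
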
/*
* Ethernet TX Per PF
*/
@@ -944,38 +1099,42 @@ struct eth_pstorm_per_pf_stat {
struct regpair geneve_drop_pkts /* Dropped GENEVE TX packets */;
};
+
/*
* Ethernet TX Per Queue Stats
*/
struct eth_pstorm_per_queue_stat {
- struct regpair sent_ucast_bytes
- /* number of total bytes sent without errors */;
- struct regpair sent_mcast_bytes
- /* number of total bytes sent without errors */;
- struct regpair sent_bcast_bytes
- /* number of total bytes sent without errors */;
- struct regpair sent_ucast_pkts
- /* number of total packets sent without errors */;
- struct regpair sent_mcast_pkts
- /* number of total packets sent without errors */;
- struct regpair sent_bcast_pkts
- /* number of total packets sent without errors */;
- struct regpair error_drop_pkts
- /* number of total packets dropped due to errors */;
+/* number of total bytes sent without errors */
+ struct regpair sent_ucast_bytes;
+/* number of total bytes sent without errors */
+ struct regpair sent_mcast_bytes;
+/* number of total bytes sent without errors */
+ struct regpair sent_bcast_bytes;
+/* number of total packets sent without errors */
+ struct regpair sent_ucast_pkts;
+/* number of total packets sent without errors */
+ struct regpair sent_mcast_pkts;
+/* number of total packets sent without errors */
+ struct regpair sent_bcast_pkts;
+/* number of total packets dropped due to errors */
+ struct regpair error_drop_pkts;
};
+
/*
* ETH Rx producers data
*/
struct eth_rx_rate_limit {
+/* Rate Limit Multiplier - (Storm Clock (MHz) * 8 / Desired Bandwidth (MB/s)) */
__le16 mult;
- __le16 cnst
- /* Constant term to add (or subtract from number of cycles) */;
+/* Constant term to add (or subtract from number of cycles) */
+ __le16 cnst;
u8 add_sub_cnst /* Add (1) or subtract (0) constant term */;
u8 reserved0;
__le16 reserved1;
};
+
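
A sketch of the mult computation spelled out in the comment above (Storm
clock in MHz times 8, divided by the desired bandwidth in MB/s); the
function and parameter names are illustrative:

static u16 eth_rx_rate_limit_mult(u32 storm_clk_mhz, u32 desired_bw_mbps)
{
	/* mult = Storm Clock (MHz) * 8 / Desired Bandwidth (MB/s) */
	return (u16)(storm_clk_mhz * 8 / desired_bw_mbps);
}
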
struct eth_ustorm_per_pf_stat {
/* number of total ucast bytes received on loopback port without errors */
struct regpair rcv_lb_ucast_bytes;
@@ -997,6 +1156,7 @@ struct eth_ustorm_per_pf_stat {
struct regpair rcv_geneve_pkts /* Received GENEVE packets */;
};
+
struct eth_ustorm_per_queue_stat {
struct regpair rcv_ucast_bytes;
struct regpair rcv_mcast_bytes;
@@ -1006,6 +1166,7 @@ struct eth_ustorm_per_queue_stat {
struct regpair rcv_bcast_pkts;
};
+
/*
* Event Ring Next Page Address
*/
@@ -1019,10 +1180,12 @@ struct event_ring_next_addr {
*/
union event_ring_element {
struct event_ring_entry entry /* Event Ring Entry */;
- struct event_ring_next_addr next_addr /* Event Ring Next Page Address */
- ;
+/* Event Ring Next Page Address */
+ struct event_ring_next_addr next_addr;
};
+
+
/*
* Ports mode
*/
@@ -1032,6 +1195,7 @@ enum fw_flow_ctrl_mode {
MAX_FW_FLOW_CTRL_MODE
};
+
/*
* Major and Minor hsi Versions
*/
@@ -1040,6 +1204,7 @@ struct hsi_fp_ver_struct {
u8 major_ver_arr[2] /* Major Version of driver loading pf */;
};
+
/*
* Integration Phase
*/
@@ -1050,6 +1215,7 @@ enum integ_phase {
MAX_INTEG_PHASE
};
+
/*
* Ports mode
*/
@@ -1062,61 +1228,69 @@ enum iwarp_ll2_tx_queues {
MAX_IWARP_LL2_TX_QUEUES
};
+
/*
* Malicious VF error ID
*/
enum malicious_vf_error_id {
MALICIOUS_VF_NO_ERROR /* Zero placeholder value */,
- VF_PF_CHANNEL_NOT_READY
- /* Writing to VF/PF channel when it is not ready */,
+/* Writing to VF/PF channel when it is not ready */
+ VF_PF_CHANNEL_NOT_READY,
VF_ZONE_MSG_NOT_VALID /* VF channel message is not valid */,
VF_ZONE_FUNC_NOT_ENABLED /* Parent PF of VF channel is not active */,
- ETH_PACKET_TOO_SMALL
- /* TX packet is shorter then reported on BDs or from minimal size */
- ,
- ETH_ILLEGAL_VLAN_MODE
- /* Tx packet with marked as insert VLAN when its illegal */,
+/* TX packet is shorter than reported on BDs or than the minimal size */
+ ETH_PACKET_TOO_SMALL,
+/* Tx packet marked for VLAN insertion when that is illegal */
+ ETH_ILLEGAL_VLAN_MODE,
ETH_MTU_VIOLATION /* TX packet is greater than MTU */,
- ETH_ILLEGAL_INBAND_TAGS /* TX packet has illegal inband tags marked */,
- ETH_VLAN_INSERT_AND_INBAND_VLAN /* Vlan cant be added to inband tag */,
- ETH_ILLEGAL_NBDS /* indicated number of BDs for the packet is illegal */
- ,
+/* TX packet has illegal inband tags marked */
+ ETH_ILLEGAL_INBAND_TAGS,
+/* VLAN cannot be added to an inband tag */
+ ETH_VLAN_INSERT_AND_INBAND_VLAN,
+/* indicated number of BDs for the packet is illegal */
+ ETH_ILLEGAL_NBDS,
ETH_FIRST_BD_WO_SOP /* 1st BD must have start_bd flag set */,
- ETH_INSUFFICIENT_BDS
- /* There are not enough BDs for transmission of even one packet */,
+/* There are not enough BDs for transmission of even one packet */
+ ETH_INSUFFICIENT_BDS,
ETH_ILLEGAL_LSO_HDR_NBDS /* Header NBDs value is illegal */,
ETH_ILLEGAL_LSO_MSS /* LSO MSS value is more than allowed */,
- ETH_ZERO_SIZE_BD
- /* empty BD (which not contains control flags) is illegal */,
+/* empty BD (which does not contain control flags) is illegal */
+ ETH_ZERO_SIZE_BD,
ETH_ILLEGAL_LSO_HDR_LEN /* LSO header size is above the limit */,
- ETH_INSUFFICIENT_PAYLOAD
- ,
+/* In LSO it is expected that the local BD ring will hold at least MSS
+ * bytes of data
+ */
+ ETH_INSUFFICIENT_PAYLOAD,
ETH_EDPM_OUT_OF_SYNC /* Valid BDs on local ring after EDPM L2 sync */,
- ETH_TUNN_IPV6_EXT_NBD_ERR
- /* Tunneled packet with IPv6+Ext without a proper number of BDs */,
+/* Tunneled packet with IPv6+Ext without a proper number of BDs */
+ ETH_TUNN_IPV6_EXT_NBD_ERR,
ETH_CONTROL_PACKET_VIOLATION /* VF sent control frame such as PFC */,
MAX_MALICIOUS_VF_ERROR_ID
};
+
+
/*
* Mstorm non-triggering VF zone
*/
struct mstorm_non_trigger_vf_zone {
- struct eth_mstorm_per_queue_stat eth_queue_stat
- /* VF statistic bucket */;
+/* VF statistic bucket */
+ struct eth_mstorm_per_queue_stat eth_queue_stat;
/* VF RX queues producers */
struct eth_rx_prod_data
eth_rx_queue_producers[ETH_MAX_NUM_RX_QUEUES_PER_VF_QUAD];
};
+
/*
* Mstorm VF zone
*/
struct mstorm_vf_zone {
- struct mstorm_non_trigger_vf_zone non_trigger
- /* non-interrupt-triggering zone */;
+/* non-interrupt-triggering zone */
+ struct mstorm_non_trigger_vf_zone non_trigger;
};
+
/*
* personality per PF
*/
@@ -1132,25 +1306,27 @@ enum personality_type {
MAX_PERSONALITY_TYPE
};
+
/*
* tunnel configuration
*/
struct pf_start_tunnel_config {
- u8 set_vxlan_udp_port_flg /* Set VXLAN tunnel UDP destination port. */;
- u8 set_geneve_udp_port_flg /* Set GENEVE tunnel UDP destination port. */
- ;
+/* Set VXLAN tunnel UDP destination port. */
+ u8 set_vxlan_udp_port_flg;
+/* Set GENEVE tunnel UDP destination port. */
+ u8 set_geneve_udp_port_flg;
u8 tx_enable_vxlan /* If set, enable VXLAN tunnel in TX path. */;
- u8 tx_enable_l2geneve /* If set, enable l2 GENEVE tunnel in TX path. */
- ;
- u8 tx_enable_ipgeneve /* If set, enable IP GENEVE tunnel in TX path. */
- ;
+/* If set, enable l2 GENEVE tunnel in TX path. */
+ u8 tx_enable_l2geneve;
+/* If set, enable IP GENEVE tunnel in TX path. */
+ u8 tx_enable_ipgeneve;
u8 tx_enable_l2gre /* If set, enable l2 GRE tunnel in TX path. */;
u8 tx_enable_ipgre /* If set, enable IP GRE tunnel in TX path. */;
u8 tunnel_clss_vxlan /* Classification scheme for VXLAN tunnel. */;
- u8 tunnel_clss_l2geneve
- /* Classification scheme for l2 GENEVE tunnel. */;
- u8 tunnel_clss_ipgeneve
- /* Classification scheme for ip GENEVE tunnel. */;
+/* Classification scheme for l2 GENEVE tunnel. */
+ u8 tunnel_clss_l2geneve;
+/* Classification scheme for ip GENEVE tunnel. */
+ u8 tunnel_clss_ipgeneve;
u8 tunnel_clss_l2gre /* Classification scheme for l2 GRE tunnel. */;
u8 tunnel_clss_ipgre /* Classification scheme for ip GRE tunnel. */;
__le16 vxlan_udp_port /* VXLAN tunnel UDP destination port. */;
@@ -1162,34 +1338,43 @@ struct pf_start_tunnel_config {
*/
struct pf_start_ramrod_data {
struct regpair event_ring_pbl_addr /* Address of event ring PBL */;
- struct regpair consolid_q_pbl_addr
- /* PBL address of consolidation queue */;
- struct pf_start_tunnel_config tunnel_config /* tunnel configuration. */
- ;
+/* PBL address of consolidation queue */
+ struct regpair consolid_q_pbl_addr;
+/* tunnel configuration. */
+ struct pf_start_tunnel_config tunnel_config;
__le16 event_ring_sb_id /* Status block ID */;
+/* All VF IDs owned by the PF range from base_vf_id to base_vf_id + num_vfs */
u8 base_vf_id;
- ;
u8 num_vfs /* Amount of vfs owned by PF */;
u8 event_ring_num_pages /* Number of PBL pages in event ring */;
u8 event_ring_sb_index /* Status block index */;
u8 path_id /* HW path ID (engine ID) */;
u8 warning_as_error /* In FW asserts, treat warning as error */;
- u8 dont_log_ramrods
- /* If not set - throw a warning for each ramrod (for debug) */;
+/* If not set - throw a warning for each ramrod (for debug) */
+ u8 dont_log_ramrods;
u8 personality /* define what type of personality is new PF */;
+/* Log type mask. Each set bit enables logging of the corresponding event
+ * type. Event types are defined as ASSERT_LOG_TYPE_xxx
+ */
__le16 log_type_mask;
u8 mf_mode /* Multi function mode */;
u8 integ_phase /* Integration phase */;
+/* If set, inter-pf tx switching is allowed in Switch Independent func mode */
u8 allow_npar_tx_switching;
+/* Map from inner to outer priority. Set pri_map_valid when initializing it */
u8 inner_to_outer_pri_map[8];
- u8 pri_map_valid
- /* If inner_to_outer_pri_map is initialize then set pri_map_valid */
- ;
+/* If inner_to_outer_pri_map is initialized then set pri_map_valid */
+ u8 pri_map_valid;
+/* In case mf_mode is MF_OVLAN, this field specifies the outer vlan
+ * (lower 16 bits) and ethType to use (higher 16 bits)
+ */
__le32 outer_tag;
/* FP HSI version to be used by FW */
struct hsi_fp_ver_struct hsi_fp_ver;
};
+
+
/*
* Data for port update ramrod
*/
@@ -1203,9 +1388,10 @@ struct protocol_dcb_data {
};
/*
- * tunnel configuration
+ * Update tunnel configuration
*/
struct pf_update_tunnel_config {
+/* Update RX per PF tunnel classification scheme. */
u8 update_rx_pf_clss;
/* Update per PORT default tunnel RX classification scheme for traffic with
* unknown unicast outer MAC in NPAR mode.
@@ -1215,23 +1401,24 @@ struct pf_update_tunnel_config {
* unicast outer MAC in NPAR mode.
*/
u8 update_rx_def_non_ucast_clss;
+/* Update TX per PF tunnel classification scheme. Used by PF update. */
u8 update_tx_pf_clss;
- u8 set_vxlan_udp_port_flg
- /* Update VXLAN tunnel UDP destination port. */;
- u8 set_geneve_udp_port_flg
- /* Update GENEVE tunnel UDP destination port. */;
+/* Update VXLAN tunnel UDP destination port. */
+ u8 set_vxlan_udp_port_flg;
+/* Update GENEVE tunnel UDP destination port. */
+ u8 set_geneve_udp_port_flg;
u8 tx_enable_vxlan /* If set, enable VXLAN tunnel in TX path. */;
- u8 tx_enable_l2geneve /* If set, enable l2 GENEVE tunnel in TX path. */
- ;
- u8 tx_enable_ipgeneve /* If set, enable IP GENEVE tunnel in TX path. */
- ;
+/* If set, enable l2 GENEVE tunnel in TX path. */
+ u8 tx_enable_l2geneve;
+/* If set, enable IP GENEVE tunnel in TX path. */
+ u8 tx_enable_ipgeneve;
u8 tx_enable_l2gre /* If set, enable l2 GRE tunnel in TX path. */;
u8 tx_enable_ipgre /* If set, enable IP GRE tunnel in TX path. */;
u8 tunnel_clss_vxlan /* Classification scheme for VXLAN tunnel. */;
- u8 tunnel_clss_l2geneve
- /* Classification scheme for l2 GENEVE tunnel. */;
- u8 tunnel_clss_ipgeneve
- /* Classification scheme for ip GENEVE tunnel. */;
+/* Classification scheme for l2 GENEVE tunnel. */
+ u8 tunnel_clss_l2geneve;
+/* Classification scheme for ip GENEVE tunnel. */
+ u8 tunnel_clss_ipgeneve;
u8 tunnel_clss_l2gre /* Classification scheme for l2 GRE tunnel. */;
u8 tunnel_clss_ipgre /* Classification scheme for ip GRE tunnel. */;
__le16 vxlan_udp_port /* VXLAN tunnel UDP destination port. */;
@@ -1254,19 +1441,21 @@ struct pf_update_ramrod_data {
u8 update_mf_vlan_flag /* Update MF outer vlan Id */;
struct protocol_dcb_data eth_dcb_data /* core eth related fields */;
struct protocol_dcb_data fcoe_dcb_data /* core fcoe related fields */;
- struct protocol_dcb_data iscsi_dcb_data /* core iscsi related fields */
- ;
+/* core iscsi related fields */
+ struct protocol_dcb_data iscsi_dcb_data;
struct protocol_dcb_data roce_dcb_data /* core roce related fields */;
- struct protocol_dcb_data iwarp_dcb_data /* core iwarp related fields */
- ;
/* core roce related fields */
struct protocol_dcb_data rroce_dcb_data;
+/* core iwarp related fields */
+ struct protocol_dcb_data iwarp_dcb_data;
__le16 mf_vlan /* new outer vlan id value */;
__le16 reserved;
/* tunnel configuration. */
struct pf_update_tunnel_config tunnel_config;
};
+
+
/*
* Ports mode
*/
@@ -1279,6 +1468,8 @@ enum ports_mode {
MAX_PORTS_MODE
};
+
+
/*
* use to index in hsi_fp_[major|minor]_ver_arr per protocol
*/
@@ -1288,6 +1479,8 @@ enum protocol_version_array_key {
MAX_PROTOCOL_VERSION_ARRAY_KEY
};
+
+
/*
* RDMA TX Stats
*/
@@ -1300,20 +1493,22 @@ struct rdma_sent_stats {
* Pstorm non-triggering VF zone
*/
struct pstorm_non_trigger_vf_zone {
- struct eth_pstorm_per_queue_stat eth_queue_stat
- /* VF statistic bucket */;
+/* VF statistic bucket */
+ struct eth_pstorm_per_queue_stat eth_queue_stat;
struct rdma_sent_stats rdma_stats /* RoCE sent statistics */;
};
+
/*
* Pstorm VF zone
*/
struct pstorm_vf_zone {
- struct pstorm_non_trigger_vf_zone non_trigger
- /* non-interrupt-triggering zone */;
+/* non-interrupt-triggering zone */
+ struct pstorm_non_trigger_vf_zone non_trigger;
struct regpair reserved[7] /* vf_zone size must be power of 2 */;
};
+
/*
* Ramrod Header of SPQE
*/
@@ -1324,6 +1519,7 @@ struct ramrod_header {
__le16 echo /* Ramrod echo */;
};
+
/*
* RDMA RX Stats
*/
@@ -1332,6 +1528,8 @@ struct rdma_rcv_stats {
struct regpair rcv_pkts /* number of total RDMA packets received */;
};
+
+
/*
* Data for update QCN/DCQCN RL ramrod
*/
@@ -1357,6 +1555,7 @@ struct rl_update_ramrod_data {
__le32 reserved[2];
};
+
/*
* Slowpath Element (SPQE)
*/
@@ -1365,6 +1564,7 @@ struct slow_path_element {
struct regpair data_ptr /* Pointer to the Ramrod Data on the Host */;
};
+
/*
* Tstorm non-triggering VF zone
*/
@@ -1372,28 +1572,36 @@ struct tstorm_non_trigger_vf_zone {
struct rdma_rcv_stats rdma_stats /* RoCE received statistics */;
};
+
struct tstorm_per_port_stat {
- struct regpair trunc_error_discard
- /* packet is dropped because it was truncated in NIG */;
- struct regpair mac_error_discard
- /* packet is dropped because of Ethernet FCS error */;
- struct regpair mftag_filter_discard
- /* packet is dropped because classification was unsuccessful */;
+/* packet is dropped because it was truncated in NIG */
+ struct regpair trunc_error_discard;
+/* packet is dropped because of Ethernet FCS error */
+ struct regpair mac_error_discard;
+/* packet is dropped because classification was unsuccessful */
+ struct regpair mftag_filter_discard;
+/* packet was passed to Ethernet and dropped because of no mac filter match */
struct regpair eth_mac_filter_discard;
+/* packet passed to Light L2 and dropped because Light L2 is not configured for
+ * this PF
+ */
struct regpair ll2_mac_filter_discard;
+/* packet passed to Light L2 and dropped because the Light L2 connection is
+ * not enabled
+ */
struct regpair ll2_conn_disabled_discard;
- struct regpair iscsi_irregular_pkt
- /* packet is an ISCSI irregular packet */;
- struct regpair fcoe_irregular_pkt
- /* packet is an FCOE irregular packet */;
- struct regpair roce_irregular_pkt
- /* packet is an ROCE irregular packet */;
- struct regpair eth_irregular_pkt /* packet is an ETH irregular packet */
- ;
- struct regpair toe_irregular_pkt /* packet is an TOE irregular packet */
- ;
- struct regpair preroce_irregular_pkt
- /* packet is an PREROCE irregular packet */;
+/* packet is an ISCSI irregular packet */
+ struct regpair iscsi_irregular_pkt;
+/* packet is an FCOE irregular packet */
+ struct regpair fcoe_irregular_pkt;
+/* packet is a ROCE irregular packet */
+ struct regpair roce_irregular_pkt;
+/* packet is an ETH irregular packet */
+ struct regpair eth_irregular_pkt;
+/* packet is a TOE irregular packet */
+ struct regpair toe_irregular_pkt;
+/* packet is a PREROCE irregular packet */
+ struct regpair preroce_irregular_pkt;
struct regpair eth_gre_tunn_filter_discard /* GRE dropped packets */;
/* VXLAN dropped packets */
struct regpair eth_vxlan_tunn_filter_discard;
@@ -1401,29 +1609,32 @@ struct tstorm_per_port_stat {
struct regpair eth_geneve_tunn_filter_discard;
};
+
/*
* Tstorm VF zone
*/
struct tstorm_vf_zone {
- struct tstorm_non_trigger_vf_zone non_trigger
- /* non-interrupt-triggering zone */;
+/* non-interrupt-triggering zone */
+ struct tstorm_non_trigger_vf_zone non_trigger;
};
+
/*
* Tunnel classification scheme
*/
enum tunnel_clss {
- TUNNEL_CLSS_MAC_VLAN =
- 0
- /* Use MAC & VLAN from first L2 header for vport classification. */
- ,
- TUNNEL_CLSS_MAC_VNI
- ,
- TUNNEL_CLSS_INNER_MAC_VLAN
- /* Use MAC and VLAN from last L2 header for vport classification */
- ,
- TUNNEL_CLSS_INNER_MAC_VNI
- ,
+/* Use MAC and VLAN from first L2 header for vport classification. */
+ TUNNEL_CLSS_MAC_VLAN = 0,
+/* Use MAC from first L2 header and VNI from tunnel header for vport
+ * classification
+ */
+ TUNNEL_CLSS_MAC_VNI,
+/* Use MAC and VLAN from last L2 header for vport classification */
+ TUNNEL_CLSS_INNER_MAC_VLAN,
+/* Use MAC from last L2 header and VNI from tunnel header for vport
+ * classification
+ */
+ TUNNEL_CLSS_INNER_MAC_VNI,
/* Use MAC and VLAN from last L2 header for vport classification. If no exact
* match, use MAC and VLAN from first L2 header for classification.
*/
@@ -1431,15 +1642,18 @@ enum tunnel_clss {
MAX_TUNNEL_CLSS
};
+
+
/*
* Ustorm non-triggering VF zone
*/
struct ustorm_non_trigger_vf_zone {
- struct eth_ustorm_per_queue_stat eth_queue_stat
- /* VF statistic bucket */;
+/* VF statistic bucket */
+ struct eth_ustorm_per_queue_stat eth_queue_stat;
struct regpair vf_pf_msg_addr /* VF-PF message address */;
};
+
/*
* Ustorm triggering VF zone
*/
@@ -1448,30 +1662,40 @@ struct ustorm_trigger_vf_zone {
u8 reserved[7];
};
+
/*
* Ustorm VF zone
*/
struct ustorm_vf_zone {
- struct ustorm_non_trigger_vf_zone non_trigger
- /* non-interrupt-triggering zone */;
+/* non-interrupt-triggering zone */
+ struct ustorm_non_trigger_vf_zone non_trigger;
struct ustorm_trigger_vf_zone trigger /* interrupt triggering zone */;
};
+
/*
* VF-PF channel data
*/
struct vf_pf_channel_data {
+/* 0: VF-PF Channel NOT ready. Waiting for ack from PF driver. 1: VF-PF Channel
+ * is ready for a new transaction.
+ */
__le32 ready;
+/* 0: VF-PF Channel is invalid because of malicious VF. 1: VF-PF Channel is
+ * valid.
+ */
u8 valid;
u8 reserved0;
__le16 reserved1;
};
+
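
A minimal sketch of the check a VF would make before starting a new
mailbox transaction, per the two field comments above; the helper name
is illustrative and a kernel-style le32_to_cpu() is assumed:

static bool vf_pf_channel_usable(const struct vf_pf_channel_data *ch)
{
	/* valid: not flagged malicious; ready: PF acked the previous msg */
	return ch->valid && le32_to_cpu(ch->ready);
}
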
/*
* Ramrod data for VF start ramrod
*/
struct vf_start_ramrod_data {
u8 vf_id /* VF ID */;
+/* If set, initial cleanup ack will be sent to parent PF SP event queue */
u8 enable_flr_ack;
__le16 opaque_fid /* VF opaque FID */;
u8 personality /* define what type of personality is new VF */;
@@ -1480,6 +1704,7 @@ struct vf_start_ramrod_data {
struct hsi_fp_ver_struct hsi_fp_ver;
};
+
/*
* Ramrod data for VF start ramrod
*/
@@ -1490,6 +1715,7 @@ struct vf_stop_ramrod_data {
__le32 reserved2;
};
+
/*
* VF zone size mode.
*/
@@ -1503,6 +1729,9 @@ enum vf_zone_size_mode {
MAX_VF_ZONE_SIZE_MODE
};
+
+
+
/*
* Attentions status block
*/
@@ -1517,6 +1746,7 @@ struct atten_status_block {
/*
* Igu cleanup bit values to distinguish between clean or producer consumer
+ * update.
*/
enum command_type_bit {
IGU_COMMAND_TYPE_NOP = 0,
@@ -1524,61 +1754,100 @@ enum command_type_bit {
MAX_COMMAND_TYPE_BIT
};
+
/*
* DMAE command
*/
struct dmae_cmd {
__le32 opcode;
+/* DMA Source. 0 - PCIe, 1 - GRC (use enum dmae_cmd_src_enum) */
#define DMAE_CMD_SRC_MASK 0x1
#define DMAE_CMD_SRC_SHIFT 0
+/* DMA destination. 0 - None, 1 - PCIe, 2 - GRC, 3 - None
+ * (use enum dmae_cmd_dst_enum)
+ */
#define DMAE_CMD_DST_MASK 0x3
#define DMAE_CMD_DST_SHIFT 1
+/* Completion destination. 0 - PCIe, 1 - GRC (use enum dmae_cmd_c_dst_enum) */
#define DMAE_CMD_C_DST_MASK 0x1
#define DMAE_CMD_C_DST_SHIFT 3
+/* Reset the CRC result (do not use the previous result as the seed) */
#define DMAE_CMD_CRC_RESET_MASK 0x1
#define DMAE_CMD_CRC_RESET_SHIFT 4
+/* Reset the source address in the next go to the same source address of the
+ * previous go
+ */
#define DMAE_CMD_SRC_ADDR_RESET_MASK 0x1
#define DMAE_CMD_SRC_ADDR_RESET_SHIFT 5
+/* Reset the destination address in the next go to the same destination address
+ * of the previous go
+ */
#define DMAE_CMD_DST_ADDR_RESET_MASK 0x1
#define DMAE_CMD_DST_ADDR_RESET_SHIFT 6
+/* 0 - completion function is the same as src function, 1 - completion
+ * function is the same as dst function (use enum dmae_cmd_comp_func_enum)
+ */
#define DMAE_CMD_COMP_FUNC_MASK 0x1
#define DMAE_CMD_COMP_FUNC_SHIFT 7
+/* 0 - Do not write a completion word, 1 - Write a completion word
+ * (use enum dmae_cmd_comp_word_en_enum)
+ */
#define DMAE_CMD_COMP_WORD_EN_MASK 0x1
#define DMAE_CMD_COMP_WORD_EN_SHIFT 8
+/* 0 - Do not write a CRC word, 1 - Write a CRC word
+ * (use enum dmae_cmd_comp_crc_en_enum)
+ */
#define DMAE_CMD_COMP_CRC_EN_MASK 0x1
#define DMAE_CMD_COMP_CRC_EN_SHIFT 9
+/* The CRC word should be taken from the DMAE address space from address 9+X,
+ * where X is the value in these bits.
+ */
#define DMAE_CMD_COMP_CRC_OFFSET_MASK 0x7
#define DMAE_CMD_COMP_CRC_OFFSET_SHIFT 10
#define DMAE_CMD_RESERVED1_MASK 0x1
#define DMAE_CMD_RESERVED1_SHIFT 13
#define DMAE_CMD_ENDIANITY_MODE_MASK 0x3
#define DMAE_CMD_ENDIANITY_MODE_SHIFT 14
+/* The field specifies how the completion word is affected by a PCIe read
+ * error. 0 - Send a regular completion, 1 - Send a completion with an error
+ * indication, 2 - Do not send a completion (use enum dmae_cmd_error_handling_enum)
+ */
#define DMAE_CMD_ERR_HANDLING_MASK 0x3
#define DMAE_CMD_ERR_HANDLING_SHIFT 16
+/* The port ID to be placed on the RF FID field of the GRC bus. This field is
+ * used both when GRC is the destination and when it is the source of the DMAE
+ * transaction.
+ */
#define DMAE_CMD_PORT_ID_MASK 0x3
#define DMAE_CMD_PORT_ID_SHIFT 18
+/* Source PCI function number [3:0] */
#define DMAE_CMD_SRC_PF_ID_MASK 0xF
#define DMAE_CMD_SRC_PF_ID_SHIFT 20
+/* Destination PCI function number [3:0] */
#define DMAE_CMD_DST_PF_ID_MASK 0xF
#define DMAE_CMD_DST_PF_ID_SHIFT 24
-#define DMAE_CMD_SRC_VF_ID_VALID_MASK 0x1
+#define DMAE_CMD_SRC_VF_ID_VALID_MASK 0x1 /* Source VFID valid */
#define DMAE_CMD_SRC_VF_ID_VALID_SHIFT 28
-#define DMAE_CMD_DST_VF_ID_VALID_MASK 0x1
+#define DMAE_CMD_DST_VF_ID_VALID_MASK 0x1 /* Destination VFID valid */
#define DMAE_CMD_DST_VF_ID_VALID_SHIFT 29
#define DMAE_CMD_RESERVED2_MASK 0x3
#define DMAE_CMD_RESERVED2_SHIFT 30
- __le32 src_addr_lo
- /* PCIe source address low in bytes or GRC source address in DW */;
+/* PCIe source address low in bytes or GRC source address in DW */
+ __le32 src_addr_lo;
+/* PCIe source address high in bytes or reserved (if source is GRC) */
__le32 src_addr_hi;
+/* PCIe destination address low in bytes or GRC destination address in DW */
__le32 dst_addr_lo;
+/* PCIe destination address high in bytes or reserved (if destination is GRC) */
__le32 dst_addr_hi;
__le16 length_dw /* Length in DW */;
__le16 opcode_b;
-#define DMAE_CMD_SRC_VF_ID_MASK 0xFF
+#define DMAE_CMD_SRC_VF_ID_MASK 0xFF /* Source VF id */
#define DMAE_CMD_SRC_VF_ID_SHIFT 0
-#define DMAE_CMD_DST_VF_ID_MASK 0xFF
+#define DMAE_CMD_DST_VF_ID_MASK 0xFF /* Destination VF id */
#define DMAE_CMD_DST_VF_ID_SHIFT 8
__le32 comp_addr_lo /* PCIe completion address low or grc address */;
+/* PCIe completion address high or reserved (if completion address is in GRC) */
__le32 comp_addr_hi;
__le32 comp_val /* Value to write to completion address */;
__le32 crc32 /* crc16 result */;
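
A sketch of composing the opcode word for a PCIe-to-GRC copy that writes
a completion word, using only defines from this structure; the numeric
source/destination codes follow the field comments above and the
function name is illustrative:

static u32 dmae_opcode_pcie_to_grc(u32 port_id, u32 pf_id)
{
	u32 op = 0;

	op |= (0 /* PCIe */ & DMAE_CMD_SRC_MASK) << DMAE_CMD_SRC_SHIFT;
	op |= (2 /* GRC */ & DMAE_CMD_DST_MASK) << DMAE_CMD_DST_SHIFT;
	op |= (1 & DMAE_CMD_COMP_WORD_EN_MASK) << DMAE_CMD_COMP_WORD_EN_SHIFT;
	op |= (port_id & DMAE_CMD_PORT_ID_MASK) << DMAE_CMD_PORT_ID_SHIFT;
	op |= (pf_id & DMAE_CMD_SRC_PF_ID_MASK) << DMAE_CMD_SRC_PF_ID_SHIFT;
	op |= (pf_id & DMAE_CMD_DST_PF_ID_MASK) << DMAE_CMD_DST_PF_ID_SHIFT;
	return op;
}
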
@@ -1649,6 +1918,7 @@ enum dmae_cmd_src_enum {
MAX_DMAE_CMD_SRC_ENUM
};
+
/*
* IGU cleanup command
*/
@@ -1656,15 +1926,18 @@ struct igu_cleanup {
__le32 sb_id_and_flags;
#define IGU_CLEANUP_RESERVED0_MASK 0x7FFFFFF
#define IGU_CLEANUP_RESERVED0_SHIFT 0
+/* cleanup clear - 0, set - 1 */
#define IGU_CLEANUP_CLEANUP_SET_MASK 0x1
#define IGU_CLEANUP_CLEANUP_SET_SHIFT 27
#define IGU_CLEANUP_CLEANUP_TYPE_MASK 0x7
#define IGU_CLEANUP_CLEANUP_TYPE_SHIFT 28
+/* must always be set (use enum command_type_bit) */
#define IGU_CLEANUP_COMMAND_TYPE_MASK 0x1
#define IGU_CLEANUP_COMMAND_TYPE_SHIFT 31
__le32 reserved1;
};
+
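
A sketch of building the cleanup command word; per the comments above,
the command-type bit must always be set. The value 1 for that bit is an
assumption taken from enum command_type_bit:

static u32 igu_cleanup_word(bool cleanup_set)
{
	u32 w = 0;

	w |= ((cleanup_set ? 1 : 0) & IGU_CLEANUP_CLEANUP_SET_MASK)
	     << IGU_CLEANUP_CLEANUP_SET_SHIFT;
	w |= (1 & IGU_CLEANUP_COMMAND_TYPE_MASK)	/* must always be set */
	     << IGU_CLEANUP_COMMAND_TYPE_SHIFT;
	return w;
}
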
/*
* IGU firmware driver command
*/
@@ -1673,6 +1946,7 @@ union igu_command {
struct igu_cleanup cleanup;
};
+
/*
* IGU firmware driver command
*/
@@ -1683,10 +1957,12 @@ struct igu_command_reg_ctrl {
#define IGU_COMMAND_REG_CTRL_PXP_BAR_ADDR_SHIFT 0
#define IGU_COMMAND_REG_CTRL_RESERVED_MASK 0x7
#define IGU_COMMAND_REG_CTRL_RESERVED_SHIFT 12
+/* command type: 0 - read, 1 - write */
#define IGU_COMMAND_REG_CTRL_COMMAND_TYPE_MASK 0x1
#define IGU_COMMAND_REG_CTRL_COMMAND_TYPE_SHIFT 15
};
+
/*
* IGU mapping line structure
*/
@@ -1696,9 +1972,10 @@ struct igu_mapping_line {
#define IGU_MAPPING_LINE_VALID_SHIFT 0
#define IGU_MAPPING_LINE_VECTOR_NUMBER_MASK 0xFF
#define IGU_MAPPING_LINE_VECTOR_NUMBER_SHIFT 1
+/* In BB: VF-0-120, PF-0-7; In K2: VF-0-191, PF-0-15 */
#define IGU_MAPPING_LINE_FUNCTION_NUMBER_MASK 0xFF
#define IGU_MAPPING_LINE_FUNCTION_NUMBER_SHIFT 9
-#define IGU_MAPPING_LINE_PF_VALID_MASK 0x1
+#define IGU_MAPPING_LINE_PF_VALID_MASK 0x1 /* PF-1, VF-0 */
#define IGU_MAPPING_LINE_PF_VALID_SHIFT 17
#define IGU_MAPPING_LINE_IPS_GROUP_MASK 0x3F
#define IGU_MAPPING_LINE_IPS_GROUP_SHIFT 18
@@ -1706,6 +1983,7 @@ struct igu_mapping_line {
#define IGU_MAPPING_LINE_RESERVED_SHIFT 24
};
+
/*
* IGU MSIX line structure
*/
@@ -1728,32 +2006,32 @@ struct mstorm_core_conn_ag_ctx {
u8 byte0 /* cdu_validation */;
u8 byte1 /* state */;
u8 flags0;
-#define MSTORM_CORE_CONN_AG_CTX_BIT0_MASK 0x1
+#define MSTORM_CORE_CONN_AG_CTX_BIT0_MASK 0x1 /* exist_in_qm0 */
#define MSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT 0
-#define MSTORM_CORE_CONN_AG_CTX_BIT1_MASK 0x1
+#define MSTORM_CORE_CONN_AG_CTX_BIT1_MASK 0x1 /* exist_in_qm1 */
#define MSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT 1
-#define MSTORM_CORE_CONN_AG_CTX_CF0_MASK 0x3
+#define MSTORM_CORE_CONN_AG_CTX_CF0_MASK 0x3 /* cf0 */
#define MSTORM_CORE_CONN_AG_CTX_CF0_SHIFT 2
-#define MSTORM_CORE_CONN_AG_CTX_CF1_MASK 0x3
+#define MSTORM_CORE_CONN_AG_CTX_CF1_MASK 0x3 /* cf1 */
#define MSTORM_CORE_CONN_AG_CTX_CF1_SHIFT 4
-#define MSTORM_CORE_CONN_AG_CTX_CF2_MASK 0x3
+#define MSTORM_CORE_CONN_AG_CTX_CF2_MASK 0x3 /* cf2 */
#define MSTORM_CORE_CONN_AG_CTX_CF2_SHIFT 6
u8 flags1;
-#define MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK 0x1
+#define MSTORM_CORE_CONN_AG_CTX_CF0EN_MASK 0x1 /* cf0en */
#define MSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT 0
-#define MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK 0x1
+#define MSTORM_CORE_CONN_AG_CTX_CF1EN_MASK 0x1 /* cf1en */
#define MSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT 1
-#define MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK 0x1
+#define MSTORM_CORE_CONN_AG_CTX_CF2EN_MASK 0x1 /* cf2en */
#define MSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT 2
-#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK 0x1
+#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK 0x1 /* rule0en */
#define MSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK 0x1
+#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK 0x1 /* rule1en */
#define MSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK 0x1
+#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK 0x1 /* rule2en */
#define MSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK 0x1
+#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK 0x1 /* rule3en */
#define MSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK 0x1
+#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK 0x1 /* rule4en */
#define MSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
__le16 word0 /* word0 */;
__le16 word1 /* word1 */;
@@ -1761,36 +2039,48 @@ struct mstorm_core_conn_ag_ctx {
__le32 reg1 /* reg1 */;
};
+
/*
* per encapsulation type enabling flags
*/
struct prs_reg_encapsulation_type_en {
u8 flags;
+/* Enable bit for Ethernet-over-GRE (L2 GRE) encapsulation. */
#define PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_MASK 0x1
#define PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_SHIFT 0
+/* Enable bit for IP-over-GRE (IP GRE) encapsulation. */
#define PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GRE_ENABLE_MASK 0x1
#define PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GRE_ENABLE_SHIFT 1
+/* Enable bit for VXLAN encapsulation. */
#define PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_MASK 0x1
#define PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT 2
+/* Enable bit for T-Tag encapsulation. */
#define PRS_REG_ENCAPSULATION_TYPE_EN_T_TAG_ENABLE_MASK 0x1
#define PRS_REG_ENCAPSULATION_TYPE_EN_T_TAG_ENABLE_SHIFT 3
+/* Enable bit for Ethernet-over-GENEVE (L2 GENEVE) encapsulation. */
#define PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_MASK 0x1
#define PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT 4
+/* Enable bit for IP-over-GENEVE (IP GENEVE) encapsulation. */
#define PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GENEVE_ENABLE_MASK 0x1
#define PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GENEVE_ENABLE_SHIFT 5
#define PRS_REG_ENCAPSULATION_TYPE_EN_RESERVED_MASK 0x3
#define PRS_REG_ENCAPSULATION_TYPE_EN_RESERVED_SHIFT 6
};
+
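
A sketch of enabling VXLAN and L2-GENEVE parsing in this register image,
using the shift defines above; the helper name is illustrative:

static u8 prs_encap_en_vxlan_l2geneve(void)
{
	u8 flags = 0;

	flags |= 1 << PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT;
	flags |= 1 << PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT;
	return flags;
}
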
enum pxp_tph_st_hint {
TPH_ST_HINT_BIDIR /* Read/Write access by Host and Device */,
TPH_ST_HINT_REQUESTER /* Read/Write access by Device */,
- TPH_ST_HINT_TARGET
- /* Device Write and Host Read, or Host Write and Device Read */,
+/* Device Write and Host Read, or Host Write and Device Read */
+ TPH_ST_HINT_TARGET,
+/* Device Write and Host Read, or Host Write and Device Read - with temporal
+ * reuse
+ */
TPH_ST_HINT_TARGET_PRIO,
MAX_PXP_TPH_ST_HINT
};
+
/*
* QM hardware structure of enable bypass credit mask
*/
@@ -1814,6 +2104,7 @@ struct qm_rf_bypass_mask {
#define QM_RF_BYPASS_MASK_RESERVED1_SHIFT 7
};
+
/*
* QM hardware structure of opportunistic credit mask
*/
@@ -1841,83 +2132,95 @@ struct qm_rf_opportunistic_mask {
#define QM_RF_OPPORTUNISTIC_MASK_RESERVED1_SHIFT 9
};
+
/*
* QM hardware structure of QM map memory
*/
struct qm_rf_pq_map {
__le32 reg;
-#define QM_RF_PQ_MAP_PQ_VALID_MASK 0x1
+#define QM_RF_PQ_MAP_PQ_VALID_MASK 0x1 /* PQ active */
#define QM_RF_PQ_MAP_PQ_VALID_SHIFT 0
-#define QM_RF_PQ_MAP_RL_ID_MASK 0xFF
+#define QM_RF_PQ_MAP_RL_ID_MASK 0xFF /* RL ID */
#define QM_RF_PQ_MAP_RL_ID_SHIFT 1
+/* the first PQ associated with the VPORT and VOQ of this PQ */
#define QM_RF_PQ_MAP_VP_PQ_ID_MASK 0x1FF
#define QM_RF_PQ_MAP_VP_PQ_ID_SHIFT 9
-#define QM_RF_PQ_MAP_VOQ_MASK 0x1F
+#define QM_RF_PQ_MAP_VOQ_MASK 0x1F /* VOQ */
#define QM_RF_PQ_MAP_VOQ_SHIFT 18
-#define QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_MASK 0x3
+#define QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_MASK 0x3 /* WRR weight */
#define QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_SHIFT 23
-#define QM_RF_PQ_MAP_RL_VALID_MASK 0x1
+#define QM_RF_PQ_MAP_RL_VALID_MASK 0x1 /* RL active */
#define QM_RF_PQ_MAP_RL_VALID_SHIFT 25
#define QM_RF_PQ_MAP_RESERVED_MASK 0x3F
#define QM_RF_PQ_MAP_RESERVED_SHIFT 26
};
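
A sketch of encoding one QM PQ map entry from its components; the
function is illustrative, the field defines are the ones above:

static u32 qm_pq_map_value(u32 rl_id, u32 vp_pq_id, u32 voq, u32 wrr_grp)
{
	u32 reg = 0;

	reg |= 1 << QM_RF_PQ_MAP_PQ_VALID_SHIFT;	/* PQ active */
	reg |= (rl_id & QM_RF_PQ_MAP_RL_ID_MASK) << QM_RF_PQ_MAP_RL_ID_SHIFT;
	reg |= (vp_pq_id & QM_RF_PQ_MAP_VP_PQ_ID_MASK)
	       << QM_RF_PQ_MAP_VP_PQ_ID_SHIFT;
	reg |= (voq & QM_RF_PQ_MAP_VOQ_MASK) << QM_RF_PQ_MAP_VOQ_SHIFT;
	reg |= (wrr_grp & QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_MASK)
	       << QM_RF_PQ_MAP_WRR_WEIGHT_GROUP_SHIFT;
	return reg;
}
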
+
/*
* Completion params for aggregated interrupt completion
*/
struct sdm_agg_int_comp_params {
__le16 params;
+/* the aggregated interrupt index, 0-31 */
#define SDM_AGG_INT_COMP_PARAMS_AGG_INT_INDEX_MASK 0x3F
#define SDM_AGG_INT_COMP_PARAMS_AGG_INT_INDEX_SHIFT 0
+/* 1 - set a bit in the aggregated vector, 0 - don't set */
#define SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_ENABLE_MASK 0x1
#define SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_ENABLE_SHIFT 6
+/* Bit number in the aggregated vector, 0-279 (TBD) */
#define SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT_MASK 0x1FF
#define SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT_SHIFT 7
};
+
/*
* SDM operation gen command (generate aggregative interrupt)
*/
struct sdm_op_gen {
__le32 command;
+/* completion parameters 0-15 */
#define SDM_OP_GEN_COMP_PARAM_MASK 0xFFFF
#define SDM_OP_GEN_COMP_PARAM_SHIFT 0
-#define SDM_OP_GEN_COMP_TYPE_MASK 0xF
+#define SDM_OP_GEN_COMP_TYPE_MASK 0xF /* completion type 16-19 */
#define SDM_OP_GEN_COMP_TYPE_SHIFT 16
-#define SDM_OP_GEN_RESERVED_MASK 0xFFF
+#define SDM_OP_GEN_RESERVED_MASK 0xFFF /* reserved 20-31 */
#define SDM_OP_GEN_RESERVED_SHIFT 20
};
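
A sketch of packing aggregated-interrupt completion parameters and
wrapping them in an operation-gen command word, using the two structures
above; the function name is illustrative:

static u32 sdm_op_gen_agg_int(u8 agg_int_index, u16 vector_bit, u8 comp_type)
{
	u16 params = 0;
	u32 cmd = 0;

	params |= (agg_int_index & SDM_AGG_INT_COMP_PARAMS_AGG_INT_INDEX_MASK)
		  << SDM_AGG_INT_COMP_PARAMS_AGG_INT_INDEX_SHIFT;
	params |= 1 << SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_ENABLE_SHIFT;
	params |= (vector_bit & SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT_MASK)
		  << SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_BIT_SHIFT;

	cmd |= (params & SDM_OP_GEN_COMP_PARAM_MASK)
	       << SDM_OP_GEN_COMP_PARAM_SHIFT;
	cmd |= (comp_type & SDM_OP_GEN_COMP_TYPE_MASK)
	       << SDM_OP_GEN_COMP_TYPE_SHIFT;
	return cmd;
}
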
+
+
+
+
struct ystorm_core_conn_ag_ctx {
u8 byte0 /* cdu_validation */;
u8 byte1 /* state */;
u8 flags0;
-#define YSTORM_CORE_CONN_AG_CTX_BIT0_MASK 0x1
+#define YSTORM_CORE_CONN_AG_CTX_BIT0_MASK 0x1 /* exist_in_qm0 */
#define YSTORM_CORE_CONN_AG_CTX_BIT0_SHIFT 0
-#define YSTORM_CORE_CONN_AG_CTX_BIT1_MASK 0x1
+#define YSTORM_CORE_CONN_AG_CTX_BIT1_MASK 0x1 /* exist_in_qm1 */
#define YSTORM_CORE_CONN_AG_CTX_BIT1_SHIFT 1
-#define YSTORM_CORE_CONN_AG_CTX_CF0_MASK 0x3
+#define YSTORM_CORE_CONN_AG_CTX_CF0_MASK 0x3 /* cf0 */
#define YSTORM_CORE_CONN_AG_CTX_CF0_SHIFT 2
-#define YSTORM_CORE_CONN_AG_CTX_CF1_MASK 0x3
+#define YSTORM_CORE_CONN_AG_CTX_CF1_MASK 0x3 /* cf1 */
#define YSTORM_CORE_CONN_AG_CTX_CF1_SHIFT 4
-#define YSTORM_CORE_CONN_AG_CTX_CF2_MASK 0x3
+#define YSTORM_CORE_CONN_AG_CTX_CF2_MASK 0x3 /* cf2 */
#define YSTORM_CORE_CONN_AG_CTX_CF2_SHIFT 6
u8 flags1;
-#define YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK 0x1
+#define YSTORM_CORE_CONN_AG_CTX_CF0EN_MASK 0x1 /* cf0en */
#define YSTORM_CORE_CONN_AG_CTX_CF0EN_SHIFT 0
-#define YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK 0x1
+#define YSTORM_CORE_CONN_AG_CTX_CF1EN_MASK 0x1 /* cf1en */
#define YSTORM_CORE_CONN_AG_CTX_CF1EN_SHIFT 1
-#define YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK 0x1
+#define YSTORM_CORE_CONN_AG_CTX_CF2EN_MASK 0x1 /* cf2en */
#define YSTORM_CORE_CONN_AG_CTX_CF2EN_SHIFT 2
-#define YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK 0x1
+#define YSTORM_CORE_CONN_AG_CTX_RULE0EN_MASK 0x1 /* rule0en */
#define YSTORM_CORE_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK 0x1
+#define YSTORM_CORE_CONN_AG_CTX_RULE1EN_MASK 0x1 /* rule1en */
#define YSTORM_CORE_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK 0x1
+#define YSTORM_CORE_CONN_AG_CTX_RULE2EN_MASK 0x1 /* rule2en */
#define YSTORM_CORE_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK 0x1
+#define YSTORM_CORE_CONN_AG_CTX_RULE3EN_MASK 0x1 /* rule3en */
#define YSTORM_CORE_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK 0x1
+#define YSTORM_CORE_CONN_AG_CTX_RULE4EN_MASK 0x1 /* rule4en */
#define YSTORM_CORE_CONN_AG_CTX_RULE4EN_SHIFT 7
u8 byte2 /* byte2 */;
u8 byte3 /* byte3 */;
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 5892f99..e26c183 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -38,210 +38,306 @@ struct xstorm_eth_conn_ag_ctx {
u8 reserved0 /* cdu_validation */;
u8 eth_state /* state */;
u8 flags0;
+/* exist_in_qm0 */
#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+/* exist_in_qm1 */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED1_SHIFT 1
+/* exist_in_qm2 */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED2_SHIFT 2
+/* exist_in_qm3 */
#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM3_SHIFT 3
+/* bit4 */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED3_SHIFT 4
+/* cf_array_active */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED4_SHIFT 5
+/* bit6 */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED5_SHIFT 6
+/* bit7 */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED6_SHIFT 7
u8 flags1;
+/* bit8 */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED7_SHIFT 0
+/* bit9 */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED8_SHIFT 1
+/* bit10 */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED9_SHIFT 2
+/* bit11 */
#define XSTORM_ETH_CONN_AG_CTX_BIT11_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_BIT11_SHIFT 3
+/* bit12 */
#define XSTORM_ETH_CONN_AG_CTX_BIT12_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_BIT12_SHIFT 4
+/* bit13 */
#define XSTORM_ETH_CONN_AG_CTX_BIT13_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_BIT13_SHIFT 5
+/* bit14 */
#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT 6
+/* bit15 */
#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT 7
u8 flags2;
+/* timer0cf */
#define XSTORM_ETH_CONN_AG_CTX_CF0_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF0_SHIFT 0
+/* timer1cf */
#define XSTORM_ETH_CONN_AG_CTX_CF1_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF1_SHIFT 2
+/* timer2cf */
#define XSTORM_ETH_CONN_AG_CTX_CF2_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF2_SHIFT 4
+/* timer_stop_all */
#define XSTORM_ETH_CONN_AG_CTX_CF3_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF3_SHIFT 6
u8 flags3;
+/* cf4 */
#define XSTORM_ETH_CONN_AG_CTX_CF4_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF4_SHIFT 0
+/* cf5 */
#define XSTORM_ETH_CONN_AG_CTX_CF5_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF5_SHIFT 2
+/* cf6 */
#define XSTORM_ETH_CONN_AG_CTX_CF6_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF6_SHIFT 4
+/* cf7 */
#define XSTORM_ETH_CONN_AG_CTX_CF7_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF7_SHIFT 6
u8 flags4;
+/* cf8 */
#define XSTORM_ETH_CONN_AG_CTX_CF8_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF8_SHIFT 0
+/* cf9 */
#define XSTORM_ETH_CONN_AG_CTX_CF9_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF9_SHIFT 2
+/* cf10 */
#define XSTORM_ETH_CONN_AG_CTX_CF10_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF10_SHIFT 4
+/* cf11 */
#define XSTORM_ETH_CONN_AG_CTX_CF11_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF11_SHIFT 6
u8 flags5;
+/* cf12 */
#define XSTORM_ETH_CONN_AG_CTX_CF12_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF12_SHIFT 0
+/* cf13 */
#define XSTORM_ETH_CONN_AG_CTX_CF13_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF13_SHIFT 2
+/* cf14 */
#define XSTORM_ETH_CONN_AG_CTX_CF14_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF14_SHIFT 4
+/* cf15 */
#define XSTORM_ETH_CONN_AG_CTX_CF15_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_CF15_SHIFT 6
u8 flags6;
+/* cf16 */
#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT 0
+/* cf_array_cf */
#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT 2
+/* cf18 */
#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_SHIFT 4
+/* cf19 */
#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_SHIFT 6
u8 flags7;
+/* cf20 */
#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_SHIFT 0
+/* cf21 */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_RESERVED10_SHIFT 2
+/* cf22 */
#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_SHIFT 4
+/* cf0en */
#define XSTORM_ETH_CONN_AG_CTX_CF0EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT 6
+/* cf1en */
#define XSTORM_ETH_CONN_AG_CTX_CF1EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT 7
u8 flags8;
+/* cf2en */
#define XSTORM_ETH_CONN_AG_CTX_CF2EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT 0
+/* cf3en */
#define XSTORM_ETH_CONN_AG_CTX_CF3EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT 1
+/* cf4en */
#define XSTORM_ETH_CONN_AG_CTX_CF4EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT 2
+/* cf5en */
#define XSTORM_ETH_CONN_AG_CTX_CF5EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT 3
+/* cf6en */
#define XSTORM_ETH_CONN_AG_CTX_CF6EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT 4
+/* cf7en */
#define XSTORM_ETH_CONN_AG_CTX_CF7EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT 5
+/* cf8en */
#define XSTORM_ETH_CONN_AG_CTX_CF8EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT 6
+/* cf9en */
#define XSTORM_ETH_CONN_AG_CTX_CF9EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT 7
u8 flags9;
+/* cf10en */
#define XSTORM_ETH_CONN_AG_CTX_CF10EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT 0
+/* cf11en */
#define XSTORM_ETH_CONN_AG_CTX_CF11EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF11EN_SHIFT 1
+/* cf12en */
#define XSTORM_ETH_CONN_AG_CTX_CF12EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF12EN_SHIFT 2
+/* cf13en */
#define XSTORM_ETH_CONN_AG_CTX_CF13EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF13EN_SHIFT 3
+/* cf14en */
#define XSTORM_ETH_CONN_AG_CTX_CF14EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF14EN_SHIFT 4
+/* cf15en */
#define XSTORM_ETH_CONN_AG_CTX_CF15EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_CF15EN_SHIFT 5
+/* cf16en */
#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT 6
+/* cf_array_cf_en */
#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT 7
u8 flags10;
+/* cf18en */
#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_DQ_CF_EN_SHIFT 0
+/* cf19en */
#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT 1
+/* cf20en */
#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT 2
+/* cf21en */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED11_SHIFT 3
+/* cf22en */
#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_SLOW_PATH_EN_SHIFT 4
+/* cf23en */
#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED12_SHIFT 6
+/* rule1en */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED13_SHIFT 7
u8 flags11;
+/* rule2en */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED14_SHIFT 0
+/* rule3en */
#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RESERVED15_SHIFT 1
+/* rule4en */
#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT 2
+/* rule5en */
#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT 3
+/* rule6en */
#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT 4
+/* rule7en */
#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT 5
+/* rule8en */
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED1_SHIFT 6
+/* rule9en */
#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE9EN_SHIFT 7
u8 flags12;
+/* rule10en */
#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE10EN_SHIFT 0
+/* rule11en */
#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE11EN_SHIFT 1
+/* rule12en */
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED2_SHIFT 2
+/* rule13en */
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED3_SHIFT 3
+/* rule14en */
#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE14EN_SHIFT 4
+/* rule15en */
#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE15EN_SHIFT 5
+/* rule16en */
#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE16EN_SHIFT 6
+/* rule17en */
#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE17EN_SHIFT 7
u8 flags13;
+/* rule18en */
#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE18EN_SHIFT 0
+/* rule19en */
#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_RULE19EN_SHIFT 1
+/* rule20en */
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED4_SHIFT 2
+/* rule21en */
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED5_SHIFT 3
+/* rule22en */
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED6_SHIFT 4
+/* rule23en */
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED7_SHIFT 5
+/* rule24en */
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED8_SHIFT 6
+/* rule25en */
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_A0_RESERVED9_SHIFT 7
u8 flags14;
+/* bit16 */
#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT 0
+/* bit17 */
#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT 1
+/* bit18 */
#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT 2
+/* bit19 */
#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT 3
+/* bit20 */
#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT 4
+/* bit21 */
#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK 0x1
#define XSTORM_ETH_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT 5
+/* cf23 */
#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_MASK 0x3
#define XSTORM_ETH_CONN_AG_CTX_TPH_ENABLE_SHIFT 6
u8 edpm_event_id /* byte2 */;
@@ -308,31 +404,41 @@ struct ystorm_eth_conn_ag_ctx {
u8 byte0 /* cdu_validation */;
u8 state /* state */;
u8 flags0;
+/* exist_in_qm0 */
#define YSTORM_ETH_CONN_AG_CTX_BIT0_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT 0
+/* exist_in_qm1 */
#define YSTORM_ETH_CONN_AG_CTX_BIT1_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT 1
-#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK 0x3
+#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK 0x3 /* cf0 */
#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT 2
-#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK 0x3
+#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_MASK 0x3 /* cf1 */
#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_SHIFT 4
-#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK 0x3
+#define YSTORM_ETH_CONN_AG_CTX_CF2_MASK 0x3 /* cf2 */
#define YSTORM_ETH_CONN_AG_CTX_CF2_SHIFT 6
u8 flags1;
+/* cf0en */
#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 0
+/* cf1en */
#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_PMD_TERMINATE_CF_EN_SHIFT 1
+/* cf2en */
#define YSTORM_ETH_CONN_AG_CTX_CF2EN_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT 2
+/* rule0en */
#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT 3
+/* rule1en */
#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT 4
+/* rule2en */
#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT 5
+/* rule3en */
#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT 6
+/* rule4en */
#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK 0x1
#define YSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT 7
u8 tx_q0_int_coallecing_timeset /* byte2 */;
@@ -352,84 +458,84 @@ struct tstorm_eth_conn_ag_ctx {
u8 byte0 /* cdu_validation */;
u8 byte1 /* state */;
u8 flags0;
-#define TSTORM_ETH_CONN_AG_CTX_BIT0_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_BIT0_MASK 0x1 /* exist_in_qm0 */
#define TSTORM_ETH_CONN_AG_CTX_BIT0_SHIFT 0
-#define TSTORM_ETH_CONN_AG_CTX_BIT1_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_BIT1_MASK 0x1 /* exist_in_qm1 */
#define TSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT 1
-#define TSTORM_ETH_CONN_AG_CTX_BIT2_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_BIT2_MASK 0x1 /* bit2 */
#define TSTORM_ETH_CONN_AG_CTX_BIT2_SHIFT 2
-#define TSTORM_ETH_CONN_AG_CTX_BIT3_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_BIT3_MASK 0x1 /* bit3 */
#define TSTORM_ETH_CONN_AG_CTX_BIT3_SHIFT 3
-#define TSTORM_ETH_CONN_AG_CTX_BIT4_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_BIT4_MASK 0x1 /* bit4 */
#define TSTORM_ETH_CONN_AG_CTX_BIT4_SHIFT 4
-#define TSTORM_ETH_CONN_AG_CTX_BIT5_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_BIT5_MASK 0x1 /* bit5 */
#define TSTORM_ETH_CONN_AG_CTX_BIT5_SHIFT 5
-#define TSTORM_ETH_CONN_AG_CTX_CF0_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF0_MASK 0x3 /* timer0cf */
#define TSTORM_ETH_CONN_AG_CTX_CF0_SHIFT 6
u8 flags1;
-#define TSTORM_ETH_CONN_AG_CTX_CF1_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF1_MASK 0x3 /* timer1cf */
#define TSTORM_ETH_CONN_AG_CTX_CF1_SHIFT 0
-#define TSTORM_ETH_CONN_AG_CTX_CF2_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF2_MASK 0x3 /* timer2cf */
#define TSTORM_ETH_CONN_AG_CTX_CF2_SHIFT 2
-#define TSTORM_ETH_CONN_AG_CTX_CF3_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF3_MASK 0x3 /* timer_stop_all */
#define TSTORM_ETH_CONN_AG_CTX_CF3_SHIFT 4
-#define TSTORM_ETH_CONN_AG_CTX_CF4_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF4_MASK 0x3 /* cf4 */
#define TSTORM_ETH_CONN_AG_CTX_CF4_SHIFT 6
u8 flags2;
-#define TSTORM_ETH_CONN_AG_CTX_CF5_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF5_MASK 0x3 /* cf5 */
#define TSTORM_ETH_CONN_AG_CTX_CF5_SHIFT 0
-#define TSTORM_ETH_CONN_AG_CTX_CF6_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF6_MASK 0x3 /* cf6 */
#define TSTORM_ETH_CONN_AG_CTX_CF6_SHIFT 2
-#define TSTORM_ETH_CONN_AG_CTX_CF7_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF7_MASK 0x3 /* cf7 */
#define TSTORM_ETH_CONN_AG_CTX_CF7_SHIFT 4
-#define TSTORM_ETH_CONN_AG_CTX_CF8_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF8_MASK 0x3 /* cf8 */
#define TSTORM_ETH_CONN_AG_CTX_CF8_SHIFT 6
u8 flags3;
-#define TSTORM_ETH_CONN_AG_CTX_CF9_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF9_MASK 0x3 /* cf9 */
#define TSTORM_ETH_CONN_AG_CTX_CF9_SHIFT 0
-#define TSTORM_ETH_CONN_AG_CTX_CF10_MASK 0x3
+#define TSTORM_ETH_CONN_AG_CTX_CF10_MASK 0x3 /* cf10 */
#define TSTORM_ETH_CONN_AG_CTX_CF10_SHIFT 2
-#define TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF0EN_MASK 0x1 /* cf0en */
#define TSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT 4
-#define TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF1EN_MASK 0x1 /* cf1en */
#define TSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT 5
-#define TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF2EN_MASK 0x1 /* cf2en */
#define TSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT 6
-#define TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF3EN_MASK 0x1 /* cf3en */
#define TSTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT 7
u8 flags4;
-#define TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF4EN_MASK 0x1 /* cf4en */
#define TSTORM_ETH_CONN_AG_CTX_CF4EN_SHIFT 0
-#define TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF5EN_MASK 0x1 /* cf5en */
#define TSTORM_ETH_CONN_AG_CTX_CF5EN_SHIFT 1
-#define TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF6EN_MASK 0x1 /* cf6en */
#define TSTORM_ETH_CONN_AG_CTX_CF6EN_SHIFT 2
-#define TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF7EN_MASK 0x1 /* cf7en */
#define TSTORM_ETH_CONN_AG_CTX_CF7EN_SHIFT 3
-#define TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF8EN_MASK 0x1 /* cf8en */
#define TSTORM_ETH_CONN_AG_CTX_CF8EN_SHIFT 4
-#define TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF9EN_MASK 0x1 /* cf9en */
#define TSTORM_ETH_CONN_AG_CTX_CF9EN_SHIFT 5
-#define TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_CF10EN_MASK 0x1 /* cf10en */
#define TSTORM_ETH_CONN_AG_CTX_CF10EN_SHIFT 6
-#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK 0x1 /* rule0en */
#define TSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT 7
u8 flags5;
-#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK 0x1 /* rule1en */
#define TSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT 0
-#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK 0x1 /* rule2en */
#define TSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT 1
-#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK 0x1 /* rule3en */
#define TSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT 2
-#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK 0x1 /* rule4en */
#define TSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT 3
-#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_MASK 0x1 /* rule5en */
#define TSTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT 4
-#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_MASK 0x1 /* rule6en */
#define TSTORM_ETH_CONN_AG_CTX_RX_BD_EN_SHIFT 5
-#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_MASK 0x1 /* rule7en */
#define TSTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT 6
-#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK 0x1
+#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_MASK 0x1 /* rule8en */
#define TSTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT 7
__le32 reg0 /* reg0 */;
__le32 reg1 /* reg1 */;
@@ -456,57 +562,82 @@ struct ustorm_eth_conn_ag_ctx {
u8 byte0 /* cdu_validation */;
u8 byte1 /* state */;
u8 flags0;
+/* exist_in_qm0 */
#define USTORM_ETH_CONN_AG_CTX_BIT0_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_BIT0_SHIFT 0
+/* exist_in_qm1 */
#define USTORM_ETH_CONN_AG_CTX_BIT1_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_BIT1_SHIFT 1
+/* timer0cf */
#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_MASK 0x3
#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_SHIFT 2
+/* timer1cf */
#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_MASK 0x3
#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_SHIFT 4
+/* timer2cf */
#define USTORM_ETH_CONN_AG_CTX_CF2_MASK 0x3
#define USTORM_ETH_CONN_AG_CTX_CF2_SHIFT 6
u8 flags1;
+/* timer_stop_all */
#define USTORM_ETH_CONN_AG_CTX_CF3_MASK 0x3
#define USTORM_ETH_CONN_AG_CTX_CF3_SHIFT 0
+/* cf4 */
#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_MASK 0x3
#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_SHIFT 2
+/* cf5 */
#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_MASK 0x3
#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_SHIFT 4
+/* cf6 */
#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK 0x3
#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT 6
u8 flags2;
+/* cf0en */
#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_TX_PMD_TERMINATE_CF_EN_SHIFT 0
+/* cf1en */
#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RX_PMD_TERMINATE_CF_EN_SHIFT 1
+/* cf2en */
#define USTORM_ETH_CONN_AG_CTX_CF2EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT 2
+/* cf3en */
#define USTORM_ETH_CONN_AG_CTX_CF3EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_CF3EN_SHIFT 3
+/* cf4en */
#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_TX_ARM_CF_EN_SHIFT 4
+/* cf5en */
#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RX_ARM_CF_EN_SHIFT 5
+/* cf6en */
#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_EN_SHIFT 6
+/* rule0en */
#define USTORM_ETH_CONN_AG_CTX_RULE0EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT 7
u8 flags3;
+/* rule1en */
#define USTORM_ETH_CONN_AG_CTX_RULE1EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT 0
+/* rule2en */
#define USTORM_ETH_CONN_AG_CTX_RULE2EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT 1
+/* rule3en */
#define USTORM_ETH_CONN_AG_CTX_RULE3EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT 2
+/* rule4en */
#define USTORM_ETH_CONN_AG_CTX_RULE4EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT 3
+/* rule5en */
#define USTORM_ETH_CONN_AG_CTX_RULE5EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RULE5EN_SHIFT 4
+/* rule6en */
#define USTORM_ETH_CONN_AG_CTX_RULE6EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RULE6EN_SHIFT 5
+/* rule7en */
#define USTORM_ETH_CONN_AG_CTX_RULE7EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RULE7EN_SHIFT 6
+/* rule8en */
#define USTORM_ETH_CONN_AG_CTX_RULE8EN_MASK 0x1
#define USTORM_ETH_CONN_AG_CTX_RULE8EN_SHIFT 7
u8 byte2 /* byte2 */;
@@ -539,83 +670,79 @@ struct mstorm_eth_conn_st_ctx {
* eth connection context
*/
struct eth_conn_context {
- struct tstorm_eth_conn_st_ctx tstorm_st_context
- /* tstorm storm context */;
+/* tstorm storm context */
+ struct tstorm_eth_conn_st_ctx tstorm_st_context;
struct regpair tstorm_st_padding[2] /* padding */;
- struct pstorm_eth_conn_st_ctx pstorm_st_context
- /* pstorm storm context */;
- struct xstorm_eth_conn_st_ctx xstorm_st_context
- /* xstorm storm context */;
- struct xstorm_eth_conn_ag_ctx xstorm_ag_context
- /* xstorm aggregative context */;
- struct ystorm_eth_conn_st_ctx ystorm_st_context
- /* ystorm storm context */;
- struct ystorm_eth_conn_ag_ctx ystorm_ag_context
- /* ystorm aggregative context */;
- struct tstorm_eth_conn_ag_ctx tstorm_ag_context
- /* tstorm aggregative context */;
- struct ustorm_eth_conn_ag_ctx ustorm_ag_context
- /* ustorm aggregative context */;
- struct ustorm_eth_conn_st_ctx ustorm_st_context
- /* ustorm storm context */;
- struct mstorm_eth_conn_st_ctx mstorm_st_context
- /* mstorm storm context */;
+/* pstorm storm context */
+ struct pstorm_eth_conn_st_ctx pstorm_st_context;
+/* xstorm storm context */
+ struct xstorm_eth_conn_st_ctx xstorm_st_context;
+/* xstorm aggregative context */
+ struct xstorm_eth_conn_ag_ctx xstorm_ag_context;
+/* ystorm storm context */
+ struct ystorm_eth_conn_st_ctx ystorm_st_context;
+/* ystorm aggregative context */
+ struct ystorm_eth_conn_ag_ctx ystorm_ag_context;
+/* tstorm aggregative context */
+ struct tstorm_eth_conn_ag_ctx tstorm_ag_context;
+/* ustorm aggregative context */
+ struct ustorm_eth_conn_ag_ctx ustorm_ag_context;
+/* ustorm storm context */
+ struct ustorm_eth_conn_st_ctx ustorm_st_context;
+/* mstorm storm context */
+ struct mstorm_eth_conn_st_ctx mstorm_st_context;
};
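
A note on convention before the filter and ramrod definitions below: every flags byte in these contexts is accessed through its paired _MASK/_SHIFT defines rather than C bitfields. A minimal self-contained sketch of that access pattern, assuming a little-endian host; the SET_FIELD/GET_FIELD macros are written out locally in the same shape the base driver uses:

    #include <stdint.h>
    #include <stdio.h>

    /* Local copies of two defines from the ystorm context above. */
    #define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_MASK  0x3
    #define YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF_SHIFT 2

    /* Generic accessors in the style used throughout the ecore base code. */
    #define SET_FIELD(value, name, val) \
        ((value) = ((value) & ~((name##_MASK) << (name##_SHIFT))) | \
                   (((val) & (name##_MASK)) << (name##_SHIFT)))
    #define GET_FIELD(value, name) \
        (((value) >> (name##_SHIFT)) & (name##_MASK))

    int main(void)
    {
        uint8_t flags0 = 0;

        SET_FIELD(flags0, YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF, 0x2);
        printf("flags0 = 0x%02x cf0 = %u\n", (unsigned)flags0,
               (unsigned)GET_FIELD(flags0,
                                   YSTORM_ETH_CONN_AG_CTX_TX_BD_CONS_UPD_CF));
        return 0;
    }

The same two macros cover every MASK/SHIFT pair in the structures that follow.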
+
/*
* Return codes for Ethernet filter commands
*/
enum eth_error_code {
ETH_OK = 0x00 /* command succeeded */,
- ETH_FILTERS_MAC_ADD_FAIL_FULL
- /* mac add filters command failed due to cam full state */,
- ETH_FILTERS_MAC_ADD_FAIL_FULL_MTT2
- /* mac add filters command failed due to mtt2 full state */,
- ETH_FILTERS_MAC_ADD_FAIL_DUP_MTT2
- /* mac add filters command failed due to duplicate mac address */,
- ETH_FILTERS_MAC_ADD_FAIL_DUP_STT2
- /* mac add filters command failed due to duplicate mac address */,
- ETH_FILTERS_MAC_DEL_FAIL_NOF
- /* mac delete filters command failed due to not found state */,
- ETH_FILTERS_MAC_DEL_FAIL_NOF_MTT2
- /* mac delete filters command failed due to not found state */,
- ETH_FILTERS_MAC_DEL_FAIL_NOF_STT2
- /* mac delete filters command failed due to not found state */,
- ETH_FILTERS_MAC_ADD_FAIL_ZERO_MAC
- /* mac add filters command failed due to MAC Address of
- * 00:00:00:00:00:00
- */
- ,
- ETH_FILTERS_VLAN_ADD_FAIL_FULL
- /* vlan add filters command failed due to cam full state */,
- ETH_FILTERS_VLAN_ADD_FAIL_DUP
- /* vlan add filters command failed due to duplicate VLAN filter */,
- ETH_FILTERS_VLAN_DEL_FAIL_NOF
- /* vlan delete filters command failed due to not found state */,
- ETH_FILTERS_VLAN_DEL_FAIL_NOF_TT1
- /* vlan delete filters command failed due to not found state */,
- ETH_FILTERS_PAIR_ADD_FAIL_DUP
- /* pair add filters command failed due to duplicate request */,
- ETH_FILTERS_PAIR_ADD_FAIL_FULL
- /* pair add filters command failed due to full state */,
- ETH_FILTERS_PAIR_ADD_FAIL_FULL_MAC
- /* pair add filters command failed due to full state */,
- ETH_FILTERS_PAIR_DEL_FAIL_NOF
- /* pair add filters command failed due not found state */,
- ETH_FILTERS_PAIR_DEL_FAIL_NOF_TT1
- /* pair add filters command failed due not found state */,
- ETH_FILTERS_PAIR_ADD_FAIL_ZERO_MAC
- /* pair add filters command failed due to MAC Address of
- * 00:00:00:00:00:00
- */
- ,
- ETH_FILTERS_VNI_ADD_FAIL_FULL
- /* vni add filters command failed due to cam full state */,
- ETH_FILTERS_VNI_ADD_FAIL_DUP
- /* vni add filters command failed due to duplicate VNI filter */,
+/* mac add filters command failed due to cam full state */
+ ETH_FILTERS_MAC_ADD_FAIL_FULL,
+/* mac add filters command failed due to mtt2 full state */
+ ETH_FILTERS_MAC_ADD_FAIL_FULL_MTT2,
+/* mac add filters command failed due to duplicate mac address */
+ ETH_FILTERS_MAC_ADD_FAIL_DUP_MTT2,
+/* mac add filters command failed due to duplicate mac address */
+ ETH_FILTERS_MAC_ADD_FAIL_DUP_STT2,
+/* mac delete filters command failed due to not found state */
+ ETH_FILTERS_MAC_DEL_FAIL_NOF,
+/* mac delete filters command failed due to not found state */
+ ETH_FILTERS_MAC_DEL_FAIL_NOF_MTT2,
+/* mac delete filters command failed due to not found state */
+ ETH_FILTERS_MAC_DEL_FAIL_NOF_STT2,
+/* mac add filters command failed due to MAC Address of 00:00:00:00:00:00 */
+ ETH_FILTERS_MAC_ADD_FAIL_ZERO_MAC,
+/* vlan add filters command failed due to cam full state */
+ ETH_FILTERS_VLAN_ADD_FAIL_FULL,
+/* vlan add filters command failed due to duplicate VLAN filter */
+ ETH_FILTERS_VLAN_ADD_FAIL_DUP,
+/* vlan delete filters command failed due to not found state */
+ ETH_FILTERS_VLAN_DEL_FAIL_NOF,
+/* vlan delete filters command failed due to not found state */
+ ETH_FILTERS_VLAN_DEL_FAIL_NOF_TT1,
+/* pair add filters command failed due to duplicate request */
+ ETH_FILTERS_PAIR_ADD_FAIL_DUP,
+/* pair add filters command failed due to full state */
+ ETH_FILTERS_PAIR_ADD_FAIL_FULL,
+/* pair add filters command failed due to full state */
+ ETH_FILTERS_PAIR_ADD_FAIL_FULL_MAC,
+/* pair delete filters command failed due to not found state */
+ ETH_FILTERS_PAIR_DEL_FAIL_NOF,
+/* pair delete filters command failed due to not found state */
+ ETH_FILTERS_PAIR_DEL_FAIL_NOF_TT1,
+/* pair add filters command failed due to MAC Address of 00:00:00:00:00:00 */
+ ETH_FILTERS_PAIR_ADD_FAIL_ZERO_MAC,
+/* vni add filters command failed due to cam full state */
+ ETH_FILTERS_VNI_ADD_FAIL_FULL,
+/* vni add filters command failed due to duplicate VNI filter */
+ ETH_FILTERS_VNI_ADD_FAIL_DUP,
MAX_ETH_ERROR_CODE
};
+
/*
* opcodes for the event ring
*/
@@ -640,6 +767,7 @@ enum eth_event_opcode {
MAX_ETH_EVENT_OPCODE
};
+
/*
* Ethernet filter actions
*/
@@ -647,11 +775,12 @@ enum eth_filter_action {
ETH_FILTER_ACTION_UNUSED,
ETH_FILTER_ACTION_REMOVE,
ETH_FILTER_ACTION_ADD,
- ETH_FILTER_ACTION_REMOVE_ALL
- /* Remove all filters of given type and vport ID. */,
+/* Remove all filters of given type and vport ID. */
+ ETH_FILTER_ACTION_REMOVE_ALL,
MAX_ETH_FILTER_ACTION
};
+
/*
* Command for adding/removing a classification rule $$KEEP_ENDIANNESS$$
*/
@@ -667,6 +796,7 @@ struct eth_filter_cmd {
__le16 vlan_id;
};
+
/*
* $$KEEP_ENDIANNESS$$
*/
@@ -674,10 +804,14 @@ struct eth_filter_cmd_header {
u8 rx /* If set, apply these commands to the RX path */;
u8 tx /* If set, apply these commands to the TX path */;
u8 cmd_cnt /* Number of filter commands */;
+/* 0 - don't assert in case of filter configuration error. Just return an error
+ * code. 1 - assert in case of filter configuration error.
+ */
u8 assert_on_error;
u8 reserved1[4];
};
+
/*
* Ethernet filter types: mac/vlan/pair
*/
@@ -689,25 +823,27 @@ enum eth_filter_type {
ETH_FILTER_TYPE_INNER_MAC /* Add/remove an inner MAC address */,
ETH_FILTER_TYPE_INNER_VLAN /* Add/remove an inner VLAN */,
ETH_FILTER_TYPE_INNER_PAIR /* Add/remove an inner MAC-VLAN pair */,
- ETH_FILTER_TYPE_INNER_MAC_VNI_PAIR /* Add/remove a inner MAC-VNI pair */
- ,
+/* Add/remove an inner MAC-VNI pair */
+ ETH_FILTER_TYPE_INNER_MAC_VNI_PAIR,
ETH_FILTER_TYPE_MAC_VNI_PAIR /* Add/remove a MAC-VNI pair */,
ETH_FILTER_TYPE_VNI /* Add/remove a VNI */,
MAX_ETH_FILTER_TYPE
};
+
/*
* eth IPv4 Fragment Type
*/
enum eth_ipv4_frag_type {
ETH_IPV4_NOT_FRAG /* IPV4 Packet Not Fragmented */,
- ETH_IPV4_FIRST_FRAG
- /* First Fragment of IPv4 Packet (contains headers) */,
- ETH_IPV4_NON_FIRST_FRAG
- /* Non-First Fragment of IPv4 Packet (does not contain headers) */,
+/* First Fragment of IPv4 Packet (contains headers) */
+ ETH_IPV4_FIRST_FRAG,
+/* Non-First Fragment of IPv4 Packet (does not contain headers) */
+ ETH_IPV4_NON_FIRST_FRAG,
MAX_ETH_IPV4_FRAG_TYPE
};
+
/*
* eth IPv4 Fragment Type
*/
@@ -717,6 +853,7 @@ enum eth_ip_type {
MAX_ETH_IP_TYPE
};
+
/*
* Ethernet Ramrod Command IDs
*/
@@ -747,73 +884,101 @@ enum eth_ramrod_cmd_id {
MAX_ETH_RAMROD_CMD_ID
};
+
/*
* return code from eth sp ramrods
*/
struct eth_return_code {
u8 value;
+/* error code (use enum eth_error_code) */
#define ETH_RETURN_CODE_ERR_CODE_MASK 0x1F
#define ETH_RETURN_CODE_ERR_CODE_SHIFT 0
#define ETH_RETURN_CODE_RESERVED_MASK 0x3
#define ETH_RETURN_CODE_RESERVED_SHIFT 5
+/* rx path - 0, tx path - 1 */
#define ETH_RETURN_CODE_RX_TX_MASK 0x1
#define ETH_RETURN_CODE_RX_TX_SHIFT 7
};
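
For illustration, a hedged sketch of splitting eth_return_code.value back into its fields; this decode is a guess at typical usage, not the PMD's actual completion handling:

    #include <stdint.h>
    #include <stdio.h>

    #define ETH_RETURN_CODE_ERR_CODE_MASK  0x1F
    #define ETH_RETURN_CODE_ERR_CODE_SHIFT 0
    #define ETH_RETURN_CODE_RX_TX_MASK     0x1
    #define ETH_RETURN_CODE_RX_TX_SHIFT    7

    static void decode_return_code(uint8_t value)
    {
        uint8_t err = (value >> ETH_RETURN_CODE_ERR_CODE_SHIFT) &
                      ETH_RETURN_CODE_ERR_CODE_MASK;
        uint8_t is_tx = (value >> ETH_RETURN_CODE_RX_TX_SHIFT) &
                        ETH_RETURN_CODE_RX_TX_MASK;

        /* err == 0 is ETH_OK per enum eth_error_code above. */
        printf("%s path returned error code %u\n", is_tx ? "tx" : "rx", err);
    }

    int main(void)
    {
        decode_return_code(0x80 | 0x09); /* tx path, arbitrary error code */
        return 0;
    }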
+
/*
* What to do in case an error occurs
*/
enum eth_tx_err {
ETH_TX_ERR_DROP /* Drop erroneous packet. */,
- ETH_TX_ERR_ASSERT_MALICIOUS
- /* Assert an interrupt for PF, declare as malicious for VF */,
+/* Assert an interrupt for PF, declare as malicious for VF */
+ ETH_TX_ERR_ASSERT_MALICIOUS,
MAX_ETH_TX_ERR
};
+
/*
* Array of the different error type behaviors
*/
struct eth_tx_err_vals {
__le16 values;
+/* Wrong VLAN insertion mode (use enum eth_tx_err) */
#define ETH_TX_ERR_VALS_ILLEGAL_VLAN_MODE_MASK 0x1
#define ETH_TX_ERR_VALS_ILLEGAL_VLAN_MODE_SHIFT 0
+/* Packet is below minimal size (use enum eth_tx_err) */
#define ETH_TX_ERR_VALS_PACKET_TOO_SMALL_MASK 0x1
#define ETH_TX_ERR_VALS_PACKET_TOO_SMALL_SHIFT 1
+/* Vport has sent spoofed packet (use enum eth_tx_err) */
#define ETH_TX_ERR_VALS_ANTI_SPOOFING_ERR_MASK 0x1
#define ETH_TX_ERR_VALS_ANTI_SPOOFING_ERR_SHIFT 2
+/* Packet with illegal type of inband tag (use enum eth_tx_err) */
#define ETH_TX_ERR_VALS_ILLEGAL_INBAND_TAGS_MASK 0x1
#define ETH_TX_ERR_VALS_ILLEGAL_INBAND_TAGS_SHIFT 3
+/* Packet marked for VLAN insertion when inband tag is present
+ * (use enum eth_tx_err)
+ */
#define ETH_TX_ERR_VALS_VLAN_INSERTION_W_INBAND_TAG_MASK 0x1
#define ETH_TX_ERR_VALS_VLAN_INSERTION_W_INBAND_TAG_SHIFT 4
+/* Non LSO packet larger than MTU (use enum eth_tx_err) */
#define ETH_TX_ERR_VALS_MTU_VIOLATION_MASK 0x1
#define ETH_TX_ERR_VALS_MTU_VIOLATION_SHIFT 5
+/* VF/PF has sent LLDP/PFC or any other type of control packet that it is not
+ * allowed to send (use enum eth_tx_err)
+ */
#define ETH_TX_ERR_VALS_ILLEGAL_CONTROL_FRAME_MASK 0x1
#define ETH_TX_ERR_VALS_ILLEGAL_CONTROL_FRAME_SHIFT 6
#define ETH_TX_ERR_VALS_RESERVED_MASK 0x1FF
#define ETH_TX_ERR_VALS_RESERVED_SHIFT 7
};
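
Each error type above is a 1-bit slot that carries an enum eth_tx_err choice. A small sketch composing the values word; plain host-endian assignment stands in for the driver's little-endian conversion:

    #include <stdint.h>
    #include <stdio.h>

    #define ETH_TX_ERR_VALS_ANTI_SPOOFING_ERR_SHIFT 2
    #define ETH_TX_ERR_VALS_MTU_VIOLATION_SHIFT     5

    /* Mirrors enum eth_tx_err above: 0 = drop, 1 = assert/malicious. */
    enum tx_err_sketch { TX_ERR_DROP = 0, TX_ERR_ASSERT_MALICIOUS = 1 };

    int main(void)
    {
        uint16_t values = 0;

        /* Drop oversized non-LSO packets, flag spoofed sources as malicious. */
        values |= (uint16_t)(TX_ERR_DROP <<
                             ETH_TX_ERR_VALS_MTU_VIOLATION_SHIFT);
        values |= (uint16_t)(TX_ERR_ASSERT_MALICIOUS <<
                             ETH_TX_ERR_VALS_ANTI_SPOOFING_ERR_SHIFT);
        printf("tx_err values = 0x%04x\n", values);
        return 0;
    }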
+
/*
* vport rss configuration data
*/
struct eth_vport_rss_config {
__le16 capabilities;
+/* configuration of the IpV4 2-tuple capability */
#define ETH_VPORT_RSS_CONFIG_IPV4_CAPABILITY_MASK 0x1
#define ETH_VPORT_RSS_CONFIG_IPV4_CAPABILITY_SHIFT 0
+/* configuration of the IpV6 2-tuple capability */
#define ETH_VPORT_RSS_CONFIG_IPV6_CAPABILITY_MASK 0x1
#define ETH_VPORT_RSS_CONFIG_IPV6_CAPABILITY_SHIFT 1
+/* configuration of the IpV4 4-tuple capability for TCP */
#define ETH_VPORT_RSS_CONFIG_IPV4_TCP_CAPABILITY_MASK 0x1
#define ETH_VPORT_RSS_CONFIG_IPV4_TCP_CAPABILITY_SHIFT 2
+/* configuration of the IpV6 4-tuple capability for TCP */
#define ETH_VPORT_RSS_CONFIG_IPV6_TCP_CAPABILITY_MASK 0x1
#define ETH_VPORT_RSS_CONFIG_IPV6_TCP_CAPABILITY_SHIFT 3
+/* configuration of the IpV4 4-tuple capability for UDP */
#define ETH_VPORT_RSS_CONFIG_IPV4_UDP_CAPABILITY_MASK 0x1
#define ETH_VPORT_RSS_CONFIG_IPV4_UDP_CAPABILITY_SHIFT 4
+/* configuration of the IpV6 4-tuple capability for UDP */
#define ETH_VPORT_RSS_CONFIG_IPV6_UDP_CAPABILITY_MASK 0x1
#define ETH_VPORT_RSS_CONFIG_IPV6_UDP_CAPABILITY_SHIFT 5
+/* configuration of the 5-tuple capability */
#define ETH_VPORT_RSS_CONFIG_EN_5_TUPLE_CAPABILITY_MASK 0x1
#define ETH_VPORT_RSS_CONFIG_EN_5_TUPLE_CAPABILITY_SHIFT 6
+/* reserved */
#define ETH_VPORT_RSS_CONFIG_RESERVED0_MASK 0x1FF
#define ETH_VPORT_RSS_CONFIG_RESERVED0_SHIFT 7
+/* The RSS engine ID. Must be allocated to each vport with RSS enabled.
+ * Total number of RSS engines is ETH_RSS_ENGINE_NUM_, according to chip type.
+ */
u8 rss_id;
u8 rss_mode /* The RSS mode for this function */;
u8 update_rss_key /* if set update the rss key */;
@@ -821,13 +986,14 @@ struct eth_vport_rss_config {
u8 update_rss_capabilities /* if set update the capabilities */;
u8 tbl_size /* rss mask (Tbl size) */;
__le32 reserved2[2];
- __le16 indirection_table[ETH_RSS_IND_TABLE_ENTRIES_NUM]
- /* RSS indirection table */;
- __le32 rss_key[ETH_RSS_KEY_SIZE_REGS] /* RSS key supplied to us by OS */
- ;
+/* RSS indirection table */
+ __le16 indirection_table[ETH_RSS_IND_TABLE_ENTRIES_NUM];
+/* RSS key supplied to us by OS */
+ __le32 rss_key[ETH_RSS_KEY_SIZE_REGS];
__le32 reserved3[2];
};
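
To make the RSS block concrete, a sketch that enables IPv4/IPv6 TCP 4-tuple hashing and spreads the indirection table across the Rx queues. The struct is a trimmed stand-in, 128 entries mirrors the usual table layout, and byte-order handling is omitted:

    #include <stdint.h>
    #include <string.h>

    #define ETH_VPORT_RSS_CONFIG_IPV4_TCP_CAPABILITY_SHIFT 2
    #define ETH_VPORT_RSS_CONFIG_IPV6_TCP_CAPABILITY_SHIFT 3
    #define IND_TABLE_ENTRIES 128 /* stand-in for ETH_RSS_IND_TABLE_ENTRIES_NUM */

    struct rss_sketch {          /* trimmed stand-in for eth_vport_rss_config */
        uint16_t capabilities;
        uint8_t tbl_size;
        uint16_t indirection_table[IND_TABLE_ENTRIES];
    };

    static void rss_fill(struct rss_sketch *cfg, uint16_t num_rxq)
    {
        int i;

        memset(cfg, 0, sizeof(*cfg));
        cfg->capabilities =
            (1 << ETH_VPORT_RSS_CONFIG_IPV4_TCP_CAPABILITY_SHIFT) |
            (1 << ETH_VPORT_RSS_CONFIG_IPV6_TCP_CAPABILITY_SHIFT);
        cfg->tbl_size = 7; /* "rss mask (Tbl size)" above; encoding is FW-side */
        for (i = 0; i < IND_TABLE_ENTRIES; i++)  /* assumes num_rxq > 0 */
            cfg->indirection_table[i] = i % num_rxq;
    }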
+
/*
* eth vport RSS mode
*/
@@ -837,21 +1003,28 @@ enum eth_vport_rss_mode {
MAX_ETH_VPORT_RSS_MODE
};
+
/*
* Command for setting classification flags for a vport $$KEEP_ENDIANNESS$$
*/
struct eth_vport_rx_mode {
__le16 state;
+/* drop all unicast packets */
#define ETH_VPORT_RX_MODE_UCAST_DROP_ALL_MASK 0x1
#define ETH_VPORT_RX_MODE_UCAST_DROP_ALL_SHIFT 0
+/* accept all unicast packets (subject to vlan) */
#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL_MASK 0x1
#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL_SHIFT 1
+/* accept all unmatched unicast packets */
#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED_MASK 0x1
#define ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED_SHIFT 2
+/* drop all multicast packets */
#define ETH_VPORT_RX_MODE_MCAST_DROP_ALL_MASK 0x1
#define ETH_VPORT_RX_MODE_MCAST_DROP_ALL_SHIFT 3
+/* accept all multicast packets (subject to vlan) */
#define ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL_MASK 0x1
#define ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL_SHIFT 4
+/* accept all broadcast packets (subject to vlan) */
#define ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL_MASK 0x1
#define ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL_SHIFT 5
#define ETH_VPORT_RX_MODE_RESERVED1_MASK 0x3FF
@@ -859,6 +1032,7 @@ struct eth_vport_rx_mode {
__le16 reserved2[3];
};
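
These bits are how promiscuous policy reaches the FW. A sketch of the Rx state word for full promiscuous operation; the real flow would also mirror tx_mode and carry the word in a vport ramrod:

    #include <stdint.h>

    #define ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL_SHIFT       1
    #define ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED_SHIFT 2
    #define ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL_SHIFT       4
    #define ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL_SHIFT       5

    /* Rx-mode state word for promiscuous operation: accept everything. */
    static uint16_t rx_mode_promisc(void)
    {
        return (uint16_t)((1 << ETH_VPORT_RX_MODE_UCAST_ACCEPT_ALL_SHIFT) |
                          (1 << ETH_VPORT_RX_MODE_UCAST_ACCEPT_UNMATCHED_SHIFT) |
                          (1 << ETH_VPORT_RX_MODE_MCAST_ACCEPT_ALL_SHIFT) |
                          (1 << ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL_SHIFT));
    }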
+
/*
* Command for setting tpa parameters
*/
@@ -867,39 +1041,45 @@ struct eth_vport_tpa_param {
u8 tpa_ipv6_en_flg /* Enable TPA for IPv6 packets */;
u8 tpa_ipv4_tunn_en_flg /* Enable TPA for IPv4 over tunnel */;
u8 tpa_ipv6_tunn_en_flg /* Enable TPA for IPv6 over tunnel */;
+/* If set, start each tpa segment on new SGE (GRO mode). One SGE per segment
+ * allowed
+ */
u8 tpa_pkt_split_flg;
- u8 tpa_hdr_data_split_flg
- /* If set, put header of first TPA segment on bd and data on SGE */
- ;
- u8 tpa_gro_consistent_flg
- /* If set, GRO data consistent will checked for TPA continue */;
- u8 tpa_max_aggs_num
- /* maximum number of opened aggregations per v-port */;
+/* If set, put header of first TPA segment on bd and data on SGE */
+ u8 tpa_hdr_data_split_flg;
+/* If set, GRO data consistency will be checked on TPA continue */
+ u8 tpa_gro_consistent_flg;
+/* maximum number of opened aggregations per v-port */
+ u8 tpa_max_aggs_num;
__le16 tpa_max_size /* maximal size for the aggregated TPA packets */;
- __le16 tpa_min_size_to_start
- /* minimum TCP payload size for a packet to start aggregation */;
- __le16 tpa_min_size_to_cont
- /* minimum TCP payload size for a packet to continue aggregation */
- ;
- u8 max_buff_num
- /* maximal number of buffers that can be used for one aggregation */
- ;
+/* minimum TCP payload size for a packet to start aggregation */
+ __le16 tpa_min_size_to_start;
+/* minimum TCP payload size for a packet to continue aggregation */
+ __le16 tpa_min_size_to_cont;
+/* maximal number of buffers that can be used for one aggregation */
+ u8 max_buff_num;
u8 reserved;
};
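
A sketch of a plausible TPA (LRO) configuration over the fields above; the threshold values are illustrative only, not the PMD's tuned defaults:

    #include <stdint.h>
    #include <string.h>

    struct tpa_sketch {           /* trimmed stand-in for eth_vport_tpa_param */
        uint8_t tpa_ipv4_en_flg, tpa_ipv6_en_flg;
        uint8_t tpa_hdr_data_split_flg;
        uint8_t tpa_max_aggs_num;
        uint16_t tpa_max_size;
        uint16_t tpa_min_size_to_start, tpa_min_size_to_cont;
        uint8_t max_buff_num;
    };

    static void tpa_fill(struct tpa_sketch *p, uint16_t mtu)
    {
        memset(p, 0, sizeof(*p));
        p->tpa_ipv4_en_flg = 1;
        p->tpa_ipv6_en_flg = 1;
        p->tpa_hdr_data_split_flg = 1;      /* header on BD, payload on SGEs */
        p->tpa_max_aggs_num = 8;            /* open aggregations per vport */
        p->tpa_max_size = UINT16_MAX;       /* cap on the aggregated packet */
        p->tpa_min_size_to_start = mtu / 2; /* illustrative thresholds */
        p->tpa_min_size_to_cont = mtu / 2;
        p->max_buff_num = 6;                /* buffers per aggregation */
    }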
+
/*
* Command for setting classification flags for a vport $$KEEP_ENDIANNESS$$
*/
struct eth_vport_tx_mode {
__le16 state;
+/* drop all unicast packets */
#define ETH_VPORT_TX_MODE_UCAST_DROP_ALL_MASK 0x1
#define ETH_VPORT_TX_MODE_UCAST_DROP_ALL_SHIFT 0
+/* accept all unicast packets (subject to vlan) */
#define ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL_MASK 0x1
#define ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL_SHIFT 1
+/* drop all multicast packets */
#define ETH_VPORT_TX_MODE_MCAST_DROP_ALL_MASK 0x1
#define ETH_VPORT_TX_MODE_MCAST_DROP_ALL_SHIFT 2
+/* accept all multicast packets (subject to vlan) */
#define ETH_VPORT_TX_MODE_MCAST_ACCEPT_ALL_MASK 0x1
#define ETH_VPORT_TX_MODE_MCAST_ACCEPT_ALL_SHIFT 3
+/* accept all broadcast packets (subject to vlan) */
#define ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL_MASK 0x1
#define ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL_SHIFT 4
#define ETH_VPORT_TX_MODE_RESERVED1_MASK 0x7FF
@@ -907,6 +1087,7 @@ struct eth_vport_tx_mode {
__le16 reserved2[3];
};
+
/*
* GFT filter update action type
*/
@@ -916,6 +1097,7 @@ enum gft_filter_update_action {
MAX_GFT_FILTER_UPDATE_ACTION
};
+
/*
* GFT logic filter type
*/
@@ -925,6 +1107,9 @@ enum gft_logic_filter_type {
MAX_GFT_LOGIC_FILTER_TYPE
};
+
+
+
/*
* Ramrod data for rx add openflow filter
*/
@@ -933,10 +1118,12 @@ struct rx_add_openflow_filter_data {
u8 priority /* Searcher String - Packet priority */;
u8 reserved0;
__le32 tenant_id /* Searcher String - Tenant ID */;
- __le16 dst_mac_hi /* Searcher String - Destination Mac Bytes 0 to 1 */;
- __le16 dst_mac_mid /* Searcher String - Destination Mac Bytes 2 to 3 */
- ;
- __le16 dst_mac_lo /* Searcher String - Destination Mac Bytes 4 to 5 */;
+/* Searcher String - Destination Mac Bytes 0 to 1 */
+ __le16 dst_mac_hi;
+/* Searcher String - Destination Mac Bytes 2 to 3 */
+ __le16 dst_mac_mid;
+/* Searcher String - Destination Mac Bytes 4 to 5 */
+ __le16 dst_mac_lo;
__le16 src_mac_hi /* Searcher String - Source Mac 0 to 1 */;
__le16 src_mac_mid /* Searcher String - Source Mac 2 to 3 */;
__le16 src_mac_lo /* Searcher String - Source Mac 4 to 5 */;
@@ -952,6 +1139,7 @@ struct rx_add_openflow_filter_data {
__le16 l4_src_port /* Searcher String - TCP/UDP Source Port */;
};
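
The searcher strings carry a 6-byte MAC as three 16-bit words covering bytes 0-1, 2-3 and 4-5. A helper showing that split; packing bytes big-endian within each word is an assumption here, the driver's OSAL conversion is authoritative:

    #include <stdint.h>

    static void mac_to_words(const uint8_t mac[6],
                             uint16_t *hi, uint16_t *mid, uint16_t *lo)
    {
        *hi  = (uint16_t)((mac[0] << 8) | mac[1]); /* bytes 0 to 1 */
        *mid = (uint16_t)((mac[2] << 8) | mac[3]); /* bytes 2 to 3 */
        *lo  = (uint16_t)((mac[4] << 8) | mac[5]); /* bytes 4 to 5 */
    }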
+
/*
* Ramrod data for rx create gft action
*/
@@ -960,6 +1148,7 @@ struct rx_create_gft_action_data {
u8 reserved[7];
};
+
/*
* Ramrod data for rx create openflow action
*/
@@ -968,6 +1157,7 @@ struct rx_create_openflow_action_data {
u8 reserved[7];
};
+
/*
* Ramrod data for rx queue start ramrod
*/
@@ -984,16 +1174,19 @@ struct rx_queue_start_ramrod_data {
u8 stats_counter_id /* Statistics counter ID */;
u8 pin_context /* Pin context in CCFC to improve performance */;
u8 pxp_tph_valid_bd /* PXP command TPH Valid - for BD/SGE fetch */;
- u8 pxp_tph_valid_pkt /* PXP command TPH Valid - for packet placement */
- ;
- u8 pxp_st_hint
- /* PXP command Steering tag hint. Use enum pxp_tph_st_hint */;
+/* PXP command TPH Valid - for packet placement */
+ u8 pxp_tph_valid_pkt;
+/* PXP command Steering tag hint. Use enum pxp_tph_st_hint */
+ u8 pxp_st_hint;
__le16 pxp_st_index /* PXP command Steering tag index */;
- u8 pmd_mode
- /* Indicates that current queue belongs to poll-mode driver */;
+/* Indicates that current queue belongs to poll-mode driver */
+ u8 pmd_mode;
+/* Indicates that the current queue is using the TX notification queue
+ * mechanism - should be set only for PMD queue
+ */
u8 notify_en;
- u8 toggle_val
- /* Initial value for the toggle valid bit - used in PMD mode */;
+/* Initial value for the toggle valid bit - used in PMD mode */
+ u8 toggle_val;
/* Index of RX producers in VF zone. Used for VF only. */
u8 vf_rx_prod_index;
/* Backward compatibility mode. If set, unprotected mStorm queue zone will be used
@@ -1007,6 +1200,7 @@ struct rx_queue_start_ramrod_data {
struct regpair reserved2 /* FW reserved. */;
};
+
/*
* Ramrod data for rx queue stop ramrod
*/
@@ -1018,6 +1212,7 @@ struct rx_queue_stop_ramrod_data {
u8 reserved[3];
};
+
/*
* Ramrod data for rx queue update ramrod
*/
@@ -1035,6 +1230,7 @@ struct rx_queue_update_ramrod_data {
struct regpair reserved6 /* FW reserved. */;
};
+
/*
* Ramrod data for rx Add UDP Filter
*/
@@ -1044,17 +1240,16 @@ struct rx_udp_filter_data {
u8 ip_type /* Searcher String - IP Type */;
u8 tenant_id_exists /* Searcher String - Tenant ID Exists */;
__le16 reserved1;
+/* Searcher String - IP Destination Address, for IPv4 use ip_dst_addr[0] only */
__le32 ip_dst_addr[4];
- /* Searcher String-IP Dest Addr for IPv4 use ip_dst_addr[0] only */
- ;
- __le32 ip_src_addr[4]
- /* Searcher String-IP Src Addr, for IPv4 use ip_dst_addr[0] only */
- ;
+/* Searcher String - IP Source Address, for IPv4 use ip_src_addr[0] only */
+ __le32 ip_src_addr[4];
__le16 udp_dst_port /* Searcher String - UDP Destination Port */;
__le16 udp_src_port /* Searcher String - UDP Source Port */;
__le32 tenant_id /* Searcher String - Tenant ID */;
};
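
Per the comments above, IPv4 occupies only element [0] of the four-word address arrays. A sketch filling the filter for an IPv4 4-tuple; endianness conversion is elided and ETH_IP_TYPE_IPV4 is assumed to be the first enumerator of eth_ip_type:

    #include <stdint.h>
    #include <string.h>

    struct udp_filter_sketch {     /* trimmed stand-in for rx_udp_filter_data */
        uint8_t ip_type;           /* 0 assumed to mean IPv4, see above */
        uint32_t ip_dst_addr[4];
        uint32_t ip_src_addr[4];
        uint16_t udp_dst_port, udp_src_port;
    };

    static void udp_filter_fill_v4(struct udp_filter_sketch *f,
                                   uint32_t sip, uint32_t dip,
                                   uint16_t sport, uint16_t dport)
    {
        memset(f, 0, sizeof(*f));
        f->ip_type = 0;            /* assumed ETH_IP_TYPE_IPV4 */
        f->ip_dst_addr[0] = dip;   /* IPv4 uses word [0] only */
        f->ip_src_addr[0] = sip;
        f->udp_dst_port = dport;
        f->udp_src_port = sport;
    }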
+
/*
* Ramrod to add a filter - the filter is a packet header of the packet type
* wished to pass a certain FW flow
@@ -1075,6 +1270,8 @@ struct rx_update_gft_filter_data {
u8 reserved;
};
+
+
/*
* Ramrod data for tx queue start ramrod
*/
@@ -1086,16 +1283,26 @@ struct tx_queue_start_ramrod_data {
u8 stats_counter_id /* Statistics counter ID to use */;
__le16 qm_pq_id /* QM PQ ID */;
u8 flags;
+/* 0: Enable QM opportunistic flow. 1: Disable QM opportunistic flow */
#define TX_QUEUE_START_RAMROD_DATA_DISABLE_OPPORTUNISTIC_MASK 0x1
#define TX_QUEUE_START_RAMROD_DATA_DISABLE_OPPORTUNISTIC_SHIFT 0
+/* If set, Test Mode - packets will be duplicated by Xstorm handler */
#define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_PKT_DUP_MASK 0x1
#define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_PKT_DUP_SHIFT 1
+/* If set, Test Mode - packets destination will be determined by dest_port_mode
+ * field from Tx BD
+ */
#define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_TX_DEST_MASK 0x1
#define TX_QUEUE_START_RAMROD_DATA_TEST_MODE_TX_DEST_SHIFT 2
+/* Indicates that current queue belongs to poll-mode driver */
#define TX_QUEUE_START_RAMROD_DATA_PMD_MODE_MASK 0x1
#define TX_QUEUE_START_RAMROD_DATA_PMD_MODE_SHIFT 3
+/* Indicates that the current queue is using the TX notification queue
+ * mechanism - should be set only for PMD queue
+ */
#define TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN_MASK 0x1
#define TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN_SHIFT 4
+/* Pin context in CCFC to improve performance */
#define TX_QUEUE_START_RAMROD_DATA_PIN_CONTEXT_MASK 0x1
#define TX_QUEUE_START_RAMROD_DATA_PIN_CONTEXT_SHIFT 5
#define TX_QUEUE_START_RAMROD_DATA_RESERVED1_MASK 0x3
@@ -1104,12 +1311,13 @@ struct tx_queue_start_ramrod_data {
u8 pxp_tph_valid_bd /* PXP command TPH Valid - for BD fetch */;
u8 pxp_tph_valid_pkt /* PXP command TPH Valid - for packet fetch */;
__le16 pxp_st_index /* PXP command Steering tag index */;
- __le16 comp_agg_size /* TX completion min agg size - for PMD queues */;
+/* TX completion min agg size - for PMD queues */
+ __le16 comp_agg_size;
__le16 queue_zone_id /* queue zone ID to use */;
__le16 reserved2 /* FW reserved. (test_dup_count) */;
__le16 pbl_size /* Number of BD pages pointed by PBL */;
- __le16 tx_queue_id
- /* unique Queue ID - currently used only by PMD flow */;
+/* unique Queue ID - currently used only by PMD flow */
+ __le16 tx_queue_id;
/* Unique Same-As-Last Resource ID - improves performance for same-as-last
* packets per connection (range 0..ETH_TX_NUM_SAME_AS_LAST_ENTRIES-1 IDs
* available)
@@ -1117,10 +1325,11 @@ struct tx_queue_start_ramrod_data {
__le16 same_as_last_id;
__le16 reserved[3];
struct regpair pbl_base_addr /* address of the pbl page */;
- struct regpair bd_cons_address
- /* BD consumer address in host - for PMD queues */;
+/* BD consumer address in host - for PMD queues */
+ struct regpair bd_cons_address;
};
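
Of the flags above, a poll-mode driver cares mostly about PMD_MODE and NOTIFY_EN, and the comment ties NOTIFY_EN to PMD queues only. A sketch composing the flags byte:

    #include <stdint.h>

    #define TX_QUEUE_START_RAMROD_DATA_PMD_MODE_SHIFT  3
    #define TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN_SHIFT 4

    static uint8_t txq_start_flags(int pmd_mode, int notify_en)
    {
        uint8_t flags = 0;

        if (pmd_mode)
            flags |= 1 << TX_QUEUE_START_RAMROD_DATA_PMD_MODE_SHIFT;
        /* The notification queue mechanism is only valid for PMD queues. */
        if (pmd_mode && notify_en)
            flags |= 1 << TX_QUEUE_START_RAMROD_DATA_NOTIFY_EN_SHIFT;
        return flags;
    }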
+
/*
* Ramrod data for tx queue stop ramrod
*/
@@ -1128,16 +1337,19 @@ struct tx_queue_stop_ramrod_data {
__le16 reserved[4];
};
+
+
/*
* Ramrod data for vport filter update ramrod
*/
struct vport_filter_update_ramrod_data {
- struct eth_filter_cmd_header filter_cmd_hdr
- /* Header for Filter Commands (RX/TX, Add/Remove/Replace, etc) */;
- struct eth_filter_cmd filter_cmds[ETH_FILTER_RULES_COUNT]
- /* Filter Commands */;
+/* Header for Filter Commands (RX/TX, Add/Remove/Replace, etc) */
+ struct eth_filter_cmd_header filter_cmd_hdr;
+/* Filter Commands */
+ struct eth_filter_cmd filter_cmds[ETH_FILTER_RULES_COUNT];
};
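
Putting the filter pieces together: the header selects the paths and says how many command slots follow; the slots carry type/action. Only the header is filled concretely in this sketch, since the eth_filter_cmd fields other than vlan_id fall outside this hunk:

    #include <stdint.h>
    #include <string.h>

    struct filter_hdr_sketch {      /* mirrors eth_filter_cmd_header above */
        uint8_t rx, tx, cmd_cnt, assert_on_error;
        uint8_t reserved1[4];
    };

    /* Prepare a header for a single filter command on both paths. */
    static void filter_hdr_fill(struct filter_hdr_sketch *h)
    {
        memset(h, 0, sizeof(*h));
        h->rx = 1;              /* apply on the RX path */
        h->tx = 1;              /* apply on the TX path */
        h->cmd_cnt = 1;         /* one eth_filter_cmd slot follows */
        h->assert_on_error = 0; /* return an error code, do not assert */
    }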
+
/*
* Ramrod data for vport start ramrod
*/
@@ -1149,21 +1361,26 @@ struct vport_start_ramrod_data {
u8 inner_vlan_removal_en;
struct eth_vport_rx_mode rx_mode /* Rx filter data */;
struct eth_vport_tx_mode tx_mode /* Tx filter data */;
- struct eth_vport_tpa_param tpa_param /* TPA configuration parameters */
- ;
+/* TPA configuration parameters */
+ struct eth_vport_tpa_param tpa_param;
__le16 default_vlan /* Default Vlan value to be forced by FW */;
u8 tx_switching_en /* Tx switching is enabled for current Vport */;
- u8 anti_spoofing_en
- /* Anti-spoofing verification is set for current Vport */;
- u8 default_vlan_en
- /* If set, the default Vlan value is forced by the FW */;
- u8 handle_ptp_pkts /* If set, the vport handles PTP Timesync Packets */
- ;
+/* Anti-spoofing verification is set for current Vport */
+ u8 anti_spoofing_en;
+/* If set, the default Vlan value is forced by the FW */
+ u8 default_vlan_en;
+/* If set, the vport handles PTP Timesync Packets */
+ u8 handle_ptp_pkts;
+/* If enabled, the inner VLAN will be stripped and not written to the CQE */
u8 silent_vlan_removal_en;
- /* If enable then innerVlan will be striped and not written to cqe */
+/* If set, untagged filter (vlan0) is added to current Vport, otherwise port is
+ * marked as any-vlan
+ */
u8 untagged;
- struct eth_tx_err_vals tx_err_behav
- /* Desired behavior per TX error type */;
+/* Desired behavior per TX error type */
+ struct eth_tx_err_vals tx_err_behav;
+/* If set, ETH header padding will not be inserted. placement_offset will be
+ * zero.
+ */
u8 zero_placement_offset;
/* If set, Control frames will be filtered according to MAC check. */
u8 ctl_frame_mac_check_en;
@@ -1172,6 +1389,7 @@ struct vport_start_ramrod_data {
u8 reserved[5];
};
+
/*
* Ramrod data for vport stop ramrod
*/
@@ -1180,6 +1398,7 @@ struct vport_stop_ramrod_data {
u8 reserved[7];
};
+
/*
* Ramrod data for vport update ramrod
*/
@@ -1191,37 +1410,41 @@ struct vport_update_ramrod_data_cmn {
u8 tx_active_flg /* tx active flag value */;
u8 update_rx_mode_flg /* set if rx state data should be handled */;
u8 update_tx_mode_flg /* set if tx state data should be handled */;
- u8 update_approx_mcast_flg
- /* set if approx. mcast data should be handled */;
+/* set if approx. mcast data should be handled */
+ u8 update_approx_mcast_flg;
u8 update_rss_flg /* set if rss data should be handled */;
- u8 update_inner_vlan_removal_en_flg
- /* set if inner_vlan_removal_en should be handled */;
+/* set if inner_vlan_removal_en should be handled */
+ u8 update_inner_vlan_removal_en_flg;
u8 inner_vlan_removal_en;
+/* set if tpa parameters should be handled, TPA must be disabled first */
u8 update_tpa_param_flg;
u8 update_tpa_en_flg /* set if tpa enable changes */;
- u8 update_tx_switching_en_flg
- /* set if tx switching en flag should be handled */;
+/* set if tx switching en flag should be handled */
+ u8 update_tx_switching_en_flg;
u8 tx_switching_en /* tx switching en value */;
- u8 update_anti_spoofing_en_flg
- /* set if anti spoofing flag should be handled */;
+/* set if anti spoofing flag should be handled */
+ u8 update_anti_spoofing_en_flg;
u8 anti_spoofing_en /* Anti-spoofing verification en value */;
- u8 update_handle_ptp_pkts
- /* set if handle_ptp_pkts should be handled. */;
- u8 handle_ptp_pkts /* If set, the vport handles PTP Timesync Packets */
- ;
- u8 update_default_vlan_en_flg
- /* If set, the default Vlan enable flag is updated */;
- u8 default_vlan_en
- /* If set, the default Vlan value is forced by the FW */;
- u8 update_default_vlan_flg
- /* If set, the default Vlan value is updated */;
+/* set if handle_ptp_pkts should be handled. */
+ u8 update_handle_ptp_pkts;
+/* If set, the vport handles PTP Timesync Packets */
+ u8 handle_ptp_pkts;
+/* If set, the default Vlan enable flag is updated */
+ u8 update_default_vlan_en_flg;
+/* If set, the default Vlan value is forced by the FW */
+ u8 default_vlan_en;
+/* If set, the default Vlan value is updated */
+ u8 update_default_vlan_flg;
__le16 default_vlan /* Default Vlan value to be forced by FW */;
- u8 update_accept_any_vlan_flg
- /* set if accept_any_vlan should be handled */;
+/* set if accept_any_vlan should be handled */
+ u8 update_accept_any_vlan_flg;
u8 accept_any_vlan /* accept_any_vlan updated value */;
+/* Set to remove vlan silently, update_inner_vlan_removal_en_flg must be
+ * enabled as well. If Rx is in noSgl mode, send rx_queue_update_ramrod_data.
+ */
u8 silent_vlan_removal_en;
- u8 update_mtu_flg
- /* If set, MTU will be updated. Vport must be not active. */;
+/* If set, MTU will be updated. Vport must not be active. */
+ u8 update_mtu_flg;
__le16 mtu /* New MTU value. Used if update_mtu_flg is set */;
u8 reserved[2];
};
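
This update_mtu_flg/mtu pair is what the "allow MTU change via vport-update" patch in this series builds on. A sketch of the flag/value pairing; per the field comment, the vport must not be active when it is sent:

    #include <stdint.h>

    struct vport_mtu_sketch {   /* the two MTU fields from the common block */
        uint8_t update_mtu_flg;
        uint16_t mtu;
    };

    static void vport_update_set_mtu(struct vport_mtu_sketch *c,
                                     uint16_t new_mtu)
    {
        c->update_mtu_flg = 1;  /* vport must be inactive at this point */
        c->mtu = new_mtu;
    }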
@@ -1234,54 +1457,76 @@ struct vport_update_ramrod_mcast {
* Ramrod data for vport update ramrod
*/
struct vport_update_ramrod_data {
- struct vport_update_ramrod_data_cmn common
- /* Common data for all vport update ramrods */;
+/* Common data for all vport update ramrods */
+ struct vport_update_ramrod_data_cmn common;
struct eth_vport_rx_mode rx_mode /* vport rx mode bitmap */;
struct eth_vport_tx_mode tx_mode /* vport tx mode bitmap */;
- struct eth_vport_tpa_param tpa_param /* TPA configuration parameters */
- ;
+/* TPA configuration parameters */
+ struct eth_vport_tpa_param tpa_param;
struct vport_update_ramrod_mcast approx_mcast;
struct eth_vport_rss_config rss_config /* rss config data */;
};
+
+
+
+
+
/*
* GFT CAM line struct
*/
struct gft_cam_line {
__le32 camline;
+/* Indication if the line is valid. */
#define GFT_CAM_LINE_VALID_MASK 0x1
#define GFT_CAM_LINE_VALID_SHIFT 0
+/* Data bits, the word that is compared with the profile key */
#define GFT_CAM_LINE_DATA_MASK 0x3FFF
#define GFT_CAM_LINE_DATA_SHIFT 1
+/* Mask bits, indicate which bits in the data are don't-care */
#define GFT_CAM_LINE_MASK_BITS_MASK 0x3FFF
#define GFT_CAM_LINE_MASK_BITS_SHIFT 15
#define GFT_CAM_LINE_RESERVED1_MASK 0x7
#define GFT_CAM_LINE_RESERVED1_SHIFT 29
};
+
/*
* GFT CAM line struct (for driversim use)
*/
struct gft_cam_line_mapped {
__le32 camline;
+/* Indication if the line is valid. */
#define GFT_CAM_LINE_MAPPED_VALID_MASK 0x1
#define GFT_CAM_LINE_MAPPED_VALID_SHIFT 0
+/* use enum gft_profile_ip_version */
#define GFT_CAM_LINE_MAPPED_IP_VERSION_MASK 0x1
#define GFT_CAM_LINE_MAPPED_IP_VERSION_SHIFT 1
+/* use enum gft_profile_ip_version */
#define GFT_CAM_LINE_MAPPED_TUNNEL_IP_VERSION_MASK 0x1
#define GFT_CAM_LINE_MAPPED_TUNNEL_IP_VERSION_SHIFT 2
+/* use enum gft_profile_upper_protocol_type */
#define GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK 0xF
#define GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_SHIFT 3
+/* use enum gft_profile_tunnel_type */
#define GFT_CAM_LINE_MAPPED_TUNNEL_TYPE_MASK 0xF
#define GFT_CAM_LINE_MAPPED_TUNNEL_TYPE_SHIFT 7
#define GFT_CAM_LINE_MAPPED_PF_ID_MASK 0xF
#define GFT_CAM_LINE_MAPPED_PF_ID_SHIFT 11
+/* use enum gft_profile_ip_version */
#define GFT_CAM_LINE_MAPPED_IP_VERSION_MASK_MASK 0x1
#define GFT_CAM_LINE_MAPPED_IP_VERSION_MASK_SHIFT 15
+/* use enum gft_profile_ip_version */
#define GFT_CAM_LINE_MAPPED_TUNNEL_IP_VERSION_MASK_MASK 0x1
#define GFT_CAM_LINE_MAPPED_TUNNEL_IP_VERSION_MASK_SHIFT 16
+/* use enum gft_profile_upper_protocol_type */
#define GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_MASK 0xF
#define GFT_CAM_LINE_MAPPED_UPPER_PROTOCOL_TYPE_MASK_SHIFT 17
+/* use enum gft_profile_tunnel_type */
#define GFT_CAM_LINE_MAPPED_TUNNEL_TYPE_MASK_MASK 0xF
#define GFT_CAM_LINE_MAPPED_TUNNEL_TYPE_MASK_SHIFT 21
#define GFT_CAM_LINE_MAPPED_PF_ID_MASK_MASK 0xF
@@ -1290,11 +1535,13 @@ struct gft_cam_line_mapped {
#define GFT_CAM_LINE_MAPPED_RESERVED1_SHIFT 29
};
+
union gft_cam_line_union {
struct gft_cam_line cam_line;
struct gft_cam_line_mapped cam_line_mapped;
};
+
/*
* Used in gft_profile_key: Indication for ip version
*/
@@ -1304,17 +1551,24 @@ enum gft_profile_ip_version {
MAX_GFT_PROFILE_IP_VERSION
};
+
/*
* Profile key struct for GFT logic in Prs
*/
struct gft_profile_key {
__le16 profile_key;
+/* use enum gft_profile_ip_version */
#define GFT_PROFILE_KEY_IP_VERSION_MASK 0x1
#define GFT_PROFILE_KEY_IP_VERSION_SHIFT 0
+/* use enum gft_profile_ip_version */
#define GFT_PROFILE_KEY_TUNNEL_IP_VERSION_MASK 0x1
#define GFT_PROFILE_KEY_TUNNEL_IP_VERSION_SHIFT 1
+/* use enum gft_profile_upper_protocol_type */
#define GFT_PROFILE_KEY_UPPER_PROTOCOL_TYPE_MASK 0xF
#define GFT_PROFILE_KEY_UPPER_PROTOCOL_TYPE_SHIFT 2
+/* use enum gft_profile_tunnel_type */
#define GFT_PROFILE_KEY_TUNNEL_TYPE_MASK 0xF
#define GFT_PROFILE_KEY_TUNNEL_TYPE_SHIFT 6
#define GFT_PROFILE_KEY_PF_ID_MASK 0xF
@@ -1323,6 +1577,7 @@ struct gft_profile_key {
#define GFT_PROFILE_KEY_RESERVED0_SHIFT 14
};
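
A sketch packing a profile key for a plain IPv4/TCP profile using the visible shifts. The enumerator values are explicitly assumed, since the ip-version and upper-protocol enums are only partially visible in this hunk:

    #include <stdint.h>

    #define GFT_PROFILE_KEY_IP_VERSION_SHIFT          0
    #define GFT_PROFILE_KEY_UPPER_PROTOCOL_TYPE_SHIFT 2

    /* Assumed values for illustration only; take the real ones from
     * gft_profile_ip_version / gft_profile_upper_protocol_type.
     */
    #define ASSUMED_GFT_PROFILE_IPV4 0
    #define ASSUMED_GFT_PROFILE_TCP  4

    static uint16_t gft_profile_key_pack(void)
    {
        return (uint16_t)
            ((ASSUMED_GFT_PROFILE_IPV4 << GFT_PROFILE_KEY_IP_VERSION_SHIFT) |
             (ASSUMED_GFT_PROFILE_TCP <<
              GFT_PROFILE_KEY_UPPER_PROTOCOL_TYPE_SHIFT));
    }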
+
/*
* Used in gft_profile_key: Indication for tunnel type
*/
@@ -1336,6 +1591,7 @@ enum gft_profile_tunnel_type {
MAX_GFT_PROFILE_TUNNEL_TYPE
};
+
/*
* Used in gft_profile_key: Indication for protocol type
*/
@@ -1359,11 +1615,13 @@ enum gft_profile_upper_protocol_type {
MAX_GFT_PROFILE_UPPER_PROTOCOL_TYPE
};
+
/*
* GFT RAM line struct
*/
struct gft_ram_line {
__le32 low32bits;
+/* use enum gft_vlan_select */
#define GFT_RAM_LINE_VLAN_SELECT_MASK 0x3
#define GFT_RAM_LINE_VLAN_SELECT_SHIFT 0
#define GFT_RAM_LINE_TUNNEL_ENTROPHY_MASK 0x1
@@ -1451,6 +1709,7 @@ struct gft_ram_line {
#define GFT_RAM_LINE_RESERVED1_SHIFT 10
};
+
/*
* Used in the first 2 bits for gft_ram_line: Indication for vlan mask
*/
@@ -1462,36 +1721,39 @@ enum gft_vlan_select {
MAX_GFT_VLAN_SELECT
};
+
struct mstorm_eth_conn_ag_ctx {
u8 byte0 /* cdu_validation */;
u8 byte1 /* state */;
u8 flags0;
+/* exist_in_qm0 */
#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_MASK 0x1
#define MSTORM_ETH_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+/* exist_in_qm1 */
#define MSTORM_ETH_CONN_AG_CTX_BIT1_MASK 0x1
#define MSTORM_ETH_CONN_AG_CTX_BIT1_SHIFT 1
-#define MSTORM_ETH_CONN_AG_CTX_CF0_MASK 0x3
+#define MSTORM_ETH_CONN_AG_CTX_CF0_MASK 0x3 /* cf0 */
#define MSTORM_ETH_CONN_AG_CTX_CF0_SHIFT 2
-#define MSTORM_ETH_CONN_AG_CTX_CF1_MASK 0x3
+#define MSTORM_ETH_CONN_AG_CTX_CF1_MASK 0x3 /* cf1 */
#define MSTORM_ETH_CONN_AG_CTX_CF1_SHIFT 4
-#define MSTORM_ETH_CONN_AG_CTX_CF2_MASK 0x3
+#define MSTORM_ETH_CONN_AG_CTX_CF2_MASK 0x3 /* cf2 */
#define MSTORM_ETH_CONN_AG_CTX_CF2_SHIFT 6
u8 flags1;
-#define MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK 0x1
+#define MSTORM_ETH_CONN_AG_CTX_CF0EN_MASK 0x1 /* cf0en */
#define MSTORM_ETH_CONN_AG_CTX_CF0EN_SHIFT 0
-#define MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK 0x1
+#define MSTORM_ETH_CONN_AG_CTX_CF1EN_MASK 0x1 /* cf1en */
#define MSTORM_ETH_CONN_AG_CTX_CF1EN_SHIFT 1
-#define MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK 0x1
+#define MSTORM_ETH_CONN_AG_CTX_CF2EN_MASK 0x1 /* cf2en */
#define MSTORM_ETH_CONN_AG_CTX_CF2EN_SHIFT 2
-#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK 0x1
+#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_MASK 0x1 /* rule0en */
#define MSTORM_ETH_CONN_AG_CTX_RULE0EN_SHIFT 3
-#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK 0x1
+#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_MASK 0x1 /* rule1en */
#define MSTORM_ETH_CONN_AG_CTX_RULE1EN_SHIFT 4
-#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK 0x1
+#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_MASK 0x1 /* rule2en */
#define MSTORM_ETH_CONN_AG_CTX_RULE2EN_SHIFT 5
-#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK 0x1
+#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_MASK 0x1 /* rule3en */
#define MSTORM_ETH_CONN_AG_CTX_RULE3EN_SHIFT 6
-#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK 0x1
+#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_MASK 0x1 /* rule4en */
#define MSTORM_ETH_CONN_AG_CTX_RULE4EN_SHIFT 7
__le16 word0 /* word0 */;
__le16 word1 /* word1 */;
@@ -1500,214 +1762,312 @@ struct mstorm_eth_conn_ag_ctx {
};
+
+
struct xstormEthConnAgCtxDqExtLdPart {
u8 reserved0 /* cdu_validation */;
u8 eth_state /* state */;
u8 flags0;
+/* exist_in_qm0 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM0_SHIFT 0
+/* exist_in_qm1 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED1_SHIFT 1
+/* exist_in_qm2 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED2_SHIFT 2
+/* exist_in_qm3 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_EXIST_IN_QM3_SHIFT 3
+/* bit4 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED3_SHIFT 4
+/* cf_array_active */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED4_SHIFT 5
+/* bit6 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED5_SHIFT 6
+/* bit7 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED6_SHIFT 7
u8 flags1;
+/* bit8 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED7_SHIFT 0
+/* bit9 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED8_SHIFT 1
+/* bit10 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED9_SHIFT 2
+/* bit11 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT11_SHIFT 3
+/* bit12 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT12_SHIFT 4
+/* bit13 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_BIT13_SHIFT 5
+/* bit14 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_RULE_ACTIVE_SHIFT 6
+/* bit15 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_ACTIVE_SHIFT 7
u8 flags2;
+/* timer0cf */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0_SHIFT 0
+/* timer1cf */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1_SHIFT 2
+/* timer2cf */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2_SHIFT 4
+/* timer_stop_all */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3_SHIFT 6
u8 flags3;
+/* cf4 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4_SHIFT 0
+/* cf5 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5_SHIFT 2
+/* cf6 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6_SHIFT 4
+/* cf7 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7_SHIFT 6
u8 flags4;
+/* cf8 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8_SHIFT 0
+/* cf9 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9_SHIFT 2
+/* cf10 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10_SHIFT 4
+/* cf11 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11_SHIFT 6
u8 flags5;
+/* cf12 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12_SHIFT 0
+/* cf13 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13_SHIFT 2
+/* cf14 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14_SHIFT 4
+/* cf15 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15_SHIFT 6
u8 flags6;
+/* cf16 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_SHIFT 0
+/* cf_array_cf */
#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_SHIFT 2
+/* cf18 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_SHIFT 4
+/* cf19 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_SHIFT 6
u8 flags7;
+/* cf20 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_SHIFT 0
+/* cf21 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED10_SHIFT 2
+/* cf22 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_SHIFT 4
+/* cf0en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF0EN_SHIFT 6
+/* cf1en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF1EN_SHIFT 7
u8 flags8;
+/* cf2en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF2EN_SHIFT 0
+/* cf3en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF3EN_SHIFT 1
+/* cf4en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF4EN_SHIFT 2
+/* cf5en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF5EN_SHIFT 3
+/* cf6en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF6EN_SHIFT 4
+/* cf7en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF7EN_SHIFT 5
+/* cf8en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF8EN_SHIFT 6
+/* cf9en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF9EN_SHIFT 7
u8 flags9;
+/* cf10en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF10EN_SHIFT 0
+/* cf11en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF11EN_SHIFT 1
+/* cf12en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF12EN_SHIFT 2
+/* cf13en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF13EN_SHIFT 3
+/* cf14en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF14EN_SHIFT 4
+/* cf15en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_CF15EN_SHIFT 5
+/* cf16en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_GO_TO_BD_CONS_CF_EN_SHIFT 6
+/* cf_array_cf_en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_MULTI_UNICAST_CF_EN_SHIFT 7
u8 flags10;
+/* cf18en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_DQ_CF_EN_SHIFT 0
+/* cf19en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_TERMINATE_CF_EN_SHIFT 1
+/* cf20en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_FLUSH_Q0_EN_SHIFT 2
+/* cf21en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED11_SHIFT 3
+/* cf22en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_SLOW_PATH_EN_SHIFT 4
+/* cf23en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED12_SHIFT 6
+/* rule1en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED13_SHIFT 7
u8 flags11;
+/* rule2en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED14_SHIFT 0
+/* rule3en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RESERVED15_SHIFT 1
+/* rule4en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_TX_DEC_RULE_EN_SHIFT 2
+/* rule5en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE5EN_SHIFT 3
+/* rule6en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE6EN_SHIFT 4
+/* rule7en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE7EN_SHIFT 5
+/* rule8en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED1_SHIFT 6
+/* rule9en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE9EN_SHIFT 7
u8 flags12;
+/* rule10en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE10EN_SHIFT 0
+/* rule11en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE11EN_SHIFT 1
+/* rule12en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED2_SHIFT 2
+/* rule13en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED3_SHIFT 3
+/* rule14en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE14EN_SHIFT 4
+/* rule15en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE15EN_SHIFT 5
+/* rule16en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE16EN_SHIFT 6
+/* rule17en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE17EN_SHIFT 7
u8 flags13;
+/* rule18en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE18EN_SHIFT 0
+/* rule19en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_RULE19EN_SHIFT 1
+/* rule20en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED4_SHIFT 2
+/* rule21en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED5_SHIFT 3
+/* rule22en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED6_SHIFT 4
+/* rule23en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED7_SHIFT 5
+/* rule24en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED8_SHIFT 6
+/* rule25en */
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_A0_RESERVED9_SHIFT 7
u8 flags14;
+/* bit16 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_USE_EXT_HDR_SHIFT 0
+/* bit17 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_RAW_L3L4_SHIFT 1
+/* bit18 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_INBAND_PROP_HDR_SHIFT 2
+/* bit19 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_EDPM_SEND_EXT_TUNNEL_SHIFT 3
+/* bit20 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_L2_EDPM_ENABLE_SHIFT 4
+/* bit21 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_MASK 0x1
#define XSTORMETHCONNAGCTXDQEXTLDPART_ROCE_EDPM_ENABLE_SHIFT 5
+/* cf23 */
#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_MASK 0x3
#define XSTORMETHCONNAGCTXDQEXTLDPART_TPH_ENABLE_SHIFT 6
u8 edpm_event_id /* byte2 */;
@@ -1729,214 +2089,312 @@ struct xstormEthConnAgCtxDqExtLdPart {
__le32 reg4 /* reg4 */;
};
+
+
struct xstorm_eth_hw_conn_ag_ctx {
u8 reserved0 /* cdu_validation */;
u8 eth_state /* state */;
u8 flags0;
+/* exist_in_qm0 */
#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM0_SHIFT 0
+/* exist_in_qm1 */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED1_SHIFT 1
+/* exist_in_qm2 */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED2_SHIFT 2
+/* exist_in_qm3 */
#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_EXIST_IN_QM3_SHIFT 3
+/* bit4 */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED3_SHIFT 4
+/* cf_array_active */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED4_SHIFT 5
+/* bit6 */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED5_SHIFT 6
+/* bit7 */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED6_SHIFT 7
u8 flags1;
+/* bit8 */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED7_SHIFT 0
+/* bit9 */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED8_SHIFT 1
+/* bit10 */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED9_SHIFT 2
+/* bit11 */
#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_BIT11_SHIFT 3
+/* bit12 */
#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_BIT12_SHIFT 4
+/* bit13 */
#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_BIT13_SHIFT 5
+/* bit14 */
#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_TX_RULE_ACTIVE_SHIFT 6
+/* bit15 */
#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_ACTIVE_SHIFT 7
u8 flags2;
+/* timer0cf */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF0_SHIFT 0
+/* timer1cf */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF1_SHIFT 2
+/* timer2cf */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF2_SHIFT 4
+/* timer_stop_all */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF3_SHIFT 6
u8 flags3;
+/* cf4 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF4_SHIFT 0
+/* cf5 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF5_SHIFT 2
+/* cf6 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF6_SHIFT 4
+/* cf7 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF7_SHIFT 6
u8 flags4;
+/* cf8 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF8_SHIFT 0
+/* cf9 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF9_SHIFT 2
+/* cf10 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF10_SHIFT 4
+/* cf11 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF11_SHIFT 6
u8 flags5;
+/* cf12 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF12_SHIFT 0
+/* cf13 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF13_SHIFT 2
+/* cf14 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF14_SHIFT 4
+/* cf15 */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_CF15_SHIFT 6
u8 flags6;
+/* cf16 */
#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_SHIFT 0
+/* cf_array_cf */
#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_SHIFT 2
+/* cf18 */
#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_SHIFT 4
+/* cf19 */
#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_SHIFT 6
u8 flags7;
+/* cf20 */
#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_SHIFT 0
+/* cf21 */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED10_SHIFT 2
+/* cf22 */
#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_SHIFT 4
+/* cf0en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF0EN_SHIFT 6
+/* cf1en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF1EN_SHIFT 7
u8 flags8;
+/* cf2en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF2EN_SHIFT 0
+/* cf3en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF3EN_SHIFT 1
+/* cf4en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF4EN_SHIFT 2
+/* cf5en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF5EN_SHIFT 3
+/* cf6en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF6EN_SHIFT 4
+/* cf7en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF7EN_SHIFT 5
+/* cf8en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF8EN_SHIFT 6
+/* cf9en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF9EN_SHIFT 7
u8 flags9;
+/* cf10en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF10EN_SHIFT 0
+/* cf11en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF11EN_SHIFT 1
+/* cf12en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF12EN_SHIFT 2
+/* cf13en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF13EN_SHIFT 3
+/* cf14en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF14EN_SHIFT 4
+/* cf15en */
#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_CF15EN_SHIFT 5
+/* cf16en */
#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_GO_TO_BD_CONS_CF_EN_SHIFT 6
+/* cf_array_cf_en */
#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_MULTI_UNICAST_CF_EN_SHIFT 7
u8 flags10;
+/* cf18en */
#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_DQ_CF_EN_SHIFT 0
+/* cf19en */
#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_TERMINATE_CF_EN_SHIFT 1
+/* cf20en */
#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_FLUSH_Q0_EN_SHIFT 2
+/* cf21en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED11_SHIFT 3
+/* cf22en */
#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_SLOW_PATH_EN_SHIFT 4
+/* cf23en */
#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_EN_RESERVED_SHIFT 5
+/* rule0en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED12_SHIFT 6
+/* rule1en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED13_SHIFT 7
u8 flags11;
+/* rule2en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED14_SHIFT 0
+/* rule3en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RESERVED15_SHIFT 1
+/* rule4en */
#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_TX_DEC_RULE_EN_SHIFT 2
+/* rule5en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE5EN_SHIFT 3
+/* rule6en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE6EN_SHIFT 4
+/* rule7en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE7EN_SHIFT 5
+/* rule8en */
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED1_SHIFT 6
+/* rule9en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE9EN_SHIFT 7
u8 flags12;
+/* rule10en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE10EN_SHIFT 0
+/* rule11en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE11EN_SHIFT 1
+/* rule12en */
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED2_SHIFT 2
+/* rule13en */
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED3_SHIFT 3
+/* rule14en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE14EN_SHIFT 4
+/* rule15en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE15EN_SHIFT 5
+/* rule16en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE16EN_SHIFT 6
+/* rule17en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE17EN_SHIFT 7
u8 flags13;
+/* rule18en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE18EN_SHIFT 0
+/* rule19en */
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_RULE19EN_SHIFT 1
+/* rule20en */
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED4_SHIFT 2
+/* rule21en */
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED5_SHIFT 3
+/* rule22en */
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED6_SHIFT 4
+/* rule23en */
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED7_SHIFT 5
+/* rule24en */
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED8_SHIFT 6
+/* rule25en */
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_A0_RESERVED9_SHIFT 7
u8 flags14;
+/* bit16 */
#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_USE_EXT_HDR_SHIFT 0
+/* bit17 */
#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_RAW_L3L4_SHIFT 1
+/* bit18 */
#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_INBAND_PROP_HDR_SHIFT 2
+/* bit19 */
#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_EDPM_SEND_EXT_TUNNEL_SHIFT 3
+/* bit20 */
#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_L2_EDPM_ENABLE_SHIFT 4
+/* bit21 */
#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_MASK 0x1
#define XSTORM_ETH_HW_CONN_AG_CTX_ROCE_EDPM_ENABLE_SHIFT 5
+/* cf23 */
#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_MASK 0x3
#define XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE_SHIFT 6
u8 edpm_event_id /* byte2 */;
@@ -1949,4 +2407,5 @@ struct xstorm_eth_hw_conn_ag_ctx {
__le16 conn_dpi /* conn_dpi */;
};
+
#endif /* __ECORE_HSI_ETH__ */
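All of the MASK/SHIFT pairs above follow one access convention: a sub-field
of a flags byte is read or written by shifting into place and masking. A
minimal sketch of that convention (the GET_FIELD/SET_FIELD helper names
mirror the base driver's style but are shown here as illustrative, not as
part of this patch):

	/* Illustrative field accessors for the HSI MASK/SHIFT pairs above. */
	#define GET_FIELD(value, name) \
		(((value) >> name##_SHIFT) & name##_MASK)
	#define SET_FIELD(value, name, field) \
		((value) = ((value) & ~((name##_MASK) << name##_SHIFT)) | \
			   (((field) & name##_MASK) << name##_SHIFT))

	/* e.g. enable TPH on an xstorm HW aggregation context (flags14): */
	u8 flags14 = 0;
	SET_FIELD(flags14, XSTORM_ETH_HW_CONN_AG_CTX_TPH_ENABLE, 0x1);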
diff --git a/drivers/net/qede/base/ecore_hw_defs.h b/drivers/net/qede/base/ecore_hw_defs.h
index deb8e34..4456af4 100644
--- a/drivers/net/qede/base/ecore_hw_defs.h
+++ b/drivers/net/qede/base/ecore_hw_defs.h
@@ -10,19 +10,30 @@
#define _ECORE_IGU_DEF_H_
/* Fields of IGU PF CONFIGURATION REGISTER */
-#define IGU_PF_CONF_FUNC_EN (0x1 << 0) /* function enable */
-#define IGU_PF_CONF_MSI_MSIX_EN (0x1 << 1) /* MSI/MSIX enable */
-#define IGU_PF_CONF_INT_LINE_EN (0x1 << 2) /* INT enable */
-#define IGU_PF_CONF_ATTN_BIT_EN (0x1 << 3) /* attention enable */
-#define IGU_PF_CONF_SINGLE_ISR_EN (0x1 << 4) /* single ISR mode enable */
-#define IGU_PF_CONF_SIMD_MODE (0x1 << 5) /* simd all ones mode */
+/* function enable */
+#define IGU_PF_CONF_FUNC_EN (0x1 << 0)
+/* MSI/MSIX enable */
+#define IGU_PF_CONF_MSI_MSIX_EN (0x1 << 1)
+/* INT enable */
+#define IGU_PF_CONF_INT_LINE_EN (0x1 << 2)
+/* attention enable */
+#define IGU_PF_CONF_ATTN_BIT_EN (0x1 << 3)
+/* single ISR mode enable */
+#define IGU_PF_CONF_SINGLE_ISR_EN (0x1 << 4)
+/* simd all ones mode */
+#define IGU_PF_CONF_SIMD_MODE (0x1 << 5)
/* Fields of IGU VF CONFIGURATION REGISTER */
-#define IGU_VF_CONF_FUNC_EN (0x1 << 0) /* function enable */
-#define IGU_VF_CONF_MSI_MSIX_EN (0x1 << 1) /* MSI/MSIX enable */
-#define IGU_VF_CONF_SINGLE_ISR_EN (0x1 << 4) /* single ISR mode enable */
-#define IGU_VF_CONF_PARENT_MASK (0xF) /* Parent PF */
-#define IGU_VF_CONF_PARENT_SHIFT 5 /* Parent PF */
+/* function enable */
+#define IGU_VF_CONF_FUNC_EN (0x1 << 0)
+/* MSI/MSIX enable */
+#define IGU_VF_CONF_MSI_MSIX_EN (0x1 << 1)
+/* single ISR mode enable */
+#define IGU_VF_CONF_SINGLE_ISR_EN (0x1 << 4)
+/* Parent PF */
+#define IGU_VF_CONF_PARENT_MASK (0xF)
+/* Parent PF */
+#define IGU_VF_CONF_PARENT_SHIFT 5
/* Igu control commands
*/
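These single-bit defines are intended to be OR-ed together into one
configuration word. A hedged sketch of composing and writing a PF
configuration (ecore_wr() and the IGU_REG_PF_CONFIGURATION register name
follow the base driver's conventions but are assumptions here, not shown in
this hunk):

	u32 igu_pf_conf = 0;

	igu_pf_conf |= IGU_PF_CONF_FUNC_EN;     /* enable the function  */
	igu_pf_conf |= IGU_PF_CONF_MSI_MSIX_EN; /* MSI/MSI-X interrupts */
	igu_pf_conf |= IGU_PF_CONF_ATTN_BIT_EN; /* forward attentions   */

	ecore_wr(p_hwfn, p_ptt, IGU_REG_PF_CONFIGURATION, igu_pf_conf);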
diff --git a/drivers/net/qede/base/ecore_init_ops.h b/drivers/net/qede/base/ecore_init_ops.h
index f6b0a2d..d58c7d6 100644
--- a/drivers/net/qede/base/ecore_init_ops.h
+++ b/drivers/net/qede/base/ecore_init_ops.h
@@ -32,7 +32,9 @@ void ecore_init_iro_array(struct ecore_dev *p_dev);
*/
enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
- int phase, int phase_id, int modes);
+ int phase,
+ int phase_id,
+ int modes);
/**
* @brief ecore_init_hwfn_allocate - Allocate RT array, Store 'values' ptrs.
@@ -52,6 +54,7 @@ enum _ecore_status_t ecore_init_alloc(struct ecore_hwfn *p_hwfn);
*/
void ecore_init_free(struct ecore_hwfn *p_hwfn);
+
/**
* @brief ecore_init_clear_rt_data - Clears the runtime init array.
*
@@ -96,6 +99,7 @@ void ecore_init_store_rt_agg(struct ecore_hwfn *p_hwfn,
#define STORE_RT_REG_AGG(hwfn, offset, val) \
ecore_init_store_rt_agg(hwfn, offset, (u32 *)&val, sizeof(val))
+
/**
* @brief
* Initialize GTT global windows and set admin window
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 8a2c84c..64c639f 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -30,19 +30,30 @@
ecore_device_num_engines((_p_hwfn)->p_dev)))
struct ecore_mcp_info {
- osal_spinlock_t lock; /* Spinlock used for accessing MCP mailbox */
+ /* Spinlock used for protecting the access to the MFW mailbox */
+ osal_spinlock_t lock;
/* Flag to indicate whether sending a MFW mailbox is forbidden */
bool block_mb_sending;
- u32 public_base; /* Address of the MCP public area */
- u32 drv_mb_addr; /* Address of the driver mailbox */
- u32 mfw_mb_addr; /* Address of the MFW mailbox */
- u32 port_addr; /* Address of the port configuration (link) */
- u16 drv_mb_seq; /* Current driver mailbox sequence */
- u16 drv_pulse_seq; /* Current driver pulse sequence */
- struct ecore_mcp_link_params link_input;
- struct ecore_mcp_link_state link_output;
+
+ /* Address of the MCP public area */
+ u32 public_base;
+ /* Address of the driver mailbox */
+ u32 drv_mb_addr;
+ /* Address of the MFW mailbox */
+ u32 mfw_mb_addr;
+ /* Address of the port configuration (link) */
+ u32 port_addr;
+
+ /* Current driver mailbox sequence */
+ u16 drv_mb_seq;
+ /* Current driver pulse sequence */
+ u16 drv_pulse_seq;
+
+ struct ecore_mcp_link_params link_input;
+ struct ecore_mcp_link_state link_output;
struct ecore_mcp_link_capabilities link_capabilities;
- struct ecore_mcp_function_info func_info;
+
+ struct ecore_mcp_function_info func_info;
u8 *mfw_mb_cur;
u8 *mfw_mb_shadow;
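The reworked comments make the locking contract explicit: every MFW mailbox
transaction must take the spinlock first and honor block_mb_sending. A
hedged sketch of that pattern (the body is illustrative, not the actual
ecore_mcp code):

	OSAL_SPIN_LOCK(&p_hwfn->mcp_info->lock);
	if (!p_hwfn->mcp_info->block_mb_sending) {
		/* write the command to drv_mb_addr, bump drv_mb_seq,
		 * then poll for the MFW response...
		 */
	}
	OSAL_SPIN_UNLOCK(&p_hwfn->mcp_info->lock);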
diff --git a/drivers/net/qede/base/ecore_sp_api.h b/drivers/net/qede/base/ecore_sp_api.h
index 71e2359..a4cb507 100644
--- a/drivers/net/qede/base/ecore_sp_api.h
+++ b/drivers/net/qede/base/ecore_sp_api.h
@@ -23,10 +23,13 @@ struct eth_slow_path_rx_cqe;
struct ecore_spq_comp_cb {
void (*function)(struct ecore_hwfn *,
- void *, union event_ring_data *, u8 fw_return_code);
+ void *,
+ union event_ring_data *,
+ u8 fw_return_code);
void *cookie;
};
+
/**
* @brief ecore_eth_cqe_completion - handles the completion of a
* ramrod on the cqe ring
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index 137b374..3213070 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -90,12 +90,17 @@
#define ETH_RSS_KEY_SIZE_REGS 10
/* number of available RSS engines in K2 */
#define ETH_RSS_ENGINE_NUM_K2 207
+/* number of available RSS engines in BB */
#define ETH_RSS_ENGINE_NUM_BB 127
/* TPA constants */
+/* Maximum number of open TPA aggregations */
#define ETH_TPA_MAX_AGGS_NUM 64
+/* Maximum number of additional buffers, reported by TPA-start CQE */
#define ETH_TPA_CQE_START_LEN_LIST_SIZE ETH_RX_MAX_BUFF_PER_PKT
+/* Maximum number of buffers, reported by TPA-continue CQE */
#define ETH_TPA_CQE_CONT_LEN_LIST_SIZE 6
+/* Maximum number of buffers, reported by TPA-end CQE */
#define ETH_TPA_CQE_END_LEN_LIST_SIZE 4
/* Control frame check constants */
@@ -115,6 +120,7 @@ enum dest_port_mode {
MAX_DEST_PORT_MODE
};
+
/*
* Ethernet address type
*/
@@ -126,22 +132,31 @@ enum eth_addr_type {
MAX_ETH_ADDR_TYPE
};
+
struct eth_tx_1st_bd_flags {
u8 bitfields;
+/* Set to 1 in the first BD. (for debug) */
#define ETH_TX_1ST_BD_FLAGS_START_BD_MASK 0x1
#define ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT 0
+/* Do not allow additional VLAN manipulations on this packet. */
#define ETH_TX_1ST_BD_FLAGS_FORCE_VLAN_MODE_MASK 0x1
#define ETH_TX_1ST_BD_FLAGS_FORCE_VLAN_MODE_SHIFT 1
+/* IP checksum recalculation is needed */
#define ETH_TX_1ST_BD_FLAGS_IP_CSUM_MASK 0x1
#define ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT 2
+/* TCP/UDP checksum recalculation is needed */
#define ETH_TX_1ST_BD_FLAGS_L4_CSUM_MASK 0x1
#define ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT 3
+/* If set, need to add the VLAN in vlan field to the packet. */
#define ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_MASK 0x1
#define ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT 4
+/* If set, this is an LSO packet. */
#define ETH_TX_1ST_BD_FLAGS_LSO_MASK 0x1
#define ETH_TX_1ST_BD_FLAGS_LSO_SHIFT 5
+/* Recalculate Tunnel IP Checksum (if Tunnel IP Header is IPv4) */
#define ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_MASK 0x1
#define ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT 6
+/* Recalculate Tunnel UDP/GRE Checksum (Depending on Tunnel Type) */
#define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK 0x1
#define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_SHIFT 7
};
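A short sketch (not part of the patch) of how these first-BD flag defines
are used when requesting checksum offloads for a transmitted packet:

	struct eth_tx_1st_bd_flags bd_flags = { .bitfields = 0 };

	bd_flags.bitfields |= 1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
	bd_flags.bitfields |= 1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
	bd_flags.bitfields |= 1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;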
@@ -171,29 +186,50 @@ struct eth_tx_data_1st_bd {
* The parsing information data for the second tx bd of a given packet.
*/
struct eth_tx_data_2nd_bd {
+/* For tunnel with IPv6+ext - Tunnel header IP datagram length (in BYTEs) */
__le16 tunn_ip_size;
__le16 bitfields1;
+/* For Tunnel header with IPv6 ext. - Inner L2 Header Size (in 2-byte WORDs) */
#define ETH_TX_DATA_2ND_BD_TUNN_INNER_L2_HDR_SIZE_W_MASK 0xF
#define ETH_TX_DATA_2ND_BD_TUNN_INNER_L2_HDR_SIZE_W_SHIFT 0
+/* For Tunnel header with IPv6 ext. - Inner L2 Header MAC DA Type
+ * (use enum eth_addr_type)
+ */
#define ETH_TX_DATA_2ND_BD_TUNN_INNER_ETH_TYPE_MASK 0x3
#define ETH_TX_DATA_2ND_BD_TUNN_INNER_ETH_TYPE_SHIFT 4
+/* Destination port mode. (use enum dest_port_mode) */
#define ETH_TX_DATA_2ND_BD_DEST_PORT_MODE_MASK 0x3
#define ETH_TX_DATA_2ND_BD_DEST_PORT_MODE_SHIFT 6
+/* Should be 0 in all the BDs, except the first one. (for debug) */
#define ETH_TX_DATA_2ND_BD_START_BD_MASK 0x1
#define ETH_TX_DATA_2ND_BD_START_BD_SHIFT 8
+/* For Tunnel header with IPv6 ext. - Tunnel Type (use enum eth_tx_tunn_type) */
#define ETH_TX_DATA_2ND_BD_TUNN_TYPE_MASK 0x3
#define ETH_TX_DATA_2ND_BD_TUNN_TYPE_SHIFT 9
+/* For LSO / Tunnel header with IPv6+ext - Set if inner header is IPv6 */
#define ETH_TX_DATA_2ND_BD_TUNN_INNER_IPV6_MASK 0x1
#define ETH_TX_DATA_2ND_BD_TUNN_INNER_IPV6_SHIFT 11
+/* For LSO / Tunnel header with IPv6+ext - Set if outer header has IPv6+ext */
#define ETH_TX_DATA_2ND_BD_IPV6_EXT_MASK 0x1
#define ETH_TX_DATA_2ND_BD_IPV6_EXT_SHIFT 12
+/* Set if Tunnel header has IPv6 ext. (3rd BD is required) */
#define ETH_TX_DATA_2ND_BD_TUNN_IPV6_EXT_MASK 0x1
#define ETH_TX_DATA_2ND_BD_TUNN_IPV6_EXT_SHIFT 13
+/* Set if (inner) L4 protocol is UDP. (Required when IPv6+ext (or tunnel with
+ * inner or outer IPv6+ext) and l4_csum is set)
+ */
#define ETH_TX_DATA_2ND_BD_L4_UDP_MASK 0x1
#define ETH_TX_DATA_2ND_BD_L4_UDP_SHIFT 14
+/* The pseudo header checksum type in the L4 checksum field. Required when
+ * IPv6+ext and l4_csum is set. (use enum eth_l4_pseudo_checksum_mode)
+ */
#define ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_MASK 0x1
#define ETH_TX_DATA_2ND_BD_L4_PSEUDO_CSUM_MODE_SHIFT 15
__le16 bitfields2;
+/* For inner/outer header IPv6+ext - (inner) L4 header offset (in 2-byte WORDs).
+ * For regular packet - offset from the beginning of the packet. For tunneled
+ * packet - offset from the beginning of the inner header
+ */
#define ETH_TX_DATA_2ND_BD_L4_HDR_START_OFFSET_W_MASK 0x1FFF
#define ETH_TX_DATA_2ND_BD_L4_HDR_START_OFFSET_W_SHIFT 0
#define ETH_TX_DATA_2ND_BD_RESERVED0_MASK 0x7
@@ -211,6 +247,7 @@ struct eth_edpm_fw_data {
__le32 reserved;
};
+
/*
* FW debug.
*/
@@ -268,8 +305,10 @@ struct eth_pmd_flow_flags {
struct eth_fast_path_rx_reg_cqe {
u8 type /* CQE type */;
u8 bitfields;
+/* Type of calculated RSS hash (use enum rss_hash_type) */
#define ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE_MASK 0x7
#define ETH_FAST_PATH_RX_REG_CQE_RSS_HASH_TYPE_SHIFT 0
+/* Traffic Class */
#define ETH_FAST_PATH_RX_REG_CQE_TC_MASK 0xF
#define ETH_FAST_PATH_RX_REG_CQE_TC_SHIFT 3
#define ETH_FAST_PATH_RX_REG_CQE_RESERVED0_MASK 0x1
@@ -290,14 +329,15 @@ struct eth_fast_path_rx_reg_cqe {
struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
};
+
/*
* TPA-continue ETH Rx FP CQE.
*/
struct eth_fast_path_rx_tpa_cont_cqe {
u8 type /* CQE type */;
u8 tpa_agg_index /* TPA aggregation index */;
- __le16 len_list[ETH_TPA_CQE_CONT_LEN_LIST_SIZE]
- /* List of the segment sizes */;
+/* List of the segment sizes */
+ __le16 len_list[ETH_TPA_CQE_CONT_LEN_LIST_SIZE];
u8 reserved;
u8 reserved1 /* FW reserved. */;
__le16 reserved2[ETH_TPA_CQE_CONT_LEN_LIST_SIZE] /* FW reserved. */;
@@ -305,6 +345,7 @@ struct eth_fast_path_rx_tpa_cont_cqe {
struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
};
+
/*
 * TPA-end ETH Rx FP CQE.
*/
@@ -313,33 +354,36 @@ struct eth_fast_path_rx_tpa_end_cqe {
u8 tpa_agg_index /* TPA aggregation index */;
__le16 total_packet_len /* Total aggregated packet length */;
u8 num_of_bds /* Total number of BDs comprising the packet */;
- u8 end_reason /* Aggregation end reason. Use enum eth_tpa_end_reason */
- ;
+/* Aggregation end reason. Use enum eth_tpa_end_reason */
+ u8 end_reason;
__le16 num_of_coalesced_segs /* Number of coalesced TCP segments */;
__le32 ts_delta /* TCP timestamp delta */;
- __le16 len_list[ETH_TPA_CQE_END_LEN_LIST_SIZE]
- /* List of the segment sizes */;
+/* List of the segment sizes */
+ __le16 len_list[ETH_TPA_CQE_END_LEN_LIST_SIZE];
__le16 reserved3[ETH_TPA_CQE_END_LEN_LIST_SIZE] /* FW reserved. */;
__le16 reserved1;
u8 reserved2 /* FW reserved. */;
struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
};
+
/*
* TPA-start ETH Rx FP CQE.
*/
struct eth_fast_path_rx_tpa_start_cqe {
u8 type /* CQE type */;
u8 bitfields;
+/* Type of calculated RSS hash (use enum rss_hash_type) */
#define ETH_FAST_PATH_RX_TPA_START_CQE_RSS_HASH_TYPE_MASK 0x7
#define ETH_FAST_PATH_RX_TPA_START_CQE_RSS_HASH_TYPE_SHIFT 0
+/* Traffic Class */
#define ETH_FAST_PATH_RX_TPA_START_CQE_TC_MASK 0xF
#define ETH_FAST_PATH_RX_TPA_START_CQE_TC_SHIFT 3
#define ETH_FAST_PATH_RX_TPA_START_CQE_RESERVED0_MASK 0x1
#define ETH_FAST_PATH_RX_TPA_START_CQE_RESERVED0_SHIFT 7
__le16 seg_len /* Segment length (packetLen from the parser) */;
- struct parsing_and_err_flags pars_flags
- /* Parsing and error flags from the parser */;
+/* Parsing and error flags from the parser */
+ struct parsing_and_err_flags pars_flags;
__le16 vlan_tag /* 802.1q VLAN tag */;
__le32 rss_hash /* RSS hash result */;
__le16 len_on_first_bd /* Number of bytes placed on first BD */;
@@ -348,32 +392,34 @@ struct eth_fast_path_rx_tpa_start_cqe {
struct eth_tunnel_parsing_flags tunnel_pars_flags;
u8 tpa_agg_index /* TPA aggregation index */;
u8 header_len /* Packet L2+L3+L4 header length */;
- __le16 ext_bd_len_list[ETH_TPA_CQE_START_LEN_LIST_SIZE]
- /* Additional BDs length list. */;
+/* Additional BDs length list. */
+ __le16 ext_bd_len_list[ETH_TPA_CQE_START_LEN_LIST_SIZE];
struct eth_fast_path_cqe_fw_debug fw_debug /* FW reserved. */;
u8 reserved;
struct eth_pmd_flow_flags pmd_flags /* CQE valid and toggle bits */;
};
+
/*
* The L4 pseudo checksum mode for Ethernet
*/
enum eth_l4_pseudo_checksum_mode {
- ETH_L4_PSEUDO_CSUM_CORRECT_LENGTH
- /* Pseudo Header checksum on packet is calculated
- * with the correct packet length field.
- */
- ,
- ETH_L4_PSEUDO_CSUM_ZERO_LENGTH
- /* Pseudo Hdr checksum on packet is calc with zero len field. */
- ,
+/* Pseudo Header checksum on packet is calculated with the correct packet length
+ * field.
+ */
+ ETH_L4_PSEUDO_CSUM_CORRECT_LENGTH,
+/* Pseudo Header checksum on packet is calculated with zero length field. */
+ ETH_L4_PSEUDO_CSUM_ZERO_LENGTH,
MAX_ETH_L4_PSEUDO_CHECKSUM_MODE
};
+
+
struct eth_rx_bd {
struct regpair addr /* single contiguous buffer */;
};
+
/*
* regular ETH Rx SP CQE
*/
@@ -391,16 +437,18 @@ struct eth_slow_path_rx_cqe {
* union for all ETH Rx CQE types
*/
union eth_rx_cqe {
- struct eth_fast_path_rx_reg_cqe fast_path_regular /* Regular FP CQE */;
- struct eth_fast_path_rx_tpa_start_cqe fast_path_tpa_start
- /* TPA-start CQE */;
- struct eth_fast_path_rx_tpa_cont_cqe fast_path_tpa_cont
- /* TPA-continue CQE */;
- struct eth_fast_path_rx_tpa_end_cqe fast_path_tpa_end /* TPA-end CQE */
- ;
+/* Regular FP CQE */
+ struct eth_fast_path_rx_reg_cqe fast_path_regular;
+/* TPA-start CQE */
+ struct eth_fast_path_rx_tpa_start_cqe fast_path_tpa_start;
+/* TPA-continue CQE */
+ struct eth_fast_path_rx_tpa_cont_cqe fast_path_tpa_cont;
+/* TPA-end CQE */
+ struct eth_fast_path_rx_tpa_end_cqe fast_path_tpa_end;
struct eth_slow_path_rx_cqe slow_path /* SP CQE */;
};
+
/*
* ETH Rx CQE type
*/
@@ -414,6 +462,7 @@ enum eth_rx_cqe_type {
MAX_ETH_RX_CQE_TYPE
};
+
/*
* Wrapper for PD RX CQE - used in order to cover full cache line when writing
* CQE
@@ -423,6 +472,7 @@ struct eth_rx_pmd_cqe {
u8 reserved[ETH_RX_CQE_GAP];
};
+
/*
* Eth RX Tunnel Type
*/
@@ -434,19 +484,32 @@ enum eth_rx_tunn_type {
MAX_ETH_RX_TUNN_TYPE
};
+
+
/*
* Aggregation end reason.
*/
enum eth_tpa_end_reason {
ETH_AGG_END_UNUSED,
ETH_AGG_END_SP_UPDATE /* SP configuration update */,
- ETH_AGG_END_MAX_LEN
- /* Maximum aggregation length or maximum buffer number used. */,
- ETH_AGG_END_LAST_SEG
- /* TCP PSH flag or TCP payload length below continue threshold. */,
+/* Maximum aggregation length or maximum buffer number used. */
+ ETH_AGG_END_MAX_LEN,
+/* TCP PSH flag or TCP payload length below continue threshold. */
+ ETH_AGG_END_LAST_SEG,
ETH_AGG_END_TIMEOUT /* Timeout expiration. */,
+/* Packet header not consistent: different IPv4 TOS, TTL or flags, IPv6 TC,
+ * Hop limit or Flow label, TCP header length or TS options. In GRO different
+ * TS value, SMAC, DMAC, ackNum, windowSize or VLAN
+ */
ETH_AGG_END_NOT_CONSISTENT,
+/* Out of order or retransmission packet: sequence, ack or timestamp not
+ * consistent with previous segment.
+ */
ETH_AGG_END_OUT_OF_ORDER,
+/* Next segment can't be aggregated due to LLC/SNAP, IP error, IP fragment, IPv4
+ * options, IPv6 extension, IP ECN = CE, TCP errors, TCP options, zero TCP
+ * payload length, TCP flags or unsupported tunnel header options.
+ */
ETH_AGG_END_NON_TPA_SEG,
MAX_ETH_TPA_END_REASON
};
@@ -462,6 +525,8 @@ struct eth_tx_1st_bd {
struct eth_tx_data_1st_bd data /* Parsing information data. */;
};
+
+
/*
* The second tx bd of a given packet
*/
@@ -471,21 +536,29 @@ struct eth_tx_2nd_bd {
struct eth_tx_data_2nd_bd data /* Parsing information data. */;
};
+
/*
* The parsing information data for the third tx bd of a given packet.
*/
struct eth_tx_data_3rd_bd {
__le16 lso_mss /* For LSO packet - the MSS in bytes. */;
__le16 bitfields;
+/* For LSO with inner/outer IPv6+ext - TCP header length (in 4-byte WORDs) */
#define ETH_TX_DATA_3RD_BD_TCP_HDR_LEN_DW_MASK 0xF
#define ETH_TX_DATA_3RD_BD_TCP_HDR_LEN_DW_SHIFT 0
+/* LSO - number of BDs which contain headers. value should be in range
+ * (1..ETH_TX_MAX_LSO_HDR_NBD).
+ */
#define ETH_TX_DATA_3RD_BD_HDR_NBD_MASK 0xF
#define ETH_TX_DATA_3RD_BD_HDR_NBD_SHIFT 4
+/* Should be 0 in all the BDs, except the first one. (for debug) */
#define ETH_TX_DATA_3RD_BD_START_BD_MASK 0x1
#define ETH_TX_DATA_3RD_BD_START_BD_SHIFT 8
#define ETH_TX_DATA_3RD_BD_RESERVED0_MASK 0x7F
#define ETH_TX_DATA_3RD_BD_RESERVED0_SHIFT 9
+/* For tunnel with IPv6+ext - Pointer to tunnel L4 Header (in 2-byte WORDs) */
u8 tunn_l4_hdr_start_offset_w;
+/* For tunnel with IPv6+ext - Total size of Tunnel Header (in 2-byte WORDs) */
u8 tunn_hdr_size_w;
};
@@ -498,6 +571,7 @@ struct eth_tx_3rd_bd {
struct eth_tx_data_3rd_bd data /* Parsing information data. */;
};
+
/*
* Complementary information for the regular tx bd of a given packet.
*/
@@ -506,6 +580,7 @@ struct eth_tx_data_bd {
__le16 bitfields;
#define ETH_TX_DATA_BD_RESERVED1_MASK 0xFF
#define ETH_TX_DATA_BD_RESERVED1_SHIFT 0
+/* Should be 0 in all the BDs, except the first one. (for debug) */
#define ETH_TX_DATA_BD_START_BD_MASK 0x1
#define ETH_TX_DATA_BD_START_BD_SHIFT 8
#define ETH_TX_DATA_BD_RESERVED2_MASK 0x7F
@@ -522,14 +597,20 @@ struct eth_tx_bd {
struct eth_tx_data_bd data /* Complementary information. */;
};
+
union eth_tx_bd_types {
struct eth_tx_1st_bd first_bd /* The first tx bd of a given packet */;
- struct eth_tx_2nd_bd second_bd /* The second tx bd of a given packet */
- ;
+/* The second tx bd of a given packet */
+ struct eth_tx_2nd_bd second_bd;
struct eth_tx_3rd_bd third_bd /* The third tx bd of a given packet */;
struct eth_tx_bd reg_bd /* The common non-special bd */;
};
+
+
+
+
+
/*
* Eth Tx Tunnel Type
*/
@@ -546,26 +627,31 @@ enum eth_tx_tunn_type {
* Ystorm Queue Zone
*/
struct xstorm_eth_queue_zone {
- struct coalescing_timeset int_coalescing_timeset
- /* Tx interrupt coalescing TimeSet */;
+/* Tx interrupt coalescing TimeSet */
+ struct coalescing_timeset int_coalescing_timeset;
u8 reserved[7];
};
+
/*
* ETH doorbell data
*/
struct eth_db_data {
u8 params;
+/* destination of doorbell (use enum db_dest) */
#define ETH_DB_DATA_DEST_MASK 0x3
#define ETH_DB_DATA_DEST_SHIFT 0
+/* aggregative command to CM (use enum db_agg_cmd_sel) */
#define ETH_DB_DATA_AGG_CMD_MASK 0x3
#define ETH_DB_DATA_AGG_CMD_SHIFT 2
-#define ETH_DB_DATA_BYPASS_EN_MASK 0x1
+#define ETH_DB_DATA_BYPASS_EN_MASK 0x1 /* enable QM bypass */
#define ETH_DB_DATA_BYPASS_EN_SHIFT 4
#define ETH_DB_DATA_RESERVED_MASK 0x1
#define ETH_DB_DATA_RESERVED_SHIFT 5
+/* aggregative value selection */
#define ETH_DB_DATA_AGG_VAL_SEL_MASK 0x3
#define ETH_DB_DATA_AGG_VAL_SEL_SHIFT 6
+/* bit for every DQ counter flags in CM context that DQ can increment */
u8 agg_flags;
__le16 bd_prod;
};
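A hedged sketch of filling this doorbell before ringing it from the Tx path
(DB_DEST_XCM and DB_AGG_CMD_SET are enum values from common_hsi.h, assumed
here rather than shown in this hunk; txq_bd_prod is a hypothetical local):

	struct eth_db_data db;

	db.params = 0;
	db.params |= DB_DEST_XCM << ETH_DB_DATA_DEST_SHIFT;
	db.params |= DB_AGG_CMD_SET << ETH_DB_DATA_AGG_CMD_SHIFT;
	db.agg_flags = 0;
	db.bd_prod = rte_cpu_to_le_16(txq_bd_prod);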
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index d430752..96efc3c 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -252,11 +252,9 @@ struct lldp_config_params_s {
struct lldp_status_params_s {
u32 prefix_seq_num;
u32 status; /* TBD */
- /* Holds remote Chassis ID TLV header, subtype and 9B of payload.
- */
+ /* Holds remote Chassis ID TLV header, subtype and 9B of payload. */
u32 peer_chassis_id[LLDP_CHASSIS_ID_STAT_LEN];
- /* Holds remote Port ID TLV header, subtype and 9B of payload.
- */
+ /* Holds remote Port ID TLV header, subtype and 9B of payload. */
u32 peer_port_id[LLDP_PORT_ID_STAT_LEN];
u32 suffix_seq_num;
};
@@ -327,6 +325,7 @@ struct dcbx_app_priority_entry {
#define DCBX_APP_PROTOCOL_ID_SHIFT 16
};
+
/* FW structure in BE */
struct dcbx_app_priority_feature {
u32 flags;
@@ -337,9 +336,9 @@ struct dcbx_app_priority_feature {
#define DCBX_APP_ERROR_MASK 0x00000004
#define DCBX_APP_ERROR_SHIFT 2
/* Not in use
- * #define DCBX_APP_DEFAULT_PRI_MASK 0x00000f00
- * #define DCBX_APP_DEFAULT_PRI_SHIFT 8
- */
+ #define DCBX_APP_DEFAULT_PRI_MASK 0x00000f00
+ #define DCBX_APP_DEFAULT_PRI_SHIFT 8
+ */
#define DCBX_APP_MAX_TCS_MASK 0x0000f000
#define DCBX_APP_MAX_TCS_SHIFT 12
#define DCBX_APP_NUM_ENTRIES_MASK 0x00ff0000
@@ -398,13 +397,13 @@ struct dcbx_mib {
u32 prefix_seq_num;
u32 flags;
/*
- * #define DCBX_CONFIG_VERSION_MASK 0x00000007
- * #define DCBX_CONFIG_VERSION_SHIFT 0
- * #define DCBX_CONFIG_VERSION_DISABLED 0
- * #define DCBX_CONFIG_VERSION_IEEE 1
- * #define DCBX_CONFIG_VERSION_CEE 2
- * #define DCBX_CONFIG_VERSION_STATIC 4
- */
+ #define DCBX_CONFIG_VERSION_MASK 0x00000007
+ #define DCBX_CONFIG_VERSION_SHIFT 0
+ #define DCBX_CONFIG_VERSION_DISABLED 0
+ #define DCBX_CONFIG_VERSION_IEEE 1
+ #define DCBX_CONFIG_VERSION_CEE 2
+ #define DCBX_CONFIG_VERSION_STATIC 4
+ */
struct dcbx_features features;
u32 suffix_seq_num;
};
@@ -439,6 +438,7 @@ struct public_global {
u32 debug_mb_offset;
u32 phymod_dbg_mb_offset;
struct couple_mode_teaming cmt;
+/* Temperature in Celsius (-255C / +255C), measured every second. */
s32 internal_temperature;
u32 mfw_ver;
u32 running_bundle_id;
@@ -732,7 +732,8 @@ struct public_func {
/* MTU size per function is needed for the OV feature */
u32 mtu_size;
- /* 9 entires for the C2S PCP map for each inner VLAN PCP + 1 default */
+/* 9 entries for the C2S PCP map for each inner VLAN PCP + 1 default */
+
/* For PCP values 0-3 use the map lower */
/* 0xFF000000 - PCP 0, 0x00FF0000 - PCP 1,
* 0x0000FF00 - PCP 2, 0x000000FF PCP 3
@@ -757,6 +758,7 @@ struct public_func {
#define FUNC_MF_CFG_PAUSE_ON_HOST_RING 0x00000002
#define FUNC_MF_CFG_PAUSE_ON_HOST_RING_SHIFT 0x00000001
+
#define FUNC_MF_CFG_PROTOCOL_MASK 0x000000f0
#define FUNC_MF_CFG_PROTOCOL_SHIFT 4
#define FUNC_MF_CFG_PROTOCOL_ETHERNET 0x00000000
@@ -1040,49 +1042,105 @@ struct public_drv_mb {
#define DRV_MSG_CODE_INITIATE_PF_FLR 0x02010000
#define DRV_MSG_CODE_VF_DISABLED_DONE 0xc0000000
#define DRV_MSG_CODE_CFG_VF_MSIX 0xc0010000
+/* Param is either DRV_MB_PARAM_NVM_PUT_FILE_BEGIN_MFW/IMAGE */
#define DRV_MSG_CODE_NVM_PUT_FILE_BEGIN 0x00010000
+/* Param should be set to the transaction size (up to 64 bytes) */
#define DRV_MSG_CODE_NVM_PUT_FILE_DATA 0x00020000
+/* MFW will place the file offset and len in file_att struct */
#define DRV_MSG_CODE_NVM_GET_FILE_ATT 0x00030000
+/* Read 32 bytes of nvram data. Param is [0:23] - Offset, [24:31] -
+ * Len in Bytes
+ */
#define DRV_MSG_CODE_NVM_READ_NVRAM 0x00050000
+/* Writes up to 32 bytes to nvram. Param is [0:23] - Offset, [24:31] -
+ * Len in Bytes. If this address is in the range of a secured file in
+ * secured mode, the operation will fail
+ */
#define DRV_MSG_CODE_NVM_WRITE_NVRAM 0x00060000
+/* Delete a file from nvram. Param is image_type. */
#define DRV_MSG_CODE_NVM_DEL_FILE 0x00080000
+/* Reset MCP when no NVM operation is going on, and no drivers are loaded.
+ * If the operation succeeds, the MCP will not ack back.
+ */
#define DRV_MSG_CODE_MCP_RESET 0x00090000
+/* Temporary command to set secure mode, where the param is 0 (Non-secure) /
+ * 1 (Secure) / 2 (Full-Secure)
+ */
#define DRV_MSG_CODE_SET_SECURE_MODE 0x000a0000
+/* Param: [0:15] - Address, [16:18] - lane# (0/1/2/3 - for single lane,
+ * 4/5 - for dual lanes, 6 - for all lanes), [28] - PMD reg, [29] - select port,
+ * [30:31] - port
+ */
#define DRV_MSG_CODE_PHY_RAW_READ 0x000b0000
+/* Param: [0:15] - Address, [16:18] - lane# (0/1/2/3 - for single lane,
+ * 4/5 - for dual lanes, 6 - for all lanes), [28] - PMD reg, [29] - select port,
+ * [30:31] - port
+ */
#define DRV_MSG_CODE_PHY_RAW_WRITE 0x000c0000
+/* Param: [0:15] - Address, [30:31] - port */
#define DRV_MSG_CODE_PHY_CORE_READ 0x000d0000
+/* Param: [0:15] - Address, [30:31] - port */
#define DRV_MSG_CODE_PHY_CORE_WRITE 0x000e0000
+/* Param: [0:3] - version, [4:15] - name (null terminated) */
#define DRV_MSG_CODE_SET_VERSION 0x000f0000
+/* Halts the MCP. To resume the MCP, the user will need to use the
+ * MCP_REG_CPU_STATE/MCP_REG_CPU_MODE registers.
+ */
#define DRV_MSG_CODE_MCP_HALT 0x00100000
+/* Host shall provide buffer and size for MFW */
#define DRV_MSG_CODE_PMD_DIAG_DUMP 0x00140000
+/* Host shall provide buffer and size for MFW */
#define DRV_MSG_CODE_PMD_DIAG_EYE 0x00150000
+/* Param: [0:1] - Port, [2:7] - read size, [8:15] - I2C address,
+ * [16:31] - offset
+ */
#define DRV_MSG_CODE_TRANSCEIVER_READ 0x00160000
+/* Param: [0:1] - Port, [2:7] - write size, [8:15] - I2C address,
+ * [16:31] - offset
+ */
#define DRV_MSG_CODE_TRANSCEIVER_WRITE 0x00170000
+/* Set virtual mac address, params [31:6] - reserved, [5:4] - type,
+ * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN
+ */
#define DRV_MSG_CODE_SET_VMAC 0x00110000
+/* Get virtual mac address, params [31:6] - reserved, [5:4] - type,
+ * [3:0] - func, drv_data[7:0] - MAC/WWNN/WWPN
+ */
#define DRV_MSG_CODE_GET_VMAC 0x00120000
#define DRV_MSG_CODE_VMAC_TYPE_MAC 1
#define DRV_MSG_CODE_VMAC_TYPE_WWNN 2
#define DRV_MSG_CODE_VMAC_TYPE_WWPN 3
+/* Get statistics from pf, params [31:4] - reserved, [3:0] - stats type */
#define DRV_MSG_CODE_GET_STATS 0x00130000
#define DRV_MSG_CODE_STATS_TYPE_LAN 1
#define DRV_MSG_CODE_STATS_TYPE_FCOE 2
#define DRV_MSG_CODE_STATS_TYPE_ISCSI 3
#define DRV_MSG_CODE_STATS_TYPE_RDMA 4
+/* indicate OCBB related information */
#define DRV_MSG_CODE_OCBB_DATA 0x00180000
+
+/* Set function BW, params[15:8] - min, params[7:0] - max */
#define DRV_MSG_CODE_SET_BW 0x00190000
#define BW_MAX_MASK 0x000000ff
#define BW_MAX_SHIFT 0
#define BW_MIN_MASK 0x0000ff00
#define BW_MIN_SHIFT 8
+
+/* When param is set to 1, all parities will be masked (disabled). When param
+ * is set to 0, parities will be unmasked again.
+ */
#define DRV_MSG_CODE_MASK_PARITIES 0x001a0000
+/* param[0] - Simulate fan failure, param[1] - simulate over temp. */
#define DRV_MSG_CODE_INDUCE_FAILURE 0x001b0000
#define DRV_MSG_FAN_FAILURE_TYPE (1 << 0)
#define DRV_MSG_TEMPERATURE_FAILURE_TYPE (1 << 1)
+/* Param: [0:15] - gpio number */
#define DRV_MSG_CODE_GPIO_READ 0x001c0000
+/* Param: [0:15] - gpio number, [16:31] - gpio value */
#define DRV_MSG_CODE_GPIO_WRITE 0x001d0000
/* Param: [0:15] - gpio number */
#define DRV_MSG_CODE_GPIO_INFO 0x00270000
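The new comments document how parameters are packed into the 32-bit mailbox
param. As an illustration for the GPIO write command above (the helper name
is hypothetical; ecore_mcp_cmd() is the existing mailbox entry point):

	static inline u32 qede_mk_gpio_write_param(u16 gpio_num, u16 gpio_val)
	{
		/* [0:15] - gpio number, [16:31] - gpio value */
		return (u32)gpio_num | ((u32)gpio_val << 16);
	}

	/* e.g.:
	 * ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_GPIO_WRITE,
	 *               qede_mk_gpio_write_param(5, 1), &resp, &param);
	 */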
@@ -1091,6 +1149,7 @@ struct public_drv_mb {
#define DRV_MSG_CODE_BIST_TEST 0x001e0000
#define DRV_MSG_CODE_GET_TEMPERATURE 0x001f0000
+/* Set LED mode. Param: 0 - operational, 1 - LED turn ON, 2 - LED turn OFF */
#define DRV_MSG_CODE_SET_LED_MODE 0x00200000
/* drv_data[7:0] - EPOC in seconds, drv_data[15:8] -
* driver version (MAJ MIN BUILD SUB)
@@ -1244,9 +1303,12 @@ struct public_drv_mb {
#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_SHIFT 0
#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_MASK 0xF
#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_UNKNOWN 0x1
+/* Not installed */
#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_NOT_LOADED 0x2
#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_LOADING 0x3
+/* installed but disabled by user/admin/OS */
#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_DISABLED 0x4
+/* installed and active */
#define DRV_MSG_CODE_OV_UPDATE_DRIVER_STATE_ACTIVE 0x5
#define DRV_MB_PARAM_OV_MTU_SIZE_SHIFT 0
@@ -1354,6 +1416,7 @@ struct public_drv_mb {
#define FW_MSG_CODE_NVM_FILE_READ_ONLY 0x00200000
#define FW_MSG_CODE_NVM_UNKNOWN_FILE 0x00300000
#define FW_MSG_CODE_NVM_PUT_FILE_FINISH_OK 0x00400000
+/* MFW rejects the "mcp reset" command if one of the drivers is up */
#define FW_MSG_CODE_MCP_RESET_REJECT 0x00600000
#define FW_MSG_CODE_PHY_OK 0x00110000
#define FW_MSG_CODE_PHY_ERROR 0x00120000
@@ -1389,6 +1452,7 @@ struct public_drv_mb {
#define FW_MSG_SEQ_NUMBER_MASK 0x0000ffff
+
u32 fw_mb_param;
/* Resource Allocation params - MFW version support*/
#define FW_MB_PARAM_RESOURCE_ALLOC_VERSION_MAJOR_MASK 0xFFFF0000
@@ -1420,7 +1484,10 @@ struct public_drv_mb {
#define MCP_EVENT_MASK 0xffff0000
#define MCP_EVENT_OTHER_DRIVER_RESET_REQ 0x00010000
+/* The union data is used by the driver to pass parameters to the scratchpad. */
+
union drv_union_data union_data;
+
};
/* MFW - DRV MB */
@@ -1477,7 +1544,9 @@ enum MFW_DRV_MSG_TYPE {
struct public_mfw_mb {
u32 sup_msgs; /* Assigned with MFW_DRV_MSG_MAX */
+/* Incremented by the MFW */
u32 msg[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)];
+/* Incremented by the driver */
u32 ack[MFW_DRV_MSG_MAX_DWORDS(MFW_DRV_MSG_MAX)];
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 14/32] net/qede/base: add MFW crash dump support
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (12 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 13/32] net/qede/base: comment enhancements Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 15/32] net/qede: enable support for unequal number of Rx/Tx queues Rasesh Mody
` (18 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Add support for management firmware (MFW) crash dump collection.
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/base/ecore.h | 3 +
drivers/net/qede/base/ecore_dev.c | 22 ++---
drivers/net/qede/base/ecore_dev_api.h | 29 ++++---
drivers/net/qede/base/ecore_mcp.c | 151 ++++++++++++++++++++++++++++++++++
drivers/net/qede/base/ecore_mcp.h | 45 ++++++++++
drivers/net/qede/base/ecore_mcp_api.h | 10 +++
drivers/net/qede/qede_main.c | 17 ++--
7 files changed, 249 insertions(+), 28 deletions(-)
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 874c3a3..89e2bd0 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -735,6 +735,9 @@ struct ecore_dev {
bool attn_clr_en;
+ /* Indicates whether the MFW is allowed to collect a crash dump */
+ bool mdump_en;
+
/* Indicates if the reg_fifo is checked after any register access */
bool chk_reg_fifo;
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 319edeb..b530173 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1619,24 +1619,20 @@ static void ecore_reset_mb_shadow(struct ecore_hwfn *p_hwfn,
}
enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
- struct ecore_tunn_start_params *p_tunn,
- bool b_hw_start,
- enum ecore_int_mode int_mode,
- bool allow_npar_tx_switch,
- const u8 *bin_fw_data)
+ struct ecore_hw_init_params *p_params)
{
enum _ecore_status_t rc, mfw_rc;
u32 load_code, param;
int i, j;
- if ((int_mode == ECORE_INT_MODE_MSI) && (p_dev->num_hwfns > 1)) {
+ if (p_params->int_mode == ECORE_INT_MODE_MSI && p_dev->num_hwfns > 1) {
DP_NOTICE(p_dev, false,
"MSI mode is not supported for CMT devices\n");
return ECORE_INVAL;
}
if (IS_PF(p_dev)) {
- rc = ecore_init_fw_data(p_dev, bin_fw_data);
+ rc = ecore_init_fw_data(p_dev, p_params->bin_fw_data);
if (rc != ECORE_SUCCESS)
return rc;
}
@@ -1733,9 +1729,11 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
/* Fall into */
case FW_MSG_CODE_DRV_LOAD_FUNCTION:
rc = ecore_hw_init_pf(p_hwfn, p_hwfn->p_main_ptt,
- p_tunn, p_hwfn->hw_info.hw_mode,
- b_hw_start, int_mode,
- allow_npar_tx_switch);
+ p_params->p_tunn,
+ p_hwfn->hw_info.hw_mode,
+ p_params->b_hw_start,
+ p_params->int_mode,
+ p_params->allow_npar_tx_switch);
break;
default:
rc = ECORE_NOTIMPL;
@@ -1759,6 +1757,10 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
return mfw_rc;
}
+ ecore_mcp_mdump_get_info(p_hwfn, p_hwfn->p_main_ptt);
+ ecore_mcp_mdump_set_values(p_hwfn, p_hwfn->p_main_ptt,
+ p_params->epoch);
+
/* send DCBX attention request command */
DP_VERBOSE(p_hwfn, ECORE_MSG_DCB,
"sending phony dcbx set command to trigger DCBx attention handling\n");
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 1a810b5..042c0af 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -57,26 +57,31 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev);
*/
void ecore_resc_setup(struct ecore_dev *p_dev);
+struct ecore_hw_init_params {
+ /* tunnelling parameters */
+ struct ecore_tunn_start_params *p_tunn;
+ bool b_hw_start;
+ /* interrupt mode [msix, inta, etc.] to use */
+ enum ecore_int_mode int_mode;
+ /* npar tx switching to be used for vports configured for tx-switching */
+ bool allow_npar_tx_switch;
+ /* binary fw data pointer in binary fw file */
+ const u8 *bin_fw_data;
+ /* the OS Epoch time in seconds */
+ u32 epoch;
+};
+
/**
* @brief ecore_hw_init -
*
* @param p_dev
- * @param p_tunn - tunneling parameters
- * @param b_hw_start
- * @param int_mode - interrupt mode [msix, inta, etc.] to use.
- * @param allow_npar_tx_switch - npar tx switching to be used
- * for vports configured for tx-switching.
- * @param bin_fw_data - binary fw data pointer in binary fw file.
- * Pass NULL if not using binary fw file.
+ * @param p_params
*
* @return enum _ecore_status_t
*/
enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
- struct ecore_tunn_start_params *p_tunn,
- bool b_hw_start,
- enum ecore_int_mode int_mode,
- bool allow_npar_tx_switch,
- const u8 *bin_fw_data);
+ struct ecore_hw_init_params *p_params);
/**
* @brief ecore_hw_timers_stop_all -
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index cf67fa1..500368e 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -1043,6 +1043,154 @@ static void ecore_mcp_handle_fan_failure(struct ecore_hwfn *p_hwfn,
ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_FAN_FAIL);
}
+static enum _ecore_status_t
+ecore_mcp_mdump_cmd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u32 mdump_cmd, union drv_union_data *p_data_src,
+ union drv_union_data *p_data_dst, u32 *p_mcp_resp)
+{
+ struct ecore_mcp_mb_params mb_params;
+ enum _ecore_status_t rc;
+
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = DRV_MSG_CODE_MDUMP_CMD;
+ mb_params.param = mdump_cmd;
+ mb_params.p_data_src = p_data_src;
+ mb_params.p_data_dst = p_data_dst;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ *p_mcp_resp = mb_params.mcp_resp;
+ if (*p_mcp_resp == FW_MSG_CODE_MDUMP_INVALID_CMD) {
+ DP_NOTICE(p_hwfn, false,
+ "MFW claims that the mdump command is illegal [mdump_cmd 0x%x]\n",
+ mdump_cmd);
+ rc = ECORE_INVAL;
+ }
+
+ return rc;
+}
+
+static enum _ecore_status_t ecore_mcp_mdump_ack(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ u32 mcp_resp;
+
+ return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_ACK,
+ OSAL_NULL, OSAL_NULL, &mcp_resp);
+}
+
+enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u32 epoch)
+{
+ union drv_union_data union_data;
+ u32 mcp_resp;
+
+ OSAL_MEMCPY(&union_data.raw_data, &epoch, sizeof(epoch));
+
+ return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_SET_VALUES,
+ &union_data, OSAL_NULL, &mcp_resp);
+}
+
+enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ u32 mcp_resp;
+
+ return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_TRIGGER,
+ OSAL_NULL, OSAL_NULL, &mcp_resp);
+}
+
+enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ u32 mcp_resp;
+
+ return ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_CLEAR_LOGS,
+ OSAL_NULL, OSAL_NULL, &mcp_resp);
+}
+
+static enum _ecore_status_t
+ecore_mcp_mdump_get_config(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ struct mdump_config_stc *p_mdump_config)
+{
+ union drv_union_data union_data;
+ u32 mcp_resp;
+ enum _ecore_status_t rc;
+
+ rc = ecore_mcp_mdump_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_MDUMP_GET_CONFIG,
+ OSAL_NULL, &union_data, &mcp_resp);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ /* A zero response implies that the mdump command is not supported */
+ if (!mcp_resp)
+ return ECORE_NOTIMPL;
+
+ if (mcp_resp != FW_MSG_CODE_OK) {
+ DP_NOTICE(p_hwfn, false,
+ "Failed to get the mdump configuration and logs info [mcp_resp 0x%x]\n",
+ mcp_resp);
+ rc = ECORE_UNKNOWN_ERROR;
+ }
+
+ OSAL_MEMCPY(p_mdump_config, &union_data.mdump_config,
+ sizeof(*p_mdump_config));
+
+ return rc;
+}
+
+enum _ecore_status_t ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ struct mdump_config_stc mdump_config;
+ enum _ecore_status_t rc;
+
+ rc = ecore_mcp_mdump_get_config(p_hwfn, p_ptt, &mdump_config);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
+ "MFW mdump_config: version 0x%x, config 0x%x, epoch 0x%x, num_of_logs 0x%x, valid_logs 0x%x\n",
+ mdump_config.version, mdump_config.config, mdump_config.epoc,
+ mdump_config.num_of_logs, mdump_config.valid_logs);
+
+ if (mdump_config.valid_logs > 0) {
+ DP_NOTICE(p_hwfn, false,
+ "* * * IMPORTANT - HW ERROR register dump captured by device * * *\n");
+ }
+
+ return rc;
+}
+
+void ecore_mcp_mdump_enable(struct ecore_dev *p_dev, bool mdump_enable)
+{
+ p_dev->mdump_en = mdump_enable;
+}
+
+static void ecore_mcp_handle_critical_error(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ /* In CMT mode - no need for more than a single acknowledgment to the
+ * MFW, and no more than a single notification to the upper driver.
+ */
+ if (p_hwfn != ECORE_LEADING_HWFN(p_hwfn->p_dev))
+ return;
+
+ DP_NOTICE(p_hwfn, false,
+ "Received a critical error notification from the MFW!\n");
+
+ if (p_hwfn->p_dev->mdump_en) {
+ DP_NOTICE(p_hwfn, false,
+ "Not acknowledging the notification to allow the MFW crash dump\n");
+ return;
+ }
+
+ ecore_mcp_mdump_ack(p_hwfn, p_ptt);
+ ecore_hw_err_notify(p_hwfn, ECORE_HW_ERR_HW_ATTN);
+}
+
enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
@@ -1104,6 +1252,9 @@ enum _ecore_status_t ecore_mcp_handle_events(struct ecore_hwfn *p_hwfn,
case MFW_DRV_MSG_FAILURE_DETECTED:
ecore_mcp_handle_fan_failure(p_hwfn, p_ptt);
break;
+ case MFW_DRV_MSG_CRITICAL_ERROR_OCCURRED:
+ ecore_mcp_handle_critical_error(p_hwfn, p_ptt);
+ break;
default:
/* @DPDK */
DP_NOTICE(p_hwfn, false,
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 64c639f..d3103ff 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -303,6 +303,51 @@ int __ecore_configure_pf_min_bandwidth(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t ecore_mcp_mask_parities(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u32 mask_parities);
+/**
+ * @brief - Sends crash mdump related info to the MFW.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_mdump_set_values(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u32 epoch);
+
+/**
+ * @brief - Triggers a MFW crash dump procedure.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_mdump_trigger(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
+
+/**
+ * @brief - Clears the MFW crash dump logs.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_mdump_clear_logs(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
+
+/**
+ * @brief - Gets the MFW crash dump configuration and logs info.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_mdump_get_info(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
+
enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct resource_info *p_resc_info,
diff --git a/drivers/net/qede/base/ecore_mcp_api.h b/drivers/net/qede/base/ecore_mcp_api.h
index ff4f1ca..c26b494 100644
--- a/drivers/net/qede/base/ecore_mcp_api.h
+++ b/drivers/net/qede/base/ecore_mcp_api.h
@@ -792,4 +792,14 @@ enum _ecore_status_t ecore_mcp_mem_ecc_events(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
u64 *num_events);
+/**
+ * @brief Sets whether a critical error notification from the MFW is acked or
+ * ignored, the latter allowing the MFW to collect a crash dump.
+ *
+ * @param p_dev
+ * @param mdump_enable
+ *
+ */
+void ecore_mcp_mdump_enable(struct ecore_dev *p_dev, bool mdump_enable);
+
#endif
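Taken together, the new API gives the upper layer two controls: whether a
critical-error notification is acked (acking prevents the MFW dump), and
the ability to trigger or clear dumps explicitly. A hedged usage sketch,
built only from the functions added in this patch:

	/* Leave critical-error notifications un-acked so the MFW may
	 * collect its crash dump.
	 */
	ecore_mcp_mdump_enable(p_dev, true);

	/* Later, e.g. from a debug hook: force a dump, then clear logs. */
	if (ecore_mcp_mdump_trigger(p_hwfn, p_ptt) == ECORE_SUCCESS)
		ecore_mcp_mdump_clear_logs(p_hwfn, p_ptt);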
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index e4ef4f0..60655b7 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -7,6 +7,7 @@
*/
#include <limits.h>
+#include <time.h>
#include <rte_alarm.h>
#include "qede_ethdev.h"
@@ -221,6 +222,7 @@ static int qed_slowpath_start(struct ecore_dev *edev,
const uint8_t *data = NULL;
struct ecore_hwfn *hwfn;
struct ecore_mcp_drv_version drv_version;
+ struct ecore_hw_init_params hw_init_params;
struct qede_dev *qdev = (struct qede_dev *)edev;
int rc;
#ifdef QED_ENC_SUPPORTED
@@ -259,7 +261,6 @@ static int qed_slowpath_start(struct ecore_dev *edev,
qed_start_iov_task(edev);
#endif
- /* Start the slowpath */
#ifdef CONFIG_ECORE_BINARY_FW
if (IS_PF(edev))
data = (const uint8_t *)edev->firmware + sizeof(u32);
@@ -267,6 +268,8 @@ static int qed_slowpath_start(struct ecore_dev *edev,
allow_npar_tx_switching = npar_tx_switching ? true : false;
+ /* Start the slowpath */
+ memset(&hw_init_params, 0, sizeof(hw_init_params));
#ifdef QED_ENC_SUPPORTED
memset(&tunn_info, 0, sizeof(tunn_info));
tunn_info.tunn_mode |= 1 << QED_MODE_VXLAN_TUNN |
@@ -276,12 +279,14 @@ static int qed_slowpath_start(struct ecore_dev *edev,
tunn_info.tunn_clss_vxlan = QED_TUNN_CLSS_MAC_VLAN;
tunn_info.tunn_clss_l2gre = QED_TUNN_CLSS_MAC_VLAN;
tunn_info.tunn_clss_ipgre = QED_TUNN_CLSS_MAC_VLAN;
- rc = ecore_hw_init(edev, &tunn_info, true, ECORE_INT_MODE_MSIX,
- allow_npar_tx_switching, data);
-#else
- rc = ecore_hw_init(edev, NULL, true, ECORE_INT_MODE_MSIX,
- allow_npar_tx_switching, data);
+ hw_init_params.p_tunn = &tunn_info;
#endif
+ hw_init_params.b_hw_start = true;
+ hw_init_params.int_mode = ECORE_INT_MODE_MSIX;
+ hw_init_params.allow_npar_tx_switch = allow_npar_tx_switching;
+ hw_init_params.bin_fw_data = data;
+ hw_init_params.epoch = (u32)time(NULL);
+ rc = ecore_hw_init(edev, &hw_init_params);
if (rc) {
DP_ERR(edev, "ecore_hw_init failed\n");
goto err2;
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 15/32] net/qede: enable support for unequal number of Rx/Tx queues
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (13 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 14/32] net/qede/base: add MFW crash dump support Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 16/32] net/qede: fix port (re)configuration issue Rasesh Mody
` (17 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Sony Chacko
From: Sony Chacko <sony.chacko@qlogic.com>
Previous releases of the qede PMD required the number of Rx and Tx
queues to be equal. This patch removes that limitation by making the
appropriate changes in the control and data paths, as the sketch
below illustrates.
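The fix sizes a single fastpath array for the sum of both queue counts and
derives the per-direction counts from it. A comment-only sketch of the new
accounting (queue numbers are made up for illustration):

	/* With 4 Rx and 2 Tx queues requested:
	 *   qdev->fp_num_rx  = 4;
	 *   qdev->fp_num_tx  = 2;
	 *   qdev->num_queues = 6;   (fp_num_rx + fp_num_tx)
	 *
	 * so, with num_tc == 1, the new macros evaluate to:
	 *   QEDE_RSS_COUNT(qdev) == 6 - 2 == 4        (Rx queues)
	 *   QEDE_TSS_COUNT(qdev) == (6 - 4) * 1 == 2  (Tx queues)
	 */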
Fixes: 2ea6f76aff40 ("qede: add core driver")
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
doc/guides/nics/qede.rst | 3 +-
drivers/net/qede/qede_ethdev.c | 23 +--
drivers/net/qede/qede_ethdev.h | 16 +-
drivers/net/qede/qede_rxtx.c | 376 ++++++++++++++++++++---------------------
drivers/net/qede/qede_rxtx.h | 7 +-
5 files changed, 212 insertions(+), 213 deletions(-)
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index d107a7d..4ec75e4 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -50,7 +50,7 @@ Supported Features
- Jumbo frames (using single buffer)
- VLAN offload - Filtering and stripping
- Stateless checksum offloads (IPv4/TCP/UDP)
-- Multiple Rx/Tx queues (queue-pairs)
+- Multiple Rx/Tx queues
- RSS (with user configurable table/key)
- TSS
- Multiple MAC address
@@ -63,7 +63,6 @@ Non-supported Features
----------------------
- Scatter-Gather Rx/Tx frames
-- Unequal number of Rx/Tx queues
- SR-IOV PF
- Tunneling offloads
- Reload of the PMD after a non-graceful termination
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 452139d..e2141cb 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -496,31 +496,26 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
PMD_INIT_FUNC_TRACE(edev);
- if (eth_dev->data->nb_rx_queues != eth_dev->data->nb_tx_queues) {
- DP_NOTICE(edev, false,
- "Unequal number of rx/tx queues "
- "is not supported RX=%u TX=%u\n",
- eth_dev->data->nb_rx_queues,
- eth_dev->data->nb_tx_queues);
- return -EINVAL;
- }
-
/* Check requirements for 100G mode */
if (edev->num_hwfns > 1) {
- if (eth_dev->data->nb_rx_queues < 2) {
+ if (eth_dev->data->nb_rx_queues < 2 ||
+ eth_dev->data->nb_tx_queues < 2) {
DP_NOTICE(edev, false,
- "100G mode requires minimum two queues\n");
+ "100G mode needs min. 2 RX/TX queues\n");
return -EINVAL;
}
- if ((eth_dev->data->nb_rx_queues % 2) != 0) {
+ if ((eth_dev->data->nb_rx_queues % 2 != 0) ||
+ (eth_dev->data->nb_tx_queues % 2 != 0)) {
DP_NOTICE(edev, false,
- "100G mode requires even number of queues\n");
+ "100G mode needs even no. of RX/TX queues\n");
return -EINVAL;
}
}
- qdev->num_rss = eth_dev->data->nb_rx_queues;
+ qdev->fp_num_tx = eth_dev->data->nb_tx_queues;
+ qdev->fp_num_rx = eth_dev->data->nb_rx_queues;
+ qdev->num_queues = qdev->fp_num_tx + qdev->fp_num_rx;
/* Initial state */
qdev->state = QEDE_CLOSE;
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index abb33af..1f62283 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -62,8 +62,8 @@
#define QEDE_MAX_TSS_CNT(edev) ((edev)->dev_info.num_queues * \
(edev)->dev_info.num_tc)
-#define QEDE_RSS_CNT(edev) ((edev)->num_rss)
-#define QEDE_TSS_CNT(edev) ((edev)->num_rss * (edev)->num_tc)
+#define QEDE_RSS_CNT(edev) ((edev)->fp_num_rx)
+#define QEDE_TSS_CNT(edev) ((edev)->fp_num_rx * (edev)->num_tc)
#define QEDE_DUPLEX_FULL 1
#define QEDE_DUPLEX_HALF 2
@@ -76,6 +76,12 @@
#define QEDE_INIT_EDEV(adapter) (&((struct qede_dev *)adapter)->edev)
+#define QEDE_QUEUE_CNT(qdev) ((qdev)->num_queues)
+#define QEDE_RSS_COUNT(qdev) ((qdev)->num_queues - (qdev)->fp_num_tx)
+#define QEDE_TSS_COUNT(qdev) (((qdev)->num_queues - (qdev)->fp_num_rx) * \
+ (qdev)->num_tc)
+#define QEDE_TC_IDX(qdev, txqidx) ((txqidx) / QEDE_TSS_COUNT(qdev))
+
#define QEDE_INIT(eth_dev) { \
struct qede_dev *qdev = eth_dev->data->dev_private; \
struct ecore_dev *edev = &qdev->edev; \
@@ -138,8 +144,10 @@ struct qede_dev {
struct qed_update_vport_rss_params rss_params;
uint32_t flags;
bool gro_disable;
- struct qede_rx_queue **rx_queues;
- struct qede_tx_queue **tx_queues;
+ uint16_t num_queues;
+ uint8_t fp_num_tx;
+ uint8_t fp_num_rx;
+
enum dev_state state;
/* Vlans */
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index b5a40fe..00584a9 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -10,6 +10,9 @@
static bool gro_disable = 1; /* mod_param */
+#define QEDE_FASTPATH_TX (1 << 0)
+#define QEDE_FASTPATH_RX (1 << 1)
+
static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
{
struct rte_mbuf *new_mb = NULL;
@@ -65,6 +68,22 @@ void qede_rx_queue_release(void *rx_queue)
}
}
+static void qede_tx_queue_release_mbufs(struct qede_tx_queue *txq)
+{
+ unsigned int i;
+
+ PMD_TX_LOG(DEBUG, txq, "releasing %u mbufs\n", txq->nb_tx_desc);
+
+ if (txq->sw_tx_ring) {
+ for (i = 0; i < txq->nb_tx_desc; i++) {
+ if (txq->sw_tx_ring[i].mbuf) {
+ rte_pktmbuf_free(txq->sw_tx_ring[i].mbuf);
+ txq->sw_tx_ring[i].mbuf = NULL;
+ }
+ }
+ }
+}
+
int
qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -116,15 +135,8 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
data_size = (uint16_t)rte_pktmbuf_data_room_size(mp) -
RTE_PKTMBUF_HEADROOM;
- if (pkt_len > data_size) {
- DP_ERR(edev, "MTU %u should not exceed dataroom %u\n",
- pkt_len, data_size);
- rte_free(rxq);
- return -EINVAL;
- }
-
qdev->mtu = pkt_len;
- rxq->rx_buf_size = pkt_len + QEDE_ETH_OVERHEAD;
+ rxq->rx_buf_size = data_size;
DP_INFO(edev, "MTU = %u ; RX buffer = %u\n",
qdev->mtu, rxq->rx_buf_size);
@@ -166,6 +178,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rxq->sw_rx_ring = NULL;
rte_free(rxq);
rxq = NULL;
+ return -ENOMEM;
}
/* Allocate FW completion ring */
@@ -185,6 +198,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
rte_free(rxq->sw_rx_ring);
rxq->sw_rx_ring = NULL;
rte_free(rxq);
+ return -ENOMEM;
}
/* Allocate buffers for the Rx ring */
@@ -198,8 +212,6 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
}
dev->data->rx_queues[queue_idx] = rxq;
- if (!qdev->rx_queues)
- qdev->rx_queues = (struct qede_rx_queue **)dev->data->rx_queues;
DP_INFO(edev, "rxq %d num_desc %u rx_buf_size=%u socket %u\n",
queue_idx, nb_desc, qdev->mtu, socket_id);
@@ -210,22 +222,6 @@ err4:
return -ENOMEM;
}
-static void qede_tx_queue_release_mbufs(struct qede_tx_queue *txq)
-{
- unsigned int i;
-
- PMD_TX_LOG(DEBUG, txq, "releasing %u mbufs\n", txq->nb_tx_desc);
-
- if (txq->sw_tx_ring != NULL) {
- for (i = 0; i < txq->nb_tx_desc; i++) {
- if (txq->sw_tx_ring[i].mbuf != NULL) {
- rte_pktmbuf_free(txq->sw_tx_ring[i].mbuf);
- txq->sw_tx_ring[i].mbuf = NULL;
- }
- }
- }
-}
-
void qede_tx_queue_release(void *tx_queue)
{
struct qede_tx_queue *txq = tx_queue;
@@ -319,10 +315,6 @@ qede_tx_queue_setup(struct rte_eth_dev *dev,
(txq->nb_tx_desc - QEDE_DEFAULT_TX_FREE_THRESH);
dev->data->tx_queues[queue_idx] = txq;
- if (!qdev->tx_queues)
- qdev->tx_queues = (struct qede_tx_queue **)dev->data->tx_queues;
-
- txq->txq_counter = 0;
DP_INFO(edev,
"txq %u num_desc %u tx_free_thresh %u socket %u\n",
@@ -332,35 +324,50 @@ qede_tx_queue_setup(struct rte_eth_dev *dev,
}
/* This function inits fp content and resets the SB, RXQ and TXQ arrays */
-static void qede_init_fp(struct qede_dev *qdev)
+static void qede_init_fp(struct rte_eth_dev *eth_dev)
{
struct qede_fastpath *fp;
- int rss_id, txq_index, tc;
+ uint8_t i, rss_id, index, tc;
+ struct qede_dev *qdev = eth_dev->data->dev_private;
+ int fp_rx = qdev->fp_num_rx, rxq = 0, txq = 0;
- memset((void *)qdev->fp_array, 0, (QEDE_RSS_CNT(qdev) *
+ memset((void *)qdev->fp_array, 0, (QEDE_QUEUE_CNT(qdev) *
sizeof(*qdev->fp_array)));
- memset((void *)qdev->sb_array, 0, (QEDE_RSS_CNT(qdev) *
+ memset((void *)qdev->sb_array, 0, (QEDE_QUEUE_CNT(qdev) *
sizeof(*qdev->sb_array)));
- for_each_rss(rss_id) {
- fp = &qdev->fp_array[rss_id];
+ for_each_queue(i) {
+ fp = &qdev->fp_array[i];
+ if (fp_rx) {
+ fp->type = QEDE_FASTPATH_RX;
+ fp_rx--;
+ } else{
+ fp->type = QEDE_FASTPATH_TX;
+ }
+ }
+ for_each_queue(i) {
+ fp = &qdev->fp_array[i];
fp->qdev = qdev;
- fp->rss_id = rss_id;
+ fp->id = i;
/* Point rxq to generic rte queues that was created
* as part of queue creation.
*/
- fp->rxq = qdev->rx_queues[rss_id];
- fp->sb_info = &qdev->sb_array[rss_id];
+ if (fp->type & QEDE_FASTPATH_RX) {
+ fp->rxq = eth_dev->data->rx_queues[i];
+ fp->rxq->queue_id = rxq++;
+ }
+ fp->sb_info = &qdev->sb_array[i];
- for (tc = 0; tc < qdev->num_tc; tc++) {
- txq_index = tc * QEDE_RSS_CNT(qdev) + rss_id;
- fp->txqs[tc] = qdev->tx_queues[txq_index];
- fp->txqs[tc]->queue_id = txq_index;
- /* Updating it to main structure */
- snprintf(fp->name, sizeof(fp->name), "%s-fp-%d",
- "qdev", rss_id);
+ if (fp->type & QEDE_FASTPATH_TX) {
+ for (tc = 0; tc < qdev->num_tc; tc++) {
+ index = tc * QEDE_TSS_CNT(qdev) + txq;
+ fp->txqs[tc] = eth_dev->data->tx_queues[index];
+ fp->txqs[tc]->queue_id = index;
+ }
+ txq++;
}
+ snprintf(fp->name, sizeof(fp->name), "%s-fp-%d", "qdev", i);
}
qdev->gro_disable = gro_disable;
@@ -386,7 +393,7 @@ int qede_alloc_fp_array(struct qede_dev *qdev)
struct ecore_dev *edev = &qdev->edev;
int i;
- qdev->fp_array = rte_calloc("fp", QEDE_RSS_CNT(qdev),
+ qdev->fp_array = rte_calloc("fp", QEDE_QUEUE_CNT(qdev),
sizeof(*qdev->fp_array),
RTE_CACHE_LINE_SIZE);
@@ -395,7 +402,7 @@ int qede_alloc_fp_array(struct qede_dev *qdev)
return -ENOMEM;
}
- qdev->sb_array = rte_calloc("sb", QEDE_RSS_CNT(qdev),
+ qdev->sb_array = rte_calloc("sb", QEDE_QUEUE_CNT(qdev),
sizeof(*qdev->sb_array),
RTE_CACHE_LINE_SIZE);
@@ -437,50 +444,6 @@ qede_alloc_mem_sb(struct qede_dev *qdev, struct ecore_sb_info *sb_info,
return 0;
}
-static int qede_alloc_mem_fp(struct qede_dev *qdev, struct qede_fastpath *fp)
-{
- return qede_alloc_mem_sb(qdev, fp->sb_info, fp->rss_id);
-}
-
-static void qede_shrink_txq(struct qede_dev *qdev, uint16_t num_rss)
-{
- /* @@@TBD - this should also re-set the qed interrupts */
-}
-
-/* This function allocates all qede memory at NIC load. */
-static int qede_alloc_mem_load(struct qede_dev *qdev)
-{
- int rc = 0, rss_id;
- struct ecore_dev *edev = &qdev->edev;
-
- for (rss_id = 0; rss_id < QEDE_RSS_CNT(qdev); rss_id++) {
- struct qede_fastpath *fp = &qdev->fp_array[rss_id];
-
- rc = qede_alloc_mem_fp(qdev, fp);
- if (rc)
- break;
- }
-
- if (rss_id != QEDE_RSS_CNT(qdev)) {
- /* Failed allocating memory for all the queues */
- if (!rss_id) {
- DP_ERR(edev,
- "Failed to alloc memory for leading queue\n");
- rc = -ENOMEM;
- } else {
- DP_NOTICE(edev, false,
- "Failed to allocate memory for all of "
- "RSS queues\n"
- "Desired: %d queues, allocated: %d queues\n",
- QEDE_RSS_CNT(qdev), rss_id);
- qede_shrink_txq(qdev, rss_id);
- }
- qdev->num_rss = rss_id;
- }
-
- return 0;
-}
-
static inline void
qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq)
{
@@ -605,7 +568,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
int vlan_removal_en = 1;
int rc, tc, i;
- if (!qdev->num_rss) {
+ if (!qdev->fp_num_rx) {
DP_ERR(edev,
"Cannot update V-VPORT as active as "
"there are no Rx queues\n");
@@ -630,44 +593,51 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
" MTU = %d, vlan_removal_en = %d\n",
start.vport_id, qdev->mtu, vlan_removal_en);
- for_each_rss(i) {
+ for_each_queue(i) {
struct qede_fastpath *fp = &qdev->fp_array[i];
- dma_addr_t p_phys_table;
- uint16_t page_cnt;
+ dma_addr_t tbl;
+ uint16_t cnt;
- p_phys_table = ecore_chain_get_pbl_phys(&fp->rxq->rx_comp_ring);
- page_cnt = ecore_chain_get_page_cnt(&fp->rxq->rx_comp_ring);
+ if (fp->type & QEDE_FASTPATH_RX) {
+ tbl = ecore_chain_get_pbl_phys(&fp->rxq->rx_comp_ring);
+ cnt = ecore_chain_get_page_cnt(&fp->rxq->rx_comp_ring);
- ecore_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0); /* @DPDK */
+ ecore_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);
- rc = qdev->ops->q_rx_start(edev, i, i, 0,
+ rc = qdev->ops->q_rx_start(edev, i, fp->rxq->queue_id,
+ 0,
fp->sb_info->igu_sb_id,
RX_PI,
fp->rxq->rx_buf_size,
fp->rxq->rx_bd_ring.p_phys_addr,
- p_phys_table,
- page_cnt,
- &fp->rxq->hw_rxq_prod_addr);
- if (rc) {
- DP_ERR(edev, "Start RXQ #%d failed %d\n", i, rc);
+ tbl,
+ cnt,
+ &fp->rxq->hw_rxq_prod_addr);
+ if (rc) {
+ DP_ERR(edev,
+ "Start rxq #%d failed %d\n",
+ fp->rxq->queue_id, rc);
return rc;
}
fp->rxq->hw_cons_ptr = &fp->sb_info->sb_virt->pi_array[RX_PI];
- qede_update_rx_prod(qdev, fp->rxq);
+ qede_update_rx_prod(qdev, fp->rxq);
+ }
+ if (!(fp->type & QEDE_FASTPATH_TX))
+ continue;
for (tc = 0; tc < qdev->num_tc; tc++) {
struct qede_tx_queue *txq = fp->txqs[tc];
int txq_index = tc * QEDE_RSS_CNT(qdev) + i;
- p_phys_table = ecore_chain_get_pbl_phys(&txq->tx_pbl);
- page_cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
- rc = qdev->ops->q_tx_start(edev, i, txq_index,
+ tbl = ecore_chain_get_pbl_phys(&txq->tx_pbl);
+ cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
+ rc = qdev->ops->q_tx_start(edev, i, txq->queue_id,
0,
fp->sb_info->igu_sb_id,
TX_PI(tc),
- p_phys_table, page_cnt,
+ tbl, cnt,
&txq->doorbell_addr);
if (rc) {
DP_ERR(edev, "Start txq %u failed %d\n",
@@ -893,7 +863,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (unlikely(cqe_type == ETH_RX_CQE_TYPE_SLOW_PATH)) {
PMD_RX_LOG(DEBUG, rxq, "Got a slowath CQE\n");
- qdev->ops->eth_cqe_completion(edev, fp->rss_id,
+ qdev->ops->eth_cqe_completion(edev, fp->id,
(struct eth_slow_path_rx_cqe *)cqe);
goto next_cqe;
}
@@ -1065,7 +1035,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
struct qede_tx_queue *txq = p_txq;
struct qede_dev *qdev = txq->qdev;
struct ecore_dev *edev = &qdev->edev;
- struct qede_fastpath *fp = &qdev->fp_array[txq->queue_id];
+ struct qede_fastpath *fp;
struct eth_tx_1st_bd *first_bd;
uint16_t nb_tx_pkts;
uint16_t nb_pkt_sent = 0;
@@ -1073,6 +1043,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
uint16_t idx;
uint16_t tx_count;
+ fp = &qdev->fp_array[QEDE_RSS_COUNT(qdev) + txq->queue_id];
+
if (unlikely(txq->nb_tx_avail < txq->tx_free_thresh)) {
PMD_TX_LOG(DEBUG, txq, "send=%u avail=%u free_thresh=%u\n",
nb_pkts, txq->nb_tx_avail, txq->tx_free_thresh);
@@ -1149,51 +1121,51 @@ int qede_dev_start(struct rte_eth_dev *eth_dev)
struct qede_dev *qdev = eth_dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
struct qed_link_output link_output;
- int rc;
+ struct qede_fastpath *fp;
+ int rc, i;
- DP_INFO(edev, "port %u\n", eth_dev->data->port_id);
+ DP_INFO(edev, "Device state is %d\n", qdev->state);
- if (qdev->state == QEDE_START) {
- DP_INFO(edev, "device already started\n");
+ switch (qdev->state) {
+ case QEDE_START:
+ DP_INFO(edev, "Device already started\n");
return 0;
- }
-
- if (qdev->state == QEDE_CLOSE) {
- rc = qede_alloc_fp_array(qdev);
- qede_init_fp(qdev);
- rc = qede_alloc_mem_load(qdev);
- DP_INFO(edev, "Allocated %d RSS queues on %d TC/s\n",
- QEDE_RSS_CNT(qdev), qdev->num_tc);
- } else if (qdev->state == QEDE_STOP) {
- DP_INFO(edev, "restarting port %u\n", eth_dev->data->port_id);
- } else {
- DP_INFO(edev, "unknown state port %u\n",
+ case QEDE_CLOSE:
+ if (qede_alloc_fp_array(qdev))
+ return -ENOMEM;
+ qede_init_fp(eth_dev);
+ /* Fall-thru */
+ case QEDE_STOP:
+ for (i = 0; i < QEDE_QUEUE_CNT(qdev); i++) {
+ fp = &qdev->fp_array[i];
+ if (qede_alloc_mem_sb(qdev, fp->sb_info, i))
+ return -ENOMEM;
+ }
+ break;
+ default:
+ DP_INFO(edev, "Unknown state for port %u\n",
eth_dev->data->port_id);
return -EINVAL;
}
rc = qede_start_queues(eth_dev, true);
-
if (rc) {
DP_ERR(edev, "Failed to start queues\n");
/* TBD: free */
return rc;
}
+ DP_INFO(edev, "Allocated %d RSS queues on %d TC/s\n",
+ QEDE_RSS_CNT(qdev), qdev->num_tc);
- DP_INFO(edev, "Start VPORT, RXQ and TXQ succeeded\n");
-
+ /* Bring-up the link */
qede_dev_set_link_state(eth_dev, true);
-
- /* Query whether link is already-up */
- memset(&link_output, 0, sizeof(link_output));
- qdev->ops->common->get_link(edev, &link_output);
- DP_NOTICE(edev, false, "link status: %s\n",
- link_output.link_up ? "up" : "down");
-
qdev->state = QEDE_START;
-
qede_config_rx_mode(eth_dev);
+ /* Init the queues */
+ if (qede_reset_fp_rings(qdev))
+ return -ENOMEM;
+
DP_INFO(edev, "dev_state is QEDE_START\n");
return 0;
@@ -1261,48 +1233,58 @@ static int qede_stop_queues(struct qede_dev *qdev)
DP_INFO(edev, "Flushing tx queues\n");
/* Flush Tx queues. If needed, request drain from MCP */
- for_each_rss(i) {
+ for_each_queue(i) {
struct qede_fastpath *fp = &qdev->fp_array[i];
- for (tc = 0; tc < qdev->num_tc; tc++) {
- struct qede_tx_queue *txq = fp->txqs[tc];
- rc = qede_drain_txq(qdev, txq, true);
- if (rc)
- return rc;
+
+ if (fp->type & QEDE_FASTPATH_TX) {
+ for (tc = 0; tc < qdev->num_tc; tc++) {
+ struct qede_tx_queue *txq = fp->txqs[tc];
+
+ rc = qede_drain_txq(qdev, txq, true);
+ if (rc)
+ return rc;
+ }
}
}
/* Stop all Queues in reverse order */
- for (i = QEDE_RSS_CNT(qdev) - 1; i >= 0; i--) {
+ for (i = QEDE_QUEUE_CNT(qdev) - 1; i >= 0; i--) {
struct qed_stop_rxq_params rx_params;
/* Stop the Tx Queue(s) */
- for (tc = 0; tc < qdev->num_tc; tc++) {
- struct qed_stop_txq_params tx_params;
-
- tx_params.rss_id = i;
- tx_params.tx_queue_id = tc * QEDE_RSS_CNT(qdev) + i;
-
- DP_INFO(edev, "Stopping tx queues\n");
- rc = qdev->ops->q_tx_stop(edev, &tx_params);
- if (rc) {
- DP_ERR(edev, "Failed to stop TXQ #%d\n",
- tx_params.tx_queue_id);
- return rc;
+ if (qdev->fp_array[i].type & QEDE_FASTPATH_TX) {
+ for (tc = 0; tc < qdev->num_tc; tc++) {
+ struct qed_stop_txq_params tx_params;
+ u8 val;
+
+ tx_params.rss_id = i;
+ val = qdev->fp_array[i].txqs[tc]->queue_id;
+ tx_params.tx_queue_id = val;
+
+ DP_INFO(edev, "Stopping tx queues\n");
+ rc = qdev->ops->q_tx_stop(edev, &tx_params);
+ if (rc) {
+ DP_ERR(edev, "Failed to stop TXQ #%d\n",
+ tx_params.tx_queue_id);
+ return rc;
+ }
}
}
/* Stop the Rx Queue */
- memset(&rx_params, 0, sizeof(rx_params));
- rx_params.rss_id = i;
- rx_params.rx_queue_id = i;
- rx_params.eq_completion_only = 1;
+ if (qdev->fp_array[i].type & QEDE_FASTPATH_RX) {
+ memset(&rx_params, 0, sizeof(rx_params));
+ rx_params.rss_id = i;
+ rx_params.rx_queue_id = qdev->fp_array[i].rxq->queue_id;
+ rx_params.eq_completion_only = 1;
- DP_INFO(edev, "Stopping rx queues\n");
+ DP_INFO(edev, "Stopping rx queues\n");
- rc = qdev->ops->q_rx_stop(edev, &rx_params);
- if (rc) {
- DP_ERR(edev, "Failed to stop RXQ #%d\n", i);
- return rc;
+ rc = qdev->ops->q_rx_stop(edev, &rx_params);
+ if (rc) {
+ DP_ERR(edev, "Failed to stop RXQ #%d\n", i);
+ return rc;
+ }
}
}
@@ -1316,21 +1298,42 @@ static int qede_stop_queues(struct qede_dev *qdev)
return rc;
}
-void qede_reset_fp_rings(struct qede_dev *qdev)
+int qede_reset_fp_rings(struct qede_dev *qdev)
{
- uint16_t rss_id;
+ struct qede_fastpath *fp;
+ struct qede_tx_queue *txq;
uint8_t tc;
-
- for_each_rss(rss_id) {
- DP_INFO(&qdev->edev, "reset fp chain for rss %u\n", rss_id);
- struct qede_fastpath *fp = &qdev->fp_array[rss_id];
- ecore_chain_reset(&fp->rxq->rx_bd_ring);
- ecore_chain_reset(&fp->rxq->rx_comp_ring);
- for (tc = 0; tc < qdev->num_tc; tc++) {
- struct qede_tx_queue *txq = fp->txqs[tc];
- ecore_chain_reset(&txq->tx_pbl);
+ uint16_t id, i;
+
+ for_each_queue(id) {
+ DP_INFO(&qdev->edev, "Reset FP chain for RSS %u\n", id);
+ fp = &qdev->fp_array[id];
+
+ if (fp->type & QEDE_FASTPATH_RX) {
+ qede_rx_queue_release_mbufs(fp->rxq);
+ ecore_chain_reset(&fp->rxq->rx_bd_ring);
+ ecore_chain_reset(&fp->rxq->rx_comp_ring);
+ fp->rxq->sw_rx_prod = 0;
+ fp->rxq->sw_rx_cons = 0;
+ for (i = 0; i < fp->rxq->nb_rx_desc; i++) {
+ if (qede_alloc_rx_buffer(fp->rxq)) {
+ DP_ERR(&qdev->edev,
+ "RX buffer allocation failed\n");
+ return -ENOMEM;
+ }
+ }
+ }
+ if (fp->type & QEDE_FASTPATH_TX) {
+ for (tc = 0; tc < qdev->num_tc; tc++) {
+ txq = fp->txqs[tc];
+ ecore_chain_reset(&txq->tx_pbl);
+ txq->sw_tx_cons = 0;
+ txq->sw_tx_prod = 0;
+ }
}
}
+
+ return 0;
}
/* This function frees all memory of a single fp */
@@ -1347,41 +1350,34 @@ void qede_free_mem_load(struct qede_dev *qdev)
{
uint8_t rss_id;
- for_each_rss(rss_id) {
+ for_each_queue(rss_id) {
struct qede_fastpath *fp = &qdev->fp_array[rss_id];
qede_free_mem_fp(qdev, fp);
}
/* qdev->num_rss = 0; */
}
-/*
- * Stop an Ethernet device. The device can be restarted with a call to
- * rte_eth_dev_start().
- * Do not change link state and do not release sw structures.
- */
void qede_dev_stop(struct rte_eth_dev *eth_dev)
{
struct qede_dev *qdev = eth_dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
- int rc;
DP_INFO(edev, "port %u\n", eth_dev->data->port_id);
if (qdev->state != QEDE_START) {
- DP_INFO(edev, "device not yet started\n");
+ DP_INFO(edev, "Device not yet started\n");
return;
}
- rc = qede_stop_queues(qdev);
-
- if (rc)
+ if (qede_stop_queues(qdev))
DP_ERR(edev, "Didn't succeed to close queues\n");
DP_INFO(edev, "Stopped queues\n");
qdev->ops->fastpath_stop(edev);
- qede_reset_fp_rings(qdev);
+ /* Bring the link down */
+ qede_dev_set_link_state(eth_dev, false);
qdev->state = QEDE_STOP;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 90da603..a8520ed 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -75,7 +75,7 @@
#define MAX_NUM_TC 8
-#define for_each_rss(i) for (i = 0; i < qdev->num_rss; i++)
+#define for_each_queue(i) for (i = 0; i < qdev->num_queues; i++)
/*
* RX BD descriptor ring
@@ -139,7 +139,8 @@ struct qede_tx_queue {
struct qede_fastpath {
struct qede_dev *qdev;
- uint8_t rss_id;
+ u8 type;
+ uint8_t id;
struct ecore_sb_info *sb_info;
struct qede_rx_queue *rxq;
struct qede_tx_queue *txqs[MAX_NUM_TC];
@@ -168,7 +169,7 @@ int qede_dev_start(struct rte_eth_dev *eth_dev);
void qede_dev_stop(struct rte_eth_dev *eth_dev);
-void qede_reset_fp_rings(struct qede_dev *qdev);
+int qede_reset_fp_rings(struct qede_dev *qdev);
void qede_free_fp_arrays(struct qede_dev *qdev);
--
1.8.3.1
* [dpdk-dev] [PATCH v4 16/32] net/qede: fix port (re)configuration issue
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (14 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 15/32] net/qede: enable support for unequal number of Rx/Tx queues Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 17/32] net/qede/base: allow MTU change via vport-update Rasesh Mody
` (16 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
Some applications set port configuration parameters, such as
promiscuous mode, before calling dev_start(). This results in a
firmware exception, since the operation internally translates to
sending a VPORT-UPDATE ramrod before the VPORT-START ramrod, which
the firmware considers illegal. The fix is to send the VPORT-START
ramrod earlier, in dev_configure(), rather than deferring it to
dev_start().
This requires some reshuffling of the code: sending VPORT-START moves
from qede_start_queues() to qede_dev_configure(), and VPORT-STOP moves
from qede_stop_queues() to qede_dev_stop().
The sequence change also exposes a flaw in the port restart flows:
the fastpath resource allocation routine qede_init_fp() needs to be
split so that the appropriate action is taken based on the current
port state, e.g. the status block must not be re-initialized in the
restart case. This change ensures port start/stop calls can be paired.
A new port state, QEDE_DEV_CONFIG, is added to distinguish a port
started from scratch from one requiring a reconfig (such as an MTU
change). The function qede_config_rx_mode() is removed, since the
individual port configurations are replayed anyway on a restart.
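In outline, the configure path now looks roughly like this (a
condensed sketch of the qede_dev_configure() flow from the diff
below; error handling is trimmed):
/* Condensed sketch of the new dev_configure() flow; names match the
 * diff below, error handling omitted for brevity.
 */
static int qede_dev_configure_sketch(struct rte_eth_dev *eth_dev)
{
	struct qede_dev *qdev = eth_dev->data->dev_private;

	if (qdev->state != QEDE_DEV_INIT) {
		/* Port restart: tear down the old vport and fastpath */
		qdev->ops->vport_stop(&qdev->edev, 0);
		qede_dealloc_fp_resc(eth_dev);
	}
	/* Status blocks must exist before VPORT-START (VF requirement) */
	qede_alloc_fp_resc(qdev);
	/* VPORT-START with default config, so that later port config
	 * ramrods are legal from the firmware's standpoint
	 */
	qede_init_vport(qdev);
	qdev->state = QEDE_DEV_CONFIG;
	return 0;
}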
Fixes: 2ea6f76aff40 ("qede: add core driver")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/qede_eth_if.c | 14 ++-
drivers/net/qede/qede_eth_if.h | 2 +
drivers/net/qede/qede_ethdev.c | 130 ++++++++++++-----------
drivers/net/qede/qede_ethdev.h | 12 +--
drivers/net/qede/qede_rxtx.c | 232 +++++++++++++++++++++--------------------
drivers/net/qede/qede_rxtx.h | 7 +-
6 files changed, 208 insertions(+), 189 deletions(-)
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index a00f05d..e108af1 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -40,8 +40,6 @@ qed_start_vport(struct ecore_dev *edev, struct qed_start_vport_params *p_params)
return rc;
}
- ecore_hw_start_fastpath(p_hwfn);
-
DP_VERBOSE(edev, ECORE_MSG_SPQ,
"Started V-PORT %d with MTU %d\n",
p_params->vport_id, p_params->mtu);
@@ -295,6 +293,17 @@ static int qed_fastpath_stop(struct ecore_dev *edev)
return 0;
}
+static void qed_fastpath_start(struct ecore_dev *edev)
+{
+ struct ecore_hwfn *p_hwfn;
+ int i;
+
+ for_each_hwfn(edev, i) {
+ p_hwfn = &edev->hwfns[i];
+ ecore_hw_start_fastpath(p_hwfn);
+ }
+}
+
static void
qed_get_vport_stats(struct ecore_dev *edev, struct ecore_eth_stats *stats)
{
@@ -444,6 +453,7 @@ static const struct qed_eth_ops qed_eth_ops_pass = {
INIT_STRUCT_FIELD(q_tx_stop, &qed_stop_txq),
INIT_STRUCT_FIELD(eth_cqe_completion, &qed_fp_cqe_completion),
INIT_STRUCT_FIELD(fastpath_stop, &qed_fastpath_stop),
+ INIT_STRUCT_FIELD(fastpath_start, &qed_fastpath_start),
INIT_STRUCT_FIELD(get_vport_stats, &qed_get_vport_stats),
INIT_STRUCT_FIELD(filter_config, &qed_configure_filter),
};
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 26968eb..299a2aa 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -158,6 +158,8 @@ struct qed_eth_ops {
int (*fastpath_stop)(struct ecore_dev *edev);
+ void (*fastpath_start)(struct ecore_dev *edev);
+
void (*get_vport_stats)(struct ecore_dev *edev,
struct ecore_eth_stats *stats);
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index e2141cb..ec5e8a2 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -348,52 +348,6 @@ static void qede_config_accept_any_vlan(struct qede_dev *qdev, bool action)
}
}
-void qede_config_rx_mode(struct rte_eth_dev *eth_dev)
-{
- struct qede_dev *qdev = eth_dev->data->dev_private;
- struct ecore_dev *edev = &qdev->edev;
- /* TODO: - QED_FILTER_TYPE_UCAST */
- enum qed_filter_rx_mode_type accept_flags =
- QED_FILTER_RX_MODE_TYPE_REGULAR;
- struct qed_filter_params rx_mode;
- int rc;
-
- /* Configure the struct for the Rx mode */
- memset(&rx_mode, 0, sizeof(struct qed_filter_params));
- rx_mode.type = QED_FILTER_TYPE_RX_MODE;
-
- rc = qede_set_ucast_rx_mac(qdev, QED_FILTER_XCAST_TYPE_REPLACE,
- eth_dev->data->mac_addrs[0].addr_bytes);
- if (rte_eth_promiscuous_get(eth_dev->data->port_id) == 1) {
- accept_flags = QED_FILTER_RX_MODE_TYPE_PROMISC;
- } else {
- rc = qede_set_ucast_rx_mac(qdev, QED_FILTER_XCAST_TYPE_ADD,
- eth_dev->data->
- mac_addrs[0].addr_bytes);
- if (rc) {
- DP_ERR(edev, "Unable to add filter\n");
- return;
- }
- }
-
- /* take care of VLAN mode */
- if (rte_eth_promiscuous_get(eth_dev->data->port_id) == 1) {
- qede_config_accept_any_vlan(qdev, true);
- } else if (!qdev->non_configured_vlans) {
- /* If we dont have non-configured VLANs and promisc
- * is not set, then check if we need to disable
- * accept_any_vlan mode.
- * Because in this case, accept_any_vlan mode is set
- * as part of IFF_RPOMISC flag handling.
- */
- qede_config_accept_any_vlan(qdev, false);
- }
- rx_mode.filter.accept_flags = accept_flags;
- rc = qdev->ops->filter_config(edev, &rx_mode);
- if (rc)
- DP_ERR(edev, "Filter config failed rc=%d\n", rc);
-}
-
static int qede_vlan_stripping(struct rte_eth_dev *eth_dev, bool set_stripping)
{
struct qed_update_vport_params vport_update_params;
@@ -488,11 +442,39 @@ static int qede_vlan_filter_set(struct rte_eth_dev *eth_dev,
return rc;
}
+static int qede_init_vport(struct qede_dev *qdev)
+{
+ struct ecore_dev *edev = &qdev->edev;
+ struct qed_start_vport_params start = {0};
+ int rc;
+
+ start.remove_inner_vlan = 1;
+ start.gro_enable = 0;
+ start.mtu = ETHER_MTU + QEDE_ETH_OVERHEAD;
+ start.vport_id = 0;
+ start.drop_ttl0 = false;
+ start.clear_stats = 1;
+ start.handle_ptp_pkts = 0;
+
+ rc = qdev->ops->vport_start(edev, &start);
+ if (rc) {
+ DP_ERR(edev, "Start V-PORT failed %d\n", rc);
+ return rc;
+ }
+
+ DP_INFO(edev,
+ "Start vport ramrod passed, vport_id = %d, MTU = %u\n",
+ start.vport_id, ETHER_MTU);
+
+ return 0;
+}
+
static int qede_dev_configure(struct rte_eth_dev *eth_dev)
{
struct qede_dev *qdev = eth_dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
struct rte_eth_rxmode *rxmode = &eth_dev->data->dev_conf.rxmode;
+ int rc;
PMD_INIT_FUNC_TRACE(edev);
@@ -517,11 +499,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
qdev->fp_num_rx = eth_dev->data->nb_rx_queues;
qdev->num_queues = qdev->fp_num_tx + qdev->fp_num_rx;
- /* Initial state */
- qdev->state = QEDE_CLOSE;
-
/* Sanity checks and throw warnings */
-
if (rxmode->enable_scatter == 1) {
DP_ERR(edev, "RX scatter packets is not supported\n");
return -EINVAL;
@@ -539,16 +517,33 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
DP_INFO(edev, "IP/UDP/TCP checksum offload is always enabled "
"in hw\n");
+ /* Check for the port restart case */
+ if (qdev->state != QEDE_DEV_INIT) {
+ rc = qdev->ops->vport_stop(edev, 0);
+ if (rc != 0)
+ return rc;
+ qede_dealloc_fp_resc(eth_dev);
+ }
+
+ /* Fastpath status block should be initialized before sending
+ * VPORT-START in the case of VF. Anyway, do it for both VF/PF.
+ */
+ rc = qede_alloc_fp_resc(qdev);
+ if (rc != 0)
+ return rc;
+
+ /* Issue VPORT-START with default config values to allow
+ * other port configurations early on.
+ */
+ rc = qede_init_vport(qdev);
+ if (rc != 0)
+ return rc;
- DP_INFO(edev, "Allocated %d RSS queues on %d TC/s\n",
- QEDE_RSS_CNT(qdev), qdev->num_tc);
+ /* Add primary mac for PF */
+ if (IS_PF(edev))
+ qede_mac_addr_set(eth_dev, &qdev->primary_mac);
- DP_INFO(edev, "my_id %u rel_pf_id %u abs_pf_id %u"
- " port %u first_on_engine %d\n",
- edev->hwfns[0].my_id,
- edev->hwfns[0].rel_pf_id,
- edev->hwfns[0].abs_pf_id,
- edev->hwfns[0].port_id, edev->hwfns[0].first_on_engine);
+ qdev->state = QEDE_DEV_CONFIG;
return 0;
}
@@ -719,8 +714,9 @@ static void qede_poll_sp_sb_cb(void *param)
static void qede_dev_close(struct rte_eth_dev *eth_dev)
{
- struct qede_dev *qdev = eth_dev->data->dev_private;
- struct ecore_dev *edev = &qdev->edev;
+ struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+ struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+ int rc;
PMD_INIT_FUNC_TRACE(edev);
@@ -729,16 +725,16 @@ static void qede_dev_close(struct rte_eth_dev *eth_dev)
* by the app without reconfiguration. However, in dev_close() we
* can release all the resources and device can be brought up newly
*/
- if (qdev->state != QEDE_STOP)
+ if (qdev->state != QEDE_DEV_STOP)
qede_dev_stop(eth_dev);
else
DP_INFO(edev, "Device is already stopped\n");
- qede_free_mem_load(qdev);
-
- qede_free_fp_arrays(qdev);
+ rc = qdev->ops->vport_stop(edev, 0);
+ if (rc != 0)
+ DP_ERR(edev, "Failed to stop VPORT\n");
- qede_dev_set_link_state(eth_dev, false);
+ qede_dealloc_fp_resc(eth_dev);
qdev->ops->common->slowpath_stop(edev);
@@ -752,7 +748,7 @@ static void qede_dev_close(struct rte_eth_dev *eth_dev)
if (edev->num_hwfns > 1)
rte_eal_alarm_cancel(qede_poll_sp_sb_cb, (void *)eth_dev);
- qdev->state = QEDE_CLOSE;
+ qdev->state = QEDE_DEV_INIT; /* Go back to init state */
}
static void
@@ -1388,6 +1384,8 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
do_once = false;
}
+ adapter->state = QEDE_DEV_INIT;
+
DP_NOTICE(edev, false, "MAC address : %02x:%02x:%02x:%02x:%02x:%02x\n",
adapter->primary_mac.addr_bytes[0],
adapter->primary_mac.addr_bytes[1],
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 1f62283..f0bb720 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -109,10 +109,11 @@
extern char fw_file[];
/* Port/function states */
-enum dev_state {
- QEDE_START,
- QEDE_STOP,
- QEDE_CLOSE
+enum qede_dev_state {
+ QEDE_DEV_INIT, /* Init the chip and Slowpath */
+ QEDE_DEV_CONFIG, /* Create Vport/Fastpath resources */
+ QEDE_DEV_START, /* Start RX/TX queues, enable traffic */
+ QEDE_DEV_STOP, /* Deactivate vport and stop traffic */
};
struct qed_int_param {
@@ -148,7 +149,7 @@ struct qede_dev {
uint8_t fp_num_tx;
uint8_t fp_num_rx;
- enum dev_state state;
+ enum qede_dev_state state;
/* Vlans */
osal_list_t vlan_list;
@@ -165,6 +166,5 @@ struct qede_dev {
int qed_fill_eth_dev_info(struct ecore_dev *edev,
struct qed_dev_eth_info *info);
int qede_dev_set_link_state(struct rte_eth_dev *eth_dev, bool link_up);
-void qede_config_rx_mode(struct rte_eth_dev *eth_dev);
#endif /* _QEDE_ETHDEV_H_ */
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 00584a9..e3409a9 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -324,11 +324,10 @@ qede_tx_queue_setup(struct rte_eth_dev *dev,
}
/* This function inits fp content and resets the SB, RXQ and TXQ arrays */
-static void qede_init_fp(struct rte_eth_dev *eth_dev)
+static void qede_init_fp(struct qede_dev *qdev)
{
struct qede_fastpath *fp;
- uint8_t i, rss_id, index, tc;
- struct qede_dev *qdev = eth_dev->data->dev_private;
+ uint8_t i, rss_id, tc;
int fp_rx = qdev->fp_num_rx, rxq = 0, txq = 0;
memset((void *)qdev->fp_array, 0, (QEDE_QUEUE_CNT(qdev) *
@@ -343,30 +342,9 @@ static void qede_init_fp(struct rte_eth_dev *eth_dev)
} else{
fp->type = QEDE_FASTPATH_TX;
}
- }
-
- for_each_queue(i) {
- fp = &qdev->fp_array[i];
fp->qdev = qdev;
fp->id = i;
-
- /* Point rxq to generic rte queues that was created
- * as part of queue creation.
- */
- if (fp->type & QEDE_FASTPATH_RX) {
- fp->rxq = eth_dev->data->rx_queues[i];
- fp->rxq->queue_id = rxq++;
- }
fp->sb_info = &qdev->sb_array[i];
-
- if (fp->type & QEDE_FASTPATH_TX) {
- for (tc = 0; tc < qdev->num_tc; tc++) {
- index = tc * QEDE_TSS_CNT(qdev) + txq;
- fp->txqs[tc] = eth_dev->data->tx_queues[index];
- fp->txqs[tc]->queue_id = index;
- }
- txq++;
- }
snprintf(fp->name, sizeof(fp->name), "%s-fp-%d", "qdev", i);
}
@@ -444,6 +422,39 @@ qede_alloc_mem_sb(struct qede_dev *qdev, struct ecore_sb_info *sb_info,
return 0;
}
+int qede_alloc_fp_resc(struct qede_dev *qdev)
+{
+ struct qede_fastpath *fp;
+ int rc, i;
+
+ if (qdev->fp_array)
+ qede_free_fp_arrays(qdev);
+
+ rc = qede_alloc_fp_array(qdev);
+ if (rc != 0)
+ return rc;
+
+ qede_init_fp(qdev);
+
+ for (i = 0; i < QEDE_QUEUE_CNT(qdev); i++) {
+ fp = &qdev->fp_array[i];
+ if (qede_alloc_mem_sb(qdev, fp->sb_info, i)) {
+ qede_free_fp_arrays(qdev);
+ return -ENOMEM;
+ }
+ }
+
+ return 0;
+}
+
+void qede_dealloc_fp_resc(struct rte_eth_dev *eth_dev)
+{
+ struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+
+ qede_free_mem_load(eth_dev);
+ qede_free_fp_arrays(qdev);
+}
+
static inline void
qede_update_rx_prod(struct qede_dev *edev, struct qede_rx_queue *rxq)
{
@@ -564,43 +575,21 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
struct qed_update_vport_rss_params *rss_params = &qdev->rss_params;
struct qed_dev_info *qed_info = &qdev->dev_info.common;
struct qed_update_vport_params vport_update_params;
- struct qed_start_vport_params start = { 0 };
+ struct qede_tx_queue *txq;
+ struct qede_fastpath *fp;
+ dma_addr_t p_phys_table;
+ int txq_index;
+ uint16_t page_cnt;
int vlan_removal_en = 1;
int rc, tc, i;
- if (!qdev->fp_num_rx) {
- DP_ERR(edev,
- "Cannot update V-VPORT as active as "
- "there are no Rx queues\n");
- return -EINVAL;
- }
-
- start.remove_inner_vlan = vlan_removal_en;
- start.gro_enable = !qdev->gro_disable;
- start.mtu = qdev->mtu;
- start.vport_id = 0;
- start.drop_ttl0 = true;
- start.clear_stats = clear_stats;
-
- rc = qdev->ops->vport_start(edev, &start);
- if (rc) {
- DP_ERR(edev, "Start V-PORT failed %d\n", rc);
- return rc;
- }
-
- DP_INFO(edev,
- "Start vport ramrod passed, vport_id = %d,"
- " MTU = %d, vlan_removal_en = %d\n",
- start.vport_id, qdev->mtu, vlan_removal_en);
-
for_each_queue(i) {
- struct qede_fastpath *fp = &qdev->fp_array[i];
- dma_addr_t tbl;
- uint16_t cnt;
-
+ fp = &qdev->fp_array[i];
if (fp->type & QEDE_FASTPATH_RX) {
- tbl = ecore_chain_get_pbl_phys(&fp->rxq->rx_comp_ring);
- cnt = ecore_chain_get_page_cnt(&fp->rxq->rx_comp_ring);
+ p_phys_table = ecore_chain_get_pbl_phys(&fp->rxq->
+ rx_comp_ring);
+ page_cnt = ecore_chain_get_page_cnt(&fp->rxq->
+ rx_comp_ring);
ecore_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);
@@ -610,17 +599,17 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
RX_PI,
fp->rxq->rx_buf_size,
fp->rxq->rx_bd_ring.p_phys_addr,
- tbl,
- cnt,
- &fp->rxq->hw_rxq_prod_addr);
+ p_phys_table,
+ page_cnt,
+ &fp->rxq->hw_rxq_prod_addr);
if (rc) {
- DP_ERR(edev,
- "Start rxq #%d failed %d\n",
+ DP_ERR(edev, "Start rxq #%d failed %d\n",
fp->rxq->queue_id, rc);
- return rc;
- }
+ return rc;
+ }
- fp->rxq->hw_cons_ptr = &fp->sb_info->sb_virt->pi_array[RX_PI];
+ fp->rxq->hw_cons_ptr =
+ &fp->sb_info->sb_virt->pi_array[RX_PI];
qede_update_rx_prod(qdev, fp->rxq);
}
@@ -628,16 +617,16 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
if (!(fp->type & QEDE_FASTPATH_TX))
continue;
for (tc = 0; tc < qdev->num_tc; tc++) {
- struct qede_tx_queue *txq = fp->txqs[tc];
- int txq_index = tc * QEDE_RSS_CNT(qdev) + i;
+ txq = fp->txqs[tc];
+ txq_index = tc * QEDE_RSS_CNT(qdev) + i;
- tbl = ecore_chain_get_pbl_phys(&txq->tx_pbl);
- cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
+ p_phys_table = ecore_chain_get_pbl_phys(&txq->tx_pbl);
+ page_cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
rc = qdev->ops->q_tx_start(edev, i, txq->queue_id,
0,
fp->sb_info->igu_sb_id,
TX_PI(tc),
- tbl, cnt,
+ p_phys_table, page_cnt,
&txq->doorbell_addr);
if (rc) {
DP_ERR(edev, "Start txq %u failed %d\n",
@@ -661,7 +650,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
/* Prepare and send the vport enable */
memset(&vport_update_params, 0, sizeof(vport_update_params));
- vport_update_params.vport_id = start.vport_id;
+ vport_update_params.vport_id = 0;
vport_update_params.update_vport_active_flg = 1;
vport_update_params.vport_active_flg = 1;
@@ -1116,6 +1105,32 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
return nb_pkt_sent;
}
+static void qede_init_fp_queue(struct rte_eth_dev *eth_dev)
+{
+ struct qede_dev *qdev = eth_dev->data->dev_private;
+ struct qede_fastpath *fp;
+ uint8_t i, rss_id, txq_index, tc;
+ int rxq = 0, txq = 0;
+
+ for_each_queue(i) {
+ fp = &qdev->fp_array[i];
+ if (fp->type & QEDE_FASTPATH_RX) {
+ fp->rxq = eth_dev->data->rx_queues[i];
+ fp->rxq->queue_id = rxq++;
+ }
+
+ if (fp->type & QEDE_FASTPATH_TX) {
+ for (tc = 0; tc < qdev->num_tc; tc++) {
+ txq_index = tc * QEDE_TSS_CNT(qdev) + txq;
+ fp->txqs[tc] =
+ eth_dev->data->tx_queues[txq_index];
+ fp->txqs[tc]->queue_id = txq_index;
+ }
+ txq++;
+ }
+ }
+}
+
int qede_dev_start(struct rte_eth_dev *eth_dev)
{
struct qede_dev *qdev = eth_dev->data->dev_private;
@@ -1126,47 +1141,34 @@ int qede_dev_start(struct rte_eth_dev *eth_dev)
DP_INFO(edev, "Device state is %d\n", qdev->state);
- switch (qdev->state) {
- case QEDE_START:
- DP_INFO(edev, "Device already started\n");
+ if (qdev->state == QEDE_DEV_START) {
+ DP_INFO(edev, "Port is already started\n");
return 0;
- case QEDE_CLOSE:
- if (qede_alloc_fp_array(qdev))
- return -ENOMEM;
- qede_init_fp(eth_dev);
- /* Fall-thru */
- case QEDE_STOP:
- for (i = 0; i < QEDE_QUEUE_CNT(qdev); i++) {
- fp = &qdev->fp_array[i];
- if (qede_alloc_mem_sb(qdev, fp->sb_info, i))
- return -ENOMEM;
- }
- break;
- default:
- DP_INFO(edev, "Unknown state for port %u\n",
- eth_dev->data->port_id);
- return -EINVAL;
}
+ if (qdev->state == QEDE_DEV_CONFIG)
+ qede_init_fp_queue(eth_dev);
+
rc = qede_start_queues(eth_dev, true);
if (rc) {
DP_ERR(edev, "Failed to start queues\n");
/* TBD: free */
return rc;
}
- DP_INFO(edev, "Allocated %d RSS queues on %d TC/s\n",
- QEDE_RSS_CNT(qdev), qdev->num_tc);
/* Bring-up the link */
qede_dev_set_link_state(eth_dev, true);
- qdev->state = QEDE_START;
- qede_config_rx_mode(eth_dev);
- /* Init the queues */
+ /* Reset ring */
if (qede_reset_fp_rings(qdev))
return -ENOMEM;
- DP_INFO(edev, "dev_state is QEDE_START\n");
+ /* Start/resume traffic */
+ qdev->ops->fastpath_start(edev);
+
+ qdev->state = QEDE_DEV_START;
+
+ DP_INFO(edev, "dev_state is QEDE_DEV_START\n");
return 0;
}
@@ -1222,7 +1224,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
vport_update_params.vport_active_flg = 0;
vport_update_params.update_rss_flg = 0;
- DP_INFO(edev, "vport_update\n");
+ DP_INFO(edev, "Deactivate vport\n");
rc = qdev->ops->vport_update(edev, &vport_update_params);
if (rc) {
@@ -1288,14 +1290,7 @@ static int qede_stop_queues(struct qede_dev *qdev)
}
}
- DP_INFO(edev, "Stopping vports\n");
-
- /* Stop the vport */
- rc = qdev->ops->vport_stop(edev, 0);
- if (rc)
- DP_ERR(edev, "Failed to stop VPORT\n");
-
- return rc;
+ return 0;
}
int qede_reset_fp_rings(struct qede_dev *qdev)
@@ -1306,15 +1301,17 @@ int qede_reset_fp_rings(struct qede_dev *qdev)
uint16_t id, i;
for_each_queue(id) {
- DP_INFO(&qdev->edev, "Reset FP chain for RSS %u\n", id);
fp = &qdev->fp_array[id];
if (fp->type & QEDE_FASTPATH_RX) {
+ DP_INFO(&qdev->edev,
+ "Reset FP chain for RSS %u\n", id);
qede_rx_queue_release_mbufs(fp->rxq);
ecore_chain_reset(&fp->rxq->rx_bd_ring);
ecore_chain_reset(&fp->rxq->rx_comp_ring);
fp->rxq->sw_rx_prod = 0;
fp->rxq->sw_rx_cons = 0;
+ *fp->rxq->hw_cons_ptr = 0;
for (i = 0; i < fp->rxq->nb_rx_desc; i++) {
if (qede_alloc_rx_buffer(fp->rxq)) {
DP_ERR(&qdev->edev,
@@ -1329,6 +1326,7 @@ int qede_reset_fp_rings(struct qede_dev *qdev)
ecore_chain_reset(&txq->tx_pbl);
txq->sw_tx_cons = 0;
txq->sw_tx_prod = 0;
+ *txq->hw_cons_ptr = 0;
}
}
}
@@ -1337,24 +1335,30 @@ int qede_reset_fp_rings(struct qede_dev *qdev)
}
/* This function frees all memory of a single fp */
-static void qede_free_mem_fp(struct qede_dev *qdev, struct qede_fastpath *fp)
+static void qede_free_mem_fp(struct rte_eth_dev *eth_dev,
+ struct qede_fastpath *fp)
{
+ struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
uint8_t tc;
qede_rx_queue_release(fp->rxq);
- for (tc = 0; tc < qdev->num_tc; tc++)
+ for (tc = 0; tc < qdev->num_tc; tc++) {
qede_tx_queue_release(fp->txqs[tc]);
+ eth_dev->data->tx_queues[tc] = NULL;
+ }
}
-void qede_free_mem_load(struct qede_dev *qdev)
+void qede_free_mem_load(struct rte_eth_dev *eth_dev)
{
+ struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
+ struct qede_fastpath *fp;
uint8_t rss_id;
for_each_queue(rss_id) {
- struct qede_fastpath *fp = &qdev->fp_array[rss_id];
- qede_free_mem_fp(qdev, fp);
+ fp = &qdev->fp_array[rss_id];
+ qede_free_mem_fp(eth_dev, fp);
+ eth_dev->data->rx_queues[rss_id] = NULL;
}
- /* qdev->num_rss = 0; */
}
void qede_dev_stop(struct rte_eth_dev *eth_dev)
@@ -1364,7 +1368,7 @@ void qede_dev_stop(struct rte_eth_dev *eth_dev)
DP_INFO(edev, "port %u\n", eth_dev->data->port_id);
- if (qdev->state != QEDE_START) {
+ if (qdev->state != QEDE_DEV_START) {
DP_INFO(edev, "Device not yet started\n");
return;
}
@@ -1379,7 +1383,7 @@ void qede_dev_stop(struct rte_eth_dev *eth_dev)
/* Bring the link down */
qede_dev_set_link_state(eth_dev, false);
- qdev->state = QEDE_STOP;
+ qdev->state = QEDE_DEV_STOP;
- DP_INFO(edev, "dev_state is QEDE_STOP\n");
+ DP_INFO(edev, "dev_state is QEDE_DEV_STOP\n");
}
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index a8520ed..9ee69ed 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -173,7 +173,7 @@ int qede_reset_fp_rings(struct qede_dev *qdev);
void qede_free_fp_arrays(struct qede_dev *qdev);
-void qede_free_mem_load(struct qede_dev *qdev);
+void qede_free_mem_load(struct rte_eth_dev *eth_dev);
uint16_t qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
@@ -181,4 +181,9 @@ uint16_t qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts,
uint16_t qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts,
uint16_t nb_pkts);
+/* Fastpath resource alloc/dealloc helpers */
+int qede_alloc_fp_resc(struct qede_dev *qdev);
+
+void qede_dealloc_fp_resc(struct rte_eth_dev *eth_dev);
+
#endif /* _QEDE_RXTX_H_ */
--
1.8.3.1
* [dpdk-dev] [PATCH v4 17/32] net/qede/base: allow MTU change via vport-update
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (15 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 16/32] net/qede: fix port (re)configuration issue Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 18/32] net/qede: add missing 100G link speed capability Rasesh Mody
` (15 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
Add support in the qede/base driver to allow changing the MTU of
a deactivated vport via a vport-update ramrod, and have the core
driver make use of it.
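Usage amounts to populating the new field before the vport-update
call; a minimal sketch (names follow the diff, the snippet is
illustrative rather than verbatim driver code):
/* Sketch: a non-zero mtu in the update parameters sets
 * update_mtu_flg in the vport-update ramrod; the vport must be
 * deactivated when this is sent.
 */
struct qed_update_vport_params params;
int rc;

memset(&params, 0, sizeof(params));
params.vport_id = 0;
params.mtu = qdev->mtu;	/* 0 leaves the MTU unchanged */
rc = qdev->ops->vport_update(edev, &params);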
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/base/ecore_l2.c | 5 +++++
drivers/net/qede/base/ecore_l2_api.h | 4 ++++
drivers/net/qede/qede_eth_if.c | 1 +
drivers/net/qede/qede_eth_if.h | 1 +
drivers/net/qede/qede_rxtx.c | 2 ++
5 files changed, 13 insertions(+)
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 5a38ad2..83a62e0 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -427,6 +427,11 @@ ecore_sp_vport_update(struct ecore_hwfn *p_hwfn,
ecore_sp_update_accept_mode(p_hwfn, p_ramrod, p_params->accept_flags);
ecore_sp_vport_update_sge_tpa(p_hwfn, p_ramrod,
p_params->sge_tpa_params);
+ if (p_params->mtu) {
+ p_ramrod->common.update_mtu_flg = 1;
+ p_ramrod->common.mtu = OSAL_CPU_TO_LE16(p_params->mtu);
+ }
+
return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
}
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 09688eb..447d1fb 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -322,6 +322,10 @@ struct ecore_sp_vport_update_params {
struct ecore_rss_params *rss_params;
struct ecore_filter_accept_flags accept_flags;
struct ecore_sge_tpa_params *sge_tpa_params;
+ /* MTU change - notice this requires the vport to be disabled.
+ * If non-zero, value would be used.
+ */
+ u16 mtu;
};
/**
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index e108af1..9855d0e 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -92,6 +92,7 @@ qed_update_vport(struct ecore_dev *edev, struct qed_update_vport_params *params)
sp_params.accept_any_vlan = params->accept_any_vlan;
sp_params.update_accept_any_vlan_flg =
params->update_accept_any_vlan_flg;
+ sp_params.mtu = params->mtu;
/* RSS - is a bit tricky, since upper-layer isn't familiar with hwfns.
* We need to re-fix the rss values per engine for CMT.
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 299a2aa..7840a37 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -75,6 +75,7 @@ struct qed_update_vport_params {
uint8_t accept_any_vlan;
uint8_t update_rss_flg;
struct qed_update_vport_rss_params rss_params;
+ uint16_t mtu;
};
struct qed_start_vport_params {
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index e3409a9..6973d1c 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -650,6 +650,8 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
/* Prepare and send the vport enable */
memset(&vport_update_params, 0, sizeof(vport_update_params));
+ /* Update MTU via vport update */
+ vport_update_params.mtu = qdev->mtu;
vport_update_params.vport_id = 0;
vport_update_params.update_vport_active_flg = 1;
vport_update_params.vport_active_flg = 1;
--
1.8.3.1
* [dpdk-dev] [PATCH v4 18/32] net/qede: add missing 100G link speed capability
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (16 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 17/32] net/qede/base: allow MTU change via vport-update Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-26 15:41 ` Thomas Monjalon
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 19/32] net/qede: remove unused/dead code Rasesh Mody
` (14 subsequent siblings)
32 siblings, 1 reply; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
This patch adds the 100G link speed capability to the advertised
speeds; it was missed when 100G support was initially added.
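With this in place an application sees the capability through the
standard dev-info query; a minimal sketch:
#include <stdio.h>
#include <rte_ethdev.h>

/* Sketch: check whether port 0 advertises 100G capability */
struct rte_eth_dev_info dev_info;

rte_eth_dev_info_get(0, &dev_info);
if (dev_info.speed_capa & ETH_LINK_SPEED_100G)
	printf("port 0 is 100G capable\n");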
Fixes: 2af14ca79c0a ("net/qede: support 100G")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
doc/guides/nics/features/qede.ini | 1 +
doc/guides/nics/features/qede_vf.ini | 1 +
drivers/net/qede/qede_ethdev.c | 3 ++-
3 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index 7690773..1d28a23 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
Link status = Y
Link status event = Y
MTU update = Y
diff --git a/doc/guides/nics/features/qede_vf.ini b/doc/guides/nics/features/qede_vf.ini
index aeb20d2..b4eba0c 100644
--- a/doc/guides/nics/features/qede_vf.ini
+++ b/doc/guides/nics/features/qede_vf.ini
@@ -4,6 +4,7 @@
; Refer to default.ini for the full list of available PMD features.
;
[Features]
+Speed capabilities = Y
Link status = Y
Link status event = Y
MTU update = Y
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index ec5e8a2..5da540a 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -599,7 +599,8 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM);
- dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+ dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
+ ETH_LINK_SPEED_100G;
}
/* return 0 means link status changed, -1 means not changed */
--
1.8.3.1
* [dpdk-dev] [PATCH v4 19/32] net/qede: remove unused/dead code
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (17 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 18/32] net/qede: add missing 100G link speed capability Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 20/32] net/qede: fixes for VLAN filters Rasesh Mody
` (13 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil, Rasesh Mody
From: Harish Patil <harish.patil@qlogic.com>
Fixes: 2ea6f76aff40 ("qede: add core driver")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/base/ecore_iov_api.h | 312 ----------------------------------
drivers/net/qede/qede_eth_if.c | 10 --
drivers/net/qede/qede_ethdev.c | 3 -
drivers/net/qede/qede_ethdev.h | 12 --
drivers/net/qede/qede_if.h | 9 -
5 files changed, 346 deletions(-)
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index 14f3f47..bb8df82 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -680,318 +680,6 @@ int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid);
*/
enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
int vfid, u32 rate);
-#else
-static OSAL_INLINE void ecore_iov_set_vfs_to_disable(struct ecore_hwfn *p_hwfn,
- u8 to_disable)
-{
-}
-
-static OSAL_INLINE void ecore_iov_set_vf_to_disable(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id,
- u8 to_disable)
-{
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_iov_init_hw_for_vf(struct
- ecore_hwfn
- * p_hwfn,
- struct
- ecore_ptt
- * p_ptt,
- u16 rel_vf_id,
- u16
- num_rx_queues)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE void ecore_iov_process_mbx_req(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- int vfid)
-{
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_iov_release_hw_for_vf(struct
- ecore_hwfn
- * p_hwfn,
- struct
- ecore_ptt
- * p_ptt,
- u16
- rel_vf_id)
-{
- return ECORE_SUCCESS;
-}
-
-#ifndef LINUX_REMOVE
-static OSAL_INLINE enum _ecore_status_t ecore_iov_set_vf_ctx(struct ecore_hwfn
- *p_hwfn, u16 vf_id,
- void *ctx)
-{
- return ECORE_INVAL;
-}
-#endif
-static OSAL_INLINE enum _ecore_status_t ecore_iov_vf_flr_cleanup(struct
- ecore_hwfn
- * p_hwfn,
- struct
- ecore_ptt
- * p_ptt)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_iov_single_vf_flr_cleanup(
- struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt, u16 rel_vf_id)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE void ecore_iov_set_link(struct ecore_hwfn *p_hwfn, u16 vfid,
- struct ecore_mcp_link_params *params,
- struct ecore_mcp_link_state *link,
- struct ecore_mcp_link_capabilities
- *p_caps)
-{
-}
-
-static OSAL_INLINE void ecore_iov_get_link(struct ecore_hwfn *p_hwfn, u16 vfid,
- struct ecore_mcp_link_params *params,
- struct ecore_mcp_link_state *link,
- struct ecore_mcp_link_capabilities
- *p_caps)
-{
-}
-
-static OSAL_INLINE bool ecore_iov_is_vf_pending_flr(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- return false;
-}
-
-static OSAL_INLINE bool ecore_iov_is_valid_vfid(struct ecore_hwfn *p_hwfn,
- int rel_vf_id,
- bool b_enabled_only)
-{
- return false;
-}
-
-static OSAL_INLINE struct ecore_public_vf_info *
- ecore_iov_get_public_vf_info(struct ecore_hwfn *p_hwfn, u16 vfid,
- bool b_enabled_only)
-{
- return OSAL_NULL;
-}
-
-static OSAL_INLINE void ecore_iov_pf_add_pending_events(struct ecore_hwfn
- *p_hwfn, u8 vfid)
-{
-}
-
-static OSAL_INLINE void ecore_iov_pf_get_and_clear_pending_events(struct
- ecore_hwfn
- * p_hwfn,
- u64 *events)
-{
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_iov_copy_vf_msg(struct ecore_hwfn
- *p_hwfn,
- struct ecore_ptt
- *ptt, int vfid)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE void ecore_iov_bulletin_set_forced_mac(struct ecore_hwfn
- *p_hwfn, u8 *mac,
- int vfid)
-{
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_iov_bulletin_set_mac(struct
- ecore_hwfn
- * p_hwfn,
- u8 *mac,
- int vfid)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE void ecore_iov_bulletin_set_forced_vlan(struct ecore_hwfn
- p_hwfn, u16 pvid,
- int vfid)
-{
-}
-
-static OSAL_INLINE void ecore_iov_get_vfs_opaque_fid(struct ecore_hwfn *p_hwfn,
- int vfid, u16 *opaque_fid)
-{
-}
-
-static OSAL_INLINE void ecore_iov_get_vfs_vport_id(struct ecore_hwfn *p_hwfn,
- int vfid, u8 *p_vport_id)
-{
-}
-
-static OSAL_INLINE bool ecore_iov_vf_has_vport_instance(struct ecore_hwfn
- *p_hwfn, int vfid)
-{
- return false;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_iov_post_vf_bulletin(struct
- ecore_hwfn
- * p_hwfn,
- int vfid,
- struct
- ecore_ptt
- * p_ptt)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE bool ecore_iov_is_vf_stopped(struct ecore_hwfn *p_hwfn,
- int vfid)
-{
- return false;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_iov_spoofchk_set(struct ecore_hwfn
- *p_hwfn,
- int vfid,
- bool val)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE bool ecore_iov_spoofchk_get(struct ecore_hwfn *p_hwfn,
- int vfid)
-{
- return false;
-}
-
-static OSAL_INLINE bool ecore_iov_pf_sanity_check(struct ecore_hwfn *p_hwfn,
- int vfid)
-{
- return false;
-}
-
-static OSAL_INLINE u8 ecore_iov_vf_chains_per_pf(struct ecore_hwfn *p_hwfn)
-{
- return 0;
-}
-
-static OSAL_INLINE void ecore_iov_get_vf_req_virt_mbx_params(struct ecore_hwfn
- *p_hwfn,
- u16 rel_vf_id,
- void
- **pp_req_virt_addr,
- u16 *
- p_req_virt_size)
-{
-}
-
-static OSAL_INLINE void ecore_iov_get_vf_reply_virt_mbx_params(struct ecore_hwfn
- *p_hwfn,
- u16 rel_vf_id,
- void
- **pp_reply_virt_addr,
- u16 *
- p_reply_virt_size)
-{
-}
-
-static OSAL_INLINE bool ecore_iov_is_valid_vfpf_msg_length(u32 length)
-{
- return false;
-}
-
-static OSAL_INLINE u32 ecore_iov_pfvf_msg_length(void)
-{
- return 0;
-}
-
-static OSAL_INLINE u8 *ecore_iov_bulletin_get_forced_mac(struct ecore_hwfn
- *p_hwfn, u16 rel_vf_id)
-{
- return OSAL_NULL;
-}
-
-static OSAL_INLINE u16 ecore_iov_bulletin_get_forced_vlan(struct ecore_hwfn
- *p_hwfn,
- u16 rel_vf_id)
-{
- return 0;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_iov_configure_tx_rate(struct
- ecore_hwfn
- * p_hwfn,
- struct
- ecore_ptt
- * p_ptt,
- int vfid,
- int val)
-{
- return ECORE_INVAL;
-}
-
-static OSAL_INLINE u8 ecore_iov_get_vf_num_rxqs(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- return 0;
-}
-
-static OSAL_INLINE u8 ecore_iov_get_vf_num_active_rxqs(struct ecore_hwfn
- *p_hwfn, u16 rel_vf_id)
-{
- return 0;
-}
-
-static OSAL_INLINE void *ecore_iov_get_vf_ctx(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- return OSAL_NULL;
-}
-
-static OSAL_INLINE u8 ecore_iov_get_vf_num_sbs(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- return 0;
-}
-
-static OSAL_INLINE bool ecore_iov_is_vf_wait_for_acquire(struct ecore_hwfn
- *p_hwfn, u16 rel_vf_id)
-{
- return false;
-}
-
-static OSAL_INLINE bool ecore_iov_is_vf_acquired_not_initialized(struct
- ecore_hwfn
- * p_hwfn,
- u16 rel_vf_id)
-{
- return false;
-}
-
-static OSAL_INLINE bool ecore_iov_is_vf_initialized(struct ecore_hwfn *p_hwfn,
- u16 rel_vf_id)
-{
- return false;
-}
-
-static OSAL_INLINE int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn,
- int vfid)
-{
- return 0;
-}
-
-static OSAL_INLINE enum _ecore_status_t ecore_iov_configure_min_tx_rate(
- struct ecore_dev *p_dev, int vfid, u32 rate)
-{
- return ECORE_INVAL;
-}
#endif
/**
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index 9855d0e..bf41390 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -459,16 +459,6 @@ static const struct qed_eth_ops qed_eth_ops_pass = {
INIT_STRUCT_FIELD(filter_config, &qed_configure_filter),
};
-uint32_t qed_get_protocol_version(enum qed_protocol protocol)
-{
- switch (protocol) {
- case QED_PROTOCOL_ETH:
- return QED_ETH_INTERFACE_VERSION;
- default:
- return 0;
- }
-}
-
const struct qed_eth_ops *qed_get_eth_ops(void)
{
return &qed_eth_ops_pass;
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 5da540a..f7cc313 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1217,7 +1217,6 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
struct ecore_dev *edev;
struct qed_dev_eth_info dev_info;
struct qed_slowpath_params params;
- uint32_t qed_ver;
static bool do_once = true;
uint8_t bulletin_change;
uint8_t vf_mac[ETHER_ADDR_LEN];
@@ -1253,8 +1252,6 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
rte_eth_copy_pci_info(eth_dev, pci_dev);
- qed_ver = qed_get_protocol_version(QED_PROTOCOL_ETH);
-
qed_ops = qed_get_eth_ops();
if (!qed_ops) {
DP_ERR(edev, "Failed to get qed_eth_ops_pass\n");
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index f0bb720..f2e908c 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -116,18 +116,6 @@ enum qede_dev_state {
QEDE_DEV_STOP, /* Deactivate vport and stop traffic */
};
-struct qed_int_param {
- uint32_t int_mode;
- uint8_t num_vectors;
- uint8_t min_msix_cnt;
-};
-
-struct qed_int_params {
- struct qed_int_param in;
- struct qed_int_param out;
- bool fp_initialized;
-};
-
/*
* Structure to store private data for each port.
*/
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 1b05ff8..935eed8 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -152,13 +152,4 @@ struct qed_common_ops {
uint32_t dp_module, uint8_t dp_level);
};
-/**
- * @brief qed_get_protocol_version
- *
- * @param protocol
- *
- * @return version supported by qed for given protocol driver
- */
-uint32_t qed_get_protocol_version(enum qed_protocol protocol);
-
#endif /* _QEDE_IF_H */
--
1.8.3.1
* [dpdk-dev] [PATCH v4 20/32] net/qede: fixes for VLAN filters
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (18 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 19/32] net/qede: remove unused/dead code Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 21/32] net/qede: add enable/disable VLAN filtering Rasesh Mody
` (12 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
- fix to prevent duplicate VLAN filters
librte_ether does not keep track of the VLAN filters
configured, so it becomes the driver's responsibility to
track them and prevent duplicate filter programming.
The fix is to keep the entries in a singly linked list
and thereby prevent duplicates.
- fix num vlan filters
Fix the number of VLAN filters reported when filling in
the Ethernet device information.
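For illustration, a minimal sketch of the duplicate check this patch
introduces (the struct and list head match the diff below; the helper
name itself is hypothetical and not part of the patch):

#include <sys/queue.h>
#include <stdbool.h>

/* Hypothetical helper mirroring the SLIST walk added in
 * qede_vlan_filter_set(); a hit means the add is a duplicate.
 */
static bool qede_vlan_is_configured(struct qede_dev *qdev, uint16_t vlan_id)
{
	struct qede_vlan_entry *tmp;

	SLIST_FOREACH(tmp, &qdev->vlan_list_head, list)
		if (tmp->vid == vlan_id)
			return true;
	return false;
}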
Fixes: 2ea6f76aff40 ("qede: add core driver")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/qede_eth_if.h | 2 +-
drivers/net/qede/qede_ethdev.c | 67 ++++++++++++++++++++++++++++++++++--------
drivers/net/qede/qede_ethdev.h | 15 +++++-----
drivers/net/qede/qede_main.c | 10 +++++--
4 files changed, 71 insertions(+), 23 deletions(-)
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 7840a37..5a7fdc9 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -46,7 +46,7 @@ struct qed_dev_eth_info {
uint8_t num_tc;
struct ether_addr port_mac;
- uint8_t num_vlan_filters;
+ uint16_t num_vlan_filters;
uint32_t num_mac_addrs;
};
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index f7cc313..b97e3be 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -406,10 +406,11 @@ static int qede_vlan_filter_set(struct rte_eth_dev *eth_dev,
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
struct qed_dev_eth_info *dev_info = &qdev->dev_info;
+ struct qede_vlan_entry *tmp = NULL;
+ struct qede_vlan_entry *vlan;
int rc;
- if (vlan_id != 0 &&
- qdev->configured_vlans == dev_info->num_vlan_filters) {
+ if (qdev->configured_vlans == dev_info->num_vlan_filters) {
DP_NOTICE(edev, false, "Reached max VLAN filter limit"
" enabling accept_any_vlan\n");
qede_config_accept_any_vlan(qdev, true);
@@ -417,28 +418,66 @@ static int qede_vlan_filter_set(struct rte_eth_dev *eth_dev,
}
if (on) {
+ SLIST_FOREACH(tmp, &qdev->vlan_list_head, list) {
+ if (tmp->vid == vlan_id) {
+ DP_ERR(edev, "VLAN %u already configured\n",
+ vlan_id);
+ return -EEXIST;
+ }
+ }
+
+ vlan = rte_malloc(NULL, sizeof(struct qede_vlan_entry),
+ RTE_CACHE_LINE_SIZE);
+
+ if (!vlan) {
+ DP_ERR(edev, "Did not allocate memory for VLAN\n");
+ return -ENOMEM;
+ }
+
rc = qede_set_ucast_rx_vlan(qdev, QED_FILTER_XCAST_TYPE_ADD,
vlan_id);
- if (rc)
+ if (rc) {
DP_ERR(edev, "Failed to add VLAN %u rc %d\n", vlan_id,
rc);
- else
- if (vlan_id != 0)
- qdev->configured_vlans++;
+ rte_free(vlan);
+ } else {
+ vlan->vid = vlan_id;
+ SLIST_INSERT_HEAD(&qdev->vlan_list_head, vlan, list);
+ qdev->configured_vlans++;
+ DP_INFO(edev, "VLAN %u added, configured_vlans %u\n",
+ vlan_id, qdev->configured_vlans);
+ }
} else {
+ SLIST_FOREACH(tmp, &qdev->vlan_list_head, list) {
+ if (tmp->vid == vlan_id)
+ break;
+ }
+
+ if (!tmp) {
+ if (qdev->configured_vlans == 0) {
+ DP_INFO(edev,
+ "No VLAN filters configured yet\n");
+ return 0;
+ }
+
+ DP_ERR(edev, "VLAN %u not configured\n", vlan_id);
+ return -EINVAL;
+ }
+
+ SLIST_REMOVE(&qdev->vlan_list_head, tmp, qede_vlan_entry, list);
+
rc = qede_set_ucast_rx_vlan(qdev, QED_FILTER_XCAST_TYPE_DEL,
vlan_id);
- if (rc)
+ if (rc) {
DP_ERR(edev, "Failed to delete VLAN %u rc %d\n",
vlan_id, rc);
- else
- if (vlan_id != 0)
- qdev->configured_vlans--;
+ } else {
+ qdev->configured_vlans--;
+ DP_INFO(edev, "VLAN %u removed configured_vlans %u\n",
+ vlan_id, qdev->configured_vlans);
+ }
}
- DP_INFO(edev, "vlan_id %u on %u rc %d configured_vlans %u\n",
- vlan_id, on, rc, qdev->configured_vlans);
-
return rc;
}
@@ -517,6 +556,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
DP_INFO(edev, "IP/UDP/TCP checksum offload is always enabled "
"in hw\n");
+ SLIST_INIT(&qdev->vlan_list_head);
+
/* Check for the port restart case */
if (qdev->state != QEDE_DEV_INIT) {
rc = qdev->ops->vport_stop(edev, 0);
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index f2e908c..ed2d41c 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -10,6 +10,8 @@
#ifndef _QEDE_ETHDEV_H_
#define _QEDE_ETHDEV_H_
+#include <sys/queue.h>
+
#include <rte_ether.h>
#include <rte_ethdev.h>
#include <rte_dev.h>
@@ -116,6 +118,11 @@ enum qede_dev_state {
QEDE_DEV_STOP, /* Deactivate vport and stop traffic */
};
+struct qede_vlan_entry {
+ SLIST_ENTRY(qede_vlan_entry) list;
+ uint16_t vid;
+};
+
/*
* Structure to store private data for each port.
*/
@@ -136,16 +143,10 @@ struct qede_dev {
uint16_t num_queues;
uint8_t fp_num_tx;
uint8_t fp_num_rx;
-
enum qede_dev_state state;
-
- /* Vlans */
- osal_list_t vlan_list;
+ SLIST_HEAD(vlan_list_head, qede_vlan_entry)vlan_list_head;
uint16_t configured_vlans;
- uint16_t non_configured_vlans;
bool accept_any_vlan;
- uint16_t vxlan_dst_port;
-
struct ether_addr primary_mac;
bool handle_hw_err;
char drv_ver[QED_DRV_VER_STR_SIZE];
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 60655b7..c83893d 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -391,12 +391,18 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
info->num_tc = 1 /* @@@TBD aelior MULTI_COS */;
if (IS_PF(edev)) {
+ int max_vf_vlan_filters = 0;
+
info->num_queues = 0;
for_each_hwfn(edev, i)
info->num_queues +=
FEAT_NUM(&edev->hwfns[i], ECORE_PF_L2_QUE);
- info->num_vlan_filters = RESC_NUM(&edev->hwfns[0], ECORE_VLAN);
+ if (edev->p_iov_info)
+ max_vf_vlan_filters = edev->p_iov_info->total_vfs *
+ ECORE_ETH_VF_NUM_VLAN_FILTERS;
+ info->num_vlan_filters = RESC_NUM(&edev->hwfns[0], ECORE_VLAN) -
+ max_vf_vlan_filters;
rte_memcpy(&info->port_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
ETHER_ADDR_LEN);
@@ -404,7 +410,7 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
ecore_vf_get_num_rxqs(&edev->hwfns[0], &info->num_queues);
ecore_vf_get_num_vlan_filters(&edev->hwfns[0],
- &info->num_vlan_filters);
+ (u8 *)&info->num_vlan_filters);
ecore_vf_get_port_mac(&edev->hwfns[0],
(uint8_t *)&info->port_mac);
--
1.8.3.1
* [dpdk-dev] [PATCH v4 21/32] net/qede: add enable/disable VLAN filtering
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (19 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 20/32] net/qede: fixes for VLAN filters Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 22/32] net/qede: fix RSS related issues Rasesh Mody
` (11 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
The device doesn't explicitly support enabling/disabling
VLAN filtering. However, VLAN filtering takes effect as
soon as a matching VLAN is configured. So, in order to
support enable/disable of VLAN filtering, VLAN 0 is
added/removed respectively. A check is added to ensure
that the user removes all the configured VLANs before
disabling VLAN filtering.
Also, VLAN offloads are now enabled by default and
vlan_tci_outer is set to 0 for Q-in-Q packets.
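As a rough sketch of the VLAN 0 trick (this mirrors the
qede_vlan_offload_set() hunk below; the wrapper name is hypothetical):

/* Emulate a filtering on/off switch on hardware that only knows
 * individual VLAN filters: VLAN 0 acts as the stand-in filter.
 */
static void qede_vlan_filter_toggle(struct rte_eth_dev *eth_dev,
				    struct qede_dev *qdev, bool enable)
{
	if (enable) {
		/* Adding any VLAN - VLAN 0 here - activates filtering */
		qede_vlan_filter_set(eth_dev, 0, 1);
	} else if (qdev->configured_vlans <= 1) {
		/* Only VLAN 0 remains, so it is safe to switch off */
		qede_vlan_filter_set(eth_dev, 0, 0);
	}
	/* else: user VLANs still configured; filtering stays enabled */
}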
Fixes: 2ea6f76 ("qede: add core driver")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/qede_ethdev.c | 35 ++++++++++++++++++++++++++++++++---
drivers/net/qede/qede_ethdev.h | 4 ++++
drivers/net/qede/qede_rxtx.c | 1 +
3 files changed, 37 insertions(+), 3 deletions(-)
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index b97e3be..4afc905 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -372,16 +372,40 @@ static void qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
+ struct rte_eth_rxmode *rxmode = &eth_dev->data->dev_conf.rxmode;
if (mask & ETH_VLAN_STRIP_MASK) {
- if (eth_dev->data->dev_conf.rxmode.hw_vlan_strip)
+ if (rxmode->hw_vlan_strip)
(void)qede_vlan_stripping(eth_dev, 1);
else
(void)qede_vlan_stripping(eth_dev, 0);
}
- DP_INFO(edev, "vlan offload mask %d vlan-strip %d\n",
- mask, eth_dev->data->dev_conf.rxmode.hw_vlan_strip);
+ if (mask & ETH_VLAN_FILTER_MASK) {
+ /* VLAN filtering kicks in when a VLAN is added */
+ if (rxmode->hw_vlan_filter) {
+ qede_vlan_filter_set(eth_dev, 0, 1);
+ } else {
+ if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
+ DP_NOTICE(edev, false,
+ " Please remove existing VLAN filters"
+ " before disabling VLAN filtering\n");
+ /* Signal app that VLAN filtering is still
+ * enabled
+ */
+ rxmode->hw_vlan_filter = true;
+ } else {
+ qede_vlan_filter_set(eth_dev, 0, 0);
+ }
+ }
+ }
+
+ if (mask & ETH_VLAN_EXTEND_MASK)
+ DP_INFO(edev, "No offloads are supported with VLAN Q-in-Q"
+ " and classification is based on outer tag only\n");
+
+ DP_INFO(edev, "vlan offload mask %d vlan-strip %d vlan-filter %d\n",
+ mask, rxmode->hw_vlan_strip, rxmode->hw_vlan_filter);
}
static int qede_set_ucast_rx_vlan(struct qede_dev *qdev,
@@ -584,6 +608,11 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
if (IS_PF(edev))
qede_mac_addr_set(eth_dev, &qdev->primary_mac);
+ /* Enable VLAN offloads by default */
+ qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK |
+ ETH_VLAN_FILTER_MASK |
+ ETH_VLAN_EXTEND_MASK);
+
qdev->state = QEDE_DEV_CONFIG;
return 0;
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index ed2d41c..526d3be 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -152,6 +152,10 @@ struct qede_dev {
char drv_ver[QED_DRV_VER_STR_SIZE];
};
+/* Static functions */
+static int qede_vlan_filter_set(struct rte_eth_dev *eth_dev,
+ uint16_t vlan_id, int on);
+
int qed_fill_eth_dev_info(struct ecore_dev *edev,
struct qed_dev_eth_info *info);
int qede_dev_set_link_state(struct rte_eth_dev *eth_dev, bool link_up);
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 6973d1c..9df0d133 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -945,6 +945,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
*/
rx_mb->vlan_tci = rte_le_to_cpu_16(fp_cqe->vlan_tag);
rx_mb->ol_flags |= PKT_RX_QINQ_PKT;
+ rx_mb->vlan_tci_outer = 0;
}
rx_pkts[rx_pkt] = rx_mb;
--
1.8.3.1
* [dpdk-dev] [PATCH v4 22/32] net/qede: fix RSS related issues
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (20 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 21/32] net/qede: add enable/disable VLAN filtering Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 23/32] net/qede: add scatter gather support Rasesh Mody
` (10 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
This patch contains a few RSS-related changes:
o Fix inadvertent initialization of rss_params outside the
if block in qed_update_vport(), which could cause a FW exception.
o Fix disabling of RSS when the hash function is 0.
o Rename qede_config_rss() to qede_check_vport_rss_enable()
for better clarity.
o Avoid code duplication by using a helper function,
qede_init_rss_caps().
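A short sketch of how the helper is meant to be shared (the wrapper
below is hypothetical; qede_init_rss_caps() is the function added in
this patch):

/* Translate DPDK rss_hf bits into ECORE capability bits once, for
 * both the hash-update and queue-start paths.
 */
static int qede_rss_caps_from_hf(struct qede_dev *qdev, uint64_t hf,
				 uint8_t *rss_caps)
{
	qede_init_rss_caps(rss_caps, hf);
	if (hf != 0 && *rss_caps == 0)
		return -EINVAL;	/* requested hash types unsupported */
	qdev->rss_enabled = (hf != 0);	/* hf == 0 now disables RSS */
	return 0;
}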
Fixes: 4c98f2768eef ("net/qede: support RSS hash configuration")
Fixes: 2ea6f76aff40 ("qede: add core driver")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
doc/guides/nics/qede.rst | 2 +-
drivers/net/qede/qede_eth_if.c | 2 +-
drivers/net/qede/qede_ethdev.c | 77 +++++++++++++++++++++---------------
drivers/net/qede/qede_ethdev.h | 23 ++++++-----
drivers/net/qede/qede_rxtx.c | 88 +++++++++++++++++-------------------------
5 files changed, 97 insertions(+), 95 deletions(-)
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 4ec75e4..e19c3db 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -51,7 +51,7 @@ Supported Features
- VLAN offload - Filtering and stripping
- Stateless checksum offloads (IPv4/TCP/UDP)
- Multiple Rx/Tx queues
-- RSS (with user configurable table/key)
+- RSS (with RETA/hash table/key)
- TSS
- Multiple MAC address
- Default pause flow control
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index bf41390..a19b22e 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -143,8 +143,8 @@ qed_update_vport(struct ecore_dev *edev, struct qed_update_vport_params *params)
ECORE_RSS_IND_TABLE_SIZE * sizeof(uint16_t));
rte_memcpy(sp_rss_params.rss_key, params->rss_params.rss_key,
ECORE_RSS_KEY_SIZE * sizeof(uint32_t));
+ sp_params.rss_params = &sp_rss_params;
}
- sp_params.rss_params = &sp_rss_params;
for_each_hwfn(edev, i) {
struct ecore_hwfn *p_hwfn = &edev->hwfns[i];
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 4afc905..1c67eac 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -537,7 +537,7 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
struct qede_dev *qdev = eth_dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
struct rte_eth_rxmode *rxmode = &eth_dev->data->dev_conf.rxmode;
- int rc;
+ int rc, i, j;
PMD_INIT_FUNC_TRACE(edev);
@@ -558,10 +558,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
}
}
- qdev->fp_num_tx = eth_dev->data->nb_tx_queues;
- qdev->fp_num_rx = eth_dev->data->nb_rx_queues;
- qdev->num_queues = qdev->fp_num_tx + qdev->fp_num_rx;
-
/* Sanity checks and throw warnings */
if (rxmode->enable_scatter == 1) {
DP_ERR(edev, "RX scatter packets is not supported\n");
@@ -580,8 +576,6 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
DP_INFO(edev, "IP/UDP/TCP checksum offload is always enabled "
"in hw\n");
- SLIST_INIT(&qdev->vlan_list_head);
-
/* Check for the port restart case */
if (qdev->state != QEDE_DEV_INIT) {
rc = qdev->ops->vport_stop(edev, 0);
@@ -590,6 +584,10 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
qede_dealloc_fp_resc(eth_dev);
}
+ qdev->fp_num_tx = eth_dev->data->nb_tx_queues;
+ qdev->fp_num_rx = eth_dev->data->nb_rx_queues;
+ qdev->num_queues = qdev->fp_num_tx + qdev->fp_num_rx;
+
/* Fastpath status block should be initialized before sending
* VPORT-START in the case of VF. Anyway, do it for both VF/PF.
*/
@@ -604,6 +602,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
if (rc != 0)
return rc;
+ SLIST_INIT(&qdev->vlan_list_head);
+
/* Add primary mac for PF */
if (IS_PF(edev))
qede_mac_addr_set(eth_dev, &qdev->primary_mac);
@@ -615,6 +615,10 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
qdev->state = QEDE_DEV_CONFIG;
+ DP_INFO(edev, "Allocated RSS=%d TSS=%d (with CoS=%d)\n",
+ (int)QEDE_RSS_COUNT(qdev), (int)QEDE_TSS_COUNT(qdev),
+ qdev->num_tc);
+
return 0;
}
@@ -1037,42 +1041,51 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
return NULL;
}
-int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
+void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
+{
+ *rss_caps = 0;
+ *rss_caps |= (hf & ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
+ *rss_caps |= (hf & ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
+ *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
+ *rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
+ *rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
+}
+
+static int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf)
{
struct qed_update_vport_params vport_update_params;
struct qede_dev *qdev = eth_dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
- uint8_t rss_caps;
uint32_t *key = (uint32_t *)rss_conf->rss_key;
uint64_t hf = rss_conf->rss_hf;
int i;
- if (hf == 0)
- DP_ERR(edev, "hash function 0 will disable RSS\n");
+ memset(&vport_update_params, 0, sizeof(vport_update_params));
- rss_caps = 0;
- rss_caps |= (hf & ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
- rss_caps |= (hf & ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
- rss_caps |= (hf & ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
- rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
- rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
- rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
+ if (hf != 0) {
+ /* Enable RSS */
+ qede_init_rss_caps(&qdev->rss_params.rss_caps, hf);
+ memcpy(&vport_update_params.rss_params, &qdev->rss_params,
+ sizeof(vport_update_params.rss_params));
+ if (key)
+ memcpy(qdev->rss_params.rss_key, rss_conf->rss_key,
+ rss_conf->rss_key_len);
+ vport_update_params.update_rss_flg = 1;
+ qdev->rss_enabled = 1;
+ } else {
+ /* Disable RSS */
+ qdev->rss_enabled = 0;
+ }
/* If the mapping doesn't fit any supported, return */
- if (rss_caps == 0 && hf != 0)
+ if (qdev->rss_params.rss_caps == 0 && hf != 0)
return -EINVAL;
- memset(&vport_update_params, 0, sizeof(vport_update_params));
-
- if (key != NULL)
- memcpy(qdev->rss_params.rss_key, rss_conf->rss_key,
- rss_conf->rss_key_len);
+ DP_INFO(edev, "%s\n", (vport_update_params.update_rss_flg) ?
+ "Enabling RSS" : "Disabling RSS");
- qdev->rss_params.rss_caps = rss_caps;
- memcpy(&vport_update_params.rss_params, &qdev->rss_params,
- sizeof(vport_update_params.rss_params));
- vport_update_params.update_rss_flg = 1;
vport_update_params.vport_id = 0;
return qdev->ops->vport_update(edev, &vport_update_params);
@@ -1110,9 +1123,9 @@ int qede_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
return 0;
}
-int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
+static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size)
{
struct qed_update_vport_params vport_update_params;
struct qede_dev *qdev = eth_dev->data->dev_private;
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 526d3be..5838f33 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -64,8 +64,10 @@
#define QEDE_MAX_TSS_CNT(edev) ((edev)->dev_info.num_queues * \
(edev)->dev_info.num_tc)
-#define QEDE_RSS_CNT(edev) ((edev)->fp_num_rx)
-#define QEDE_TSS_CNT(edev) ((edev)->fp_num_rx * (edev)->num_tc)
+#define QEDE_QUEUE_CNT(qdev) ((qdev)->num_queues)
+#define QEDE_RSS_COUNT(qdev) ((qdev)->num_queues - (qdev)->fp_num_tx)
+#define QEDE_TSS_COUNT(qdev) (((qdev)->num_queues - (qdev)->fp_num_rx) * \
+ (qdev)->num_tc)
#define QEDE_DUPLEX_FULL 1
#define QEDE_DUPLEX_HALF 2
@@ -78,12 +80,6 @@
#define QEDE_INIT_EDEV(adapter) (&((struct qede_dev *)adapter)->edev)
-#define QEDE_QUEUE_CNT(qdev) ((qdev)->num_queues)
-#define QEDE_RSS_COUNT(qdev) ((qdev)->num_queues - (qdev)->fp_num_tx)
-#define QEDE_TSS_COUNT(qdev) (((qdev)->num_queues - (qdev)->fp_num_rx) * \
- (qdev)->num_tc)
-#define QEDE_TC_IDX(qdev, txqidx) ((txqidx) / QEDE_TSS_COUNT(qdev))
-
#define QEDE_INIT(eth_dev) { \
struct qede_dev *qdev = eth_dev->data->dev_private; \
struct ecore_dev *edev = &qdev->edev; \
@@ -133,7 +129,6 @@ struct qede_dev {
struct qed_dev_eth_info dev_info;
struct ecore_sb_info *sb_array;
struct qede_fastpath *fp_array;
- uint16_t num_rss;
uint8_t num_tc;
uint16_t mtu;
bool rss_enabled;
@@ -156,6 +151,16 @@ struct qede_dev {
static int qede_vlan_filter_set(struct rte_eth_dev *eth_dev,
uint16_t vlan_id, int on);
+static int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_conf *rss_conf);
+
+static int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
+ struct rte_eth_rss_reta_entry64 *reta_conf,
+ uint16_t reta_size);
+
+/* Non-static functions */
+void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf);
+
int qed_fill_eth_dev_info(struct ecore_dev *edev,
struct qed_dev_eth_info *info);
int qede_dev_set_link_state(struct rte_eth_dev *eth_dev, bool link_up);
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 9df0d133..ab16c04 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -64,7 +64,7 @@ void qede_rx_queue_release(void *rx_queue)
rte_free(rxq->sw_rx_ring);
rxq->sw_rx_ring = NULL;
rte_free(rxq);
- rx_queue = NULL;
+ rxq = NULL;
}
}
@@ -234,7 +234,7 @@ void qede_tx_queue_release(void *tx_queue)
}
rte_free(txq);
}
- tx_queue = NULL;
+ txq = NULL;
}
int
@@ -502,9 +502,9 @@ static void qede_prandom_bytes(uint32_t *buff, size_t bytes)
buff[i] = rand();
}
-static int
-qede_config_rss(struct rte_eth_dev *eth_dev,
- struct qed_update_vport_rss_params *rss_params)
+static bool
+qede_check_vport_rss_enable(struct rte_eth_dev *eth_dev,
+ struct qed_update_vport_rss_params *rss_params)
{
struct rte_eth_rss_conf rss_conf;
enum rte_eth_rx_mq_mode mode = eth_dev->data->dev_conf.rxmode.mq_mode;
@@ -515,57 +515,46 @@ qede_config_rss(struct rte_eth_dev *eth_dev,
uint64_t hf;
uint32_t *key;
+ PMD_INIT_FUNC_TRACE(edev);
+
rss_conf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf;
key = (uint32_t *)rss_conf.rss_key;
hf = rss_conf.rss_hf;
- PMD_INIT_FUNC_TRACE(edev);
/* Check if RSS conditions are met.
* Note: Even though its meaningless to enable RSS with one queue, it
* could be used to produce RSS Hash, so skipping that check.
*/
-
if (!(mode & ETH_MQ_RX_RSS)) {
DP_INFO(edev, "RSS flag is not set\n");
- return -EINVAL;
+ return false;
}
- DP_INFO(edev, "RSS flag is set\n");
-
- if (rss_conf.rss_hf == 0)
- DP_NOTICE(edev, false, "RSS hash function = 0, disables RSS\n");
-
- if (rss_conf.rss_key != NULL)
- memcpy(qdev->rss_params.rss_key, rss_conf.rss_key,
- rss_conf.rss_key_len);
+ if (hf == 0) {
+ DP_INFO(edev, "Request to disable RSS\n");
+ return false;
+ }
memset(rss_params, 0, sizeof(*rss_params));
for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
rss_params->rss_ind_table[i] = qede_rxfh_indir_default(i,
- QEDE_RSS_CNT(qdev));
+ QEDE_RSS_COUNT(qdev));
- /* key and protocols */
- if (rss_conf.rss_key == NULL)
+ if (!key)
qede_prandom_bytes(rss_params->rss_key,
sizeof(rss_params->rss_key));
else
memcpy(rss_params->rss_key, rss_conf.rss_key,
rss_conf.rss_key_len);
- rss_caps = 0;
- rss_caps |= (hf & ETH_RSS_IPV4) ? ECORE_RSS_IPV4 : 0;
- rss_caps |= (hf & ETH_RSS_IPV6) ? ECORE_RSS_IPV6 : 0;
- rss_caps |= (hf & ETH_RSS_IPV6_EX) ? ECORE_RSS_IPV6 : 0;
- rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP) ? ECORE_RSS_IPV4_TCP : 0;
- rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP) ? ECORE_RSS_IPV6_TCP : 0;
- rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX) ? ECORE_RSS_IPV6_TCP : 0;
+ qede_init_rss_caps(&rss_caps, hf);
rss_params->rss_caps = rss_caps;
- DP_INFO(edev, "RSS check passes\n");
+ DP_INFO(edev, "RSS conditions are met\n");
- return 0;
+ return true;
}
static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
@@ -618,7 +607,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
continue;
for (tc = 0; tc < qdev->num_tc; tc++) {
txq = fp->txqs[tc];
- txq_index = tc * QEDE_RSS_CNT(qdev) + i;
+ txq_index = tc * QEDE_RSS_COUNT(qdev) + i;
p_phys_table = ecore_chain_get_pbl_phys(&txq->tx_pbl);
page_cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
@@ -663,14 +652,11 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
vport_update_params.tx_switching_flg = 1;
}
- if (!qede_config_rss(eth_dev, rss_params)) {
+ if (qede_check_vport_rss_enable(eth_dev, rss_params)) {
vport_update_params.update_rss_flg = 1;
-
qdev->rss_enabled = 1;
- DP_INFO(edev, "Updating RSS flag\n");
} else {
qdev->rss_enabled = 0;
- DP_INFO(edev, "Not Updating RSS flag\n");
}
rte_memcpy(&vport_update_params.rss_params, rss_params,
@@ -1124,7 +1110,7 @@ static void qede_init_fp_queue(struct rte_eth_dev *eth_dev)
if (fp->type & QEDE_FASTPATH_TX) {
for (tc = 0; tc < qdev->num_tc; tc++) {
- txq_index = tc * QEDE_TSS_CNT(qdev) + txq;
+ txq_index = tc * QEDE_TSS_COUNT(qdev) + txq;
fp->txqs[tc] =
eth_dev->data->tx_queues[txq_index];
fp->txqs[tc]->queue_id = txq_index;
@@ -1326,6 +1312,7 @@ int qede_reset_fp_rings(struct qede_dev *qdev)
if (fp->type & QEDE_FASTPATH_TX) {
for (tc = 0; tc < qdev->num_tc; tc++) {
txq = fp->txqs[tc];
+ qede_tx_queue_release_mbufs(txq);
ecore_chain_reset(&txq->tx_pbl);
txq->sw_tx_cons = 0;
txq->sw_tx_prod = 0;
@@ -1338,29 +1325,26 @@ int qede_reset_fp_rings(struct qede_dev *qdev)
}
/* This function frees all memory of a single fp */
-static void qede_free_mem_fp(struct rte_eth_dev *eth_dev,
- struct qede_fastpath *fp)
-{
- struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
- uint8_t tc;
-
- qede_rx_queue_release(fp->rxq);
- for (tc = 0; tc < qdev->num_tc; tc++) {
- qede_tx_queue_release(fp->txqs[tc]);
- eth_dev->data->tx_queues[tc] = NULL;
- }
-}
-
void qede_free_mem_load(struct rte_eth_dev *eth_dev)
{
struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
struct qede_fastpath *fp;
- uint8_t rss_id;
+ uint16_t txq_idx;
+ uint8_t id;
+ uint8_t tc;
- for_each_queue(rss_id) {
- fp = &qdev->fp_array[rss_id];
- qede_free_mem_fp(eth_dev, fp);
- eth_dev->data->rx_queues[rss_id] = NULL;
+ for_each_queue(id) {
+ fp = &qdev->fp_array[id];
+ if (fp->type & QEDE_FASTPATH_RX) {
+ qede_rx_queue_release(fp->rxq);
+ eth_dev->data->rx_queues[id] = NULL;
+ } else {
+ for (tc = 0; tc < qdev->num_tc; tc++) {
+ txq_idx = fp->txqs[tc]->queue_id;
+ qede_tx_queue_release(fp->txqs[tc]);
+ eth_dev->data->tx_queues[txq_idx] = NULL;
+ }
+ }
}
}
--
1.8.3.1
* [dpdk-dev] [PATCH v4 23/32] net/qede: add scatter gather support
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (21 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 22/32] net/qede: fix RSS related issues Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 24/32] net/qede/base: change Rx Tx queue start APIs Rasesh Mody
` (9 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Sony Chacko
From: Sony Chacko <sony.chacko@qlogic.com>
Add scatter-gather support to enable transmission and reception of
packets larger than a single Rx/Tx buffer.
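On the Rx side, one completion can span several buffer descriptors;
a sketch of how the first mbuf is set up (the helper name is
hypothetical, the fields follow the diff below):

/* The first mbuf describes the whole packet; the remaining
 * bd_num - 1 ring buffers are linked through seg->next by
 * qede_process_sg_pkts(), each holding at most rx_buf_size bytes.
 */
static void qede_rx_sg_first_seg(struct rte_mbuf *rx_mb,
				 const struct eth_fast_path_rx_reg_cqe *cqe,
				 uint16_t first_seg_len)
{
	rx_mb->nb_segs = cqe->bd_num;
	rx_mb->pkt_len = rte_le_to_cpu_16(cqe->pkt_len);
	rx_mb->data_len = first_seg_len;
}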
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
---
doc/guides/nics/features/qede.ini | 1 +
doc/guides/nics/features/qede_vf.ini | 1 +
doc/guides/nics/qede.rst | 4 +-
drivers/net/qede/qede_ethdev.c | 20 ++--
drivers/net/qede/qede_rxtx.c | 217 ++++++++++++++++++++++++++++-------
drivers/net/qede/qede_rxtx.h | 3 -
6 files changed, 191 insertions(+), 55 deletions(-)
diff --git a/doc/guides/nics/features/qede.ini b/doc/guides/nics/features/qede.ini
index 1d28a23..7d75030 100644
--- a/doc/guides/nics/features/qede.ini
+++ b/doc/guides/nics/features/qede.ini
@@ -9,6 +9,7 @@ Link status = Y
Link status event = Y
MTU update = Y
Jumbo frame = Y
+Scattered Rx = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/features/qede_vf.ini b/doc/guides/nics/features/qede_vf.ini
index b4eba0c..acb1b99 100644
--- a/doc/guides/nics/features/qede_vf.ini
+++ b/doc/guides/nics/features/qede_vf.ini
@@ -9,6 +9,7 @@ Link status = Y
Link status event = Y
MTU update = Y
Jumbo frame = Y
+Scattered Rx = Y
Promiscuous mode = Y
Allmulticast mode = Y
Unicast MAC filter = Y
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index e19c3db..c19f499 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -47,7 +47,7 @@ Supported Features
- Promiscuous mode
- Allmulti mode
- Port hardware statistics
-- Jumbo frames (using single buffer)
+- Jumbo frames
- VLAN offload - Filtering and stripping
- Stateless checksum offloads (IPv4/TCP/UDP)
- Multiple Rx/Tx queues
@@ -58,11 +58,11 @@ Supported Features
- SR-IOV VF
- MTU change
- Multiprocess aware
+- Scatter-Gather
Non-supported Features
----------------------
-- Scatter-Gather Rx/Tx frames
- SR-IOV PF
- Tunneling offloads
- Reload of the PMD after a non-graceful termination
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 1c67eac..c93b7af 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -434,14 +434,14 @@ static int qede_vlan_filter_set(struct rte_eth_dev *eth_dev,
struct qede_vlan_entry *vlan;
int rc;
- if (qdev->configured_vlans == dev_info->num_vlan_filters) {
- DP_NOTICE(edev, false, "Reached max VLAN filter limit"
- " enabling accept_any_vlan\n");
- qede_config_accept_any_vlan(qdev, true);
- return 0;
- }
-
if (on) {
+ if (qdev->configured_vlans == dev_info->num_vlan_filters) {
+ DP_INFO(edev, "Reached max VLAN filter limit"
+ " enabling accept_any_vlan\n");
+ qede_config_accept_any_vlan(qdev, true);
+ return 0;
+ }
+
SLIST_FOREACH(tmp, &qdev->vlan_list_head, list) {
if (tmp->vid == vlan_id) {
DP_ERR(edev, "VLAN %u already configured\n",
@@ -559,10 +559,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
}
/* Sanity checks and throw warnings */
- if (rxmode->enable_scatter == 1) {
- DP_ERR(edev, "RX scatter packets is not supported\n");
- return -EINVAL;
- }
+ if (rxmode->enable_scatter == 1)
+ eth_dev->data->scattered_rx = 1;
if (rxmode->enable_lro == 1) {
DP_INFO(edev, "LRO is not supported\n");
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index ab16c04..54cd849 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -135,8 +135,19 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
data_size = (uint16_t)rte_pktmbuf_data_room_size(mp) -
RTE_PKTMBUF_HEADROOM;
+ if (pkt_len > data_size && !dev->data->scattered_rx) {
+ DP_ERR(edev, "MTU %u should not exceed dataroom %u\n",
+ pkt_len, data_size);
+ rte_free(rxq);
+ return -EINVAL;
+ }
+
+ if (dev->data->scattered_rx)
+ rxq->rx_buf_size = data_size;
+ else
+ rxq->rx_buf_size = pkt_len + QEDE_ETH_OVERHEAD;
+
qdev->mtu = pkt_len;
- rxq->rx_buf_size = data_size;
DP_INFO(edev, "MTU = %u ; RX buffer = %u\n",
qdev->mtu, rxq->rx_buf_size);
@@ -804,6 +815,58 @@ static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags)
return RTE_PTYPE_L2_ETHER | p_type;
}
+int qede_process_sg_pkts(void *p_rxq, struct rte_mbuf *rx_mb,
+ int num_frags, uint16_t pkt_len)
+{
+ struct qede_rx_queue *rxq = p_rxq;
+ struct qede_dev *qdev = rxq->qdev;
+ struct ecore_dev *edev = &qdev->edev;
+ uint16_t sw_rx_index, cur_size;
+
+ register struct rte_mbuf *seg1 = NULL;
+ register struct rte_mbuf *seg2 = NULL;
+
+ seg1 = rx_mb;
+ while (num_frags) {
+ cur_size = pkt_len > rxq->rx_buf_size ?
+ rxq->rx_buf_size : pkt_len;
+ if (!cur_size) {
+ PMD_RX_LOG(DEBUG, rxq,
+ "SG packet, len and num BD mismatch\n");
+ qede_recycle_rx_bd_ring(rxq, qdev, num_frags);
+ return -EINVAL;
+ }
+
+ if (qede_alloc_rx_buffer(rxq)) {
+ uint8_t index;
+
+ PMD_RX_LOG(DEBUG, rxq, "Buffer allocation failed\n");
+ index = rxq->port_id;
+ rte_eth_devices[index].data->rx_mbuf_alloc_failed++;
+ rxq->rx_alloc_errors++;
+ return -ENOMEM;
+ }
+
+ sw_rx_index = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
+ seg2 = rxq->sw_rx_ring[sw_rx_index].mbuf;
+ qede_rx_bd_ring_consume(rxq);
+ pkt_len -= cur_size;
+ seg2->data_len = cur_size;
+ seg1->next = seg2;
+ seg1 = seg1->next;
+ num_frags--;
+ continue;
+ }
+ seg1 = NULL;
+
+ if (pkt_len)
+ PMD_RX_LOG(DEBUG, rxq,
+ "Mapped all BDs of jumbo, but still have %d bytes\n",
+ pkt_len);
+
+ return ECORE_SUCCESS;
+}
+
uint16_t
qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
{
@@ -816,12 +879,12 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
union eth_rx_cqe *cqe;
struct eth_fast_path_rx_reg_cqe *fp_cqe;
register struct rte_mbuf *rx_mb = NULL;
+ register struct rte_mbuf *seg1 = NULL;
enum eth_rx_cqe_type cqe_type;
- uint16_t len, pad;
- uint16_t preload_idx;
- uint8_t csum_flag;
- uint16_t parse_flag;
+ uint16_t len, pad, preload_idx, pkt_len, parse_flag;
+ uint8_t csum_flag, num_frags;
enum rss_hash_type htype;
+ int ret;
hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
sw_comp_cons = ecore_chain_get_cons_idx(&rxq->rx_comp_ring);
@@ -891,20 +954,31 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
qede_rx_bd_ring_consume(rxq);
+ if (fp_cqe->bd_num > 1) {
+ pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
+ num_frags = fp_cqe->bd_num - 1;
+
+ pkt_len -= len;
+ seg1 = rx_mb;
+ ret = qede_process_sg_pkts(p_rxq, seg1, num_frags,
+ pkt_len);
+ if (ret != ECORE_SUCCESS) {
+ qede_recycle_rx_bd_ring(rxq, qdev,
+ fp_cqe->bd_num);
+ goto next_cqe;
+ }
+ }
+
/* Prefetch next mbuf while processing current one. */
preload_idx = rxq->sw_rx_cons & NUM_RX_BDS(rxq);
rte_prefetch0(rxq->sw_rx_ring[preload_idx].mbuf);
- if (fp_cqe->bd_num != 1)
- PMD_RX_LOG(DEBUG, rxq,
- "Jumbo-over-BD packet not supported\n");
-
/* Update MBUF fields */
rx_mb->ol_flags = 0;
rx_mb->data_off = pad + RTE_PKTMBUF_HEADROOM;
- rx_mb->nb_segs = 1;
+ rx_mb->nb_segs = fp_cqe->bd_num;
rx_mb->data_len = len;
- rx_mb->pkt_len = len;
+ rx_mb->pkt_len = fp_cqe->pkt_len;
rx_mb->port = rxq->port_id;
rx_mb->packet_type = qede_rx_cqe_to_pkt_type(parse_flag);
@@ -957,24 +1031,28 @@ next_cqe:
static inline int
qede_free_tx_pkt(struct ecore_dev *edev, struct qede_tx_queue *txq)
{
- uint16_t idx = TX_CONS(txq);
+ uint16_t nb_segs, idx = TX_CONS(txq);
struct eth_tx_bd *tx_data_bd;
struct rte_mbuf *mbuf = txq->sw_tx_ring[idx].mbuf;
if (unlikely(!mbuf)) {
+ PMD_TX_LOG(ERR, txq, "null mbuf\n");
PMD_TX_LOG(ERR, txq,
- "null mbuf nb_tx_desc %u nb_tx_avail %u "
- "sw_tx_cons %u sw_tx_prod %u\n",
+ "tx_desc %u tx_avail %u tx_cons %u tx_prod %u\n",
txq->nb_tx_desc, txq->nb_tx_avail, idx,
TX_PROD(txq));
return -1;
}
- /* Free now */
- rte_pktmbuf_free_seg(mbuf);
+ nb_segs = mbuf->nb_segs;
+ while (nb_segs) {
+ /* It's like consuming rxbuf in recv() */
+ ecore_chain_consume(&txq->tx_pbl);
+ txq->nb_tx_avail++;
+ nb_segs--;
+ }
+ rte_pktmbuf_free(mbuf);
txq->sw_tx_ring[idx].mbuf = NULL;
- ecore_chain_consume(&txq->tx_pbl);
- txq->nb_tx_avail++;
return 0;
}
@@ -984,18 +1062,16 @@ qede_process_tx_compl(struct ecore_dev *edev, struct qede_tx_queue *txq)
{
uint16_t tx_compl = 0;
uint16_t hw_bd_cons;
- int rc;
hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
rte_compiler_barrier();
while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl)) {
- rc = qede_free_tx_pkt(edev, txq);
- if (rc) {
- DP_NOTICE(edev, false,
- "hw_bd_cons = %d, chain_cons=%d\n",
- hw_bd_cons,
- ecore_chain_get_cons_idx(&txq->tx_pbl));
+ if (qede_free_tx_pkt(edev, txq)) {
+ PMD_TX_LOG(ERR, txq,
+ "hw_bd_cons = %u, chain_cons = %u\n",
+ hw_bd_cons,
+ ecore_chain_get_cons_idx(&txq->tx_pbl));
break;
}
txq->sw_tx_cons++; /* Making TXD available */
@@ -1007,6 +1083,55 @@ qede_process_tx_compl(struct ecore_dev *edev, struct qede_tx_queue *txq)
return tx_compl;
}
+/* Populate scatter gather buffer descriptor fields */
+static inline uint16_t qede_encode_sg_bd(struct qede_tx_queue *p_txq,
+ struct rte_mbuf *m_seg,
+ uint16_t count,
+ struct eth_tx_1st_bd *bd1)
+{
+ struct qede_tx_queue *txq = p_txq;
+ struct eth_tx_2nd_bd *bd2 = NULL;
+ struct eth_tx_3rd_bd *bd3 = NULL;
+ struct eth_tx_bd *tx_bd = NULL;
+ uint16_t nb_segs = count;
+ dma_addr_t mapping;
+
+ /* Check for scattered buffers */
+ while (m_seg) {
+ if (nb_segs == 1) {
+ bd2 = (struct eth_tx_2nd_bd *)
+ ecore_chain_produce(&txq->tx_pbl);
+ memset(bd2, 0, sizeof(*bd2));
+ mapping = rte_mbuf_data_dma_addr(m_seg);
+ bd2->addr.hi = rte_cpu_to_le_32(U64_HI(mapping));
+ bd2->addr.lo = rte_cpu_to_le_32(U64_LO(mapping));
+ bd2->nbytes = rte_cpu_to_le_16(m_seg->data_len);
+ } else if (nb_segs == 2) {
+ bd3 = (struct eth_tx_3rd_bd *)
+ ecore_chain_produce(&txq->tx_pbl);
+ memset(bd3, 0, sizeof(*bd3));
+ mapping = rte_mbuf_data_dma_addr(m_seg);
+ bd3->addr.hi = rte_cpu_to_le_32(U64_HI(mapping));
+ bd3->addr.lo = rte_cpu_to_le_32(U64_LO(mapping));
+ bd3->nbytes = rte_cpu_to_le_16(m_seg->data_len);
+ } else {
+ tx_bd = (struct eth_tx_bd *)
+ ecore_chain_produce(&txq->tx_pbl);
+ memset(tx_bd, 0, sizeof(*tx_bd));
+ mapping = rte_mbuf_data_dma_addr(m_seg);
+ tx_bd->addr.hi = rte_cpu_to_le_32(U64_HI(mapping));
+ tx_bd->addr.lo = rte_cpu_to_le_32(U64_LO(mapping));
+ tx_bd->nbytes = rte_cpu_to_le_16(m_seg->data_len);
+ }
+ nb_segs++;
+ bd1->data.nbds = nb_segs;
+ m_seg = m_seg->next;
+ }
+
+ /* Return total scattered buffers */
+ return nb_segs;
+}
+
uint16_t
qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
@@ -1014,12 +1139,14 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
struct qede_dev *qdev = txq->qdev;
struct ecore_dev *edev = &qdev->edev;
struct qede_fastpath *fp;
- struct eth_tx_1st_bd *first_bd;
+ struct eth_tx_1st_bd *bd1;
+ struct rte_mbuf *m_seg = NULL;
uint16_t nb_tx_pkts;
uint16_t nb_pkt_sent = 0;
uint16_t bd_prod;
uint16_t idx;
uint16_t tx_count;
+ uint16_t nb_segs = 0;
fp = &qdev->fp_array[QEDE_RSS_COUNT(qdev) + txq->queue_id];
@@ -1029,7 +1156,8 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
(void)qede_process_tx_compl(edev, txq);
}
- nb_tx_pkts = RTE_MIN(nb_pkts, (txq->nb_tx_avail / MAX_NUM_TX_BDS));
+ nb_tx_pkts = RTE_MIN(nb_pkts, (txq->nb_tx_avail /
+ ETH_TX_MAX_BDS_PER_NON_LSO_PACKET));
if (unlikely(nb_tx_pkts == 0)) {
PMD_TX_LOG(DEBUG, txq, "Out of BDs nb_pkts=%u avail=%u\n",
nb_pkts, txq->nb_tx_avail);
@@ -1041,38 +1169,49 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* Fill the entry in the SW ring and the BDs in the FW ring */
idx = TX_PROD(txq);
struct rte_mbuf *mbuf = *tx_pkts++;
+
txq->sw_tx_ring[idx].mbuf = mbuf;
- first_bd = (struct eth_tx_1st_bd *)
- ecore_chain_produce(&txq->tx_pbl);
- first_bd->data.bd_flags.bitfields =
- 1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
+ bd1 = (struct eth_tx_1st_bd *)ecore_chain_produce(&txq->tx_pbl);
+ /* Zero init struct fields */
+ bd1->data.bd_flags.bitfields = 0;
+ bd1->data.bitfields = 0;
+
+ bd1->data.bd_flags.bitfields =
+ 1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
/* Map MBUF linear data for DMA and set in the first BD */
- QEDE_BD_SET_ADDR_LEN(first_bd, rte_mbuf_data_dma_addr(mbuf),
- mbuf->data_len);
+ QEDE_BD_SET_ADDR_LEN(bd1, rte_mbuf_data_dma_addr(mbuf),
+ mbuf->pkt_len);
/* Descriptor based VLAN insertion */
if (mbuf->ol_flags & (PKT_TX_VLAN_PKT | PKT_TX_QINQ_PKT)) {
- first_bd->data.vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
- first_bd->data.bd_flags.bitfields |=
+ bd1->data.vlan = rte_cpu_to_le_16(mbuf->vlan_tci);
+ bd1->data.bd_flags.bitfields |=
1 << ETH_TX_1ST_BD_FLAGS_VLAN_INSERTION_SHIFT;
}
/* Offload the IP checksum in the hardware */
if (mbuf->ol_flags & PKT_TX_IP_CKSUM) {
- first_bd->data.bd_flags.bitfields |=
+ bd1->data.bd_flags.bitfields |=
1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
}
/* L4 checksum offload (tcp or udp) */
if (mbuf->ol_flags & (PKT_TX_TCP_CKSUM | PKT_TX_UDP_CKSUM)) {
- first_bd->data.bd_flags.bitfields |=
+ bd1->data.bd_flags.bitfields |=
1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
/* IPv6 + extn. -> later */
}
- first_bd->data.nbds = MAX_NUM_TX_BDS;
+
+ /* Handle fragmented MBUF */
+ m_seg = mbuf->next;
+ nb_segs++;
+ bd1->data.nbds = nb_segs;
+ /* Encode scatter gather buffer descriptors if required */
+ nb_segs = qede_encode_sg_bd(txq, m_seg, nb_segs, bd1);
+ txq->nb_tx_avail = txq->nb_tx_avail - nb_segs;
+ nb_segs = 0;
txq->sw_tx_prod++;
rte_prefetch0(txq->sw_tx_ring[TX_PROD(txq)].mbuf);
- txq->nb_tx_avail--;
bd_prod =
rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
nb_pkt_sent++;
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index 9ee69ed..da47b21 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -30,9 +30,6 @@
#define TX_CONS(txq) (txq->sw_tx_cons & NUM_TX_BDS(txq))
#define TX_PROD(txq) (txq->sw_tx_prod & NUM_TX_BDS(txq))
-/* Number of TX BDs per packet used currently */
-#define MAX_NUM_TX_BDS 1
-
#define QEDE_DEFAULT_TX_FREE_THRESH 32
#define QEDE_CSUM_ERROR (1 << 0)
--
1.8.3.1
* [dpdk-dev] [PATCH v4 24/32] net/qede/base: change Rx Tx queue start APIs
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (22 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 23/32] net/qede: add scatter gather support Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 25/32] net/qede/base: add support to initiate PF FLR Rasesh Mody
` (8 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Change the q_{rx,tx}_start APIs to take a common queue start
parameters structure.
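A sketch of the resulting caller pattern (field names follow the
ecore_queue_start_common_params definition in the diff below; the
values are illustrative only):

/* Populate the common parameters once, then start the Rx queue. */
struct ecore_queue_start_common_params params;
void OSAL_IOMEM *p_prod;

OSAL_MEMSET(&params, 0, sizeof(params));
params.queue_id = rx_queue_id;
params.vport_id = vport_id;
params.stats_id = stats_id;	/* relative or absolute per function */
params.sb = sb;
params.sb_idx = sb_index;

rc = ecore_sp_eth_rx_queue_start(p_hwfn, opaque_fid, &params,
				 bd_max_bytes, bd_chain_phys_addr,
				 cqe_pbl_addr, cqe_pbl_size, &p_prod);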
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/base/ecore_l2.c | 131 +++++++++++++++--------------------
drivers/net/qede/base/ecore_l2.h | 26 ++-----
drivers/net/qede/base/ecore_l2_api.h | 69 +++++++++---------
drivers/net/qede/base/ecore_sriov.c | 28 +++++---
drivers/net/qede/qede_eth_if.c | 47 ++++++-------
drivers/net/qede/qede_eth_if.h | 11 ++-
drivers/net/qede/qede_rxtx.c | 27 +++++---
7 files changed, 155 insertions(+), 184 deletions(-)
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 83a62e0..74f61b0 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -548,12 +548,7 @@ enum _ecore_status_t
ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
u16 opaque_fid,
u32 cid,
- u16 rx_queue_id,
- u8 vf_rx_queue_id,
- u8 vport_id,
- u8 stats_id,
- u16 sb,
- u8 sb_index,
+ struct ecore_queue_start_common_params *p_params,
u16 bd_max_bytes,
dma_addr_t bd_chain_phys_addr,
dma_addr_t cqe_pbl_addr,
@@ -568,22 +563,23 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t rc = ECORE_NOTIMPL;
/* Store information for the stop */
- p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
+ p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
p_rx_cid->cid = cid;
p_rx_cid->opaque_fid = opaque_fid;
- p_rx_cid->vport_id = vport_id;
+ p_rx_cid->vport_id = p_params->vport_id;
- rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+ rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
if (rc != ECORE_SUCCESS)
return rc;
- rc = ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_rx_q_id);
+ rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_rx_q_id);
if (rc != ECORE_SUCCESS)
return rc;
DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
"opaque_fid=0x%x, cid=0x%x, rx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
- opaque_fid, cid, rx_queue_id, vport_id, sb);
+ opaque_fid, cid, p_params->queue_id,
+ p_params->vport_id, p_params->sb);
/* Get SPQ entry */
OSAL_MEMSET(&init_data, 0, sizeof(init_data));
@@ -599,10 +595,10 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
p_ramrod = &p_ent->ramrod.rx_queue_start;
- p_ramrod->sb_id = OSAL_CPU_TO_LE16(sb);
- p_ramrod->sb_index = sb_index;
+ p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
+ p_ramrod->sb_index = (u8)p_params->sb_idx;
p_ramrod->vport_id = abs_vport_id;
- p_ramrod->stats_counter_id = stats_id;
+ p_ramrod->stats_counter_id = p_params->stats_id;
p_ramrod->rx_queue_id = OSAL_CPU_TO_LE16(abs_rx_q_id);
p_ramrod->complete_cqe_flg = 0;
p_ramrod->complete_event_flg = 1;
@@ -613,30 +609,27 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
p_ramrod->num_of_pbl_pages = OSAL_CPU_TO_LE16(cqe_pbl_size);
DMA_REGPAIR_LE(p_ramrod->cqe_pbl_addr, cqe_pbl_addr);
- if (vf_rx_queue_id || b_use_zone_a_prod) {
- p_ramrod->vf_rx_prod_index = vf_rx_queue_id;
+ if (p_params->vf_qid || b_use_zone_a_prod) {
+ p_ramrod->vf_rx_prod_index = (u8)p_params->vf_qid;
DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
"Queue%s is meant for VF rxq[%02x]\n",
b_use_zone_a_prod ? " [legacy]" : "",
- vf_rx_queue_id);
+ p_params->vf_qid);
p_ramrod->vf_rx_prod_use_zone_a = b_use_zone_a_prod;
}
return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
}
-enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
- u16 opaque_fid,
- u8 rx_queue_id,
- u8 vport_id,
- u8 stats_id,
- u16 sb,
- u8 sb_index,
- u16 bd_max_bytes,
- dma_addr_t bd_chain_phys_addr,
- dma_addr_t cqe_pbl_addr,
- u16 cqe_pbl_size,
- void OSAL_IOMEM **pp_prod)
+enum _ecore_status_t
+ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+ u16 opaque_fid,
+ struct ecore_queue_start_common_params *p_params,
+ u16 bd_max_bytes,
+ dma_addr_t bd_chain_phys_addr,
+ dma_addr_t cqe_pbl_addr,
+ u16 cqe_pbl_size,
+ void OSAL_IOMEM * *pp_prod)
{
struct ecore_hw_cid_data *p_rx_cid;
u32 init_prod_val = 0;
@@ -646,20 +639,20 @@ enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
if (IS_VF(p_hwfn->p_dev)) {
return ecore_vf_pf_rxq_start(p_hwfn,
- rx_queue_id,
- sb,
- sb_index,
+ p_params->queue_id,
+ p_params->sb,
+ (u8)p_params->sb_idx,
bd_max_bytes,
bd_chain_phys_addr,
cqe_pbl_addr,
cqe_pbl_size, pp_prod);
}
- rc = ecore_fw_l2_queue(p_hwfn, rx_queue_id, &abs_l2_queue);
+ rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_l2_queue);
if (rc != ECORE_SUCCESS)
return rc;
- rc = ecore_fw_vport(p_hwfn, stats_id, &abs_stats_id);
+ rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
if (rc != ECORE_SUCCESS)
return rc;
@@ -672,7 +665,7 @@ enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
(u32 *)(&init_prod_val));
/* Allocate a CID for the queue */
- p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
+ p_rx_cid = &p_hwfn->p_rx_cids[p_params->queue_id];
rc = ecore_cxt_acquire_cid(p_hwfn, PROTOCOLID_ETH,
&p_rx_cid->cid);
if (rc != ECORE_SUCCESS) {
@@ -680,16 +673,13 @@ enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
return rc;
}
p_rx_cid->b_cid_allocated = true;
+ p_params->stats_id = abs_stats_id;
+ p_params->vf_qid = 0;
rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn,
opaque_fid,
p_rx_cid->cid,
- rx_queue_id,
- 0,
- vport_id,
- abs_stats_id,
- sb,
- sb_index,
+ p_params,
bd_max_bytes,
bd_chain_phys_addr,
cqe_pbl_addr,
@@ -816,12 +806,8 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t
ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
u16 opaque_fid,
- u16 tx_queue_id,
u32 cid,
- u8 vport_id,
- u8 stats_id,
- u16 sb,
- u8 sb_index,
+ struct ecore_queue_start_common_params *p_params,
dma_addr_t pbl_addr,
u16 pbl_size,
union ecore_qm_pq_params *p_pq_params)
@@ -835,15 +821,15 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t rc = ECORE_NOTIMPL;
/* Store information for the stop */
- p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
+ p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
p_tx_cid->cid = cid;
p_tx_cid->opaque_fid = opaque_fid;
- rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+ rc = ecore_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
if (rc != ECORE_SUCCESS)
return rc;
- rc = ecore_fw_l2_queue(p_hwfn, tx_queue_id, &abs_tx_q_id);
+ rc = ecore_fw_l2_queue(p_hwfn, p_params->queue_id, &abs_tx_q_id);
if (rc != ECORE_SUCCESS)
return rc;
@@ -862,9 +848,9 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
p_ramrod = &p_ent->ramrod.tx_queue_start;
p_ramrod->vport_id = abs_vport_id;
- p_ramrod->sb_id = OSAL_CPU_TO_LE16(sb);
- p_ramrod->sb_index = sb_index;
- p_ramrod->stats_counter_id = stats_id;
+ p_ramrod->sb_id = OSAL_CPU_TO_LE16(p_params->sb);
+ p_ramrod->sb_index = (u8)p_params->sb_idx;
+ p_ramrod->stats_counter_id = p_params->stats_id;
p_ramrod->queue_zone_id = OSAL_CPU_TO_LE16(abs_tx_q_id);
@@ -877,17 +863,14 @@ ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
}
-enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
- u16 opaque_fid,
- u16 tx_queue_id,
- u8 vport_id,
- u8 stats_id,
- u16 sb,
- u8 sb_index,
- u8 tc,
- dma_addr_t pbl_addr,
- u16 pbl_size,
- void OSAL_IOMEM **pp_doorbell)
+enum _ecore_status_t
+ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
+ u16 opaque_fid,
+ struct ecore_queue_start_common_params *p_params,
+ u8 tc,
+ dma_addr_t pbl_addr,
+ u16 pbl_size,
+ void OSAL_IOMEM * *pp_doorbell)
{
struct ecore_hw_cid_data *p_tx_cid;
union ecore_qm_pq_params pq_params;
@@ -896,19 +879,19 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
if (IS_VF(p_hwfn->p_dev)) {
return ecore_vf_pf_txq_start(p_hwfn,
- tx_queue_id,
- sb,
- sb_index,
+ p_params->queue_id,
+ p_params->sb,
+ (u8)p_params->sb_idx,
pbl_addr,
pbl_size,
pp_doorbell);
}
- rc = ecore_fw_vport(p_hwfn, stats_id, &abs_stats_id);
+ rc = ecore_fw_vport(p_hwfn, p_params->stats_id, &abs_stats_id);
if (rc != ECORE_SUCCESS)
return rc;
- p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
+ p_tx_cid = &p_hwfn->p_tx_cids[p_params->queue_id];
OSAL_MEMSET(p_tx_cid, 0, sizeof(*p_tx_cid));
OSAL_MEMSET(&pq_params, 0, sizeof(pq_params));
@@ -924,18 +907,16 @@ enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
"opaque_fid=0x%x, cid=0x%x, tx_qid=0x%x, vport_id=0x%x, sb_id=0x%x\n",
- opaque_fid, p_tx_cid->cid, tx_queue_id,
- vport_id, sb);
+ opaque_fid, p_tx_cid->cid, p_params->queue_id,
+ p_params->vport_id, p_params->sb);
+
+ p_params->stats_id = abs_stats_id;
/* TODO - set tc in the pq_params for multi-cos */
rc = ecore_sp_eth_txq_start_ramrod(p_hwfn,
opaque_fid,
- tx_queue_id,
p_tx_cid->cid,
- vport_id,
- abs_stats_id,
- sb,
- sb_index,
+ p_params,
pbl_addr,
pbl_size,
&pq_params);
diff --git a/drivers/net/qede/base/ecore_l2.h b/drivers/net/qede/base/ecore_l2.h
index c8419a3..9c1bd38 100644
--- a/drivers/net/qede/base/ecore_l2.h
+++ b/drivers/net/qede/base/ecore_l2.h
@@ -40,11 +40,8 @@ ecore_sp_eth_vport_start(struct ecore_hwfn *p_hwfn,
* @param p_hwfn
* @param opaque_fid
* @param cid
- * @param rx_queue_id
- * @param vport_id
- * @param stats_id
- * @param sb
- * @param sb_index
+ * @param p_params [queue_id, vport_id, stats_id, sb, sb_idx, vf_qid]
+ stats_id is absolute packed in p_params.
* @param bd_max_bytes
* @param bd_chain_phys_addr
* @param cqe_pbl_addr
@@ -57,12 +54,7 @@ enum _ecore_status_t
ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
u16 opaque_fid,
u32 cid,
- u16 rx_queue_id,
- u8 vf_rx_queue_id,
- u8 vport_id,
- u8 stats_id,
- u16 sb,
- u8 sb_index,
+ struct ecore_queue_start_common_params *p_params,
u16 bd_max_bytes,
dma_addr_t bd_chain_phys_addr,
dma_addr_t cqe_pbl_addr,
@@ -74,12 +66,8 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
*
* @param p_hwfn
* @param opaque_fid
- * @param tx_queue_id
* @param cid
- * @param vport_id
- * @param stats_id
- * @param sb
- * @param sb_index
+ * @param p_params [queue_id, vport_id,stats_id, sb, sb_idx, vf_qid]
* @param pbl_addr
* @param pbl_size
* @param p_pq_params - parameters for choosing the PQ for this Tx queue
@@ -89,12 +77,8 @@ ecore_sp_eth_rxq_start_ramrod(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t
ecore_sp_eth_txq_start_ramrod(struct ecore_hwfn *p_hwfn,
u16 opaque_fid,
- u16 tx_queue_id,
u32 cid,
- u8 vport_id,
- u8 stats_id,
- u16 sb,
- u8 sb_index,
+ struct ecore_queue_start_common_params *p_params,
dma_addr_t pbl_addr,
u16 pbl_size,
union ecore_qm_pq_params *p_pq_params);
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 447d1fb..326fa45 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -27,6 +27,18 @@ enum ecore_rss_caps {
#define ECORE_RSS_KEY_SIZE 10 /* size in 32b chunks */
#endif
+struct ecore_queue_start_common_params {
+ /* Rx/Tx queue id */
+ u8 queue_id;
+ u8 vport_id;
+
+ /* stats_id is relative or absolute depends on function */
+ u8 stats_id;
+ u16 sb;
+ u16 sb_idx;
+ u16 vf_qid;
+};
+
struct ecore_rss_params {
u8 update_rss_config;
u8 rss_enable;
@@ -154,14 +166,7 @@ ecore_filter_accept_cmd(
*
* @param p_hwfn
* @param opaque_fid
- * @param rx_queue_id RX Queue ID: Zero based, per VPort, allocated
- * by assignment (=rssId)
- * @param vport_id VPort ID
- * @param u8 stats_id VPort ID which the queue stats
- * will be added to
- * @param sb Status Block of the Function Event Ring
- * @param sb_index Index into the status block of the
- * Function Event Ring
+ * @p_params [stats_id is relative, packed in p_params]
* @param bd_max_bytes Maximum bytes that can be placed on a BD
* @param bd_chain_phys_addr Physical address of BDs for receive.
* @param cqe_pbl_addr Physical address of the CQE PBL Table.
@@ -172,18 +177,15 @@ ecore_filter_accept_cmd(
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
- u16 opaque_fid,
- u8 rx_queue_id,
- u8 vport_id,
- u8 stats_id,
- u16 sb,
- u8 sb_index,
- u16 bd_max_bytes,
- dma_addr_t bd_chain_phys_addr,
- dma_addr_t cqe_pbl_addr,
- u16 cqe_pbl_size,
- void OSAL_IOMEM **pp_prod);
+enum _ecore_status_t
+ecore_sp_eth_rx_queue_start(struct ecore_hwfn *p_hwfn,
+ u16 opaque_fid,
+ struct ecore_queue_start_common_params *p_params,
+ u16 bd_max_bytes,
+ dma_addr_t bd_chain_phys_addr,
+ dma_addr_t cqe_pbl_addr,
+ u16 cqe_pbl_size,
+ void OSAL_IOMEM * *pp_prod);
/**
* @brief ecore_sp_eth_rx_queue_stop -
@@ -216,13 +218,7 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
*
* @param p_hwfn
* @param opaque_fid
- * @param tx_queue_id TX Queue ID
- * @param vport_id VPort ID
- * @param u8 stats_id VPort ID which the queue stats
- * will be added to
- * @param sb Status Block of the Function Event Ring
- * @param sb_index Index into the status block of the Function
- * Event Ring
+ * @p_params
* @param tc traffic class to use with this L2 txq
* @param pbl_addr address of the pbl array
* @param pbl_size number of entries in pbl
@@ -232,17 +228,14 @@ ecore_sp_eth_rx_queue_stop(struct ecore_hwfn *p_hwfn,
*
* @return enum _ecore_status_t
*/
-enum _ecore_status_t ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
- u16 opaque_fid,
- u16 tx_queue_id,
- u8 vport_id,
- u8 stats_id,
- u16 sb,
- u8 sb_index,
- u8 tc,
- dma_addr_t pbl_addr,
- u16 pbl_size,
- void OSAL_IOMEM **pp_doorbell);
+enum _ecore_status_t
+ecore_sp_eth_tx_queue_start(struct ecore_hwfn *p_hwfn,
+ u16 opaque_fid,
+ struct ecore_queue_start_common_params *p_params,
+ u8 tc,
+ dma_addr_t pbl_addr,
+ u16 pbl_size,
+ void OSAL_IOMEM * *pp_doorbell);
/**
* @brief ecore_sp_eth_tx_queue_stop -
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index eb3a1e2..b28d728 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -1961,6 +1961,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
+ struct ecore_queue_start_common_params p_params;
struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
u8 status = PFVF_STATUS_NO_RESOURCE;
struct vfpf_start_rxq_tlv *req;
@@ -1968,6 +1969,13 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t rc;
req = &mbx->req_virt->start_rxq;
+ OSAL_MEMSET(&p_params, 0, sizeof(p_params));
+ p_params.queue_id = (u8)vf->vf_queues[req->rx_qid].fw_rx_qid;
+ p_params.vf_qid = req->rx_qid;
+ p_params.vport_id = vf->vport_id;
+ p_params.stats_id = vf->abs_vf_id + 0x10;
+ p_params.sb = req->hw_sb;
+ p_params.sb_idx = req->sb_index;
if (!ecore_iov_validate_rxq(p_hwfn, vf, req->rx_qid) ||
!ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
@@ -1987,12 +1995,7 @@ static void ecore_iov_vf_mbx_start_rxq(struct ecore_hwfn *p_hwfn,
rc = ecore_sp_eth_rxq_start_ramrod(p_hwfn, vf->opaque_fid,
vf->vf_queues[req->rx_qid].fw_cid,
- vf->vf_queues[req->rx_qid].fw_rx_qid,
- (u8)req->rx_qid,
- vf->vport_id,
- vf->abs_vf_id + 0x10,
- req->hw_sb,
- req->sb_index,
+ &p_params,
req->bd_max_bytes,
req->rxq_addr,
req->cqe_pbl_addr,
@@ -2057,6 +2060,7 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
+ struct ecore_queue_start_common_params p_params;
struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
u8 status = PFVF_STATUS_NO_RESOURCE;
union ecore_qm_pq_params pq_params;
@@ -2069,6 +2073,12 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
pq_params.eth.vf_id = vf->relative_vf_id;
req = &mbx->req_virt->start_txq;
+ OSAL_MEMSET(&p_params, 0, sizeof(p_params));
+ p_params.queue_id = (u8)vf->vf_queues[req->tx_qid].fw_tx_qid;
+ p_params.vport_id = vf->vport_id;
+ p_params.stats_id = vf->abs_vf_id + 0x10;
+ p_params.sb = req->hw_sb;
+ p_params.sb_idx = req->sb_index;
if (!ecore_iov_validate_txq(p_hwfn, vf, req->tx_qid) ||
!ecore_iov_validate_sb(p_hwfn, vf, req->hw_sb))
@@ -2077,12 +2087,8 @@ static void ecore_iov_vf_mbx_start_txq(struct ecore_hwfn *p_hwfn,
rc = ecore_sp_eth_txq_start_ramrod(
p_hwfn,
vf->opaque_fid,
- vf->vf_queues[req->tx_qid].fw_tx_qid,
vf->vf_queues[req->tx_qid].fw_cid,
- vf->vport_id,
- vf->abs_vf_id + 0x10,
- req->hw_sb,
- req->sb_index,
+ &p_params,
req->pbl_addr,
req->pbl_size,
&pq_params);
diff --git a/drivers/net/qede/qede_eth_if.c b/drivers/net/qede/qede_eth_if.c
index a19b22e..1ae6127 100644
--- a/drivers/net/qede/qede_eth_if.c
+++ b/drivers/net/qede/qede_eth_if.c
@@ -168,9 +168,9 @@ qed_update_vport(struct ecore_dev *edev, struct qed_update_vport_params *params)
static int
qed_start_rxq(struct ecore_dev *edev,
- uint8_t rss_id, uint8_t rx_queue_id,
- uint8_t vport_id, uint16_t sb,
- uint8_t sb_index, uint16_t bd_max_bytes,
+ uint8_t rss_num,
+ struct ecore_queue_start_common_params *p_params,
+ uint16_t bd_max_bytes,
dma_addr_t bd_chain_phys_addr,
dma_addr_t cqe_pbl_addr,
uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod)
@@ -178,28 +178,28 @@ qed_start_rxq(struct ecore_dev *edev,
struct ecore_hwfn *p_hwfn;
int rc, hwfn_index;
- hwfn_index = rss_id % edev->num_hwfns;
+ hwfn_index = rss_num % edev->num_hwfns;
p_hwfn = &edev->hwfns[hwfn_index];
+ p_params->queue_id = p_params->queue_id / edev->num_hwfns;
+ p_params->stats_id = p_params->vport_id;
+
rc = ecore_sp_eth_rx_queue_start(p_hwfn,
p_hwfn->hw_info.opaque_fid,
- rx_queue_id / edev->num_hwfns,
- vport_id,
- vport_id,
- sb,
- sb_index,
+ p_params,
bd_max_bytes,
bd_chain_phys_addr,
cqe_pbl_addr, cqe_pbl_size, pp_prod);
if (rc) {
- DP_ERR(edev, "Failed to start RXQ#%d\n", rx_queue_id);
+ DP_ERR(edev, "Failed to start RXQ#%d\n", p_params->queue_id);
return rc;
}
DP_VERBOSE(edev, ECORE_MSG_SPQ,
- "Started RX-Q %d [rss %d] on V-PORT %d and SB %d\n",
- rx_queue_id, rss_id, vport_id, sb);
+ "Started RX-Q %d [rss_num %d] on V-PORT %d and SB %d\n",
+ p_params->queue_id, rss_num, p_params->vport_id,
+ p_params->sb);
return 0;
}
@@ -226,36 +226,35 @@ qed_stop_rxq(struct ecore_dev *edev, struct qed_stop_rxq_params *params)
static int
qed_start_txq(struct ecore_dev *edev,
- uint8_t rss_id, uint16_t tx_queue_id,
- uint8_t vport_id, uint16_t sb,
- uint8_t sb_index,
+ uint8_t rss_num,
+ struct ecore_queue_start_common_params *p_params,
dma_addr_t pbl_addr,
uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell)
{
struct ecore_hwfn *p_hwfn;
int rc, hwfn_index;
- hwfn_index = rss_id % edev->num_hwfns;
+ hwfn_index = rss_num % edev->num_hwfns;
p_hwfn = &edev->hwfns[hwfn_index];
+ p_params->queue_id = p_params->queue_id / edev->num_hwfns;
+ p_params->stats_id = p_params->vport_id;
+
rc = ecore_sp_eth_tx_queue_start(p_hwfn,
p_hwfn->hw_info.opaque_fid,
- tx_queue_id / edev->num_hwfns,
- vport_id,
- vport_id,
- sb,
- sb_index,
+ p_params,
0 /* tc */,
pbl_addr, pbl_size, pp_doorbell);
if (rc) {
- DP_ERR(edev, "Failed to start TXQ#%d\n", tx_queue_id);
+ DP_ERR(edev, "Failed to start TXQ#%d\n", p_params->queue_id);
return rc;
}
DP_VERBOSE(edev, ECORE_MSG_SPQ,
- "Started TX-Q %d [rss %d] on V-PORT %d and SB %d\n",
- tx_queue_id, rss_id, vport_id, sb);
+ "Started TX-Q %d [rss_num %d] on V-PORT %d and SB %d\n",
+ p_params->queue_id, rss_num, p_params->vport_id,
+ p_params->sb);
return 0;
}
diff --git a/drivers/net/qede/qede_eth_if.h b/drivers/net/qede/qede_eth_if.h
index 5a7fdc9..33655c3 100644
--- a/drivers/net/qede/qede_eth_if.h
+++ b/drivers/net/qede/qede_eth_if.h
@@ -133,9 +133,9 @@ struct qed_eth_ops {
struct qed_update_vport_params *params);
int (*q_rx_start)(struct ecore_dev *cdev,
- uint8_t rss_id, uint8_t rx_queue_id,
- uint8_t vport_id, uint16_t sb,
- uint8_t sb_index, uint16_t bd_max_bytes,
+ uint8_t rss_num,
+ struct ecore_queue_start_common_params *p_params,
+ uint16_t bd_max_bytes,
dma_addr_t bd_chain_phys_addr,
dma_addr_t cqe_pbl_addr,
uint16_t cqe_pbl_size, void OSAL_IOMEM * *pp_prod);
@@ -144,9 +144,8 @@ struct qed_eth_ops {
struct qed_stop_rxq_params *params);
int (*q_tx_start)(struct ecore_dev *edev,
- uint8_t rss_id, uint16_t tx_queue_id,
- uint8_t vport_id, uint16_t sb,
- uint8_t sb_index,
+ uint8_t rss_num,
+ struct ecore_queue_start_common_params *p_params,
dma_addr_t pbl_addr,
uint16_t pbl_size, void OSAL_IOMEM * *pp_doorbell);
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 54cd849..8f83497 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -572,6 +572,7 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
{
struct qede_dev *qdev = eth_dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
+ struct ecore_queue_start_common_params q_params;
struct qed_update_vport_rss_params *rss_params = &qdev->rss_params;
struct qed_dev_info *qed_info = &qdev->dev_info.common;
struct qed_update_vport_params vport_update_params;
@@ -591,12 +592,15 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
page_cnt = ecore_chain_get_page_cnt(&fp->rxq->
rx_comp_ring);
+ memset(&q_params, 0, sizeof(q_params));
+ q_params.queue_id = i;
+ q_params.vport_id = 0;
+ q_params.sb = fp->sb_info->igu_sb_id;
+ q_params.sb_idx = RX_PI;
+
ecore_sb_ack(fp->sb_info, IGU_INT_DISABLE, 0);
- rc = qdev->ops->q_rx_start(edev, i, fp->rxq->queue_id,
- 0,
- fp->sb_info->igu_sb_id,
- RX_PI,
+ rc = qdev->ops->q_rx_start(edev, i, &q_params,
fp->rxq->rx_buf_size,
fp->rxq->rx_bd_ring.p_phys_addr,
p_phys_table,
@@ -622,11 +626,16 @@ static int qede_start_queues(struct rte_eth_dev *eth_dev, bool clear_stats)
p_phys_table = ecore_chain_get_pbl_phys(&txq->tx_pbl);
page_cnt = ecore_chain_get_page_cnt(&txq->tx_pbl);
- rc = qdev->ops->q_tx_start(edev, i, txq->queue_id,
- 0,
- fp->sb_info->igu_sb_id,
- TX_PI(tc),
- p_phys_table, page_cnt,
+
+ memset(&q_params, 0, sizeof(q_params));
+ q_params.queue_id = txq->queue_id;
+ q_params.vport_id = 0;
+ q_params.sb = fp->sb_info->igu_sb_id;
+ q_params.sb_idx = TX_PI(tc);
+
+ rc = qdev->ops->q_tx_start(edev, i, &q_params,
+ p_phys_table,
+ page_cnt, /* **pp_doorbell */
&txq->doorbell_addr);
if (rc) {
DP_ERR(edev, "Start txq %u failed %d\n",
--
1.8.3.1
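As a condensed view of the new calling convention (assembled from the
qede_rxtx.c hunk above; i, fp, p_phys_table and page_cnt come from that
function, and the name of the producer out-pointer is assumed here for
illustration), a PF-side Rx start now looks like:

	struct ecore_queue_start_common_params q_params;
	void OSAL_IOMEM *rx_prod_addr; /* out-pointer name assumed */

	memset(&q_params, 0, sizeof(q_params));
	q_params.queue_id = i;                 /* Rx queue index */
	q_params.vport_id = 0;
	q_params.sb = fp->sb_info->igu_sb_id;  /* status block id */
	q_params.sb_idx = RX_PI;               /* Rx producer index */

	rc = qdev->ops->q_rx_start(edev, i, &q_params,
				   fp->rxq->rx_buf_size,
				   fp->rxq->rx_bd_ring.p_phys_addr,
				   p_phys_table, page_cnt,
				   &rx_prod_addr);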
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 25/32] net/qede/base: add support to initiate PF FLR
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (23 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 24/32] net/qede/base: change Rx Tx queue start APIs Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 26/32] net/qede: skip slowpath polling for 100G VF device Rasesh Mody
` (7 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
Add support to send a PF FLR request to the management firmware to
bring up the device with a clean slate. This cleanup is necessary in
some corner cases where the device would otherwise be left in a bad
state by its previous operations. The driver sends the PF FLR request
before slowpath initialization.
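For context, this places the FLR in the bring-up path of
ecore_hw_prepare_single() (see the ecore_dev.c hunk below): after chip
access is validated and basic device information is read, the leading
hwfn issues the FLR through the MCP mailbox, and only then is the
HW/SHMEM configuration read and slowpath initialization allowed to
proceed.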
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
doc/guides/nics/qede.rst | 1 -
drivers/net/qede/base/ecore_dev.c | 15 +++++++++++----
drivers/net/qede/base/ecore_mcp.c | 9 +++++++++
drivers/net/qede/base/ecore_mcp.h | 11 +++++++++++
4 files changed, 31 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index c19f499..9d85d8d 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -65,7 +65,6 @@ Non-supported Features
- SR-IOV PF
- Tunneling offloads
-- Reload of the PMD after a non-graceful termination
Supported QLogic Adapters
-------------------------
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index b530173..6060f9e 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2952,13 +2952,14 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
void OSAL_IOMEM *p_doorbells,
struct ecore_hw_prepare_params *p_params)
{
+ struct ecore_dev *p_dev = p_hwfn->p_dev;
enum _ecore_status_t rc = ECORE_SUCCESS;
/* Split PCI bars evenly between hwfns */
p_hwfn->regview = p_regview;
p_hwfn->doorbells = p_doorbells;
- if (IS_VF(p_hwfn->p_dev))
+ if (IS_VF(p_dev))
return ecore_vf_hw_prepare(p_hwfn);
/* Validate that chip access is feasible */
@@ -2982,7 +2983,7 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
/* First hwfn learns basic information, e.g., number of hwfns */
if (!p_hwfn->my_id) {
- rc = ecore_get_dev_info(p_hwfn->p_dev);
+ rc = ecore_get_dev_info(p_dev);
if (rc != ECORE_SUCCESS)
goto err1;
}
@@ -2996,6 +2997,12 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
goto err1;
}
+ if (p_hwfn == ECORE_LEADING_HWFN(p_dev) && !p_dev->recov_in_prog) {
+ rc = ecore_mcp_initiate_pf_flr(p_hwfn, p_hwfn->p_main_ptt);
+ if (rc != ECORE_SUCCESS)
+ DP_NOTICE(p_hwfn, false, "Failed to initiate PF FLR\n");
+ }
+
/* Read the device configuration information from the HW and SHMEM */
rc = ecore_get_hw_info(p_hwfn, p_hwfn->p_main_ptt,
p_params->personality, p_params->drv_resc_alloc);
@@ -3011,7 +3018,7 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
goto err2;
}
#ifndef ASIC_ONLY
- if (CHIP_REV_IS_FPGA(p_hwfn->p_dev)) {
+ if (CHIP_REV_IS_FPGA(p_dev)) {
DP_NOTICE(p_hwfn, false,
"FPGA: workaround; Prevent DMAE parities\n");
ecore_wr(p_hwfn, p_hwfn->p_main_ptt, PCIE_REG_PRTY_MASK, 7);
@@ -3026,7 +3033,7 @@ ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
return rc;
err2:
if (IS_LEAD_HWFN(p_hwfn))
- ecore_iov_free_hw_info(p_hwfn->p_dev);
+ ecore_iov_free_hw_info(p_dev);
ecore_mcp_free(p_hwfn);
err1:
ecore_hw_hwfn_free(p_hwfn);
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 500368e..2ff9715 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -2442,3 +2442,12 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
+
+enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ u32 mcp_resp, mcp_param;
+
+ return ecore_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_INITIATE_PF_FLR,
+ 0, &mcp_resp, &mcp_param);
+}
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index d3103ff..831890c 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -353,4 +353,15 @@ enum _ecore_status_t ecore_mcp_get_resc_info(struct ecore_hwfn *p_hwfn,
struct resource_info *p_resc_info,
u32 *p_mcp_resp, u32 *p_mcp_param);
+/**
+ * @brief - Initiates PF FLR
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return ECORE_SUCCESS upon success.
+ */
+enum _ecore_status_t ecore_mcp_initiate_pf_flr(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
+
#endif /* __ECORE_MCP_H__ */
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 26/32] net/qede: skip slowpath polling for 100G VF device
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (24 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 25/32] net/qede/base: add support to initiate PF FLR Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 27/32] net/qede: fix driver version string Rasesh Mody
` (6 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
There is no need to poll for slowpath events for a VF device,
since the ramrod responses are received synchronously over the
PF-VF backchannel. The fix is therefore to restrict slowpath
polling to PF devices only.
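As an illustrative sketch, the CMT polling that is now armed for PFs
only has roughly this shape (only the callback name, timer_period and
US_PER_S appear in the patch below; the body is an assumption):

static void qede_poll_sp_sb_cb(void *param)
{
	struct rte_eth_dev *eth_dev = param;

	/* Service the slowpath status block of the second engine
	 * (body assumed), then re-arm the periodic poll.
	 */
	rte_eal_alarm_set(timer_period * US_PER_S,
			  qede_poll_sp_sb_cb, (void *)eth_dev);
}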
Fixes: 2af14ca79c0a ("net/qede: support 100G")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/qede_ethdev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index c93b7af..2763d63 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1372,7 +1372,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
* This is required since uio device uses only one MSI-x
* interrupt vector but we need one for each engine.
*/
- if (edev->num_hwfns > 1) {
+ if (edev->num_hwfns > 1 && IS_PF(edev)) {
rc = rte_eal_alarm_set(timer_period * US_PER_S,
qede_poll_sp_sb_cb,
(void *)eth_dev);
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 27/32] net/qede: fix driver version string
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (25 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 26/32] net/qede: skip slowpath polling for 100G VF device Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 28/32] net/qede: fix status block index for VF queues Rasesh Mody
` (5 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
This patch fixes the base driver version display.
The driver version notation is:
<Base-Version_PMD-Version>
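For example, with base firmware 8.10.9.0 and PMD version 1.2.0.1 (both
used elsewhere in this series), the reported string would be
"8.10.9.0_1.2.0.1".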
Fixes: 2ea6f76aff40 ("qede: add core driver")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/qede_ethdev.c | 43 +++++++++++++++++++++---------------------
drivers/net/qede/qede_ethdev.h | 17 ++++++++---------
drivers/net/qede/qede_if.h | 3 +--
drivers/net/qede/qede_main.c | 4 ++--
4 files changed, 32 insertions(+), 35 deletions(-)
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 2763d63..394f89a 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -8,6 +8,7 @@
#include "qede_ethdev.h"
#include <rte_alarm.h>
+#include <rte_version.h>
/* Globals */
static const struct qed_eth_ops *qed_ops;
@@ -188,31 +189,28 @@ static void qede_print_adapter_info(struct qede_dev *qdev)
{
struct ecore_dev *edev = &qdev->edev;
struct qed_dev_info *info = &qdev->dev_info.common;
- static char ver_str[QED_DRV_VER_STR_SIZE];
+ static char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
+ static char ver_str[QEDE_PMD_DRV_VER_STR_SIZE];
DP_INFO(edev, "*********************************\n");
+ DP_INFO(edev, " DPDK version:%s\n", rte_version());
DP_INFO(edev, " Chip details : %s%d\n",
- ECORE_IS_BB(edev) ? "BB" : "AH",
- CHIP_REV_IS_A0(edev) ? 0 : 1);
-
- sprintf(ver_str, "%s %s_%d.%d.%d.%d", QEDE_PMD_VER_PREFIX,
- edev->ver_str, QEDE_PMD_VERSION_MAJOR, QEDE_PMD_VERSION_MINOR,
- QEDE_PMD_VERSION_REVISION, QEDE_PMD_VERSION_PATCH);
- strcpy(qdev->drv_ver, ver_str);
- DP_INFO(edev, " Driver version : %s\n", ver_str);
-
- sprintf(ver_str, "%d.%d.%d.%d", info->fw_major, info->fw_minor,
- info->fw_rev, info->fw_eng);
+ ECORE_IS_BB(edev) ? "BB" : "AH",
+ CHIP_REV_IS_A0(edev) ? 0 : 1);
+ snprintf(ver_str, QEDE_PMD_DRV_VER_STR_SIZE, "%d.%d.%d.%d",
+ info->fw_major, info->fw_minor, info->fw_rev, info->fw_eng);
+ snprintf(drv_ver, QEDE_PMD_DRV_VER_STR_SIZE, "%s_%s",
+ ver_str, QEDE_PMD_VERSION);
+ DP_INFO(edev, " Driver version : %s\n", drv_ver);
DP_INFO(edev, " Firmware version : %s\n", ver_str);
- sprintf(ver_str, "%d.%d.%d.%d",
+ snprintf(ver_str, MCP_DRV_VER_STR_SIZE,
+ "%d.%d.%d.%d",
(info->mfw_rev >> 24) & 0xff,
(info->mfw_rev >> 16) & 0xff,
(info->mfw_rev >> 8) & 0xff, (info->mfw_rev) & 0xff);
- DP_INFO(edev, " Management firmware version : %s\n", ver_str);
-
+ DP_INFO(edev, " Management Firmware version : %s\n", ver_str);
DP_INFO(edev, " Firmware file : %s\n", fw_file);
-
DP_INFO(edev, "*********************************\n");
}
@@ -1362,11 +1360,12 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
/* Start the Slowpath-process */
memset(¶ms, 0, sizeof(struct qed_slowpath_params));
params.int_mode = ECORE_INT_MODE_MSIX;
- params.drv_major = QEDE_MAJOR_VERSION;
- params.drv_minor = QEDE_MINOR_VERSION;
- params.drv_rev = QEDE_REVISION_VERSION;
- params.drv_eng = QEDE_ENGINEERING_VERSION;
- strncpy((char *)params.name, "qede LAN", QED_DRV_VER_STR_SIZE);
+ params.drv_major = QEDE_PMD_VERSION_MAJOR;
+ params.drv_minor = QEDE_PMD_VERSION_MINOR;
+ params.drv_rev = QEDE_PMD_VERSION_REVISION;
+ params.drv_eng = QEDE_PMD_VERSION_PATCH;
+ strncpy((char *)params.name, QEDE_PMD_VER_PREFIX,
+ QEDE_PMD_DRV_VER_STR_SIZE);
/* For CMT mode device do periodic polling for slowpath events.
* This is required since uio device uses only one MSI-x
@@ -1403,7 +1402,7 @@ static int qede_common_dev_init(struct rte_eth_dev *eth_dev, bool is_vf)
qede_alloc_etherdev(adapter, &dev_info);
- adapter->ops->common->set_id(edev, edev->name, QEDE_DRV_MODULE_VERSION);
+ adapter->ops->common->set_id(edev, edev->name, QEDE_PMD_VERSION);
if (!is_vf)
adapter->dev_info.num_mac_addrs =
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 5838f33..c3b87e8 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -46,15 +46,14 @@
#define QEDE_PMD_VERSION_REVISION 0
#define QEDE_PMD_VERSION_PATCH 1
-#define QEDE_MAJOR_VERSION 8
-#define QEDE_MINOR_VERSION 7
-#define QEDE_REVISION_VERSION 9
-#define QEDE_ENGINEERING_VERSION 0
+#define QEDE_PMD_VERSION qede_stringify(QEDE_PMD_VERSION_MAJOR) "." \
+ qede_stringify(QEDE_PMD_VERSION_MINOR) "." \
+ qede_stringify(QEDE_PMD_VERSION_REVISION) "." \
+ qede_stringify(QEDE_PMD_VERSION_PATCH)
+
+#define QEDE_PMD_DRV_VER_STR_SIZE NAME_SIZE
+#define QEDE_PMD_VER_PREFIX "QEDE PMD"
-#define QEDE_DRV_MODULE_VERSION qede_stringify(QEDE_MAJOR_VERSION) "." \
- qede_stringify(QEDE_MINOR_VERSION) "." \
- qede_stringify(QEDE_REVISION_VERSION) "." \
- qede_stringify(QEDE_ENGINEERING_VERSION)
#define QEDE_RSS_INDIR_INITED (1 << 0)
#define QEDE_RSS_KEY_INITED (1 << 1)
@@ -144,7 +143,7 @@ struct qede_dev {
bool accept_any_vlan;
struct ether_addr primary_mac;
bool handle_hw_err;
- char drv_ver[QED_DRV_VER_STR_SIZE];
+ char drv_ver[QEDE_PMD_DRV_VER_STR_SIZE];
};
/* Static functions */
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 935eed8..2d38b1b 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -76,14 +76,13 @@ struct qed_link_output {
uint32_t pause_config;
};
-#define QED_DRV_VER_STR_SIZE 80
struct qed_slowpath_params {
uint32_t int_mode;
uint8_t drv_major;
uint8_t drv_minor;
uint8_t drv_rev;
uint8_t drv_eng;
- uint8_t name[QED_DRV_VER_STR_SIZE];
+ uint8_t name[NAME_SIZE];
};
#define ILT_PAGE_SIZE_TCFC 0x8000 /* 32KB */
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index c83893d..0483116 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -426,7 +426,7 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
static void
qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
- const char ver_str[VER_SIZE])
+ const char ver_str[NAME_SIZE])
{
int i;
@@ -434,7 +434,7 @@ qed_set_id(struct ecore_dev *edev, char name[NAME_SIZE],
for_each_hwfn(edev, i) {
snprintf(edev->hwfns[i].name, NAME_SIZE, "%s-%d", name, i);
}
- rte_memcpy(edev->ver_str, ver_str, VER_SIZE);
+ memcpy(edev->ver_str, ver_str, NAME_SIZE);
edev->drv_type = DRV_ID_DRV_TYPE_LINUX;
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 28/32] net/qede: fix status block index for VF queues
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (26 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 27/32] net/qede: fix driver version string Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 29/32] net/qede: add support for queue statistics Rasesh Mody
` (4 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Harish Patil
From: Harish Patil <harish.patil@qlogic.com>
o Fix the fastpath status block index such that each queue pair shares
the same index value.
o Add an ecore_vf_get_num_sbs() API that returns the number of status
blocks assigned by the PF. Use it to decide how many VF queues can be
advertised. Additionally, restrict the maximum number of VF queues to
16 for the 100G VF case.
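As a concrete (hypothetical) example of the i % num_sbs mapping in the
patch: a VF granted 4 status blocks but running 8 fastpath queues would
map queues 0-7 onto status blocks 0, 1, 2, 3, 0, 1, 2, 3, so the Rx/Tx
pair of each queue shares a status block instead of indexing past the
allocation.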
Fixes: 2ea6f76aff40 ("qede: add core driver")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/base/ecore_vf.c | 11 ++++++++++-
drivers/net/qede/base/ecore_vf_api.h | 3 +++
drivers/net/qede/qede_ethdev.h | 1 +
drivers/net/qede/qede_main.c | 13 ++++++++++++-
drivers/net/qede/qede_rxtx.c | 15 ++++++++++++++-
5 files changed, 40 insertions(+), 3 deletions(-)
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index 634e2bb..be8b1ec 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -336,7 +336,7 @@ static enum _ecore_status_t ecore_vf_pf_acquire(struct ecore_hwfn *p_hwfn)
/* Learn of the possibility of CMT */
if (IS_LEAD_HWFN(p_hwfn)) {
if (resp->pfdev_info.capabilities & PFVF_ACQUIRE_CAP_100G) {
- DP_NOTICE(p_hwfn, false, "100g VF\n");
+ DP_INFO(p_hwfn, "100g VF\n");
p_hwfn->p_dev->num_hwfns = 2;
}
}
@@ -1382,6 +1382,15 @@ void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn,
*num_mac = p_vf->acquire_resp.resc.num_mac_filters;
}
+void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn,
+ u32 *num_sbs)
+{
+ struct ecore_vf_iov *p_vf;
+
+ p_vf = p_hwfn->vf_iov_info;
+ *num_sbs = (u32)p_vf->acquire_resp.resc.num_sbs;
+}
+
bool ecore_vf_check_mac(struct ecore_hwfn *p_hwfn, u8 *mac)
{
struct ecore_bulletin_content *bulletin;
diff --git a/drivers/net/qede/base/ecore_vf_api.h b/drivers/net/qede/base/ecore_vf_api.h
index c73244f..571fd37 100644
--- a/drivers/net/qede/base/ecore_vf_api.h
+++ b/drivers/net/qede/base/ecore_vf_api.h
@@ -87,6 +87,9 @@ void ecore_vf_get_num_vlan_filters(struct ecore_hwfn *p_hwfn,
void ecore_vf_get_num_mac_filters(struct ecore_hwfn *p_hwfn,
u32 *num_mac_filters);
+void ecore_vf_get_num_sbs(struct ecore_hwfn *p_hwfn,
+ u32 *num_sbs);
+
/**
* @brief Check if VF can set a MAC address
*
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index c3b87e8..7b235b1 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -29,6 +29,7 @@
#include "base/ecore_hsi_eth.h"
#include "base/ecore_dev_api.h"
#include "base/ecore_iov_api.h"
+#include "base/ecore_cxt.h"
#include "qede_logs.h"
#include "qede_if.h"
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 0483116..4f1e3b0 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -384,6 +384,7 @@ int
qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
{
struct qede_dev *qdev = (struct qede_dev *)edev;
+ uint8_t queues = 0;
int i;
memset(info, 0, sizeof(*info));
@@ -407,7 +408,17 @@ qed_fill_eth_dev_info(struct ecore_dev *edev, struct qed_dev_eth_info *info)
rte_memcpy(&info->port_mac, &edev->hwfns[0].hw_info.hw_mac_addr,
ETHER_ADDR_LEN);
} else {
- ecore_vf_get_num_rxqs(&edev->hwfns[0], &info->num_queues);
+ ecore_vf_get_num_rxqs(ECORE_LEADING_HWFN(edev),
+ &info->num_queues);
+ if (edev->num_hwfns > 1) {
+ ecore_vf_get_num_rxqs(&edev->hwfns[1], &queues);
+ info->num_queues += queues;
+ /* Restrict 100G VF to advertise 16 queues till the
+ * required support is available to go beyond 16.
+ */
+ info->num_queues = RTE_MIN(info->num_queues,
+ ECORE_MAX_VF_CHAINS_PER_PF);
+ }
ecore_vf_get_num_vlan_filters(&edev->hwfns[0],
(u8 *)&info->num_vlan_filters);
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 8f83497..bfcb16c 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -435,9 +435,22 @@ qede_alloc_mem_sb(struct qede_dev *qdev, struct ecore_sb_info *sb_info,
int qede_alloc_fp_resc(struct qede_dev *qdev)
{
+ struct ecore_dev *edev = &qdev->edev;
struct qede_fastpath *fp;
+ uint32_t num_sbs;
int rc, i;
+ if (IS_VF(edev))
+ ecore_vf_get_num_sbs(ECORE_LEADING_HWFN(edev), &num_sbs);
+ else
+ num_sbs = (ecore_cxt_get_proto_cid_count
+ (ECORE_LEADING_HWFN(edev), PROTOCOLID_ETH, NULL)) / 2;
+
+ if (num_sbs == 0) {
+ DP_ERR(edev, "No status blocks available\n");
+ return -EINVAL;
+ }
+
if (qdev->fp_array)
qede_free_fp_arrays(qdev);
@@ -449,7 +462,7 @@ int qede_alloc_fp_resc(struct qede_dev *qdev)
for (i = 0; i < QEDE_QUEUE_CNT(qdev); i++) {
fp = &qdev->fp_array[i];
- if (qede_alloc_mem_sb(qdev, fp->sb_info, i)) {
+ if (qede_alloc_mem_sb(qdev, fp->sb_info, i % num_sbs)) {
qede_free_fp_arrays(qdev);
return -ENOMEM;
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 29/32] net/qede: add support for queue statistics
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (27 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 28/32] net/qede: fix status block index for VF queues Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 30/32] net/qede: remove zlib dependency and enable PMD by default Rasesh Mody
` (3 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
This patch adds support for pulling per-queue statistics.
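As a usage sketch from the application side (the port id and the helper
name are assumptions; only the standard ethdev xstats API is used), the
new per-queue entries such as "rx_q0_segments" can be read like any
other xstat:

#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_ethdev.h>

static void
print_port_xstats(uint8_t port_id)
{
	struct rte_eth_xstat_name *names;
	struct rte_eth_xstat *values;
	int i, n;

	/* First call with a NULL array to learn how many xstats exist. */
	n = rte_eth_xstats_get_names(port_id, NULL, 0);
	if (n <= 0)
		return;

	names = calloc(n, sizeof(*names));
	values = calloc(n, sizeof(*values));
	if (names != NULL && values != NULL &&
	    rte_eth_xstats_get_names(port_id, names, n) == n &&
	    rte_eth_xstats_get(port_id, values, n) == n) {
		/* Per-queue entries show up as e.g. "rx_q0_segments". */
		for (i = 0; i < n; i++)
			printf("%s: %" PRIu64 "\n",
			       names[i].name, values[i].value);
	}
	free(names);
	free(values);
}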
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/qede_ethdev.c | 98 +++++++++++++++++++++++++++++++++++++-----
drivers/net/qede/qede_ethdev.h | 3 ++
drivers/net/qede/qede_rxtx.c | 23 +++++-----
drivers/net/qede/qede_rxtx.h | 4 +-
4 files changed, 107 insertions(+), 21 deletions(-)
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 394f89a..b91b478 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -160,6 +160,15 @@ static const struct rte_qede_xstats_name_off qede_xstats_strings[] = {
offsetof(struct ecore_eth_stats, tpa_coalesced_bytes)},
};
+static const struct rte_qede_xstats_name_off qede_rxq_xstats_strings[] = {
+ {"rx_q_segments",
+ offsetof(struct qede_rx_queue, rx_segs)},
+ {"rx_q_hw_errors",
+ offsetof(struct qede_rx_queue, rx_hw_errors)},
+ {"rx_q_allocation_errors",
+ offsetof(struct qede_rx_queue, rx_alloc_errors)}
+};
+
static void qede_interrupt_action(struct ecore_hwfn *p_hwfn)
{
ecore_int_sp_dpc((osal_int_ptr_t)(p_hwfn));
@@ -828,6 +837,8 @@ qede_get_stats(struct rte_eth_dev *eth_dev, struct rte_eth_stats *eth_stats)
struct qede_dev *qdev = eth_dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
struct ecore_eth_stats stats;
+ unsigned int i = 0, j = 0, qid;
+ struct qede_tx_queue *txq;
qdev->ops->get_vport_stats(edev, &stats);
@@ -858,20 +869,73 @@ qede_get_stats(struct rte_eth_dev *eth_dev, struct rte_eth_stats *eth_stats)
stats.tx_mcast_bytes + stats.tx_bcast_bytes;
eth_stats->oerrors = stats.tx_err_drop_pkts;
+
+ /* Queue stats */
+ for (qid = 0; qid < QEDE_QUEUE_CNT(qdev); qid++) {
+ if (qdev->fp_array[qid].type & QEDE_FASTPATH_RX) {
+ eth_stats->q_ipackets[i] =
+ *(uint64_t *)(
+ ((char *)(qdev->fp_array[(qid)].rxq)) +
+ offsetof(struct qede_rx_queue,
+ rcv_pkts));
+ eth_stats->q_errors[i] =
+ *(uint64_t *)(
+ ((char *)(qdev->fp_array[(qid)].rxq)) +
+ offsetof(struct qede_rx_queue,
+ rx_hw_errors)) +
+ *(uint64_t *)(
+ ((char *)(qdev->fp_array[(qid)].rxq)) +
+ offsetof(struct qede_rx_queue,
+ rx_alloc_errors));
+ i++;
+ }
+
+ if (qdev->fp_array[qid].type & QEDE_FASTPATH_TX) {
+ txq = qdev->fp_array[(qid)].txqs[0];
+ eth_stats->q_opackets[j] =
+ *((uint64_t *)(uintptr_t)
+ (((uint64_t)(uintptr_t)(txq)) +
+ offsetof(struct qede_tx_queue,
+ xmit_pkts)));
+ j++;
+ }
+ }
+}
+
+static unsigned
+qede_get_xstats_count(struct qede_dev *qdev) {
+ return RTE_DIM(qede_xstats_strings) +
+ (RTE_DIM(qede_rxq_xstats_strings) * QEDE_RSS_COUNT(qdev));
}
static int
qede_get_xstats_names(__rte_unused struct rte_eth_dev *dev,
struct rte_eth_xstat_name *xstats_names, unsigned limit)
{
- unsigned int i, stat_cnt = RTE_DIM(qede_xstats_strings);
+ struct qede_dev *qdev = dev->data->dev_private;
+ const unsigned int stat_cnt = qede_get_xstats_count(qdev);
+ unsigned int i, qid, stat_idx = 0;
- if (xstats_names != NULL)
- for (i = 0; i < stat_cnt; i++)
- snprintf(xstats_names[i].name,
- sizeof(xstats_names[i].name),
+ if (xstats_names != NULL) {
+ for (i = 0; i < RTE_DIM(qede_xstats_strings); i++) {
+ snprintf(xstats_names[stat_idx].name,
+ sizeof(xstats_names[stat_idx].name),
"%s",
qede_xstats_strings[i].name);
+ stat_idx++;
+ }
+
+ for (qid = 0; qid < QEDE_RSS_COUNT(qdev); qid++) {
+ for (i = 0; i < RTE_DIM(qede_rxq_xstats_strings); i++) {
+ snprintf(xstats_names[stat_idx].name,
+ sizeof(xstats_names[stat_idx].name),
+ "%.4s%d%s",
+ qede_rxq_xstats_strings[i].name, qid,
+ qede_rxq_xstats_strings[i].name + 4);
+ stat_idx++;
+ }
+ }
+ }
return stat_cnt;
}
@@ -883,18 +947,32 @@ qede_get_xstats(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
struct qede_dev *qdev = dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
struct ecore_eth_stats stats;
- unsigned int num = RTE_DIM(qede_xstats_strings);
+ const unsigned int num = qede_get_xstats_count(qdev);
+ unsigned int i, qid, stat_idx = 0;
if (n < num)
return num;
qdev->ops->get_vport_stats(edev, &stats);
- for (num = 0; num < n; num++)
- xstats[num].value = *(u64 *)(((char *)&stats) +
- qede_xstats_strings[num].offset);
+ for (i = 0; i < RTE_DIM(qede_xstats_strings); i++) {
+ xstats[stat_idx].value = *(uint64_t *)(((char *)&stats) +
+ qede_xstats_strings[i].offset);
+ stat_idx++;
+ }
+
+ for (qid = 0; qid < QEDE_QUEUE_CNT(qdev); qid++) {
+ if (qdev->fp_array[qid].type & QEDE_FASTPATH_RX) {
+ for (i = 0; i < RTE_DIM(qede_rxq_xstats_strings); i++) {
+ xstats[stat_idx].value = *(uint64_t *)(
+ ((char *)(qdev->fp_array[(qid)].rxq)) +
+ qede_rxq_xstats_strings[i].offset);
+ stat_idx++;
+ }
+ }
+ }
- return num;
+ return stat_idx;
}
static void
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 7b235b1..5ab36a5 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -69,6 +69,9 @@
#define QEDE_TSS_COUNT(qdev) (((qdev)->num_queues - (qdev)->fp_num_rx) * \
(qdev)->num_tc)
+#define QEDE_FASTPATH_TX (1 << 0)
+#define QEDE_FASTPATH_RX (1 << 1)
+
#define QEDE_DUPLEX_FULL 1
#define QEDE_DUPLEX_HALF 2
#define QEDE_DUPLEX_UNKNOWN 0xff
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index bfcb16c..2e181c8 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -10,9 +10,6 @@
static bool gro_disable = 1; /* mod_param */
-#define QEDE_FASTPATH_TX (1 << 0)
-#define QEDE_FASTPATH_RX (1 << 1)
-
static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
{
struct rte_mbuf *new_mb = NULL;
@@ -838,7 +835,7 @@ static inline uint32_t qede_rx_cqe_to_pkt_type(uint16_t flags)
}
int qede_process_sg_pkts(void *p_rxq, struct rte_mbuf *rx_mb,
- int num_frags, uint16_t pkt_len)
+ int num_segs, uint16_t pkt_len)
{
struct qede_rx_queue *rxq = p_rxq;
struct qede_dev *qdev = rxq->qdev;
@@ -849,13 +846,13 @@ int qede_process_sg_pkts(void *p_rxq, struct rte_mbuf *rx_mb,
register struct rte_mbuf *seg2 = NULL;
seg1 = rx_mb;
- while (num_frags) {
+ while (num_segs) {
cur_size = pkt_len > rxq->rx_buf_size ?
rxq->rx_buf_size : pkt_len;
if (!cur_size) {
PMD_RX_LOG(DEBUG, rxq,
"SG packet, len and num BD mismatch\n");
- qede_recycle_rx_bd_ring(rxq, qdev, num_frags);
+ qede_recycle_rx_bd_ring(rxq, qdev, num_segs);
return -EINVAL;
}
@@ -876,7 +873,8 @@ int qede_process_sg_pkts(void *p_rxq, struct rte_mbuf *rx_mb,
seg2->data_len = cur_size;
seg1->next = seg2;
seg1 = seg1->next;
- num_frags--;
+ num_segs--;
+ rxq->rx_segs++;
continue;
}
seg1 = NULL;
@@ -904,7 +902,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
register struct rte_mbuf *seg1 = NULL;
enum eth_rx_cqe_type cqe_type;
uint16_t len, pad, preload_idx, pkt_len, parse_flag;
- uint8_t csum_flag, num_frags;
+ uint8_t csum_flag, num_segs;
enum rss_hash_type htype;
int ret;
@@ -978,11 +976,13 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (fp_cqe->bd_num > 1) {
pkt_len = rte_le_to_cpu_16(fp_cqe->pkt_len);
- num_frags = fp_cqe->bd_num - 1;
+ num_segs = fp_cqe->bd_num - 1;
+
+ rxq->rx_segs++;
pkt_len -= len;
seg1 = rx_mb;
- ret = qede_process_sg_pkts(p_rxq, seg1, num_frags,
+ ret = qede_process_sg_pkts(p_rxq, seg1, num_segs,
pkt_len);
if (ret != ECORE_SUCCESS) {
qede_recycle_rx_bd_ring(rxq, qdev,
@@ -1045,6 +1045,8 @@ next_cqe:
qede_update_rx_prod(qdev, rxq);
+ rxq->rcv_pkts += rx_pkt;
+
PMD_RX_LOG(DEBUG, rxq, "rx_pkts=%u core=%d\n", rx_pkt, rte_lcore_id());
return rx_pkt;
@@ -1237,6 +1239,7 @@ qede_xmit_pkts(void *p_txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
bd_prod =
rte_cpu_to_le_16(ecore_chain_get_prod_idx(&txq->tx_pbl));
nb_pkt_sent++;
+ txq->xmit_pkts++;
}
/* Write value of prod idx into bd_prod */
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index da47b21..ed9a529 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -99,6 +99,8 @@ struct qede_rx_queue {
uint16_t queue_id;
uint16_t port_id;
uint16_t rx_buf_size;
+ uint64_t rcv_pkts;
+ uint64_t rx_segs;
uint64_t rx_hw_errors;
uint64_t rx_alloc_errors;
struct qede_dev *qdev;
@@ -130,7 +132,7 @@ struct qede_tx_queue {
void OSAL_IOMEM *doorbell_addr;
volatile union db_prod tx_db;
uint16_t port_id;
- uint64_t txq_counter;
+ uint64_t xmit_pkts;
struct qede_dev *qdev;
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 30/32] net/qede: remove zlib dependency and enable PMD by default
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (28 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 29/32] net/qede: add support for queue statistics Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 31/32] doc: update qede pmd documentation Rasesh Mody
` (2 subsequent siblings)
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
The QEDE PMD now uses an unzipped firmware file, eliminating the
dependency on zlib. Hence remove the LDLIBS entry from the Makefile and
enable the qede PMD by default.
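In practice this undefines CONFIG_ECORE_ZIPPED_FW (see the ecore.h hunk
below), so the zipped-firmware decompression path, together with the
zlib.h include that the define guards in ecore.h, simply compiles out.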
Fixes: 6adac0bf30b3 ("qede: add missing external dependency and disable by default")
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
config/common_base | 2 +-
doc/guides/nics/qede.rst | 5 +----
drivers/net/qede/Makefile | 2 --
drivers/net/qede/base/ecore.h | 2 +-
drivers/net/qede/qede_main.c | 2 +-
mk/rte.app.mk | 2 +-
6 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/config/common_base b/config/common_base
index c7fd3db..750577d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -317,7 +317,7 @@ CONFIG_RTE_LIBRTE_BOND_DEBUG_ALB_L1=n
# QLogic 25G/40G/100G PMD
#
-CONFIG_RTE_LIBRTE_QEDE_PMD=n
+CONFIG_RTE_LIBRTE_QEDE_PMD=y
CONFIG_RTE_LIBRTE_QEDE_DEBUG_INIT=n
CONFIG_RTE_LIBRTE_QEDE_DEBUG_INFO=n
CONFIG_RTE_LIBRTE_QEDE_DEBUG_DRIVER=n
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 9d85d8d..902e82b 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -77,14 +77,11 @@ Prerequisites
- Requires firmware version **8.10.x.** and management firmware
version **8.10.x or higher**. Firmware may be available
inbox in certain newer Linux distros under the standard directory
- ``E.g. /lib/firmware/qed/qed_init_values_zipped-8.10.9.0.bin``
+ ``E.g. /lib/firmware/qed/qed_init_values-8.10.9.0.bin``
- If the required firmware files are not available then visit
`QLogic Driver Download Center <http://driverdownloads.qlogic.com>`_.
-- This driver relies on external zlib library (-lz) for uncompressing
- the firmware file.
-
Performance note
~~~~~~~~~~~~~~~~
diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index 7965a83..39751e4 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -14,8 +14,6 @@ LIB = librte_pmd_qede.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
-LDLIBS += -lz
-
EXPORT_MAP := rte_pmd_qede_version.map
LIBABIVER := 1
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 89e2bd0..907b35b 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -15,7 +15,7 @@
#include <unistd.h>
#define CONFIG_ECORE_BINARY_FW
-#define CONFIG_ECORE_ZIPPED_FW
+#undef CONFIG_ECORE_ZIPPED_FW
#ifdef CONFIG_ECORE_ZIPPED_FW
#include <zlib.h>
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 4f1e3b0..2d354e1 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -21,7 +21,7 @@ static uint8_t npar_tx_switching = 1;
char fw_file[PATH_MAX];
const char *QEDE_DEFAULT_FIRMWARE =
- "/lib/firmware/qed/qed_init_values_zipped-8.10.9.0.bin";
+ "/lib/firmware/qed/qed_init_values-8.10.9.0.bin";
static void
qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 72c2fe7..2b95b75 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -121,7 +121,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MPIPE_PMD) += -lrte_pmd_mpipe -lgxio
_LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp -lm
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap
-_LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede -lz
+_LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring
_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SZEDATA2) += -lrte_pmd_szedata2 -lsze2
_LDLIBS-$(CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD) += -lrte_pmd_thunderx_nicvf -lm
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 31/32] doc: update qede pmd documentation
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (29 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 30/32] net/qede: remove zlib dependency and enable PMD by default Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 32/32] net/qede: update driver version Rasesh Mody
2016-10-24 13:41 ` [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Bruce Richardson
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
doc/guides/nics/qede.rst | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/qede.rst b/doc/guides/nics/qede.rst
index 902e82b..b6f54fd 100644
--- a/doc/guides/nics/qede.rst
+++ b/doc/guides/nics/qede.rst
@@ -65,6 +65,8 @@ Non-supported Features
- SR-IOV PF
- Tunneling offloads
+- LRO/TSO
+- NPAR
Supported QLogic Adapters
-------------------------
@@ -239,7 +241,7 @@ SR-IOV: Prerequisites and Sample Application Notes
This section provides instructions to configure SR-IOV with Linux OS.
-**Note**: librte_pmd_qede will be used to bind to SR-IOV VF device and Linux native kernel driver (QEDE) will function as SR-IOV PF driver.
+**Note**: librte_pmd_qede will be used to bind to SR-IOV VF device and Linux native kernel driver (QEDE) will function as SR-IOV PF driver. Requires PF driver to be 8.10.x.x or higher.
#. Verify SR-IOV and ARI capability is enabled on the adapter using ``lspci``:
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v4 32/32] net/qede: update driver version
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (30 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 31/32] doc: update qede pmd documentation Rasesh Mody
@ 2016-10-19 4:11 ` Rasesh Mody
2016-10-24 13:41 ` [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Bruce Richardson
32 siblings, 0 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-19 4:11 UTC (permalink / raw)
To: ferruh.yigit, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev, Rasesh Mody
This patch updates the qede pmd version to 1.2.0.1.
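With the QEDE_PMD_VERSION macro introduced in patch 27, qede_stringify()
turns these numbers into the string "1.2.0.1", which is the version
string the driver now reports (e.g. via qed_set_id()).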
Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
---
drivers/net/qede/qede_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 5ab36a5..5eb3f52 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -43,7 +43,7 @@
/* Driver versions */
#define QEDE_PMD_VER_PREFIX "QEDE PMD"
#define QEDE_PMD_VERSION_MAJOR 1
-#define QEDE_PMD_VERSION_MINOR 1
+#define QEDE_PMD_VERSION_MINOR 2
#define QEDE_PMD_VERSION_REVISION 0
#define QEDE_PMD_VERSION_PATCH 1
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 04/32] net/qede/base: add HSI changes and register defines
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 04/32] net/qede/base: add HSI changes and register defines Rasesh Mody
@ 2016-10-19 12:37 ` Ferruh Yigit
2016-10-19 13:46 ` Mody, Rasesh
0 siblings, 1 reply; 59+ messages in thread
From: Ferruh Yigit @ 2016-10-19 12:37 UTC (permalink / raw)
To: Rasesh Mody, thomas.monjalon, bruce.richardson; +Cc: dev, Dept-EngDPDKDev
On 10/19/2016 5:11 AM, Rasesh Mody wrote:
> - add the hardware software interface(HSI) changes
> - add register definitions
>
> These will be required for 8.10.9.0 FW upgrade.
>
> Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
<...>
> /*
> * Igu cleanup bit values to distinguish between clean or producer consumer
> */
> @@ -1376,7 +1572,7 @@ struct dmae_cmd {
> __le32 src_addr_hi;
> __le32 dst_addr_lo;
> __le32 dst_addr_hi;
> - __le16 length /* Length in DW */;
> + __le16 length_dw /* Length in DW */;
This cause a compilation error [1] in patch by patch compilation, when
debug is enabled. Fix seems straightforward [2].
Do you confirm the fix?
If you confirm, you don't need to send a new version, fix can be applied
while getting the patch.
Thanks,
ferruh
[1]:
====
In file included from .../drivers/net/qede/base/ecore.h:36:0,
from .../drivers/net/qede/base/ecore_hw.c:12:
.../drivers/net/qede/base/ecore_hw.c: In function ‘ecore_dmae_post_command’:
.../drivers/net/qede/base/ecore_hw.c:478:20: error: ‘struct dmae_cmd’
has no member named ‘length’; did you mean ‘length_dw’?
(int)p_command->length,
^
.../drivers/net/qede/base/../qede_logs.h:47:11: note: in definition of
macro ‘DP_VERBOSE’
##__VA_ARGS__); \
^~~~~~~~~~~
.../mk/internal/rte.compile-pre.mk:138: recipe for target
'base/ecore_hw.o' failed
[2]:
====
diff --git a/drivers/net/qede/base/ecore_hw.c
b/drivers/net/qede/base/ecore_hw.c
index 72bc6de..5412ed1 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -475,7 +475,7 @@ ecore_dmae_post_command(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
"len=0x%x src=0x%x:%x dst=0x%x:%x\n",
idx_cmd, (u32)p_command->opcode,
(u16)p_command->opcode_b,
- (int)p_command->length,
+ (int)p_command->length_dw,
(int)p_command->src_addr_hi,
(int)p_command->src_addr_lo,
(int)p_command->dst_addr_hi,
(int)p_command->dst_addr_lo);
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 04/32] net/qede/base: add HSI changes and register defines
2016-10-19 12:37 ` Ferruh Yigit
@ 2016-10-19 13:46 ` Mody, Rasesh
0 siblings, 0 replies; 59+ messages in thread
From: Mody, Rasesh @ 2016-10-19 13:46 UTC (permalink / raw)
To: Ferruh Yigit, Rasesh Mody, thomas.monjalon, bruce.richardson
Cc: dev, Dept-EngDPDKDev
Hi Ferruh,
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ferruh Yigit
> Sent: Wednesday, October 19, 2016 5:38 AM
> To: Rasesh Mody <rasesh.mody@qlogic.com>;
> thomas.monjalon@6wind.com; bruce.richardson@intel.com
> Cc: dev@dpdk.org; Dept-EngDPDKDev@qlogic.com
> Subject: Re: [dpdk-dev] [PATCH v4 04/32] net/qede/base: add HSI changes
> and register defines
>
> On 10/19/2016 5:11 AM, Rasesh Mody wrote:
> > - add the hardware software interface(HSI) changes
> > - add register definitions
> >
> > These will be required for 8.10.9.0 FW upgrade.
> >
> > Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
>
> <...>
>
> > /*
> > * Igu cleanup bit values to distinguish between clean or producer
> consumer
> > */
> > @@ -1376,7 +1572,7 @@ struct dmae_cmd {
> > __le32 src_addr_hi;
> > __le32 dst_addr_lo;
> > __le32 dst_addr_hi;
> > - __le16 length /* Length in DW */;
> > + __le16 length_dw /* Length in DW */;
>
> This cause a compilation error [1] in patch by patch compilation, when debug
> is enabled. Fix seems straightforward [2].
>
> Do you confirm the fix?
> If you confirm, you don't need to send a new version, fix can be applied
> while getting the patch.
Please apply the fix [2] while getting the patch. The error [1] is due to the debug option not being enabled for individual patch compilation. We did test after applying the entire series with all the config options enabled.
For our upcoming submissions, we'll take care of enabling the debug option when compile-testing all the patches individually.
Thanks!
-Rasesh
>
> Thanks,
> ferruh
>
>
> [1]:
> ====
> In file included from .../drivers/net/qede/base/ecore.h:36:0,
> from .../drivers/net/qede/base/ecore_hw.c:12:
> .../drivers/net/qede/base/ecore_hw.c: In function
> 'ecore_dmae_post_command':
> .../drivers/net/qede/base/ecore_hw.c:478:20: error: 'struct dmae_cmd'
> has no member named 'length'; did you mean 'length_dw'?
> (int)p_command->length,
> ^
> .../drivers/net/qede/base/../qede_logs.h:47:11: note: in definition of macro
> 'DP_VERBOSE'
> ##__VA_ARGS__); \
> ^~~~~~~~~~~
> .../mk/internal/rte.compile-pre.mk:138: recipe for target 'base/ecore_hw.o'
> failed
>
>
> [2]:
> ====
> diff --git a/drivers/net/qede/base/ecore_hw.c
> b/drivers/net/qede/base/ecore_hw.c
> index 72bc6de..5412ed1 100644
> --- a/drivers/net/qede/base/ecore_hw.c
> +++ b/drivers/net/qede/base/ecore_hw.c
> @@ -475,7 +475,7 @@ ecore_dmae_post_command(struct ecore_hwfn
> *p_hwfn, struct ecore_ptt *p_ptt)
> "len=0x%x src=0x%x:%x dst=0x%x:%x\n",
> idx_cmd, (u32)p_command->opcode,
> (u16)p_command->opcode_b,
> - (int)p_command->length,
> + (int)p_command->length_dw,
> (int)p_command->src_addr_hi,
> (int)p_command->src_addr_lo,
> (int)p_command->dst_addr_hi, (int)p_command->dst_addr_lo);
>
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
` (31 preceding siblings ...)
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 32/32] net/qede: update driver version Rasesh Mody
@ 2016-10-24 13:41 ` Bruce Richardson
2016-10-26 15:20 ` Thomas Monjalon
32 siblings, 1 reply; 59+ messages in thread
From: Bruce Richardson @ 2016-10-24 13:41 UTC (permalink / raw)
To: Rasesh Mody; +Cc: ferruh.yigit, thomas.monjalon, dev, Dept-EngDPDKDev
On Tue, Oct 18, 2016 at 09:11:14PM -0700, Rasesh Mody wrote:
> Hi,
>
> This patch set includes changes to update the base driver, work with
> newer FW 8.10.9.0, adds new features, includes enhancements and code
> cleanup, provides bug fixes and updates documentation for the QEDE
> poll mode driver.
>
> It enables QEDE PMD in the dpdk config by default. The dependency on
> external library libz has been addressed.
>
> The patch set updates the QEDE PMD to 1.2.0.1.
>
> Review comments received for v3 have been addressed.
>
> Please apply to DPDK tree for v16.11 release.
>
> Thanks!
> Rasesh
>
> Harish Patil (14):
> net/qede: change signature of MCP command API
> net/qede: serialize access to MFW mbox
> net/qede: add NIC selftest and query sensor info support
> net/qede: fix port (re)configuration issue
> net/qede/base: allow MTU change via vport-update
> net/qede: add missing 100G link speed capability
> net/qede: remove unused/dead code
> net/qede: fixes for VLAN filters
> net/qede: add enable/disable VLAN filtering
> net/qede: fix RSS related issues
> net/qede/base: add support to initiate PF FLR
> net/qede: skip slowpath polling for 100G VF device
> net/qede: fix driver version string
> net/qede: fix status block index for VF queues
>
> Rasesh Mody (16):
> net/qede/base: add new init files and rearrange the code
> net/qede/base: formatting changes
> net/qede: use FW CONFIG defines as needed
> net/qede/base: add HSI changes and register defines
> net/qede/base: add attention formatting string
> net/qede/base: additional formatting/comment changes
> net/qede: fix 32 bit compilation
> net/qede/base: update base driver
> net/qede/base: rename structure and defines
> net/qede/base: comment enhancements
> net/qede/base: add MFW crash dump support
> net/qede/base: change Rx Tx queue start APIs
> net/qede: add support for queue statistics
> net/qede: remove zlib dependency and enable PMD by default
> doc: update qede pmd documentation
> net/qede: update driver version
>
> Sony Chacko (2):
> net/qede: enable support for unequal number of Rx/Tx queues
> net/qede: add scatter gather support
>
Patchset applied to dpdk_next_net/rel_16_11
Thanks,
/Bruce
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default
2016-10-24 13:41 ` [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Bruce Richardson
@ 2016-10-26 15:20 ` Thomas Monjalon
2016-10-26 17:01 ` Mody, Rasesh
0 siblings, 1 reply; 59+ messages in thread
From: Thomas Monjalon @ 2016-10-26 15:20 UTC (permalink / raw)
To: Rasesh Mody; +Cc: dev, Dept-EngDPDKDev, ferruh.yigit
2016-10-24 14:41, Bruce Richardson:
> On Tue, Oct 18, 2016 at 09:11:14PM -0700, Rasesh Mody wrote:
> > Please apply to DPDK tree for v16.11 release.
>
> Patchset applied to dpdk_next_net/rel_16_11
It breaks compilation because it is enabled everywhere
and zlib.h is still included without checking CONFIG_ECORE_ZIPPED_FW.
The patch removing the zlib dependency was not tested without zlib installed.
I will fix it while applying with this change:
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -6,7 +6,9 @@
* See LICENSE.qede_pmd for copyright and licensing details.
*/
+#ifdef CONFIG_ECORE_ZIPPED_FW
#include <zlib.h>
+#endif
#include <rte_memzone.h>
#include <rte_errno.h>
I won't do any quality review of qede patches, but from what I've seen
before, there is some room for improvement.
Another nit, important to help reviews: please use --in-reply-to when
sending a new revision of a patch, to keep revisions in the same thread
and allow us to follow the progress.
I plan to do an automatic nack for patches missing the --in-reply-to.
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 18/32] net/qede: add missing 100G link speed capability
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 18/32] net/qede: add missing 100G link speed capability Rasesh Mody
@ 2016-10-26 15:41 ` Thomas Monjalon
2016-10-26 15:54 ` Bruce Richardson
2016-10-26 21:28 ` Harish Patil
0 siblings, 2 replies; 59+ messages in thread
From: Thomas Monjalon @ 2016-10-26 15:41 UTC (permalink / raw)
To: Rasesh Mody, ferruh.yigit, bruce.richardson, Harish Patil
Cc: dev, Dept-EngDPDKDev
2016-10-18 21:11, Rasesh Mody:
> From: Harish Patil <harish.patil@qlogic.com>
>
> This patch fixes the 100G link speed advertisement that was missed
> when 100G support was initially added.
>
> Fixes: 2af14ca79c0a ("net/qede: support 100G")
>
> Signed-off-by: Harish Patil <harish.patil@qlogic.com>
[...]
> [Features]
> +Speed capabilities = Y
This feature should be checked only when it is fully implemented,
i.e. when you return the real capabilities of the device.
> --- a/drivers/net/qede/qede_ethdev.c
> +++ b/drivers/net/qede/qede_ethdev.c
> @@ -599,7 +599,8 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
> DEV_TX_OFFLOAD_UDP_CKSUM |
> DEV_TX_OFFLOAD_TCP_CKSUM);
>
> - dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
> + dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
> + ETH_LINK_SPEED_100G;
> }
It is only faking the capabilities at driver-level.
You should check if the underlying device is able to achieve 100G
before advertising this flag to the application.
I suggest to update this patch to remove the doc update.
The contract is to fill it only when the code is fixed.
By the way, we must call every other drivers to properly implement
this feature.
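A minimal sketch of the kind of check being asked for, assuming (as
patch 26 in this series does) that num_hwfns > 1 identifies a CMT/100G
device:

	dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
	/* Gate the 100G flag on the CMT indicator instead of
	 * hard-coding it (assumption, not the author's patch).
	 */
	if (edev->num_hwfns > 1)
		dev_info->speed_capa |= ETH_LINK_SPEED_100G;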
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 18/32] net/qede: add missing 100G link speed capability
2016-10-26 15:41 ` Thomas Monjalon
@ 2016-10-26 15:54 ` Bruce Richardson
2016-10-26 21:28 ` Harish Patil
1 sibling, 0 replies; 59+ messages in thread
From: Bruce Richardson @ 2016-10-26 15:54 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Dept-EngDPDKDev, ferruh.yigit
On Wed, Oct 26, 2016 at 05:41:58PM +0200, Thomas Monjalon wrote:
> 2016-10-18 21:11, Rasesh Mody:
> > From: Harish Patil <harish.patil@qlogic.com>
> >
> > This patch fixes the missing 100G link speed advertisement
> > when the 100G support was initially added.
> >
> > Fixes: 2af14ca79c0a ("net/qede: support 100G")
> >
> > Signed-off-by: Harish Patil <harish.patil@qlogic.com>
> [...]
> > [Features]
> > +Speed capabilities = Y
>
> This feature should be checked only when it is fully implemented,
> i.e. when you return the real capabilities of the device.
>
> > --- a/drivers/net/qede/qede_ethdev.c
> > +++ b/drivers/net/qede/qede_ethdev.c
> > @@ -599,7 +599,8 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
> > DEV_TX_OFFLOAD_UDP_CKSUM |
> > DEV_TX_OFFLOAD_TCP_CKSUM);
> >
> > - dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
> > + dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
> > + ETH_LINK_SPEED_100G;
> > }
>
> It is only faking the capabilities at driver-level.
> You should check if the underlying device is able to achieve 100G
> before advertising this flag to the application.
>
> I suggest to update this patch to remove the doc update.
> The contract is to fill it only when the code is fixed.
> By the way, we must call every other drivers to properly implement
> this feature.
I agree that devices should only advertise speeds they support, and the
doc should reflect this ability (or the lack of it) in a driver.
Regards,
/Bruce
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 07/32] net/qede: fix 32 bit compilation
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 07/32] net/qede: fix 32 bit compilation Rasesh Mody
@ 2016-10-26 16:54 ` Thomas Monjalon
2016-10-26 21:01 ` Mody, Rasesh
0 siblings, 1 reply; 59+ messages in thread
From: Thomas Monjalon @ 2016-10-26 16:54 UTC (permalink / raw)
To: Rasesh Mody; +Cc: dev, Dept-EngDPDKDev, ferruh.yigit
2016-10-18 21:11, Rasesh Mody:
> Fix 32 bit compilation for gcc version 4.3.4.
>
> Fixes: ec94dbc57362 ("qede: add base driver")
>
> Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
[...]
> ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
> +ifeq ($(shell gcc -Wno-unused-but-set-variable -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
> CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
> +endif
> CFLAGS_BASE_DRIVER += -Wno-missing-declarations
> +ifeq ($(shell gcc -Wno-maybe-uninitialized -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
> CFLAGS_BASE_DRIVER += -Wno-maybe-uninitialized
> +endif
> CFLAGS_BASE_DRIVER += -Wno-strict-prototypes
> ifeq ($(shell test $(GCC_VERSION) -ge 60 && echo 1), 1)
> CFLAGS_BASE_DRIVER += -Wno-shift-negative-value
What the hell are you doing here?
1/ You had better fix the "unused-but-set-variable" errors
2/ It won't work when cross-compiling because you do not use $(CC)
in $(shell gcc
I really do not want to look at the qede patches.
But each time my eyes stop on one of them, I'm struggling.
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default
2016-10-26 15:20 ` Thomas Monjalon
@ 2016-10-26 17:01 ` Mody, Rasesh
0 siblings, 0 replies; 59+ messages in thread
From: Mody, Rasesh @ 2016-10-26 17:01 UTC (permalink / raw)
To: Thomas Monjalon, Rasesh Mody; +Cc: dev, Dept-EngDPDKDev, ferruh.yigit
Hi Thomas,
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> Sent: Wednesday, October 26, 2016 8:20 AM
>
> 2016-10-24 14:41, Bruce Richardson:
> > On Tue, Oct 18, 2016 at 09:11:14PM -0700, Rasesh Mody wrote:
> > > Please apply to DPDK tree for v16.11 release.
> >
> > Patchset applied to dpdk_next_net/rel_16_11
>
> It breaks compilation because it is enabled everywhere and zlib.h is still
> included without checking CONFIG_ECORE_ZIPPED_FW.
> The patch removing zlib dependency was not tested without zlib installed.
> I will fix it while applying with this change:
Sorry, we missed testing the patch that removes the zlib dependency from the latest patch set with zlib headers unavailable.
The zlib.h include is not needed in bcm_osal.c. It was left over there when zlib.h was included in the ecore.h file by the patch "[PATCH v4 03/32] net/qede: use FW CONFIG defines as needed". In ecore.h it is protected by an ifdef; however, the same is not true of bcm_osal.c. Hence compilation fails when zlib.h is not available.
--- a/drivers/net/qede/base/bcm_osal.c
+++ b/drivers/net/qede/base/bcm_osal.c
@@ -6,8 +6,6 @@
* See LICENSE.qede_pmd for copyright and licensing details.
*/
-#include <zlib.h>
-
#include <rte_memzone.h>
#include <rte_errno.h>
> --- a/drivers/net/qede/base/bcm_osal.c
> +++ b/drivers/net/qede/base/bcm_osal.c
> @@ -6,7 +6,9 @@
> * See LICENSE.qede_pmd for copyright and licensing details.
> */
>
> +#ifdef CONFIG_ECORE_ZIPPED_FW
> #include <zlib.h>
> +#endif
>
> #include <rte_memzone.h>
> #include <rte_errno.h>
>
The above change looks fine. Thanks!
> I won't do any quality review of qede patches, but from what I've seen
> before, there is some room for improvement.
>
> Another nit, important to help reviews: please use --in-reply-to when
> sending a new revision of a patch to keep the revisions in the same thread and
> allow us to understand the progress.
> I plan to do an automatic nack for patches missing the --in-reply-to.
Sure, will do.
Thanks!
-Rasesh
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 07/32] net/qede: fix 32 bit compilation
2016-10-26 16:54 ` Thomas Monjalon
@ 2016-10-26 21:01 ` Mody, Rasesh
2016-10-26 21:40 ` Thomas Monjalon
0 siblings, 1 reply; 59+ messages in thread
From: Mody, Rasesh @ 2016-10-26 21:01 UTC (permalink / raw)
To: Thomas Monjalon, Rasesh Mody; +Cc: dev, Dept-EngDPDKDev, ferruh.yigit
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Wednesday, October 26, 2016 9:54 AM
>
> 2016-10-18 21:11, Rasesh Mody:
> > Fix 32 bit compilation for gcc version 4.3.4.
> >
> > Fixes: ec94dbc57362 ("qede: add base driver")
> >
> > Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
> [...]
> > ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
> > +ifeq ($(shell gcc -Wno-unused-but-set-variable -Werror -E - <
> > +/dev/null > /dev/null 2>&1; echo $$?),0)
> > CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
> > +endif
> > CFLAGS_BASE_DRIVER += -Wno-missing-declarations
> > +ifeq ($(shell gcc -Wno-maybe-uninitialized -Werror -E - < /dev/null >
> > +/dev/null 2>&1; echo $$?),0)
> > CFLAGS_BASE_DRIVER += -Wno-maybe-uninitialized
> > +endif
> > CFLAGS_BASE_DRIVER += -Wno-strict-prototypes ifeq ($(shell test
> > $(GCC_VERSION) -ge 60 && echo 1), 1) CFLAGS_BASE_DRIVER +=
> > -Wno-shift-negative-value
>
> What the hell are you doing here?
In one of our compilation tests on i586, we have gcc version 4.3.4. This version of gcc gives us the following errors:
cc1: error: unrecognized command line option "-Wno-unused-but-set-variable"
cc1: error: unrecognized command line option "-Wno-maybe-uninitialized"
The -Wno-unused-but-set-variable option was added only in gcc version 5.1.0.
The -Wno-maybe-uninitialized option was added only in gcc version 4.7.0.
All the above change does is check whether the -Wno-unused-but-set-variable and -Wno-maybe-uninitialized options are available in gcc, and include them for compilation only if they are.
> 1/ You had better fix the "unused-but-set-variable" errors 2/ It won't work
> when cross-compiling because you do not use $(CC)
> in $(shell gcc
We tested with gcc version 6.2.0 on x86_64 without applying this patch. Errors related to the "unused-but-set-variable" option were not seen. The only errors we saw are the ones noted above, due to an older version of gcc.
We do use $(shell gcc ...); however, it is used under ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y), so I believe it should work when cross-compiling. For example, in one of our compilation tests with clang version 3.8.0, with this patch applied, we did not see any errors. Please let us know if you see otherwise.
However, I do agree it is better to use $(CC). We could change that with a follow-on patch.
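Roughly, the follow-on would probe each flag with $(CC) instead of a hardcoded gcc, along these lines (untested sketch, not the actual follow-on patch):
ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
# Untested sketch: probe each flag with the configured compiler so that
# cross toolchains are covered; add the flag only if the probe succeeds.
ifeq ($(shell $(CC) -Wno-unused-but-set-variable -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
endif
ifeq ($(shell $(CC) -Wno-maybe-uninitialized -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
CFLAGS_BASE_DRIVER += -Wno-maybe-uninitialized
endif
endif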
Thanks!
-Rasesh
>
> I really do not want to look at the qede patches.
> But each time my eyes stop on one of them, I'm struggling.
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 18/32] net/qede: add missing 100G link speed capability
2016-10-26 15:41 ` Thomas Monjalon
2016-10-26 15:54 ` Bruce Richardson
@ 2016-10-26 21:28 ` Harish Patil
2016-10-26 21:43 ` Thomas Monjalon
1 sibling, 1 reply; 59+ messages in thread
From: Harish Patil @ 2016-10-26 21:28 UTC (permalink / raw)
To: Thomas Monjalon, Rasesh Mody, ferruh.yigit, bruce.richardson
Cc: dev, Dept-Eng DPDK Dev
>2016-10-18 21:11, Rasesh Mody:
>> From: Harish Patil <harish.patil@qlogic.com>
>>
>> This patch fixes the missing 100G link speed advertisement
>> when the 100G support was initially added.
>>
>> Fixes: 2af14ca79c0a ("net/qede: support 100G")
>>
>> Signed-off-by: Harish Patil <harish.patil@qlogic.com>
>[...]
>> [Features]
>> +Speed capabilities = Y
>
>This feature should be checked only when it is fully implemented,
>i.e. when you return the real capabilities of the device.
>
>> --- a/drivers/net/qede/qede_ethdev.c
>> +++ b/drivers/net/qede/qede_ethdev.c
>> @@ -599,7 +599,8 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
>> DEV_TX_OFFLOAD_UDP_CKSUM |
>> DEV_TX_OFFLOAD_TCP_CKSUM);
>>
>> - dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
>> + dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
>> + ETH_LINK_SPEED_100G;
>> }
>
>It is only faking the capabilities at driver-level.
>You should check if the underlying device is able to achieve 100G
>before advertising this flag to the application.
>
>I suggest to update this patch to remove the doc update.
>The contract is to fill it only when the code is fixed.
>By the way, we must call every other drivers to properly implement
>this feature.
>
Hi Thomas,
It's not really faking. The same driver supports all three link speeds.
The required support for 100G was already present in the 16.07 inbox
driver.
We had just missed advertising the 100G link speed via
dev_info->speed_capa.
Hence it is - Fixes: 2af14ca79c0a ("net/qede: support 100G").
Hope it is okay.
Thanks,
Harish
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 07/32] net/qede: fix 32 bit compilation
2016-10-26 21:01 ` Mody, Rasesh
@ 2016-10-26 21:40 ` Thomas Monjalon
2016-10-28 6:37 ` [dpdk-dev] [PATCH] net/qede: fix gcc compiler option checks Rasesh Mody
0 siblings, 1 reply; 59+ messages in thread
From: Thomas Monjalon @ 2016-10-26 21:40 UTC (permalink / raw)
To: Mody, Rasesh; +Cc: dev, Dept-EngDPDKDev, ferruh.yigit
2016-10-26 21:01, Mody, Rasesh:
> > From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> > Sent: Wednesday, October 26, 2016 9:54 AM
> >
> > 2016-10-18 21:11, Rasesh Mody:
> > > Fix 32 bit compilation for gcc version 4.3.4.
> > >
> > > Fixes: ec94dbc57362 ("qede: add base driver")
> > >
> > > Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
> > [...]
> > > ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
> > > +ifeq ($(shell gcc -Wno-unused-but-set-variable -Werror -E - <
> > > +/dev/null > /dev/null 2>&1; echo $$?),0)
> > > CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
> > > +endif
> > > CFLAGS_BASE_DRIVER += -Wno-missing-declarations
> > > +ifeq ($(shell gcc -Wno-maybe-uninitialized -Werror -E - < /dev/null >
> > > +/dev/null 2>&1; echo $$?),0)
> > > CFLAGS_BASE_DRIVER += -Wno-maybe-uninitialized
> > > +endif
> > > CFLAGS_BASE_DRIVER += -Wno-strict-prototypes ifeq ($(shell test
> > > $(GCC_VERSION) -ge 60 && echo 1), 1) CFLAGS_BASE_DRIVER +=
> > > -Wno-shift-negative-value
> >
> > What the hell are you doing here?
>
> In one of our compilation tests on i586, we have gcc version 4.3.4. This version of gcc gives us the following errors:
>
> cc1: error: unrecognized command line option "-Wno-unused-but-set-variable"
> cc1: error: unrecognized command line option "-Wno-maybe-uninitialized"
>
> The -Wno-unused-but-set-variable option was added only in gcc version 5.1.0.
> The -Wno-maybe-uninitialized option was added only in gcc version 4.7.0.
>
> All the above change does is check whether the -Wno-unused-but-set-variable and -Wno-maybe-uninitialized options are available in gcc, and include them for compilation only if they are.
Have you tried looking at what is done for other drivers?
They use GCC_VERSION to check compatibility.
> > 1/ You had better fix the "unused-but-set-variable" errors 2/ It won't work
> > when cross-compiling because you do not use $(CC)
> > in $(shell gcc
>
> We tested with gcc version 6.2.0 on x86_64 without applying this patch. Errors related to the "unused-but-set-variable" option were not seen. The only errors we saw are the ones noted above, due to an older version of gcc.
> We do use $(shell gcc ...); however, it is used under ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y), so I believe it should work when cross-compiling. For example, in one of our compilation tests with clang version 3.8.0, with this patch applied, we did not see any errors. Please let us know if you see otherwise.
Cross-compilation uses the $(CROSS) prefix, e.g. when compiling for ARM on x86.
> However, I do agree it is better to use $(CC). We could change that with a follow-on patch.
Please fix this patch by using GCC_VERSION.
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 18/32] net/qede: add missing 100G link speed capability
2016-10-26 21:28 ` Harish Patil
@ 2016-10-26 21:43 ` Thomas Monjalon
2016-10-28 6:42 ` [dpdk-dev] [PATCH] net/qede: fix advertising " Rasesh Mody
0 siblings, 1 reply; 59+ messages in thread
From: Thomas Monjalon @ 2016-10-26 21:43 UTC (permalink / raw)
To: Harish Patil; +Cc: dev, Dept-Eng DPDK Dev, ferruh.yigit
2016-10-26 21:28, Harish Patil:
>
> >2016-10-18 21:11, Rasesh Mody:
> >> From: Harish Patil <harish.patil@qlogic.com>
> >>
> >> This patch fixes the missing 100G link speed advertisement
> >> when the 100G support was initially added.
> >>
> >> Fixes: 2af14ca79c0a ("net/qede: support 100G")
> >>
> >> Signed-off-by: Harish Patil <harish.patil@qlogic.com>
> >[...]
> >> [Features]
> >> +Speed capabilities = Y
> >
> >This feature should be checked only when it is fully implemented,
> >i.e. when you return the real capabilities of the device.
> >
> >> --- a/drivers/net/qede/qede_ethdev.c
> >> +++ b/drivers/net/qede/qede_ethdev.c
> >> @@ -599,7 +599,8 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
> >> DEV_TX_OFFLOAD_UDP_CKSUM |
> >> DEV_TX_OFFLOAD_TCP_CKSUM);
> >>
> >> - dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
> >> + dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
> >> + ETH_LINK_SPEED_100G;
> >> }
> >
> >It is only faking the capabilities at driver-level.
> >You should check if the underlying device is able to achieve 100G
> >before advertising this flag to the application.
> >
> >I suggest to update this patch to remove the doc update.
> >The contract is to fill it only when the code is fixed.
> >By the way, we must call every other drivers to properly implement
> >this feature.
> >
>
> Hi Thomas,
> It's not really faking. The same driver supports all three link speeds.
> The required support for 100G was already present in the 16.07 inbox
> driver.
> We had just missed advertising the 100G link speed via
> dev_info->speed_capa.
> Hence it is - Fixes: 2af14ca79c0a ("net/qede: support 100G").
> Hope it is okay.
Yes, the code is okay. But it is not complete.
Please tell me you understand that you must fill speed_capa depending
on the device capabilities.
Then you will be able to fill the "Speed capabilities" box in the doc.
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH] net/qede: fix gcc compiler option checks
2016-10-26 21:40 ` Thomas Monjalon
@ 2016-10-28 6:37 ` Rasesh Mody
2016-10-28 22:12 ` Stephen Hemminger
2016-11-07 20:10 ` Thomas Monjalon
0 siblings, 2 replies; 59+ messages in thread
From: Rasesh Mody @ 2016-10-28 6:37 UTC (permalink / raw)
To: thomas.monjalon; +Cc: dev, Dept-EngDPDKDev
From: Rasesh Mody <Rasesh.Mody@cavium.com>
Using GCC_VERSION to check gcc version and decide whether to include
that compiler option.
Fixes: ec94dbc57362 ("qede: add base driver")
Fixes: ecc7a5a27ffe ("net/qede/base: fix 32-bit build")
Signed-off-by: Rasesh Mody <Rasesh.Mody@cavium.com>
---
drivers/net/qede/Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
index 39751e4..29b443d 100644
--- a/drivers/net/qede/Makefile
+++ b/drivers/net/qede/Makefile
@@ -46,11 +46,11 @@ endif
endif
ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
-ifeq ($(shell gcc -Wno-unused-but-set-variable -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
+ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
endif
CFLAGS_BASE_DRIVER += -Wno-missing-declarations
-ifeq ($(shell gcc -Wno-maybe-uninitialized -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
+ifeq ($(shell test $(GCC_VERSION) -ge 46 && echo 1), 1)
CFLAGS_BASE_DRIVER += -Wno-maybe-uninitialized
endif
CFLAGS_BASE_DRIVER += -Wno-strict-prototypes
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH] net/qede: fix advertising link speed capability
2016-10-26 21:43 ` Thomas Monjalon
@ 2016-10-28 6:42 ` Rasesh Mody
2016-10-28 7:26 ` Thomas Monjalon
0 siblings, 1 reply; 59+ messages in thread
From: Rasesh Mody @ 2016-10-28 6:42 UTC (permalink / raw)
To: thomas.monjalon; +Cc: dev, Dept-EngDPDKDev
From: Harish Patil <harish.patil@qlogic.com>
Fix to advertise device's link speed capability based on current
link speed rather than returning driver supported speeds.
Fixes: 95e67b479506 ("net/qede: add 100G link speed capability")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/qede_ethdev.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index b91b478..4c4c669 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -646,6 +646,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
{
struct qede_dev *qdev = eth_dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
+ struct qed_link_output link;
PMD_INIT_FUNC_TRACE(edev);
@@ -678,8 +679,9 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM);
- dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_100G;
+ memset(&link, 0, sizeof(struct qed_link_output));
+ qdev->ops->common->get_link(edev, &link);
+ dev_info->speed_capa = rte_eth_speed_bitflag(link.speed, 0);
}
/* return 0 means link status changed, -1 means not changed */
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH] net/qede: fix advertising link speed capability
2016-10-28 6:42 ` [dpdk-dev] [PATCH] net/qede: fix advertising " Rasesh Mody
@ 2016-10-28 7:26 ` Thomas Monjalon
2016-10-29 1:11 ` Harish Patil
2016-10-29 6:14 ` [dpdk-dev] [PATCH v2] " Rasesh Mody
0 siblings, 2 replies; 59+ messages in thread
From: Thomas Monjalon @ 2016-10-28 7:26 UTC (permalink / raw)
To: Rasesh Mody; +Cc: dev, Dept-EngDPDKDev
2016-10-27 23:42, Rasesh Mody:
> From: Harish Patil <harish.patil@qlogic.com>
>
> Fix to advertise device's link speed capability based on current
> link speed rather than returning driver supported speeds.
[...]
> - dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
> - ETH_LINK_SPEED_100G;
> + memset(&link, 0, sizeof(struct qed_link_output));
> + qdev->ops->common->get_link(edev, &link);
> + dev_info->speed_capa = rte_eth_speed_bitflag(link.speed, 0);
> }
No, that's wrong.
You must advertise a capability!
So what does the device support?
Do all qede devices support ETH_LINK_SPEED_100G?
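To make the distinction concrete, an illustrative ethdev sketch (generic usage, not qede code): speed_capa is a capability bitmask, while the negotiated speed is a single value reported in the link status.
#include <rte_ethdev.h>
/* Illustrative only -- generic ethdev usage, not qede code. */
static void
fill_speed_info(struct rte_eth_dev_info *dev_info, struct rte_eth_link *link)
{
	/* Capability: a bitmask of every speed the port can reach. */
	dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
			       ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
	/* Status: the single speed negotiated right now. */
	link->link_speed = ETH_SPEED_NUM_25G;
}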
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH] net/qede: fix gcc compiler option checks
2016-10-28 6:37 ` [dpdk-dev] [PATCH] net/qede: fix gcc compiler option checks Rasesh Mody
@ 2016-10-28 22:12 ` Stephen Hemminger
2016-10-28 22:49 ` Mody, Rasesh
2016-11-07 20:10 ` Thomas Monjalon
1 sibling, 1 reply; 59+ messages in thread
From: Stephen Hemminger @ 2016-10-28 22:12 UTC (permalink / raw)
To: Rasesh Mody; +Cc: dev, Dept-EngDPDKDev, thomas.monjalon
On Thu, 27 Oct 2016 23:37:57 -0700
Rasesh Mody <rasesh.mody@qlogic.com> wrote:
> From: Rasesh Mody <Rasesh.Mody@cavium.com>
>
> Using GCC_VERSION to check gcc version and decide whether to include
> that compiler option.
>
> Fixes: ec94dbc57362 ("qede: add base driver")
> Fixes: ecc7a5a27ffe ("net/qede/base: fix 32-bit build")
>
> Signed-off-by: Rasesh Mody <Rasesh.Mody@cavium.com>
> ---
> drivers/net/qede/Makefile | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
> index 39751e4..29b443d 100644
> --- a/drivers/net/qede/Makefile
> +++ b/drivers/net/qede/Makefile
> @@ -46,11 +46,11 @@ endif
> endif
>
> ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
> -ifeq ($(shell gcc -Wno-unused-but-set-variable -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
> +ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
> CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
> endif
> CFLAGS_BASE_DRIVER += -Wno-missing-declarations
> -ifeq ($(shell gcc -Wno-maybe-uninitialized -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
> +ifeq ($(shell test $(GCC_VERSION) -ge 46 && echo 1), 1)
> CFLAGS_BASE_DRIVER += -Wno-maybe-uninitialized
> endif
> CFLAGS_BASE_DRIVER += -Wno-strict-prototypes
Does this mean that less compiler checking is done or more?
It seems lots of drivers make the excuse:
"the base driver comes from another group and is known buggy but can't be fixed"
That doesn't reflect well on the quality of the DPDK.
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH] net/qede: fix gcc compiler option checks
2016-10-28 22:12 ` Stephen Hemminger
@ 2016-10-28 22:49 ` Mody, Rasesh
2016-11-07 19:54 ` Thomas Monjalon
0 siblings, 1 reply; 59+ messages in thread
From: Mody, Rasesh @ 2016-10-28 22:49 UTC (permalink / raw)
To: Stephen Hemminger, Rasesh Mody; +Cc: dev, Dept-EngDPDKDev, thomas.monjalon
Hi Stephen,
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen
> Hemminger
> Sent: Friday, October 28, 2016 3:12 PM
>
> On Thu, 27 Oct 2016 23:37:57 -0700
> Rasesh Mody <rasesh.mody@qlogic.com> wrote:
>
> > From: Rasesh Mody <Rasesh.Mody@cavium.com>
> >
> > Using GCC_VERSION to check gcc version and decide whether to include
> > that compiler option.
> >
> > Fixes: ec94dbc57362 ("qede: add base driver")
> > Fixes: ecc7a5a27ffe ("net/qede/base: fix 32-bit build")
> >
> > Signed-off-by: Rasesh Mody <Rasesh.Mody@cavium.com>
> > ---
> > drivers/net/qede/Makefile | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/qede/Makefile b/drivers/net/qede/Makefile
> > index 39751e4..29b443d 100644
> > --- a/drivers/net/qede/Makefile
> > +++ b/drivers/net/qede/Makefile
> > @@ -46,11 +46,11 @@ endif
> > endif
> >
> > ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
> > -ifeq ($(shell gcc -Wno-unused-but-set-variable -Werror -E - <
> > /dev/null > /dev/null 2>&1; echo $$?),0)
> > +ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
> > CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable endif
> > CFLAGS_BASE_DRIVER += -Wno-missing-declarations -ifeq ($(shell gcc
> > -Wno-maybe-uninitialized -Werror -E - < /dev/null > /dev/null 2>&1;
> > echo $$?),0)
> > +ifeq ($(shell test $(GCC_VERSION) -ge 46 && echo 1), 1)
> > CFLAGS_BASE_DRIVER += -Wno-maybe-uninitialized endif
> > CFLAGS_BASE_DRIVER += -Wno-strict-prototypes
>
> Does this mean that less compiler checking is done or more?
With higher versions of compilers, more compiler checking is done; for older compilers, less checking is done, as some of the older compilers do not have the newly added checking capabilities. Testing with the latest compilers ensures we do a lot more checking.
Thanks!
-Rasesh
> It seems lots of drivers make the excuse:
> "the base driver comes from another group and is known buggy but can't be
> fixed"
> That doesn't reflect well on the quality of the DPDK.
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH] net/qede: fix advertising link speed capability
2016-10-28 7:26 ` Thomas Monjalon
@ 2016-10-29 1:11 ` Harish Patil
2016-10-29 6:14 ` [dpdk-dev] [PATCH v2] " Rasesh Mody
1 sibling, 0 replies; 59+ messages in thread
From: Harish Patil @ 2016-10-29 1:11 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Dept-Eng DPDK Dev
>2016-10-27 23:42, Rasesh Mody:
>> From: Harish Patil <harish.patil@qlogic.com>
>>
>> Fix to advertise device's link speed capability based on current
>> link speed rather than returning driver supported speeds.
>[...]
>> - dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
>> - ETH_LINK_SPEED_100G;
>> + memset(&link, 0, sizeof(struct qed_link_output));
>> + qdev->ops->common->get_link(edev, &link);
>> + dev_info->speed_capa = rte_eth_speed_bitflag(link.speed, 0);
>> }
>
>No, that's wrong.
>You must advertise a capability!
Got it. It was not very clear to me. Let me send out a v2 patch that
advertises the native link speed based on the NIC personality configured (and
not based on the current/negotiated link speed).
>So what does the device support?
10, 25, 40, 50, 100G speeds
>Do all qede devices support ETH_LINK_SPEED_100G?
No
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v2] net/qede: fix advertising link speed capability
2016-10-28 7:26 ` Thomas Monjalon
2016-10-29 1:11 ` Harish Patil
@ 2016-10-29 6:14 ` Rasesh Mody
2016-10-31 18:35 ` [dpdk-dev] [PATCH v3] " Rasesh Mody
1 sibling, 1 reply; 59+ messages in thread
From: Rasesh Mody @ 2016-10-29 6:14 UTC (permalink / raw)
To: thomas.monjalon; +Cc: dev, Dept-EngDPDKDev
From: Harish Patil <harish.patil@qlogic.com>
Fix to advertise device's link speed capability based on NVM
port configuration instead of returning driver supported speeds.
Fixes: 95e67b479506 ("net/qede: add 100G link speed capability")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/qede_ethdev.c | 6 ++++--
drivers/net/qede/qede_ethdev.h | 1 +
drivers/net/qede/qede_if.h | 1 +
drivers/net/qede/qede_main.c | 24 ++++++++++++++++++++++++
4 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index b91b478..59129f2 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -646,6 +646,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
{
struct qede_dev *qdev = eth_dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
+ struct qed_link_output link;
PMD_INIT_FUNC_TRACE(edev);
@@ -678,8 +679,9 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM);
- dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_100G;
+ memset(&link, 0, sizeof(struct qed_link_output));
+ qdev->ops->common->get_link(edev, &link);
+ dev_info->speed_capa = rte_eth_speed_bitflag(link.adv_speed, 0);
}
/* return 0 means link status changed, -1 means not changed */
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 5eb3f52..a97e3d9 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -30,6 +30,7 @@
#include "base/ecore_dev_api.h"
#include "base/ecore_iov_api.h"
#include "base/ecore_cxt.h"
+#include "base/nvm_cfg.h"
#include "qede_logs.h"
#include "qede_if.h"
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 2d38b1b..4936349 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -70,6 +70,7 @@ struct qed_link_output {
uint32_t advertised_caps; /* In ADVERTISED defs */
uint32_t lp_caps; /* In ADVERTISED defs */
uint32_t speed; /* In Mb/s */
+ uint32_t adv_speed; /* In Mb/s */
uint8_t duplex; /* In DUPLEX defs */
uint8_t port; /* In PORT defs */
bool autoneg;
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 2d354e1..e6501d0 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -488,6 +488,7 @@ static void qed_fill_link(struct ecore_hwfn *hwfn,
struct ecore_mcp_link_state link;
struct ecore_mcp_link_capabilities link_caps;
uint32_t media_type;
+ uint32_t adv_speed;
uint8_t change = 0;
memset(if_link, 0, sizeof(*if_link));
@@ -514,6 +515,29 @@ static void qed_fill_link(struct ecore_hwfn *hwfn,
if_link->speed = link.speed;
if_link->duplex = QEDE_DUPLEX_FULL;
+
+ /* Fill up the native advertised speed */
+ switch (params.speed.advertised_speeds) {
+ case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G:
+ adv_speed = 10000;
+ break;
+ case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G:
+ adv_speed = 25000;
+ break;
+ case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G:
+ adv_speed = 40000;
+ break;
+ case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G:
+ adv_speed = 50000;
+ break;
+ case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G:
+ adv_speed = 100000;
+ break;
+ default:
+ DP_NOTICE(hwfn, false, "Unknown speed\n");
+ adv_speed = 0;
+ }
+ if_link->adv_speed = adv_speed;
if (params.speed.autoneg)
if_link->supported_caps |= QEDE_SUPPORTED_AUTONEG;
--
1.8.3.1
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v3] net/qede: fix advertising link speed capability
2016-10-29 6:14 ` [dpdk-dev] [PATCH v2] " Rasesh Mody
@ 2016-10-31 18:35 ` Rasesh Mody
2016-10-31 18:35 ` Rasesh Mody
0 siblings, 1 reply; 59+ messages in thread
From: Rasesh Mody @ 2016-10-31 18:35 UTC (permalink / raw)
To: thomas.monjalon, bruce.richardson; +Cc: dev, Dept-EngDPDKDev
v3:
- checkpatch fixes
Fix to advertise device's link speed capability based on NVM
port configuration instead of returning driver supported speeds.
Fixes: 95e67b479506 ("net/qede: add 100G link speed capability")
Harish Patil (1):
net/qede: fix advertising link speed capability
drivers/net/qede/qede_ethdev.c | 6 ++++--
drivers/net/qede/qede_ethdev.h | 1 +
drivers/net/qede/qede_if.h | 1 +
drivers/net/qede/qede_main.c | 24 ++++++++++++++++++++++++
4 files changed, 30 insertions(+), 2 deletions(-)
--
1.7.10.3
^ permalink raw reply [flat|nested] 59+ messages in thread
* [dpdk-dev] [PATCH v3] net/qede: fix advertising link speed capability
2016-10-31 18:35 ` [dpdk-dev] [PATCH v3] " Rasesh Mody
@ 2016-10-31 18:35 ` Rasesh Mody
2016-11-07 19:48 ` Thomas Monjalon
0 siblings, 1 reply; 59+ messages in thread
From: Rasesh Mody @ 2016-10-31 18:35 UTC (permalink / raw)
To: thomas.monjalon, bruce.richardson; +Cc: dev, Dept-EngDPDKDev
From: Harish Patil <harish.patil@qlogic.com>
Fix to advertise device's link speed capability based on NVM
port configuration instead of returning driver supported speeds.
Fixes: 95e67b479506 ("net/qede: add 100G link speed capability")
Signed-off-by: Harish Patil <harish.patil@qlogic.com>
---
drivers/net/qede/qede_ethdev.c | 6 ++++--
drivers/net/qede/qede_ethdev.h | 1 +
drivers/net/qede/qede_if.h | 1 +
drivers/net/qede/qede_main.c | 24 ++++++++++++++++++++++++
4 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index b91b478..59129f2 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -646,6 +646,7 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
{
struct qede_dev *qdev = eth_dev->data->dev_private;
struct ecore_dev *edev = &qdev->edev;
+ struct qed_link_output link;
PMD_INIT_FUNC_TRACE(edev);
@@ -678,8 +679,9 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
DEV_TX_OFFLOAD_UDP_CKSUM |
DEV_TX_OFFLOAD_TCP_CKSUM);
- dev_info->speed_capa = ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
- ETH_LINK_SPEED_100G;
+ memset(&link, 0, sizeof(struct qed_link_output));
+ qdev->ops->common->get_link(edev, &link);
+ dev_info->speed_capa = rte_eth_speed_bitflag(link.adv_speed, 0);
}
/* return 0 means link status changed, -1 means not changed */
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 5eb3f52..a97e3d9 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -30,6 +30,7 @@
#include "base/ecore_dev_api.h"
#include "base/ecore_iov_api.h"
#include "base/ecore_cxt.h"
+#include "base/nvm_cfg.h"
#include "qede_logs.h"
#include "qede_if.h"
diff --git a/drivers/net/qede/qede_if.h b/drivers/net/qede/qede_if.h
index 2d38b1b..4936349 100644
--- a/drivers/net/qede/qede_if.h
+++ b/drivers/net/qede/qede_if.h
@@ -70,6 +70,7 @@ struct qed_link_output {
uint32_t advertised_caps; /* In ADVERTISED defs */
uint32_t lp_caps; /* In ADVERTISED defs */
uint32_t speed; /* In Mb/s */
+ uint32_t adv_speed; /* In Mb/s */
uint8_t duplex; /* In DUPLEX defs */
uint8_t port; /* In PORT defs */
bool autoneg;
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 2d354e1..d2e476c 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -488,6 +488,7 @@ static void qed_fill_link(struct ecore_hwfn *hwfn,
struct ecore_mcp_link_state link;
struct ecore_mcp_link_capabilities link_caps;
uint32_t media_type;
+ uint32_t adv_speed;
uint8_t change = 0;
memset(if_link, 0, sizeof(*if_link));
@@ -515,6 +516,29 @@ static void qed_fill_link(struct ecore_hwfn *hwfn,
if_link->duplex = QEDE_DUPLEX_FULL;
+ /* Fill up the native advertised speed */
+ switch (params.speed.advertised_speeds) {
+ case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G:
+ adv_speed = 10000;
+ break;
+ case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G:
+ adv_speed = 25000;
+ break;
+ case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G:
+ adv_speed = 40000;
+ break;
+ case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G:
+ adv_speed = 50000;
+ break;
+ case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G:
+ adv_speed = 100000;
+ break;
+ default:
+ DP_NOTICE(hwfn, false, "Unknown speed\n");
+ adv_speed = 0;
+ }
+ if_link->adv_speed = adv_speed;
+
if (params.speed.autoneg)
if_link->supported_caps |= QEDE_SUPPORTED_AUTONEG;
--
1.7.10.3
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v3] net/qede: fix advertising link speed capability
2016-10-31 18:35 ` Rasesh Mody
@ 2016-11-07 19:48 ` Thomas Monjalon
2016-11-10 2:54 ` Harish Patil
0 siblings, 1 reply; 59+ messages in thread
From: Thomas Monjalon @ 2016-11-07 19:48 UTC (permalink / raw)
To: Rasesh Mody, Harish Patil; +Cc: bruce.richardson, dev, Dept-EngDPDKDev
2016-10-31 11:35, Rasesh Mody:
> From: Harish Patil <harish.patil@qlogic.com>
>
> Fix to advertise device's link speed capability based on NVM
> port configuration instead of returning driver supported speeds.
>
> Fixes: 95e67b479506 ("net/qede: add 100G link speed capability")
>
> Signed-off-by: Harish Patil <harish.patil@qlogic.com>
[...]
> + /* Fill up the native advertised speed */
> + switch (params.speed.advertised_speeds) {
> + case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G:
> + adv_speed = 10000;
> + break;
> + case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G:
> + adv_speed = 25000;
> + break;
> + case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G:
> + adv_speed = 40000;
> + break;
> + case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G:
> + adv_speed = 50000;
> + break;
> + case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G:
> + adv_speed = 100000;
> + break;
> + default:
> + DP_NOTICE(hwfn, false, "Unknown speed\n");
> + adv_speed = 0;
> + }
> + if_link->adv_speed = adv_speed;
The qede devices support only one speed?
I guess it is wrong, but it is a step in the right direction, so it
will be enough for 16.11.
Applied
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH] net/qede: fix gcc compiler option checks
2016-10-28 22:49 ` Mody, Rasesh
@ 2016-11-07 19:54 ` Thomas Monjalon
0 siblings, 0 replies; 59+ messages in thread
From: Thomas Monjalon @ 2016-11-07 19:54 UTC (permalink / raw)
To: Mody, Rasesh, Rasesh Mody; +Cc: Stephen Hemminger, dev, Dept-EngDPDKDev
2016-10-28 22:49, Mody, Rasesh:
> > From: Stephen Hemminger
> > > ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
> > > -ifeq ($(shell gcc -Wno-unused-but-set-variable -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
> > > +ifeq ($(shell test $(GCC_VERSION) -ge 44 && echo 1), 1)
> > > CFLAGS_BASE_DRIVER += -Wno-unused-but-set-variable
> > > endif
> > > CFLAGS_BASE_DRIVER += -Wno-missing-declarations
> > > -ifeq ($(shell gcc -Wno-maybe-uninitialized -Werror -E - < /dev/null > /dev/null 2>&1; echo $$?),0)
> > > +ifeq ($(shell test $(GCC_VERSION) -ge 46 && echo 1), 1)
> > > CFLAGS_BASE_DRIVER += -Wno-maybe-uninitialized
> > > endif
> >
> > Does this mean that less compiler checking is done or more?
>
> With higher versions of compilers, more compiler checking is done; for older compilers, less checking is done, as some of the older compilers do not have the newly added checking capabilities. Testing with the latest compilers ensures we do a lot more checking.
It is basically less checking.
It disables some checks, if the compiler supports them, because those checks
would make compilation fail.
Why would it fail? Because, as with other base drivers, the code is messy.
> > It seems lots of drivers make the excuse:
> > "the base driver comes from another group and is known buggy but can't be
> > fixed"
> > That doesn't reflect well on the quality of the DPDK.
You're right, Stephen. It is an excuse which has been accepted in DPDK.
Should we be stricter?
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH] net/qede: fix gcc compiler option checks
2016-10-28 6:37 ` [dpdk-dev] [PATCH] net/qede: fix gcc compiler option checks Rasesh Mody
2016-10-28 22:12 ` Stephen Hemminger
@ 2016-11-07 20:10 ` Thomas Monjalon
1 sibling, 0 replies; 59+ messages in thread
From: Thomas Monjalon @ 2016-11-07 20:10 UTC (permalink / raw)
To: Rasesh Mody; +Cc: dev, Dept-EngDPDKDev, Rasesh Mody
2016-10-27 23:37, Rasesh Mody:
> From: Rasesh Mody <Rasesh.Mody@cavium.com>
>
> Using GCC_VERSION to check gcc version and decide whether to include
> that compiler option.
>
> Fixes: ec94dbc57362 ("qede: add base driver")
No need for the above line.
It is fixing the commit below (which fixes the above one).
> Fixes: ecc7a5a27ffe ("net/qede/base: fix 32-bit build")
>
> Signed-off-by: Rasesh Mody <Rasesh.Mody@cavium.com>
Applied, thanks
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v3] net/qede: fix advertising link speed capability
2016-11-07 19:48 ` Thomas Monjalon
@ 2016-11-10 2:54 ` Harish Patil
0 siblings, 0 replies; 59+ messages in thread
From: Harish Patil @ 2016-11-10 2:54 UTC (permalink / raw)
To: Thomas Monjalon, Mody, Rasesh; +Cc: bruce.richardson, dev, Dept-Eng DPDK Dev
>
>2016-10-31 11:35, Rasesh Mody:
>> From: Harish Patil <harish.patil@qlogic.com>
>>
>> Fix to advertise device's link speed capability based on NVM
>> port configuration instead of returning driver supported speeds.
>>
>> Fixes: 95e67b479506 ("net/qede: add 100G link speed capability")
>>
>> Signed-off-by: Harish Patil <harish.patil@qlogic.com>
>[...]
>> + /* Fill up the native advertised speed */
>> + switch (params.speed.advertised_speeds) {
>> + case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G:
>> + adv_speed = 10000;
>> + break;
>> + case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G:
>> + adv_speed = 25000;
>> + break;
>> + case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G:
>> + adv_speed = 40000;
>> + break;
>> + case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G:
>> + adv_speed = 50000;
>> + break;
>> + case NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G:
>> + adv_speed = 100000;
>> + break;
>> + default:
>> + DP_NOTICE(hwfn, false, "Unknown speed\n");
>> + adv_speed = 0;
>> + }
>> + if_link->adv_speed = adv_speed;
>
>The qede devices support only one speed?
>I guess it is wrong, but it is a step in the right direction, so it
>will be enough for 16.11.
>
>Applied
>
The qede device is capable of 10, 25, 40, 50 and 100Gb speeds.
It's configured at the factory to have 100Gb, 50Gb, 40Gb or 25Gb speeds.
A unique PCI ID gets assigned to the device based on the speed configured.
A 25G device can auto-negotiate down to 10G speeds when connected to a 10G
switch.
So only for the 25G case does the above logic not work correctly, for which I
have submitted a minor fix today:
("[PATCH] net/qede: fix unknown speed errmsg for 25G link"). Please include
it in 16.11.
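For illustration, a mask-based variant would look roughly like this (untested sketch; it assumes advertised_speeds is a bitmask, as the _MASK names suggest, and qede_adv_speed_capa() is a hypothetical helper, not code from the actual fix):
/* Untested sketch: accumulate every configured speed instead of
 * switching on a single value, so a 25G device that can also
 * negotiate 10G is reported correctly. */
static uint32_t qede_adv_speed_capa(uint32_t adv_speeds)
{
	uint32_t capa = 0;

	if (adv_speeds & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
		capa |= ETH_LINK_SPEED_10G;
	if (adv_speeds & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
		capa |= ETH_LINK_SPEED_25G;
	if (adv_speeds & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
		capa |= ETH_LINK_SPEED_40G;
	if (adv_speeds & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
		capa |= ETH_LINK_SPEED_50G;
	if (adv_speeds & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
		capa |= ETH_LINK_SPEED_100G;

	return capa;
}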
Thanks,
Harish
^ permalink raw reply [flat|nested] 59+ messages in thread
* Re: [dpdk-dev] [PATCH v4 11/32] net/qede/base: update base driver
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 11/32] net/qede/base: update base driver Rasesh Mody
@ 2021-03-24 14:07 ` Ferruh Yigit
0 siblings, 0 replies; 59+ messages in thread
From: Ferruh Yigit @ 2021-03-24 14:07 UTC (permalink / raw)
To: Rasesh Mody, Devendra Singh Rawat, Igor Russkikh; +Cc: dev, jerinj
On 10/19/2016 5:11 AM, Rasesh Mody wrote:
> This patch updates the base driver and incorporates necessary changes
> required to bring in the new firmware 8.10.9.0.
>
> In addition, it would allow driver to add new functionalities that might
> be needed in future.
>
> Signed-off-by: Rasesh Mody <rasesh.mody@qlogic.com>
<...>
> +#ifndef LINUX_REMOVE
> /**
> * @brief ecore_cxt_qm_iids - fills the cid/tid counts for the QM configuration
> *
> * @param p_hwfn
> * @param iids [out], a structure holding all the counters
> */
> -void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn, struct ecore_qm_iids *iids);
> +void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
> + struct ecore_qm_iids *iids);
> +#endif
>
Hi Rasesh, Devendra Singh, Igor,
The 'LINUX_REMOVE' macro is still in the driver code; it looks like a
remnant of code stripping for different platforms.
What do you think about cleaning them up? Although it is not functional, it
would remove the noise.
Same for '__EXTRACT__LINUX__' macro.
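The cleanup itself should be mechanical, roughly (illustrative only, not an actual patch):
-#ifndef LINUX_REMOVE
 void ecore_cxt_qm_iids(struct ecore_hwfn *p_hwfn,
 		       struct ecore_qm_iids *iids);
-#endif
i.e. drop the guards and keep the declarations unconditionally.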
^ permalink raw reply [flat|nested] 59+ messages in thread
end of thread
Thread overview: 59+ messages
-- links below jump to the message on this page --
2016-10-19 4:11 [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 01/32] net/qede/base: add new init files and rearrange the code Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 02/32] net/qede/base: formatting changes Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 03/32] net/qede: use FW CONFIG defines as needed Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 04/32] net/qede/base: add HSI changes and register defines Rasesh Mody
2016-10-19 12:37 ` Ferruh Yigit
2016-10-19 13:46 ` Mody, Rasesh
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 05/32] net/qede/base: add attention formatting string Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 06/32] net/qede/base: additional formatting/comment changes Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 07/32] net/qede: fix 32 bit compilation Rasesh Mody
2016-10-26 16:54 ` Thomas Monjalon
2016-10-26 21:01 ` Mody, Rasesh
2016-10-26 21:40 ` Thomas Monjalon
2016-10-28 6:37 ` [dpdk-dev] [PATCH] net/qede: fix gcc compiler option checks Rasesh Mody
2016-10-28 22:12 ` Stephen Hemminger
2016-10-28 22:49 ` Mody, Rasesh
2016-11-07 19:54 ` Thomas Monjalon
2016-11-07 20:10 ` Thomas Monjalon
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 08/32] net/qede: change signature of MCP command API Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 09/32] net/qede: serialize access to MFW mbox Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 10/32] net/qede: add NIC selftest and query sensor info support Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 11/32] net/qede/base: update base driver Rasesh Mody
2021-03-24 14:07 ` Ferruh Yigit
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 12/32] net/qede/base: rename structure and defines Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 13/32] net/qede/base: comment enhancements Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 14/32] net/qede/base: add MFW crash dump support Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 15/32] net/qede: enable support for unequal number of Rx/Tx queues Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 16/32] net/qede: fix port (re)configuration issue Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 17/32] net/qede/base: allow MTU change via vport-update Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 18/32] net/qede: add missing 100G link speed capability Rasesh Mody
2016-10-26 15:41 ` Thomas Monjalon
2016-10-26 15:54 ` Bruce Richardson
2016-10-26 21:28 ` Harish Patil
2016-10-26 21:43 ` Thomas Monjalon
2016-10-28 6:42 ` [dpdk-dev] [PATCH] net/qede: fix advertising " Rasesh Mody
2016-10-28 7:26 ` Thomas Monjalon
2016-10-29 1:11 ` Harish Patil
2016-10-29 6:14 ` [dpdk-dev] [PATCH v2] " Rasesh Mody
2016-10-31 18:35 ` [dpdk-dev] [PATCH v3] " Rasesh Mody
2016-10-31 18:35 ` Rasesh Mody
2016-11-07 19:48 ` Thomas Monjalon
2016-11-10 2:54 ` Harish Patil
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 19/32] net/qede: remove unused/dead code Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 20/32] net/qede: fixes for VLAN filters Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 21/32] net/qede: add enable/disable VLAN filtering Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 22/32] net/qede: fix RSS related issues Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 23/32] net/qede: add scatter gather support Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 24/32] net/qede/base: change Rx Tx queue start APIs Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 25/32] net/qede/base: add support to initiate PF FLR Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 26/32] net/qede: skip slowpath polling for 100G VF device Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 27/32] net/qede: fix driver version string Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 28/32] net/qede: fix status block index for VF queues Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 29/32] net/qede: add support for queue statistics Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 30/32] net/qede: remove zlib dependency and enable PMD by default Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 31/32] doc: update qede pmd documentation Rasesh Mody
2016-10-19 4:11 ` [dpdk-dev] [PATCH v4 32/32] net/qede: update driver version Rasesh Mody
2016-10-24 13:41 ` [dpdk-dev] [PATCH v4 00/32] net/qede: update qede pmd to 1.2.0.1 and enable by default Bruce Richardson
2016-10-26 15:20 ` Thomas Monjalon
2016-10-26 17:01 ` Mody, Rasesh