* [dpdk-dev] [PATCH 00/18] net/qede: base driver update
@ 2018-09-29 8:13 Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 01/18] net/qede/base: upgrade to FW 8.37.7.0 Mody, Rasesh
` (18 more replies)
0 siblings, 19 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:13 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
This patch set updates the base driver to use FW 8.37.7.0 and adds
support for additional base driver functionality. It also updates the
PMD version to 2.10.0.1.
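For reference, once the updated PMD is probed, the running firmware and
driver can be read back through the standard ethdev API; a minimal sketch
in C (the call returns non-zero if the PMD does not implement the
fw_version_get op):

#include <stdio.h>
#include <rte_ethdev.h>

static void print_qede_versions(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	char fw_ver[64];

	/* PMD name for the port */
	rte_eth_dev_info_get(port_id, &dev_info);

	/* Firmware version string reported by the device, if supported */
	if (rte_eth_dev_fw_version_get(port_id, fw_ver, sizeof(fw_ver)) == 0)
		printf("port %u: driver %s, firmware %s\n",
		       port_id, dev_info.driver_name, fw_ver);
}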
Rasesh Mody (18):
net/qede/base: upgrade to FW 8.37.7.0
net/qede/base: check for EDPM enabled in DB recovery
net/qede/base: add DPC sync after PF stop
net/qede/base: workaround to indicate SHMEM data ready
net/qede/base: add API to update FW RSS indirection table
net/qede/base: add mf-bit/API for FIP special mode
net/qede/base: add error handling for mutex allocation
net/qede/base: adjust queue manager idx greater than max
net/qede/base: add pretend function for port/PF
net/qede/base: add support for SRIOV VF min rate
net/qede/base: add periodic Doorbell Recovery support
net/qede/base: get pre-negotiated OEM values
net/qede/base: enable control frame filtering
net/qede/base: changes for 100G
net/qede/base: add RL update params
net/qede/base: add APIs for dscp priority map configuration
net/qede/base: semantic changes
net/qede: bump PMD version to 2.10.0.1
drivers/net/qede/base/bcm_osal.h | 2 +
drivers/net/qede/base/common_hsi.h | 15 +-
drivers/net/qede/base/ecore.h | 59 +-
drivers/net/qede/base/ecore_cxt.c | 15 +-
drivers/net/qede/base/ecore_dcbx.c | 99 +-
drivers/net/qede/base/ecore_dcbx_api.h | 10 +
drivers/net/qede/base/ecore_dev.c | 1807 ++++++++++++++++++-------
drivers/net/qede/base/ecore_dev_api.h | 170 ++-
drivers/net/qede/base/ecore_hsi_common.h | 57 +-
drivers/net/qede/base/ecore_hsi_debug_tools.h | 15 +
drivers/net/qede/base/ecore_hsi_eth.h | 57 +-
drivers/net/qede/base/ecore_hw.c | 127 +-
drivers/net/qede/base/ecore_hw.h | 40 +-
drivers/net/qede/base/ecore_init_fw_funcs.c | 93 +-
drivers/net/qede/base/ecore_init_fw_funcs.h | 42 +-
drivers/net/qede/base/ecore_init_ops.c | 26 +-
drivers/net/qede/base/ecore_int.c | 67 +-
drivers/net/qede/base/ecore_int_api.h | 14 +-
drivers/net/qede/base/ecore_iov_api.h | 10 +
drivers/net/qede/base/ecore_iro.h | 164 ++-
drivers/net/qede/base/ecore_iro_values.h | 42 +-
drivers/net/qede/base/ecore_l2.c | 82 +-
drivers/net/qede/base/ecore_l2_api.h | 30 +-
drivers/net/qede/base/ecore_mcp.c | 123 +-
drivers/net/qede/base/ecore_mcp.h | 21 +-
drivers/net/qede/base/ecore_rt_defs.h | 265 ++--
drivers/net/qede/base/ecore_sp_commands.c | 8 +-
drivers/net/qede/base/ecore_sp_commands.h | 3 +
drivers/net/qede/base/ecore_spq.c | 56 +-
drivers/net/qede/base/ecore_sriov.c | 48 +-
drivers/net/qede/base/ecore_vf.c | 19 +-
drivers/net/qede/base/eth_common.h | 5 +
drivers/net/qede/base/mcp_public.h | 23 +
drivers/net/qede/base/reg_addr.h | 56 +-
drivers/net/qede/qede_ethdev.h | 2 +-
drivers/net/qede/qede_main.c | 2 +-
36 files changed, 2701 insertions(+), 973 deletions(-)
--
1.7.10.3
* [dpdk-dev] [PATCH 01/18] net/qede/base: upgrade to FW 8.37.7.0
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 03/18] net/qede/base: add DPC sync after PF stop Mody, Rasesh
` (17 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
This patch adds changes to the base driver for upgrading to FW 8.37.7.0.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/bcm_osal.h | 1 +
drivers/net/qede/base/common_hsi.h | 15 +-
drivers/net/qede/base/ecore.h | 5 +-
drivers/net/qede/base/ecore_dev.c | 16 +-
drivers/net/qede/base/ecore_hsi_common.h | 57 +++++-
drivers/net/qede/base/ecore_hsi_debug_tools.h | 15 ++
drivers/net/qede/base/ecore_hsi_eth.h | 57 +++++-
drivers/net/qede/base/ecore_init_fw_funcs.c | 93 +++++----
drivers/net/qede/base/ecore_init_fw_funcs.h | 42 ++--
drivers/net/qede/base/ecore_iro.h | 164 +++++++++------
drivers/net/qede/base/ecore_iro_values.h | 42 ++--
drivers/net/qede/base/ecore_l2.c | 3 +
drivers/net/qede/base/ecore_l2_api.h | 1 +
drivers/net/qede/base/ecore_rt_defs.h | 265 ++++++++++++-------------
drivers/net/qede/base/eth_common.h | 5 +
drivers/net/qede/base/reg_addr.h | 51 ++---
drivers/net/qede/qede_main.c | 2 +-
17 files changed, 523 insertions(+), 311 deletions(-)
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index b43e0b3..70805f6 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -453,5 +453,6 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
#define OSAL_DIV_S64(a, b) ((a) / (b))
#define OSAL_LLDP_RX_TLVS(p_hwfn, tlv_buf, tlv_size) nothing
+#define OSAL_DBG_ALLOC_USER_DATA(p_hwfn, user_data_ptr) (0)
#endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/common_hsi.h b/drivers/net/qede/base/common_hsi.h
index ca8e59d..2aaf298 100644
--- a/drivers/net/qede/base/common_hsi.h
+++ b/drivers/net/qede/base/common_hsi.h
@@ -95,8 +95,8 @@
#define FW_MAJOR_VERSION 8
-#define FW_MINOR_VERSION 33
-#define FW_REVISION_VERSION 12
+#define FW_MINOR_VERSION 37
+#define FW_REVISION_VERSION 7
#define FW_ENGINEERING_VERSION 0
/***********************/
@@ -1033,13 +1033,14 @@ struct db_rdma_dpm_params {
#define DB_RDMA_DPM_PARAMS_WQE_SIZE_SHIFT 16
#define DB_RDMA_DPM_PARAMS_RESERVED0_MASK 0x1
#define DB_RDMA_DPM_PARAMS_RESERVED0_SHIFT 27
-/* RoCE completion flag */
-#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_MASK 0x1
-#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_SHIFT 28
+/* RoCE ack request (will be set 1) */
+#define DB_RDMA_DPM_PARAMS_ACK_REQUEST_MASK 0x1
+#define DB_RDMA_DPM_PARAMS_ACK_REQUEST_SHIFT 28
#define DB_RDMA_DPM_PARAMS_S_FLG_MASK 0x1 /* RoCE S flag */
#define DB_RDMA_DPM_PARAMS_S_FLG_SHIFT 29
-#define DB_RDMA_DPM_PARAMS_RESERVED1_MASK 0x1
-#define DB_RDMA_DPM_PARAMS_RESERVED1_SHIFT 30
+/* RoCE completion flag for FW use */
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_MASK 0x1
+#define DB_RDMA_DPM_PARAMS_COMPLETION_FLG_SHIFT 30
/* Connection type is iWARP */
#define DB_RDMA_DPM_PARAMS_CONN_TYPE_IS_IWARP_MASK 0x1
#define DB_RDMA_DPM_PARAMS_CONN_TYPE_IS_IWARP_SHIFT 31
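For illustration, with the relocated bits above a doorbell DPM params word
is composed with the SET_FIELD() helper from common_hsi.h; a short sketch
(the values are placeholders, not taken from this patch):

	u32 params = 0;

	/* bit 28 now carries the RoCE ack request, which is always set */
	SET_FIELD(params, DB_RDMA_DPM_PARAMS_ACK_REQUEST, 1);
	/* the FW-owned completion flag moved from bit 28 to bit 30 */
	SET_FIELD(params, DB_RDMA_DPM_PARAMS_COMPLETION_FLG, 1);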
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index cf66c4c..8982214 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -27,8 +27,8 @@
#include "mcp_public.h"
#define ECORE_MAJOR_VERSION 8
-#define ECORE_MINOR_VERSION 30
-#define ECORE_REVISION_VERSION 8
+#define ECORE_MINOR_VERSION 37
+#define ECORE_REVISION_VERSION 20
#define ECORE_ENGINEERING_VERSION 0
#define ECORE_VERSION \
@@ -660,6 +660,7 @@ struct ecore_hwfn {
#endif
struct dbg_tools_data dbg_info;
+ void *dbg_user_info;
struct z_stream_s *stream;
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index d91fe27..b83f003 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -456,6 +456,12 @@ static void ecore_qm_info_free(struct ecore_hwfn *p_hwfn)
OSAL_FREE(p_hwfn->p_dev, qm_info->wfq_data);
}
+static void ecore_dbg_user_data_free(struct ecore_hwfn *p_hwfn)
+{
+ OSAL_FREE(p_hwfn->p_dev, p_hwfn->dbg_user_info);
+ p_hwfn->dbg_user_info = OSAL_NULL;
+}
+
void ecore_resc_free(struct ecore_dev *p_dev)
{
int i;
@@ -483,6 +489,7 @@ void ecore_resc_free(struct ecore_dev *p_dev)
ecore_l2_free(p_hwfn);
ecore_dmae_info_free(p_hwfn);
ecore_dcbx_info_free(p_hwfn);
+ ecore_dbg_user_data_free(p_hwfn);
/* @@@TBD Flush work-queue ? */
/* destroy doorbell recovery mechanism */
@@ -1334,7 +1341,14 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
"Failed to allocate memory for dcbx structure\n");
goto alloc_err;
}
- }
+
+ rc = OSAL_DBG_ALLOC_USER_DATA(p_hwfn, &p_hwfn->dbg_user_info);
+ if (rc) {
+ DP_NOTICE(p_hwfn, false,
+ "Failed to allocate dbg user info structure\n");
+ goto alloc_err;
+ }
+ } /* hwfn loop */
p_dev->reset_stats = OSAL_ZALLOC(p_dev, GFP_KERNEL,
sizeof(*p_dev->reset_stats));
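The ecore code only invokes OSAL_DBG_ALLOC_USER_DATA() here and later frees
the pointer in ecore_dbg_user_data_free(); the DPDK osal stub added above
simply returns 0 without allocating. An OSAL port that does want per-hwfn
debug user data could wire the macro up roughly as below (struct
osal_dbg_user_data and the helper are hypothetical, not part of this patch):

struct osal_dbg_user_data {
	u32 last_dump_size;	/* hypothetical bookkeeping field */
};

static inline int
osal_dbg_alloc_user_data(struct ecore_hwfn *p_hwfn, void **user_data_ptr)
{
	*user_data_ptr = OSAL_ZALLOC(p_hwfn->p_dev, GFP_KERNEL,
				     sizeof(struct osal_dbg_user_data));
	return *user_data_ptr ? 0 : -1;	/* non-zero hits the alloc_err path */
}

#define OSAL_DBG_ALLOC_USER_DATA(p_hwfn, user_data_ptr) \
	osal_dbg_alloc_user_data(p_hwfn, user_data_ptr)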
diff --git a/drivers/net/qede/base/ecore_hsi_common.h b/drivers/net/qede/base/ecore_hsi_common.h
index 2d761b9..6d4a4dd 100644
--- a/drivers/net/qede/base/ecore_hsi_common.h
+++ b/drivers/net/qede/base/ecore_hsi_common.h
@@ -922,7 +922,11 @@ struct core_rx_start_ramrod_data {
struct core_rx_action_on_error action_on_error;
/* set when in GSI offload mode on ROCE connection */
u8 gsi_offload_flag;
- u8 reserved[6];
+/* If set, the inner vlan (802.1q tag) priority that is written to cqe will be
+ * zero out, used for TenantDcb
+ */
+ u8 wipe_inner_vlan_pri_en;
+ u8 reserved[5];
};
@@ -1044,7 +1048,11 @@ struct core_tx_start_ramrod_data {
__le16 qm_pq_id /* QM PQ ID */;
/* set when in GSI offload mode on ROCE connection */
u8 gsi_offload_flag;
- u8 resrved[3];
+/* vport id of the current connection, used to access non_rdma_in_to_in_pri_map
+ * which is per vport
+ */
+ u8 vport_id;
+ u8 resrved[2];
};
@@ -1171,6 +1179,25 @@ struct eth_rx_rate_limit {
};
+/* Update RSS indirection table entry command. One outstanding command supported
+ * per PF.
+ */
+struct eth_tstorm_rss_update_data {
+/* Valid flag. Driver must set this flag, FW clear valid flag when ready for new
+ * RSS update command.
+ */
+ u8 valid;
+/* Global VPORT ID. If RSS is disable for VPORT, RSS update command will be
+ * ignored.
+ */
+ u8 vport_id;
+ u8 ind_table_index /* RSS indirect table index that will be updated. */;
+ u8 reserved;
+ __le16 ind_table_value /* RSS indirect table new value. */;
+ __le16 reserved1 /* reserved. */;
+};
+
+
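This structure backs the per-PF RSS indirection table update command used
later in the series ("add API to update FW RSS indirection table"): the
driver fills one entry at the TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) location
and FW clears the valid flag once the command has been consumed. A
hypothetical fill of a single command (the write/poll plumbing lives in the
ecore API, and the variable names here are illustrative only):

	struct eth_tstorm_rss_update_data cmd = { 0 };

	cmd.vport_id = abs_vport_id;		/* global VPORT being updated */
	cmd.ind_table_index = 5;		/* table slot to change (example) */
	cmd.ind_table_value = OSAL_CPU_TO_LE16(rx_queue_zone_id);
	cmd.valid = 1;				/* FW clears this when done */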
struct eth_ustorm_per_pf_stat {
/* number of total ucast bytes received on loopback port without errors */
struct regpair rcv_lb_ucast_bytes;
@@ -1463,6 +1490,10 @@ struct pf_start_tunnel_config {
* FW will use a default port
*/
u8 set_geneve_udp_port_flg;
+/* Set no-innet-L2 VXLAN tunnel UDP destination port to
+ * no_inner_l2_vxlan_udp_port. If not set - FW will use a default port
+ */
+ u8 set_no_inner_l2_vxlan_udp_port_flg;
u8 tunnel_clss_vxlan /* Rx classification scheme for VXLAN tunnel. */;
/* Rx classification scheme for l2 GENEVE tunnel. */
u8 tunnel_clss_l2geneve;
@@ -1470,11 +1501,15 @@ struct pf_start_tunnel_config {
u8 tunnel_clss_ipgeneve;
u8 tunnel_clss_l2gre /* Rx classification scheme for l2 GRE tunnel. */;
u8 tunnel_clss_ipgre /* Rx classification scheme for ip GRE tunnel. */;
- u8 reserved;
/* VXLAN tunnel UDP destination port. Valid if set_vxlan_udp_port_flg=1 */
__le16 vxlan_udp_port;
/* GENEVE tunnel UDP destination port. Valid if set_geneve_udp_port_flg=1 */
__le16 geneve_udp_port;
+/* no-innet-L2 VXLAN tunnel UDP destination port. Valid if
+ * set_no_inner_l2_vxlan_udp_port_flg=1
+ */
+ __le16 no_inner_l2_vxlan_udp_port;
+ __le16 reserved[3];
};
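For illustration, a PF start that also pins the new no-inner-L2 VXLAN UDP
port would set the flag together with the port value; a small sketch
(p_tun is assumed to point at the ramrod's tunnel config, and 4790 is just
an example port number):

	struct pf_start_tunnel_config *p_tun = &p_ramrod->tunnel_config;

	p_tun->set_no_inner_l2_vxlan_udp_port_flg = 1;
	p_tun->no_inner_l2_vxlan_udp_port = OSAL_CPU_TO_LE16(4790);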
/*
@@ -1547,6 +1582,8 @@ struct pf_update_tunnel_config {
u8 set_vxlan_udp_port_flg;
/* Update GENEVE tunnel UDP destination port. */
u8 set_geneve_udp_port_flg;
+/* Update no-innet-L2 VXLAN tunnel UDP destination port. */
+ u8 set_no_inner_l2_vxlan_udp_port_flg;
u8 tunnel_clss_vxlan /* Classification scheme for VXLAN tunnel. */;
/* Classification scheme for l2 GENEVE tunnel. */
u8 tunnel_clss_l2geneve;
@@ -1554,9 +1591,12 @@ struct pf_update_tunnel_config {
u8 tunnel_clss_ipgeneve;
u8 tunnel_clss_l2gre /* Classification scheme for l2 GRE tunnel. */;
u8 tunnel_clss_ipgre /* Classification scheme for ip GRE tunnel. */;
+ u8 reserved;
__le16 vxlan_udp_port /* VXLAN tunnel UDP destination port. */;
__le16 geneve_udp_port /* GENEVE tunnel UDP destination port. */;
- __le16 reserved;
+/* no-innet-L2 VXLAN tunnel UDP destination port. */
+ __le16 no_inner_l2_vxlan_udp_port;
+ __le16 reserved1[3];
};
/*
@@ -1686,6 +1726,13 @@ struct rl_update_ramrod_data {
/* ID of last RL, that will be updated. If clear, single RL will updated. */
u8 rl_id_last;
u8 rl_dc_qcn_flg /* If set, RL will used for DCQCN. */;
+/* If set, alpha will be reset to 1 when the state machine is idle. */
+ u8 dcqcn_reset_alpha_on_idle;
+/* Byte counter threshold to change rate increase stage. */
+ u8 rl_bc_stage_th;
+/* Timer threshold to change rate increase stage. */
+ u8 rl_timer_stage_th;
+ u8 reserved1;
__le32 rl_bc_rate /* Byte Counter Limit. */;
__le16 rl_max_rate /* Maximum rate in 1.6 Mbps resolution. */;
__le16 rl_r_ai /* Active increase rate. */;
@@ -1694,7 +1741,7 @@ struct rl_update_ramrod_data {
__le32 dcqcn_k_us /* DCQCN Alpha update interval. */;
__le32 dcqcn_timeuot_us /* DCQCN timeout. */;
__le32 qcn_timeuot_us /* QCN timeout. */;
- __le32 reserved[2];
+ __le32 reserved2;
};
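The three new fields extend the DCQCN rate-limiter update that the "add RL
update params" patch later in this series exercises; a hypothetical fill
(threshold values are placeholders):

	struct rl_update_ramrod_data rl_params = { 0 };

	rl_params.rl_id_first = rl_id;		 /* single RL (rl_id assumed) */
	rl_params.rl_id_last = rl_id;
	rl_params.dcqcn_reset_alpha_on_idle = 1; /* reset alpha when RL idles */
	rl_params.rl_bc_stage_th = 2;		 /* byte-counter stage threshold */
	rl_params.rl_timer_stage_th = 2;	 /* timer stage threshold */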
diff --git a/drivers/net/qede/base/ecore_hsi_debug_tools.h b/drivers/net/qede/base/ecore_hsi_debug_tools.h
index bf54872..085af0a 100644
--- a/drivers/net/qede/base/ecore_hsi_debug_tools.h
+++ b/drivers/net/qede/base/ecore_hsi_debug_tools.h
@@ -1091,6 +1091,15 @@ struct idle_chk_data {
};
/*
+ * Pretend parameters
+ */
+struct pretend_params {
+ u8 split_type /* Pretend split type (from enum init_split_types) */;
+ u8 reserved;
+ u16 split_id /* Preted split ID (within the pretend split type) */;
+};
+
+/*
* Debug Tools data (per HW function)
*/
struct dbg_tools_data {
@@ -1102,11 +1111,17 @@ struct dbg_tools_data {
u8 block_in_reset[88];
u8 chip_id /* Chip ID (from enum chip_ids) */;
u8 platform_id /* Platform ID */;
+ u8 num_ports /* Number of ports in the chip */;
+ u8 num_pfs_per_port /* Number of PFs in each port */;
+ u8 num_vfs /* Number of VFs in the chip */;
u8 initialized /* Indicates if the data was initialized */;
u8 use_dmae /* Indicates if DMAE should be used */;
+ u8 reserved;
+ struct pretend_params pretend /* Current pretend parameters */;
/* Numbers of registers that were read since last log */
u32 num_regs_read;
};
+
#endif /* __ECORE_HSI_DEBUG_TOOLS__ */
diff --git a/drivers/net/qede/base/ecore_hsi_eth.h b/drivers/net/qede/base/ecore_hsi_eth.h
index 6b51230..158ca67 100644
--- a/drivers/net/qede/base/ecore_hsi_eth.h
+++ b/drivers/net/qede/base/ecore_hsi_eth.h
@@ -832,6 +832,26 @@ enum eth_filter_type {
/*
+ * inner to inner vlan priority translation configurations
+ */
+struct eth_in_to_in_pri_map_cfg {
+/* If set, non_rdma_in_to_in_pri_map or rdma_in_to_in_pri_map will be used for
+ * inner to inner priority mapping depending on protocol type
+ */
+ u8 inner_vlan_pri_remap_en;
+ u8 reserved[7];
+/* Map for inner to inner vlan priority translation for Non RDMA protocols, used
+ * for TenantDcb. Set inner_vlan_pri_remap_en, when init the map.
+ */
+ u8 non_rdma_in_to_in_pri_map[8];
+/* Map for inner to inner vlan priority translation for RDMA protocols, used for
+ * TenantDcb. Set inner_vlan_pri_remap_en, when init the map.
+ */
+ u8 rdma_in_to_in_pri_map[8];
+};
+
+
+/*
* eth IPv4 Fragment Type
*/
enum eth_ipv4_frag_type {
@@ -1030,8 +1050,11 @@ struct eth_vport_rx_mode {
/* accept all broadcast packets (subject to vlan) */
#define ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL_MASK 0x1
#define ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL_SHIFT 5
-#define ETH_VPORT_RX_MODE_RESERVED1_MASK 0x3FF
-#define ETH_VPORT_RX_MODE_RESERVED1_SHIFT 6
+/* accept any VNI in tunnel VNI classification. Used for default queue. */
+#define ETH_VPORT_RX_MODE_ACCEPT_ANY_VNI_MASK 0x1
+#define ETH_VPORT_RX_MODE_ACCEPT_ANY_VNI_SHIFT 6
+#define ETH_VPORT_RX_MODE_RESERVED1_MASK 0x1FF
+#define ETH_VPORT_RX_MODE_RESERVED1_SHIFT 7
};
@@ -1357,6 +1380,20 @@ struct tx_queue_update_ramrod_data {
};
+/*
+ * Inner to Inner VLAN priority map update mode
+ */
+enum update_in_to_in_pri_map_mode_enum {
+/* Inner to Inner VLAN priority map update Disabled */
+ ETH_IN_TO_IN_PRI_MAP_UPDATE_DISABLED,
+/* Update Inner to Inner VLAN priority map for non RDMA protocols */
+ ETH_IN_TO_IN_PRI_MAP_UPDATE_NON_RDMA_TBL,
+/* Update Inner to Inner VLAN priority map for RDMA protocols */
+ ETH_IN_TO_IN_PRI_MAP_UPDATE_RDMA_TBL,
+ MAX_UPDATE_IN_TO_IN_PRI_MAP_MODE_ENUM
+};
+
+
/*
* Ramrod data for vport update ramrod
@@ -1405,7 +1442,12 @@ struct vport_start_ramrod_data {
u8 ctl_frame_mac_check_en;
/* If set, control frames will be filtered according to ethtype check. */
u8 ctl_frame_ethtype_check_en;
- u8 reserved[1];
+/* If set, the inner vlan (802.1q tag) priority that is written to cqe will be
+ * zero out, used for TenantDcb
+ */
+ u8 wipe_inner_vlan_pri_en;
+/* inner to inner vlan priority translation configurations */
+ struct eth_in_to_in_pri_map_cfg in_to_in_vlan_pri_map_cfg;
};
@@ -1473,7 +1515,14 @@ struct vport_update_ramrod_data_cmn {
u8 ctl_frame_mac_check_en;
/* If set, control frames will be filtered according to ethtype check. */
u8 ctl_frame_ethtype_check_en;
- u8 reserved[15];
+/* Indicates to update RDMA or NON-RDMA vlan remapping priority table according
+ * to update_in_to_in_pri_map_mode_enum, used for TenantDcb (use enum
+ * update_in_to_in_pri_map_mode_enum)
+ */
+ u8 update_in_to_in_pri_map_mode;
+/* Map for inner to inner vlan priority translation, used for TenantDcb. */
+ u8 in_to_in_pri_map[8];
+ u8 reserved[6];
};
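Taken together with update_in_to_in_pri_map_mode_enum above, a vport update
that refreshes the non-RDMA inner-to-inner priority table would look roughly
as follows (p_cmn is assumed to point at the ramrod's common data, and the
identity map is only an example):

	u8 pri;

	p_cmn->update_in_to_in_pri_map_mode =
		ETH_IN_TO_IN_PRI_MAP_UPDATE_NON_RDMA_TBL;
	for (pri = 0; pri < 8; pri++)
		p_cmn->in_to_in_pri_map[pri] = pri;	/* identity map 0..7 */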
struct vport_update_ramrod_mcast {
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.c b/drivers/net/qede/base/ecore_init_fw_funcs.c
index b8496cb..cfc1156 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.c
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.c
@@ -1665,7 +1665,7 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
bool ipv6,
enum gft_profile_type profile_type)
{
- u32 reg_val, cam_line, ram_line_lo, ram_line_hi;
+ u32 reg_val, cam_line, ram_line_lo, ram_line_hi, search_non_ip_as_gft;
if (!ipv6 && !ipv4)
DP_NOTICE(p_hwfn, true, "gft_config: must accept at least on of - ipv4 or ipv6'\n");
@@ -1729,6 +1729,9 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
ram_line_lo = 0;
ram_line_hi = 0;
+ /* Search no IP as GFT */
+ search_non_ip_as_gft = 0;
+
/* Tunnel type */
SET_FIELD(ram_line_lo, GFT_RAM_LINE_TUNNEL_DST_PORT, 1);
SET_FIELD(ram_line_lo, GFT_RAM_LINE_TUNNEL_OVER_IP_PROTOCOL, 1);
@@ -1752,8 +1755,13 @@ void ecore_gft_config(struct ecore_hwfn *p_hwfn,
SET_FIELD(ram_line_lo, GFT_RAM_LINE_ETHERTYPE, 1);
} else if (profile_type == GFT_PROFILE_TYPE_TUNNEL_TYPE) {
SET_FIELD(ram_line_lo, GFT_RAM_LINE_TUNNEL_ETHERTYPE, 1);
+
+ /* Allow tunneled traffic without inner IP */
+ search_non_ip_as_gft = 1;
}
+ ecore_wr(p_hwfn, p_ptt, PRS_REG_SEARCH_NON_IP_AS_GFT,
+ search_non_ip_as_gft);
ecore_wr(p_hwfn, p_ptt,
PRS_REG_GFT_PROFILE_MASK_RAM + RAM_LINE_SIZE * pf_id,
ram_line_lo);
@@ -1996,52 +2004,49 @@ void ecore_enable_context_validation(struct ecore_hwfn *p_hwfn,
ecore_wr(p_hwfn, p_ptt, CDU_REG_TCFC_CTX_VALID0, ctx_validation);
}
-#define RSS_IND_TABLE_BASE_ADDR 4112
-#define RSS_IND_TABLE_VPORT_SIZE 16
-#define RSS_IND_TABLE_ENTRY_PER_LINE 8
-/* Update RSS indirection table entry. */
-void ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u8 rss_id,
- u8 ind_table_index,
- u16 ind_table_value)
+/*******************************************************************************
+ * File name : rdma_init.c
+ * Author : Michael Shteinbok
+ *******************************************************************************
+ *******************************************************************************
+ * Description:
+ * RDMA HSI functions
+ *
+ *******************************************************************************
+ * Notes: This is the input to the auto generated file drv_init_fw_funcs.c
+ *
+ *******************************************************************************
+ */
+static u32 ecore_get_rdma_assert_ram_addr(struct ecore_hwfn *p_hwfn,
+ u8 storm_id)
{
- u32 cnt, rss_addr;
- u32 *reg_val;
- u16 rss_ind_entry[RSS_IND_TABLE_ENTRY_PER_LINE];
- u16 rss_ind_mask[RSS_IND_TABLE_ENTRY_PER_LINE];
-
- /* get entry address */
- rss_addr = RSS_IND_TABLE_BASE_ADDR +
- RSS_IND_TABLE_VPORT_SIZE * rss_id +
- ind_table_index / RSS_IND_TABLE_ENTRY_PER_LINE;
-
- /* prepare update command */
- ind_table_index %= RSS_IND_TABLE_ENTRY_PER_LINE;
-
- for (cnt = 0; cnt < RSS_IND_TABLE_ENTRY_PER_LINE; cnt++) {
- if (cnt == ind_table_index) {
- rss_ind_entry[cnt] = ind_table_value;
- rss_ind_mask[cnt] = 0xFFFF;
- } else {
- rss_ind_entry[cnt] = 0;
- rss_ind_mask[cnt] = 0;
- }
+ switch (storm_id) {
+ case 0: return TSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
+ TSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
+ case 1: return MSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
+ MSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
+ case 2: return USEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
+ USTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
+ case 3: return XSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
+ XSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
+ case 4: return YSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
+ YSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
+ case 5: return PSEM_REG_FAST_MEMORY + SEM_FAST_REG_INT_RAM +
+ PSTORM_RDMA_ASSERT_LEVEL_OFFSET(p_hwfn->rel_pf_id);
+
+ default: return 0;
}
+}
- /* Update entry in HW*/
- ecore_wr(p_hwfn, p_ptt, RSS_REG_RSS_RAM_ADDR, rss_addr);
-
- reg_val = (u32 *)rss_ind_mask;
- ecore_wr(p_hwfn, p_ptt, RSS_REG_RSS_RAM_MASK, reg_val[0]);
- ecore_wr(p_hwfn, p_ptt, RSS_REG_RSS_RAM_MASK + 4, reg_val[1]);
- ecore_wr(p_hwfn, p_ptt, RSS_REG_RSS_RAM_MASK + 8, reg_val[2]);
- ecore_wr(p_hwfn, p_ptt, RSS_REG_RSS_RAM_MASK + 12, reg_val[3]);
+void ecore_set_rdma_error_level(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u8 assert_level[NUM_STORMS])
+{
+ u8 storm_id;
+ for (storm_id = 0; storm_id < NUM_STORMS; storm_id++) {
+ u32 ram_addr = ecore_get_rdma_assert_ram_addr(p_hwfn, storm_id);
- reg_val = (u32 *)rss_ind_entry;
- ecore_wr(p_hwfn, p_ptt, RSS_REG_RSS_RAM_DATA, reg_val[0]);
- ecore_wr(p_hwfn, p_ptt, RSS_REG_RSS_RAM_DATA + 4, reg_val[1]);
- ecore_wr(p_hwfn, p_ptt, RSS_REG_RSS_RAM_DATA + 8, reg_val[2]);
- ecore_wr(p_hwfn, p_ptt, RSS_REG_RSS_RAM_DATA + 12, reg_val[3]);
+ ecore_wr(p_hwfn, p_ptt, ram_addr, assert_level[storm_id]);
+ }
}
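A caller would typically program the same level for every storm; a minimal
usage sketch (the level value 2 is arbitrary):

	u8 levels[NUM_STORMS];

	OSAL_MEMSET(levels, 2, sizeof(levels));
	ecore_set_rdma_error_level(p_hwfn, p_ptt, levels);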
diff --git a/drivers/net/qede/base/ecore_init_fw_funcs.h b/drivers/net/qede/base/ecore_init_fw_funcs.h
index 1024bb2..3503a90 100644
--- a/drivers/net/qede/base/ecore_init_fw_funcs.h
+++ b/drivers/net/qede/base/ecore_init_fw_funcs.h
@@ -472,21 +472,35 @@ void ecore_memset_task_ctx(void *p_ctx_mem,
u32 ctx_size,
u8 ctx_type);
-/**
- * @brief ecore_update_eth_rss_ind_table_entry - Update RSS indirection table
- * entry.
- * The function must run in exclusive mode to prevent wrong RSS configuration.
+
+/*******************************************************************************
+ * File name : rdma_init.h
+ * Author : Michael Shteinbok
+ *******************************************************************************
+ *******************************************************************************
+ * Description:
+ * RDMA HSI functions header
+ *
+ *******************************************************************************
+ * Notes: This is the input to the auto generated file drv_init_fw_funcs.h
*
- * @param p_hwfn - HW device data
- * @param p_ptt - ptt window used for writing the registers.
- * @param rss_id - RSS engine ID.
- * @param ind_table_index - RSS indirect table index.
- * @param ind_table_value - RSS indirect table new value.
+ *******************************************************************************
*/
-void ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u8 rss_id,
- u8 ind_table_index,
- u16 ind_table_value);
+#define NUM_STORMS 6
+
+
+
+/**
+ * @brief ecore_set_rdma_error_level - Sets the RDMA assert level.
+ * If the severity of the error will be
+ * above the level, the FW will assert.
+ * @param p_hwfn - HW device data
+ * @param p_ptt - ptt window used for writing the registers
+ * @param assert_level - An array of assert levels for each storm.
+ */
+void ecore_set_rdma_error_level(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u8 assert_level[NUM_STORMS]);
+
#endif
diff --git a/drivers/net/qede/base/ecore_iro.h b/drivers/net/qede/base/ecore_iro.h
index 0569302..12d45c1 100644
--- a/drivers/net/qede/base/ecore_iro.h
+++ b/drivers/net/qede/base/ecore_iro.h
@@ -113,91 +113,129 @@
/* Tstorm Eth limit Rx rate */
#define ETH_RX_RATE_LIMIT_OFFSET(pf_id) (IRO[29].base + ((pf_id) * IRO[29].m1))
#define ETH_RX_RATE_LIMIT_SIZE (IRO[29].size)
+/* RSS indirection table entry update command per PF offset in TSTORM PF BAR0.
+ * Use eth_tstorm_rss_update_data for update.
+ */
+#define TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) (IRO[30].base + \
+ ((pf_id) * IRO[30].m1))
+#define TSTORM_ETH_RSS_UPDATE_SIZE (IRO[30].size)
/* Xstorm queue zone */
-#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) (IRO[30].base + \
- ((queue_id) * IRO[30].m1))
-#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[30].size)
+#define XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) (IRO[31].base + \
+ ((queue_id) * IRO[31].m1))
+#define XSTORM_ETH_QUEUE_ZONE_SIZE (IRO[31].size)
/* Ystorm cqe producer */
-#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[31].base + \
- ((rss_id) * IRO[31].m1))
-#define YSTORM_TOE_CQ_PROD_SIZE (IRO[31].size)
-/* Ustorm cqe producer */
-#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[32].base + \
+#define YSTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[32].base + \
((rss_id) * IRO[32].m1))
-#define USTORM_TOE_CQ_PROD_SIZE (IRO[32].size)
+#define YSTORM_TOE_CQ_PROD_SIZE (IRO[32].size)
+/* Ustorm cqe producer */
+#define USTORM_TOE_CQ_PROD_OFFSET(rss_id) (IRO[33].base + \
+ ((rss_id) * IRO[33].m1))
+#define USTORM_TOE_CQ_PROD_SIZE (IRO[33].size)
/* Ustorm grq producer */
-#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) (IRO[33].base + \
- ((pf_id) * IRO[33].m1))
-#define USTORM_TOE_GRQ_PROD_SIZE (IRO[33].size)
+#define USTORM_TOE_GRQ_PROD_OFFSET(pf_id) (IRO[34].base + \
+ ((pf_id) * IRO[34].m1))
+#define USTORM_TOE_GRQ_PROD_SIZE (IRO[34].size)
/* Tstorm cmdq-cons of given command queue-id */
-#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) (IRO[34].base + \
- ((cmdq_queue_id) * IRO[34].m1))
-#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[34].size)
+#define TSTORM_SCSI_CMDQ_CONS_OFFSET(cmdq_queue_id) (IRO[35].base + \
+ ((cmdq_queue_id) * IRO[35].m1))
+#define TSTORM_SCSI_CMDQ_CONS_SIZE (IRO[35].size)
/* Tstorm (reflects M-Storm) bdq-external-producer of given function ID,
* BDqueue-id
*/
-#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) (IRO[35].base + \
- ((func_id) * IRO[35].m1) + ((bdq_id) * IRO[35].m2))
-#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[35].size)
-/* Mstorm bdq-external-producer of given BDQ resource ID, BDqueue-id */
-#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) (IRO[36].base + \
+#define TSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) (IRO[36].base + \
((func_id) * IRO[36].m1) + ((bdq_id) * IRO[36].m2))
-#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[36].size)
+#define TSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[36].size)
+/* Mstorm bdq-external-producer of given BDQ resource ID, BDqueue-id */
+#define MSTORM_SCSI_BDQ_EXT_PROD_OFFSET(func_id, bdq_id) (IRO[37].base + \
+ ((func_id) * IRO[37].m1) + ((bdq_id) * IRO[37].m2))
+#define MSTORM_SCSI_BDQ_EXT_PROD_SIZE (IRO[37].size)
/* Tstorm iSCSI RX stats */
-#define TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[37].base + \
- ((pf_id) * IRO[37].m1))
-#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[37].size)
-/* Mstorm iSCSI RX stats */
-#define MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[38].base + \
+#define TSTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[38].base + \
((pf_id) * IRO[38].m1))
-#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[38].size)
-/* Ustorm iSCSI RX stats */
-#define USTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[39].base + \
+#define TSTORM_ISCSI_RX_STATS_SIZE (IRO[38].size)
+/* Mstorm iSCSI RX stats */
+#define MSTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[39].base + \
((pf_id) * IRO[39].m1))
-#define USTORM_ISCSI_RX_STATS_SIZE (IRO[39].size)
-/* Xstorm iSCSI TX stats */
-#define XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[40].base + \
+#define MSTORM_ISCSI_RX_STATS_SIZE (IRO[39].size)
+/* Ustorm iSCSI RX stats */
+#define USTORM_ISCSI_RX_STATS_OFFSET(pf_id) (IRO[40].base + \
((pf_id) * IRO[40].m1))
-#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[40].size)
-/* Ystorm iSCSI TX stats */
-#define YSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[41].base + \
+#define USTORM_ISCSI_RX_STATS_SIZE (IRO[40].size)
+/* Xstorm iSCSI TX stats */
+#define XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[41].base + \
((pf_id) * IRO[41].m1))
-#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[41].size)
-/* Pstorm iSCSI TX stats */
-#define PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[42].base + \
+#define XSTORM_ISCSI_TX_STATS_SIZE (IRO[41].size)
+/* Ystorm iSCSI TX stats */
+#define YSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[42].base + \
((pf_id) * IRO[42].m1))
-#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[42].size)
-/* Tstorm FCoE RX stats */
-#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) (IRO[43].base + \
+#define YSTORM_ISCSI_TX_STATS_SIZE (IRO[42].size)
+/* Pstorm iSCSI TX stats */
+#define PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) (IRO[43].base + \
((pf_id) * IRO[43].m1))
-#define TSTORM_FCOE_RX_STATS_SIZE (IRO[43].size)
-/* Pstorm FCoE TX stats */
-#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) (IRO[44].base + \
+#define PSTORM_ISCSI_TX_STATS_SIZE (IRO[43].size)
+/* Tstorm FCoE RX stats */
+#define TSTORM_FCOE_RX_STATS_OFFSET(pf_id) (IRO[44].base + \
((pf_id) * IRO[44].m1))
-#define PSTORM_FCOE_TX_STATS_SIZE (IRO[44].size)
+#define TSTORM_FCOE_RX_STATS_SIZE (IRO[44].size)
+/* Pstorm FCoE TX stats */
+#define PSTORM_FCOE_TX_STATS_OFFSET(pf_id) (IRO[45].base + \
+ ((pf_id) * IRO[45].m1))
+#define PSTORM_FCOE_TX_STATS_SIZE (IRO[45].size)
/* Pstorm RDMA queue statistics */
-#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) \
- (IRO[45].base + ((rdma_stat_counter_id) * IRO[45].m1))
-#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[45].size)
-/* Tstorm RDMA queue statistics */
-#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \
+#define PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[46].base + \
((rdma_stat_counter_id) * IRO[46].m1))
-#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size)
+#define PSTORM_RDMA_QUEUE_STAT_SIZE (IRO[46].size)
+/* Tstorm RDMA queue statistics */
+#define TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) (IRO[47].base + \
+ ((rdma_stat_counter_id) * IRO[47].m1))
+#define TSTORM_RDMA_QUEUE_STAT_SIZE (IRO[47].size)
+/* Xstorm error level for assert */
+#define XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[48].base + \
+ ((pf_id) * IRO[48].m1))
+#define XSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[48].size)
+/* Ystorm error level for assert */
+#define YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[49].base + \
+ ((pf_id) * IRO[49].m1))
+#define YSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[49].size)
+/* Pstorm error level for assert */
+#define PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[50].base + \
+ ((pf_id) * IRO[50].m1))
+#define PSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[50].size)
+/* Tstorm error level for assert */
+#define TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[51].base + \
+ ((pf_id) * IRO[51].m1))
+#define TSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[51].size)
+/* Mstorm error level for assert */
+#define MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[52].base + \
+ ((pf_id) * IRO[52].m1))
+#define MSTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[52].size)
+/* Ustorm error level for assert */
+#define USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) (IRO[53].base + \
+ ((pf_id) * IRO[53].m1))
+#define USTORM_RDMA_ASSERT_LEVEL_SIZE (IRO[53].size)
/* Xstorm iWARP rxmit stats */
-#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[47].base + \
- ((pf_id) * IRO[47].m1))
-#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[47].size)
+#define XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) (IRO[54].base + \
+ ((pf_id) * IRO[54].m1))
+#define XSTORM_IWARP_RXMIT_STATS_SIZE (IRO[54].size)
/* Tstorm RoCE Event Statistics */
-#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[48].base + \
- ((roce_pf_id) * IRO[48].m1))
-#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[48].size)
+#define TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) (IRO[55].base + \
+ ((roce_pf_id) * IRO[55].m1))
+#define TSTORM_ROCE_EVENTS_STAT_SIZE (IRO[55].size)
/* DCQCN Received Statistics */
-#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) (IRO[49].base + \
- ((roce_pf_id) * IRO[49].m1))
-#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[49].size)
+#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) (IRO[56].base + \
+ ((roce_pf_id) * IRO[56].m1))
+#define YSTORM_ROCE_DCQCN_RECEIVED_STATS_SIZE (IRO[56].size)
+/* RoCE Error Statistics */
+#define YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) (IRO[57].base + \
+ ((roce_pf_id) * IRO[57].m1))
+#define YSTORM_ROCE_ERROR_STATS_SIZE (IRO[57].size)
/* DCQCN Sent Statistics */
-#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) (IRO[50].base + \
- ((roce_pf_id) * IRO[50].m1))
-#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[50].size)
+#define PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) (IRO[58].base + \
+ ((roce_pf_id) * IRO[58].m1))
+#define PSTORM_ROCE_DCQCN_SENT_STATS_SIZE (IRO[58].size)
+/* RoCE CQEs Statistics */
+#define USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) (IRO[59].base + \
+ ((roce_pf_id) * IRO[59].m1))
+#define USTORM_ROCE_CQE_STATS_SIZE (IRO[59].size)
#endif /* __IRO_H__ */
diff --git a/drivers/net/qede/base/ecore_iro_values.h b/drivers/net/qede/base/ecore_iro_values.h
index 685fa2e..30e632c 100644
--- a/drivers/net/qede/base/ecore_iro_values.h
+++ b/drivers/net/qede/base/ecore_iro_values.h
@@ -7,7 +7,7 @@
#ifndef __IRO_VALUES_H__
#define __IRO_VALUES_H__
-static const struct iro iro_arr[51] = {
+static const struct iro iro_arr[60] = {
/* YSTORM_FLOW_CONTROL_MODE_OFFSET */
{ 0x0, 0x0, 0x0, 0x0, 0x8},
/* TSTORM_PORT_STAT_OFFSET(port_id) */
@@ -29,7 +29,7 @@
/* YSTORM_INTEG_TEST_DATA_OFFSET */
{ 0x3e38, 0x0, 0x0, 0x0, 0x78},
/* PSTORM_INTEG_TEST_DATA_OFFSET */
- { 0x2b78, 0x0, 0x0, 0x0, 0x78},
+ { 0x3ef8, 0x0, 0x0, 0x0, 0x78},
/* TSTORM_INTEG_TEST_DATA_OFFSET */
{ 0x4c40, 0x0, 0x0, 0x0, 0x78},
/* MSTORM_INTEG_TEST_DATA_OFFSET */
@@ -43,7 +43,7 @@
/* CORE_LL2_USTORM_PER_QUEUE_STAT_OFFSET(core_rx_queue_id) */
{ 0xb820, 0x30, 0x0, 0x0, 0x30},
/* CORE_LL2_PSTORM_PER_QUEUE_STAT_OFFSET(core_tx_stats_id) */
- { 0x96c0, 0x30, 0x0, 0x0, 0x30},
+ { 0xa990, 0x30, 0x0, 0x0, 0x30},
/* MSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
{ 0x4b68, 0x80, 0x0, 0x0, 0x40},
/* MSTORM_ETH_PF_PRODS_OFFSET(queue_id) */
@@ -59,15 +59,17 @@
/* USTORM_ETH_PF_STAT_OFFSET(pf_id) */
{ 0xe770, 0x60, 0x0, 0x0, 0x60},
/* PSTORM_QUEUE_STAT_OFFSET(stat_counter_id) */
- { 0x2d10, 0x80, 0x0, 0x0, 0x38},
+ { 0x4090, 0x80, 0x0, 0x0, 0x38},
/* PSTORM_ETH_PF_STAT_OFFSET(pf_id) */
- { 0xf2b8, 0x78, 0x0, 0x0, 0x78},
+ { 0xfea8, 0x78, 0x0, 0x0, 0x78},
/* PSTORM_CTL_FRAME_ETHTYPE_OFFSET(ethType_id) */
{ 0x1f8, 0x4, 0x0, 0x0, 0x4},
/* TSTORM_ETH_PRS_INPUT_OFFSET */
{ 0xaf20, 0x0, 0x0, 0x0, 0xf0},
/* ETH_RX_RATE_LIMIT_OFFSET(pf_id) */
{ 0xb010, 0x8, 0x0, 0x0, 0x8},
+/* TSTORM_ETH_RSS_UPDATE_OFFSET(pf_id) */
+ { 0xc00, 0x8, 0x0, 0x0, 0x8},
/* XSTORM_ETH_QUEUE_ZONE_OFFSET(queue_id) */
{ 0x1f8, 0x8, 0x0, 0x0, 0x8},
/* YSTORM_TOE_CQ_PROD_OFFSET(rss_id) */
@@ -91,25 +93,41 @@
/* XSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
{ 0xa588, 0x50, 0x0, 0x0, 0x20},
/* YSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
- { 0x8700, 0x40, 0x0, 0x0, 0x28},
+ { 0x8f00, 0x40, 0x0, 0x0, 0x28},
/* PSTORM_ISCSI_TX_STATS_OFFSET(pf_id) */
- { 0x10300, 0x18, 0x0, 0x0, 0x10},
+ { 0x10e30, 0x18, 0x0, 0x0, 0x10},
/* TSTORM_FCOE_RX_STATS_OFFSET(pf_id) */
{ 0xde48, 0x48, 0x0, 0x0, 0x38},
/* PSTORM_FCOE_TX_STATS_OFFSET(pf_id) */
- { 0x10768, 0x20, 0x0, 0x0, 0x20},
+ { 0x11298, 0x20, 0x0, 0x0, 0x20},
/* PSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
- { 0x2d48, 0x80, 0x0, 0x0, 0x10},
+ { 0x40c8, 0x80, 0x0, 0x0, 0x10},
/* TSTORM_RDMA_QUEUE_STAT_OFFSET(rdma_stat_counter_id) */
{ 0x5048, 0x10, 0x0, 0x0, 0x10},
+/* XSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
+ { 0xa928, 0x8, 0x0, 0x0, 0x1},
+/* YSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
+ { 0xa128, 0x8, 0x0, 0x0, 0x1},
+/* PSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
+ { 0x11a30, 0x8, 0x0, 0x0, 0x1},
+/* TSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
+ { 0xf030, 0x8, 0x0, 0x0, 0x1},
+/* MSTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
+ { 0x13028, 0x8, 0x0, 0x0, 0x1},
+/* USTORM_RDMA_ASSERT_LEVEL_OFFSET(pf_id) */
+ { 0x12c58, 0x8, 0x0, 0x0, 0x1},
/* XSTORM_IWARP_RXMIT_STATS_OFFSET(pf_id) */
{ 0xc9b8, 0x30, 0x0, 0x0, 0x10},
/* TSTORM_ROCE_EVENTS_STAT_OFFSET(roce_pf_id) */
- { 0xed90, 0x10, 0x0, 0x0, 0x10},
+ { 0xed90, 0x28, 0x0, 0x0, 0x28},
/* YSTORM_ROCE_DCQCN_RECEIVED_STATS_OFFSET(roce_pf_id) */
- { 0xa520, 0x10, 0x0, 0x0, 0x10},
+ { 0xad20, 0x18, 0x0, 0x0, 0x18},
+/* YSTORM_ROCE_ERROR_STATS_OFFSET(roce_pf_id) */
+ { 0xaea0, 0x8, 0x0, 0x0, 0x8},
/* PSTORM_ROCE_DCQCN_SENT_STATS_OFFSET(roce_pf_id) */
- { 0x13108, 0x8, 0x0, 0x0, 0x8},
+ { 0x13c38, 0x8, 0x0, 0x0, 0x8},
+/* USTORM_ROCE_CQE_STATS_OFFSET(roce_pf_id) */
+ { 0x13c50, 0x18, 0x0, 0x0, 0x18},
};
#endif /* __IRO_VALUES_H__ */
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index ca4d901..ec40aac 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -608,6 +608,9 @@ enum _ecore_status_t
SET_FIELD(state, ETH_VPORT_RX_MODE_BCAST_ACCEPT_ALL,
!!(accept_filter & ECORE_ACCEPT_BCAST));
+ SET_FIELD(state, ETH_VPORT_RX_MODE_ACCEPT_ANY_VNI,
+ !!(accept_filter & ECORE_ACCEPT_ANY_VNI));
+
p_ramrod->rx_mode.state = OSAL_CPU_TO_LE16(state);
DP_VERBOSE(p_hwfn, ECORE_MSG_SP,
"vport[%02x] p_ramrod->rx_mode.state = 0x%x\n",
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 85034e6..bde825c 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -137,6 +137,7 @@ struct ecore_filter_accept_flags {
#define ECORE_ACCEPT_MCAST_MATCHED 0x08
#define ECORE_ACCEPT_MCAST_UNMATCHED 0x10
#define ECORE_ACCEPT_BCAST 0x20
+#define ECORE_ACCEPT_ANY_VNI 0x40
};
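A user of the filter API opts into the new behaviour by OR-ing the flag into
the Rx accept mask before issuing the vport update; a short sketch (the
other flags shown are illustrative):

	struct ecore_filter_accept_flags accept_flags = { 0 };

	accept_flags.update_rx_mode_config = 1;
	accept_flags.rx_accept_filter = ECORE_ACCEPT_UCAST_MATCHED |
					ECORE_ACCEPT_BCAST |
					ECORE_ACCEPT_ANY_VNI; /* default queue: any VNI */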
enum ecore_filter_config_mode {
diff --git a/drivers/net/qede/base/ecore_rt_defs.h b/drivers/net/qede/base/ecore_rt_defs.h
index 721b8c1..3860e1a 100644
--- a/drivers/net/qede/base/ecore_rt_defs.h
+++ b/drivers/net/qede/base/ecore_rt_defs.h
@@ -390,147 +390,146 @@
#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_OFFSET 39769
#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_RT_SIZE 16
#define NIG_REG_TX_EDPM_CTRL_RT_OFFSET 39785
-#define NIG_REG_ROCE_DUPLICATE_TO_HOST_RT_OFFSET 39786
-#define NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET 39787
+#define NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET 39786
#define NIG_REG_PPF_TO_ENGINE_SEL_RT_SIZE 8
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_OFFSET 39795
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_OFFSET 39794
#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_VALUE_RT_SIZE 1024
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_OFFSET 40819
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_OFFSET 40818
#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_EN_RT_SIZE 512
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_OFFSET 41331
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_OFFSET 41330
#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_MODE_RT_SIZE 512
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET 41843
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_OFFSET 41842
#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_PROTOCOL_TYPE_RT_SIZE 512
-#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_OFFSET 42355
+#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_OFFSET 42354
#define NIG_REG_LLH_PF_CLS_FUNC_FILTER_HDR_SEL_RT_SIZE 512
-#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_OFFSET 42867
+#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_OFFSET 42866
#define NIG_REG_LLH_PF_CLS_FILTERS_MAP_RT_SIZE 32
-#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET 42899
-#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET 42900
-#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET 42901
-#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET 42902
-#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET 42903
-#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET 42904
-#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET 42905
-#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET 42906
-#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET 42907
-#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET 42908
-#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET 42909
-#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET 42910
-#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET 42911
-#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET 42912
-#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET 42913
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET 42914
-#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET 42915
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET 42916
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET 42917
-#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET 42918
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET 42919
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET 42920
-#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET 42921
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET 42922
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET 42923
-#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET 42924
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET 42925
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET 42926
-#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET 42927
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET 42928
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET 42929
-#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET 42930
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET 42931
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET 42932
-#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET 42933
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET 42934
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET 42935
-#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET 42936
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET 42937
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET 42938
-#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET 42939
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET 42940
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET 42941
-#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET 42942
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET 42943
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET 42944
-#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET 42945
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET 42946
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET 42947
-#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET 42948
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET 42949
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET 42950
-#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET 42951
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET 42952
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET 42953
-#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET 42954
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET 42955
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET 42956
-#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET 42957
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET 42958
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET 42959
-#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET 42960
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET 42961
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET 42962
-#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET 42963
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET 42964
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET 42965
-#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET 42966
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET 42967
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET 42968
-#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET 42969
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET 42970
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET 42971
-#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET 42972
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET 42973
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ20_RT_OFFSET 42974
-#define PBF_REG_BTB_GUARANTEED_VOQ20_RT_OFFSET 42975
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ20_RT_OFFSET 42976
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ21_RT_OFFSET 42977
-#define PBF_REG_BTB_GUARANTEED_VOQ21_RT_OFFSET 42978
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ21_RT_OFFSET 42979
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ22_RT_OFFSET 42980
-#define PBF_REG_BTB_GUARANTEED_VOQ22_RT_OFFSET 42981
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ22_RT_OFFSET 42982
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ23_RT_OFFSET 42983
-#define PBF_REG_BTB_GUARANTEED_VOQ23_RT_OFFSET 42984
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ23_RT_OFFSET 42985
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ24_RT_OFFSET 42986
-#define PBF_REG_BTB_GUARANTEED_VOQ24_RT_OFFSET 42987
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ24_RT_OFFSET 42988
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ25_RT_OFFSET 42989
-#define PBF_REG_BTB_GUARANTEED_VOQ25_RT_OFFSET 42990
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ25_RT_OFFSET 42991
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ26_RT_OFFSET 42992
-#define PBF_REG_BTB_GUARANTEED_VOQ26_RT_OFFSET 42993
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ26_RT_OFFSET 42994
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ27_RT_OFFSET 42995
-#define PBF_REG_BTB_GUARANTEED_VOQ27_RT_OFFSET 42996
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ27_RT_OFFSET 42997
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ28_RT_OFFSET 42998
-#define PBF_REG_BTB_GUARANTEED_VOQ28_RT_OFFSET 42999
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ28_RT_OFFSET 43000
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ29_RT_OFFSET 43001
-#define PBF_REG_BTB_GUARANTEED_VOQ29_RT_OFFSET 43002
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ29_RT_OFFSET 43003
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ30_RT_OFFSET 43004
-#define PBF_REG_BTB_GUARANTEED_VOQ30_RT_OFFSET 43005
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ30_RT_OFFSET 43006
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ31_RT_OFFSET 43007
-#define PBF_REG_BTB_GUARANTEED_VOQ31_RT_OFFSET 43008
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ31_RT_OFFSET 43009
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ32_RT_OFFSET 43010
-#define PBF_REG_BTB_GUARANTEED_VOQ32_RT_OFFSET 43011
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ32_RT_OFFSET 43012
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ33_RT_OFFSET 43013
-#define PBF_REG_BTB_GUARANTEED_VOQ33_RT_OFFSET 43014
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ33_RT_OFFSET 43015
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ34_RT_OFFSET 43016
-#define PBF_REG_BTB_GUARANTEED_VOQ34_RT_OFFSET 43017
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ34_RT_OFFSET 43018
-#define PBF_REG_YCMD_QS_NUM_LINES_VOQ35_RT_OFFSET 43019
-#define PBF_REG_BTB_GUARANTEED_VOQ35_RT_OFFSET 43020
-#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ35_RT_OFFSET 43021
-#define XCM_REG_CON_PHY_Q3_RT_OFFSET 43022
+#define CDU_REG_CID_ADDR_PARAMS_RT_OFFSET 42898
+#define CDU_REG_SEGMENT0_PARAMS_RT_OFFSET 42899
+#define CDU_REG_SEGMENT1_PARAMS_RT_OFFSET 42900
+#define CDU_REG_PF_SEG0_TYPE_OFFSET_RT_OFFSET 42901
+#define CDU_REG_PF_SEG1_TYPE_OFFSET_RT_OFFSET 42902
+#define CDU_REG_PF_SEG2_TYPE_OFFSET_RT_OFFSET 42903
+#define CDU_REG_PF_SEG3_TYPE_OFFSET_RT_OFFSET 42904
+#define CDU_REG_PF_FL_SEG0_TYPE_OFFSET_RT_OFFSET 42905
+#define CDU_REG_PF_FL_SEG1_TYPE_OFFSET_RT_OFFSET 42906
+#define CDU_REG_PF_FL_SEG2_TYPE_OFFSET_RT_OFFSET 42907
+#define CDU_REG_PF_FL_SEG3_TYPE_OFFSET_RT_OFFSET 42908
+#define CDU_REG_VF_SEG_TYPE_OFFSET_RT_OFFSET 42909
+#define CDU_REG_VF_FL_SEG_TYPE_OFFSET_RT_OFFSET 42910
+#define PBF_REG_TAG_ETHERTYPE_0_RT_OFFSET 42911
+#define PBF_REG_BTB_SHARED_AREA_SIZE_RT_OFFSET 42912
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ0_RT_OFFSET 42913
+#define PBF_REG_BTB_GUARANTEED_VOQ0_RT_OFFSET 42914
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ0_RT_OFFSET 42915
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ1_RT_OFFSET 42916
+#define PBF_REG_BTB_GUARANTEED_VOQ1_RT_OFFSET 42917
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ1_RT_OFFSET 42918
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ2_RT_OFFSET 42919
+#define PBF_REG_BTB_GUARANTEED_VOQ2_RT_OFFSET 42920
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ2_RT_OFFSET 42921
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ3_RT_OFFSET 42922
+#define PBF_REG_BTB_GUARANTEED_VOQ3_RT_OFFSET 42923
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ3_RT_OFFSET 42924
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ4_RT_OFFSET 42925
+#define PBF_REG_BTB_GUARANTEED_VOQ4_RT_OFFSET 42926
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ4_RT_OFFSET 42927
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ5_RT_OFFSET 42928
+#define PBF_REG_BTB_GUARANTEED_VOQ5_RT_OFFSET 42929
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ5_RT_OFFSET 42930
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ6_RT_OFFSET 42931
+#define PBF_REG_BTB_GUARANTEED_VOQ6_RT_OFFSET 42932
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ6_RT_OFFSET 42933
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ7_RT_OFFSET 42934
+#define PBF_REG_BTB_GUARANTEED_VOQ7_RT_OFFSET 42935
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ7_RT_OFFSET 42936
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ8_RT_OFFSET 42937
+#define PBF_REG_BTB_GUARANTEED_VOQ8_RT_OFFSET 42938
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ8_RT_OFFSET 42939
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ9_RT_OFFSET 42940
+#define PBF_REG_BTB_GUARANTEED_VOQ9_RT_OFFSET 42941
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ9_RT_OFFSET 42942
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ10_RT_OFFSET 42943
+#define PBF_REG_BTB_GUARANTEED_VOQ10_RT_OFFSET 42944
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ10_RT_OFFSET 42945
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ11_RT_OFFSET 42946
+#define PBF_REG_BTB_GUARANTEED_VOQ11_RT_OFFSET 42947
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ11_RT_OFFSET 42948
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ12_RT_OFFSET 42949
+#define PBF_REG_BTB_GUARANTEED_VOQ12_RT_OFFSET 42950
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ12_RT_OFFSET 42951
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ13_RT_OFFSET 42952
+#define PBF_REG_BTB_GUARANTEED_VOQ13_RT_OFFSET 42953
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ13_RT_OFFSET 42954
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ14_RT_OFFSET 42955
+#define PBF_REG_BTB_GUARANTEED_VOQ14_RT_OFFSET 42956
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ14_RT_OFFSET 42957
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ15_RT_OFFSET 42958
+#define PBF_REG_BTB_GUARANTEED_VOQ15_RT_OFFSET 42959
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ15_RT_OFFSET 42960
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ16_RT_OFFSET 42961
+#define PBF_REG_BTB_GUARANTEED_VOQ16_RT_OFFSET 42962
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ16_RT_OFFSET 42963
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ17_RT_OFFSET 42964
+#define PBF_REG_BTB_GUARANTEED_VOQ17_RT_OFFSET 42965
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ17_RT_OFFSET 42966
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ18_RT_OFFSET 42967
+#define PBF_REG_BTB_GUARANTEED_VOQ18_RT_OFFSET 42968
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ18_RT_OFFSET 42969
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ19_RT_OFFSET 42970
+#define PBF_REG_BTB_GUARANTEED_VOQ19_RT_OFFSET 42971
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ19_RT_OFFSET 42972
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ20_RT_OFFSET 42973
+#define PBF_REG_BTB_GUARANTEED_VOQ20_RT_OFFSET 42974
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ20_RT_OFFSET 42975
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ21_RT_OFFSET 42976
+#define PBF_REG_BTB_GUARANTEED_VOQ21_RT_OFFSET 42977
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ21_RT_OFFSET 42978
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ22_RT_OFFSET 42979
+#define PBF_REG_BTB_GUARANTEED_VOQ22_RT_OFFSET 42980
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ22_RT_OFFSET 42981
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ23_RT_OFFSET 42982
+#define PBF_REG_BTB_GUARANTEED_VOQ23_RT_OFFSET 42983
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ23_RT_OFFSET 42984
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ24_RT_OFFSET 42985
+#define PBF_REG_BTB_GUARANTEED_VOQ24_RT_OFFSET 42986
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ24_RT_OFFSET 42987
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ25_RT_OFFSET 42988
+#define PBF_REG_BTB_GUARANTEED_VOQ25_RT_OFFSET 42989
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ25_RT_OFFSET 42990
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ26_RT_OFFSET 42991
+#define PBF_REG_BTB_GUARANTEED_VOQ26_RT_OFFSET 42992
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ26_RT_OFFSET 42993
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ27_RT_OFFSET 42994
+#define PBF_REG_BTB_GUARANTEED_VOQ27_RT_OFFSET 42995
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ27_RT_OFFSET 42996
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ28_RT_OFFSET 42997
+#define PBF_REG_BTB_GUARANTEED_VOQ28_RT_OFFSET 42998
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ28_RT_OFFSET 42999
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ29_RT_OFFSET 43000
+#define PBF_REG_BTB_GUARANTEED_VOQ29_RT_OFFSET 43001
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ29_RT_OFFSET 43002
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ30_RT_OFFSET 43003
+#define PBF_REG_BTB_GUARANTEED_VOQ30_RT_OFFSET 43004
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ30_RT_OFFSET 43005
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ31_RT_OFFSET 43006
+#define PBF_REG_BTB_GUARANTEED_VOQ31_RT_OFFSET 43007
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ31_RT_OFFSET 43008
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ32_RT_OFFSET 43009
+#define PBF_REG_BTB_GUARANTEED_VOQ32_RT_OFFSET 43010
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ32_RT_OFFSET 43011
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ33_RT_OFFSET 43012
+#define PBF_REG_BTB_GUARANTEED_VOQ33_RT_OFFSET 43013
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ33_RT_OFFSET 43014
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ34_RT_OFFSET 43015
+#define PBF_REG_BTB_GUARANTEED_VOQ34_RT_OFFSET 43016
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ34_RT_OFFSET 43017
+#define PBF_REG_YCMD_QS_NUM_LINES_VOQ35_RT_OFFSET 43018
+#define PBF_REG_BTB_GUARANTEED_VOQ35_RT_OFFSET 43019
+#define PBF_REG_BTB_SHARED_AREA_SETUP_VOQ35_RT_OFFSET 43020
+#define XCM_REG_CON_PHY_Q3_RT_OFFSET 43021
-#define RUNTIME_ARRAY_SIZE 43023
+#define RUNTIME_ARRAY_SIZE 43022
/* Init Callbacks */
#define DMAE_READY_CB 0
diff --git a/drivers/net/qede/base/eth_common.h b/drivers/net/qede/base/eth_common.h
index abfa685..9a401ed 100644
--- a/drivers/net/qede/base/eth_common.h
+++ b/drivers/net/qede/base/eth_common.h
@@ -178,6 +178,11 @@ struct eth_tx_1st_bd_flags {
#define ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT 6
/* Recalculate Tunnel UDP/GRE Checksum (Depending on Tunnel Type) */
#define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK 0x1
+/* Recalculate Tunnel UDP/GRE Checksum (Depending on Tunnel Type). In case of
+ * GRE tunnel, this flag means GRE CSO, and in this case GRE checksum field
+ * Must be present.
+ */
+#define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_MASK 0x1
#define ETH_TX_1ST_BD_FLAGS_TUNN_L4_CSUM_SHIFT 7
};
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index b82ccc1..c3e0bd2 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -8,13 +8,13 @@
0
#define CDU_REG_CID_ADDR_PARAMS_CONTEXT_SIZE ( \
- 0xfff << 0)
+ 0xfffUL << 0)
#define CDU_REG_CID_ADDR_PARAMS_BLOCK_WASTE_SHIFT \
12
#define CDU_REG_CID_ADDR_PARAMS_BLOCK_WASTE ( \
- 0xfff << 12)
+ 0xfffUL << 12)
#define CDU_REG_CID_ADDR_PARAMS_NCIB_SHIFT \
24
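The UL suffixes throughout this file are not cosmetic: a constant such as
0x1 << 31 is a signed int expression (formally undefined behaviour once the
shift reaches the sign bit), so widening it into a 64-bit value sign-extends
the upper 32 bits, whereas the unsigned-long form stays clean. A small
standalone illustration, not part of the patch:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* int shift: in practice sign-extends to 0xffffffff80000000 */
	uint64_t from_int = 0x1 << 31;
	/* unsigned long shift: stays 0x0000000080000000 */
	uint64_t from_ul = 0x1UL << 31;

	printf("0x%" PRIx64 " vs 0x%" PRIx64 "\n", from_int, from_ul);
	return 0;
}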
@@ -366,9 +366,9 @@
#define IGU_REG_COMMAND_REG_CTRL \
0x180848UL
#define IGU_REG_BLOCK_CONFIGURATION_VF_CLEANUP_EN ( \
- 0x1 << 1)
+ 0x1UL << 1)
#define IGU_REG_BLOCK_CONFIGURATION_PXP_TPH_INTERFACE_EN ( \
- 0x1 << 0)
+ 0x1UL << 0)
#define IGU_REG_MAPPING_MEMORY \
0x184000UL
#define MISCS_REG_GENERIC_POR_0 \
@@ -376,7 +376,7 @@
#define MCP_REG_NVM_CFG4 \
0xe0642cUL
#define MCP_REG_NVM_CFG4_FLASH_SIZE ( \
- 0x7 << 0)
+ 0x7UL << 0)
#define MCP_REG_NVM_CFG4_FLASH_SIZE_SHIFT \
0
#define CCFC_REG_STRONG_ENABLE_VF 0x2e070cUL
@@ -409,7 +409,7 @@
#define XMAC_REG_TX_CTRL_LO 0x210020UL
#define XMAC_REG_CTRL 0x210000UL
#define XMAC_REG_RX_CTRL 0x210030UL
-#define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE (0x1 << 12)
+#define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE (0x1UL << 12)
#define MISC_REG_CLK_100G_MODE 0x008c10UL
#define MISC_REG_OPTE_MODE 0x008c0cUL
#define NIG_REG_LLH_ENG_CLS_TCP_4_TUPLE_SEARCH 0x501b84UL
@@ -439,16 +439,16 @@
#define NIG_REG_LLH_FUNC_FILTER_EN 0x501a80UL
#define NIG_REG_LLH_FUNC_FILTER_EN_SIZE 16
#define NIG_REG_LLH_FUNC_FILTER_VALUE 0x501a00UL
-#define XMAC_REG_CTRL_TX_EN (0x1 << 0)
-#define XMAC_REG_CTRL_RX_EN (0x1 << 1)
+#define XMAC_REG_CTRL_TX_EN (0x1UL << 0)
+#define XMAC_REG_CTRL_RX_EN (0x1UL << 1)
#define CDU_REG_SEGMENT0_PARAMS_T0_TID_SIZE (0xffUL << 24) /* @DPDK */
-#define CDU_REG_SEGMENT0_PARAMS_T0_TID_BLOCK_WASTE (0xff << 16)
+#define CDU_REG_SEGMENT0_PARAMS_T0_TID_BLOCK_WASTE (0xffUL << 16)
#define CDU_REG_SEGMENT0_PARAMS_T0_TID_BLOCK_WASTE_SHIFT 16
-#define CDU_REG_SEGMENT1_PARAMS_T1_TID_BLOCK_WASTE (0xff << 16)
+#define CDU_REG_SEGMENT1_PARAMS_T1_TID_BLOCK_WASTE (0xffUL << 16)
#define CDU_REG_SEGMENT1_PARAMS_T1_TID_SIZE (0xffUL << 24) /* @DPDK */
-#define CDU_REG_SEGMENT1_PARAMS_T1_NUM_TIDS_IN_BLOCK (0xfff << 0)
+#define CDU_REG_SEGMENT1_PARAMS_T1_NUM_TIDS_IN_BLOCK (0xfffUL << 0)
#define CDU_REG_SEGMENT1_PARAMS_T1_NUM_TIDS_IN_BLOCK_SHIFT 0
-#define CDU_REG_SEGMENT0_PARAMS_T0_NUM_TIDS_IN_BLOCK (0xfff << 0)
+#define CDU_REG_SEGMENT0_PARAMS_T0_NUM_TIDS_IN_BLOCK (0xfffUL << 0)
#define CDU_REG_SEGMENT0_PARAMS_T0_NUM_TIDS_IN_BLOCK_SHIFT 0
#define PSWRQ2_REG_ILT_MEMORY 0x260000UL
#define QM_REG_WFQPFWEIGHT 0x2f4e80UL
@@ -536,7 +536,7 @@
#define MISC_REG_AEU_GENERAL_ATTN_35 0x00848cUL
#define MCP_REG_CPU_STATE 0xe05004UL
#define MCP_REG_CPU_MODE 0xe05000UL
-#define MCP_REG_CPU_MODE_SOFT_HALT (0x1 << 10)
+#define MCP_REG_CPU_MODE_SOFT_HALT (0x1UL << 10)
#define MCP_REG_CPU_EVENT_MASK 0xe05008UL
#define PSWHST_REG_VF_DISABLED_ERROR_VALID 0x2a0060UL
#define PSWHST_REG_VF_DISABLED_ERROR_ADDRESS 0x2a0064UL
@@ -565,15 +565,15 @@
#define PGLUE_B_REG_VF_ILT_ERR_ADD_63_32 0x2aae78UL
#define PGLUE_B_REG_VF_ILT_ERR_DETAILS 0x2aae7cUL
#define PGLUE_B_REG_LATCHED_ERRORS_CLR 0x2aa3bcUL
-#define NIG_REG_INT_MASK_3_P0_LB_TC1_PAUSE_TOO_LONG_INT (0x1 << 10)
+#define NIG_REG_INT_MASK_3_P0_LB_TC1_PAUSE_TOO_LONG_INT (0x1UL << 10)
#define DORQ_REG_DB_DROP_REASON 0x100a2cUL
#define DORQ_REG_DB_DROP_DETAILS 0x100a24UL
#define TM_REG_INT_STS_1 0x2c0190UL
-#define TM_REG_INT_STS_1_PEND_TASK_SCAN (0x1 << 6)
-#define TM_REG_INT_STS_1_PEND_CONN_SCAN (0x1 << 5)
+#define TM_REG_INT_STS_1_PEND_TASK_SCAN (0x1UL << 6)
+#define TM_REG_INT_STS_1_PEND_CONN_SCAN (0x1UL << 5)
#define TM_REG_INT_MASK_1 0x2c0194UL
-#define TM_REG_INT_MASK_1_PEND_CONN_SCAN (0x1 << 5)
-#define TM_REG_INT_MASK_1_PEND_TASK_SCAN (0x1 << 6)
+#define TM_REG_INT_MASK_1_PEND_CONN_SCAN (0x1UL << 5)
+#define TM_REG_INT_MASK_1_PEND_TASK_SCAN (0x1UL << 6)
#define MISC_REG_AEU_AFTER_INVERT_1_IGU 0x0087b4UL
#define MISC_REG_AEU_ENABLE4_IGU_OUT_0 0x0084a8UL
#define MISC_REG_AEU_ENABLE3_IGU_OUT_0 0x0084a4UL
@@ -1187,10 +1187,10 @@
#define XMAC_REG_RX_MAX_SIZE_BB 0x210040UL
#define XMAC_REG_TX_CTRL_LO_BB 0x210020UL
#define XMAC_REG_CTRL_BB 0x210000UL
-#define XMAC_REG_CTRL_TX_EN_BB (0x1 << 0)
-#define XMAC_REG_CTRL_RX_EN_BB (0x1 << 1)
+#define XMAC_REG_CTRL_TX_EN_BB (0x1UL << 0)
+#define XMAC_REG_CTRL_RX_EN_BB (0x1UL << 1)
#define XMAC_REG_RX_CTRL_BB 0x210030UL
-#define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB (0x1 << 12)
+#define XMAC_REG_RX_CTRL_PROCESS_VARIABLE_PREAMBLE_BB (0x1UL << 12)
#define PGLUE_B_REG_PGL_ADDR_E8_F0_K2_E5 0x2aaf98UL
#define PGLUE_B_REG_PGL_ADDR_EC_F0_K2_E5 0x2aaf9cUL
@@ -1217,14 +1217,14 @@
#define DORQ_REG_DPM_FORCE_ABORT 0x1009d8UL
#define DORQ_REG_PF_OVFL_STICKY 0x1009d0UL
#define DORQ_REG_INT_STS 0x100180UL
- #define DORQ_REG_INT_STS_DB_DROP (0x1 << 1)
- #define DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR (0x1 << 2)
- #define DORQ_REG_INT_STS_DORQ_FIFO_AFULL (0x1 << 3)
+ #define DORQ_REG_INT_STS_DB_DROP (0x1UL << 1)
+ #define DORQ_REG_INT_STS_DORQ_FIFO_OVFL_ERR (0x1UL << 2)
+ #define DORQ_REG_INT_STS_DORQ_FIFO_AFULL (0x1UL << 3)
#define DORQ_REG_DB_DROP_DETAILS_REL 0x100a28UL
#define DORQ_REG_INT_STS_WR 0x100188UL
#define DORQ_REG_DB_DROP_DETAILS_REASON 0x100a20UL
#define MCP_REG_CPU_PROGRAM_COUNTER 0xe0501cUL
- #define MCP_REG_CPU_STATE_SOFT_HALTED (0x1 << 10)
+ #define MCP_REG_CPU_STATE_SOFT_HALTED (0x1UL << 10)
#define PRS_REG_SEARCH_TENANT_ID 0x1f044cUL
#define PGLUE_B_REG_VF_BAR1_SIZE 0x2aae68UL
@@ -1234,3 +1234,4 @@
#define NIG_REG_LLH_FUNC_TAG_VALUE 0x5019d0UL
#define DORQ_REG_TAG1_OVRD_MODE 0x1008b4UL
#define DORQ_REG_PF_EXT_VID_BB_K2 0x1008c8UL
+#define PRS_REG_SEARCH_NON_IP_AS_GFT 0x1f11c0UL
diff --git a/drivers/net/qede/qede_main.c b/drivers/net/qede/qede_main.c
index 46fa837..c361f24 100644
--- a/drivers/net/qede/qede_main.c
+++ b/drivers/net/qede/qede_main.c
@@ -18,7 +18,7 @@
char fw_file[PATH_MAX];
const char *QEDE_DEFAULT_FIRMWARE =
- "/lib/firmware/qed/qed_init_values-8.33.12.0.bin";
+ "/lib/firmware/qed/qed_init_values-8.37.7.0.bin";
static void
qed_update_pf_params(struct ecore_dev *edev, struct ecore_pf_params *params)
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 02/18] net/qede/base: check for EDPM enabled in DB recovery
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 01/18] net/qede/base: upgrade to FW 8.37.7.0 Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 03/18] net/qede/base: add DPC sync after PF stop Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 04/18] net/qede/base: workaround to indicate SHMEM data ready Mody, Rasesh
` (15 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Add a check for EDPM being enabled before flushing the doorbell recovery queue.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore.h | 2 ++
drivers/net/qede/base/ecore_dev.c | 10 +++++++++-
drivers/net/qede/base/ecore_int.c | 20 +++++++++++++++++---
3 files changed, 28 insertions(+), 4 deletions(-)
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 8982214..4607a80 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -962,6 +962,8 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb, __le16 *fw_mid, __le16 *fw_lsb,
void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn,
enum ecore_db_rec_exec);
+bool ecore_edpm_enabled(struct ecore_hwfn *p_hwfn);
+
/* amount of resources used in qm init */
u8 ecore_init_qm_get_num_tcs(struct ecore_hwfn *p_hwfn);
u16 ecore_init_qm_get_num_vfs(struct ecore_hwfn *p_hwfn);
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index b83f003..f09f771 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -1974,6 +1974,14 @@ enum ECORE_ROCE_EDPM_MODE {
ECORE_ROCE_EDPM_MODE_DISABLE = 2,
};
+bool ecore_edpm_enabled(struct ecore_hwfn *p_hwfn)
+{
+ if (p_hwfn->dcbx_no_edpm || p_hwfn->db_bar_no_edpm)
+ return false;
+
+ return true;
+}
+
static enum _ecore_status_t
ecore_hw_init_pf_doorbell_bar(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
@@ -2061,7 +2069,7 @@ enum ECORE_ROCE_EDPM_MODE {
DP_INFO(p_hwfn,
" dpi_size=%d, dpi_count=%d, roce_edpm=%s\n",
p_hwfn->dpi_size, p_hwfn->dpi_count,
- ((p_hwfn->dcbx_no_edpm) || (p_hwfn->db_bar_no_edpm)) ?
+ (!ecore_edpm_enabled(p_hwfn)) ?
"disabled" : "enabled");
/* Check return codes from above calls */
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index d41107d..c9acc72 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -431,9 +431,8 @@ static enum _ecore_status_t ecore_fw_assertion(struct ecore_hwfn *p_hwfn)
#define ECORE_DB_REC_COUNT 10
#define ECORE_DB_REC_INTERVAL 100
-/* assumes sticky overflow indication was set for this PF */
-static enum _ecore_status_t ecore_db_rec_attn(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
+static enum _ecore_status_t ecore_db_rec_flush_queue(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
{
u8 count = ECORE_DB_REC_COUNT;
u32 usage = 1;
@@ -461,6 +460,21 @@ static enum _ecore_status_t ecore_db_rec_attn(struct ecore_hwfn *p_hwfn,
return ECORE_TIMEOUT;
}
+ return ECORE_SUCCESS;
+}
+
+/* assumes sticky overflow indication was set for this PF */
+static enum _ecore_status_t ecore_db_rec_attn(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ enum _ecore_status_t rc;
+
+ if (ecore_edpm_enabled(p_hwfn)) {
+ rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+ }
+
/* flush any pedning (e)dpm as they may never arrive */
ecore_wr(p_hwfn, p_ptt, DORQ_REG_DPM_FORCE_ABORT, 0x1);
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 03/18] net/qede/base: add DPC sync after PF stop
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 01/18] net/qede/base: upgrade to FW 8.37.7.0 Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 02/18] net/qede/base: check for EDPM enabled in DB recovery Mody, Rasesh
` (16 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Add a DPC sync after stopping the physical function to allow clean-up of
asynchronous events. After this point the driver does not expect the FW to
send async events.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_dev.c | 6 ++++++
drivers/net/qede/base/ecore_spq.c | 12 ++++++++++--
2 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index f09f771..4558306 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2804,6 +2804,12 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
rc2 = ECORE_UNKNOWN_ERROR;
}
+ OSAL_DPC_SYNC(p_hwfn);
+
+ /* After this point we don't expect the FW to send us async
+ * events
+ */
+
/* perform debug action after PF stop was sent */
OSAL_AFTER_PF_STOP((void *)p_dev, p_hwfn->my_id);
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 776c86f..1a02ba2 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -283,8 +283,10 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
{
ecore_spq_async_comp_cb cb;
- if (!p_hwfn->p_spq || (p_eqe->protocol_id >= MAX_PROTOCOL_TYPE))
+ if (p_eqe->protocol_id >= MAX_PROTOCOL_TYPE) {
+ DP_ERR(p_hwfn, "Wrong protocol: %d\n", p_eqe->protocol_id);
return ECORE_INVAL;
+ }
cb = p_hwfn->p_spq->async_comp_cb[p_eqe->protocol_id];
if (cb) {
@@ -339,10 +341,16 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
{
struct ecore_eq *p_eq = cookie;
struct ecore_chain *p_chain = &p_eq->chain;
+ u16 fw_cons_idx = 0;
enum _ecore_status_t rc = 0;
+ if (!p_hwfn->p_spq) {
+ DP_ERR(p_hwfn, "Unexpected NULL p_spq\n");
+ return ECORE_INVAL;
+ }
+
/* take a snapshot of the FW consumer */
- u16 fw_cons_idx = OSAL_LE16_TO_CPU(*p_eq->p_fw_cons);
+ fw_cons_idx = OSAL_LE16_TO_CPU(*p_eq->p_fw_cons);
DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "fw_cons_idx %x\n", fw_cons_idx);
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 04/18] net/qede/base: workaround to indicate SHMEM data ready
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (2 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 02/18] net/qede/base: check for EDPM enabled in DB recovery Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 06/18] net/qede/base: add mf-bit/API for FIP special mode Mody, Rasesh
` (14 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
The driver can be notified that there was an MCP reset and can read the SHMEM
values before the management FW has completed initializing them.
As a temporary solution, the "sup_msgs" field is used as a SHMEM data
ready indication. This should be replaced with an actual indication
when it is provided by the management FW.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_mcp.c | 43 ++++++++++++++++++++++++++++++-------
1 file changed, 35 insertions(+), 8 deletions(-)
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 364c146..3811d27 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -177,10 +177,16 @@ enum _ecore_status_t ecore_mcp_free(struct ecore_hwfn *p_hwfn)
return ECORE_SUCCESS;
}
+/* Maximum of 1 sec to wait for the SHMEM ready indication */
+#define ECORE_MCP_SHMEM_RDY_MAX_RETRIES 20
+#define ECORE_MCP_SHMEM_RDY_ITER_MS 50
+
static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
struct ecore_mcp_info *p_info = p_hwfn->mcp_info;
+ u8 cnt = ECORE_MCP_SHMEM_RDY_MAX_RETRIES;
+ u8 msec = ECORE_MCP_SHMEM_RDY_ITER_MS;
u32 drv_mb_offsize, mfw_mb_offsize;
u32 mcp_pf_id = MCP_PF_ID(p_hwfn);
@@ -198,6 +204,35 @@ static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
p_info->public_base |= GRCBASE_MCP;
+ /* Get the MFW MB address and number of supported messages */
+ mfw_mb_offsize = ecore_rd(p_hwfn, p_ptt,
+ SECTION_OFFSIZE_ADDR(p_info->public_base,
+ PUBLIC_MFW_MB));
+ p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
+ p_info->mfw_mb_length = (u16)ecore_rd(p_hwfn, p_ptt,
+ p_info->mfw_mb_addr);
+
+ /* @@@TBD:
+ * The driver can notify that there was an MCP reset, and read the SHMEM
+ * values before the MFW has completed initializing them.
+ * As a temporary solution, the "sup_msgs" field is used as a data ready
+ * indication.
+ * This should be replaced with an actual indication when it is provided
+ * by the MFW.
+ */
+ while (!p_info->mfw_mb_length && cnt--) {
+ OSAL_MSLEEP(msec);
+ p_info->mfw_mb_length = (u16)ecore_rd(p_hwfn, p_ptt,
+ p_info->mfw_mb_addr);
+ }
+
+ if (!cnt) {
+ DP_NOTICE(p_hwfn, false,
+ "Failed to get the SHMEM ready notification after %d msec\n",
+ ECORE_MCP_SHMEM_RDY_MAX_RETRIES * msec);
+ return ECORE_TIMEOUT;
+ }
+
/* Calculate the driver and MFW mailbox address */
drv_mb_offsize = ecore_rd(p_hwfn, p_ptt,
SECTION_OFFSIZE_ADDR(p_info->public_base,
@@ -208,14 +243,6 @@ static enum _ecore_status_t ecore_load_mcp_offsets(struct ecore_hwfn *p_hwfn,
" mcp_pf_id = 0x%x\n",
drv_mb_offsize, p_info->drv_mb_addr, mcp_pf_id);
- /* Set the MFW MB address */
- mfw_mb_offsize = ecore_rd(p_hwfn, p_ptt,
- SECTION_OFFSIZE_ADDR(p_info->public_base,
- PUBLIC_MFW_MB));
- p_info->mfw_mb_addr = SECTION_ADDR(mfw_mb_offsize, mcp_pf_id);
- p_info->mfw_mb_length = (u16)ecore_rd(p_hwfn, p_ptt,
- p_info->mfw_mb_addr);
-
/* Get the current driver mailbox sequence before sending
* the first command
*/
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 05/18] net/qede/base: add API to update FW RSS indirection table
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (4 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 06/18] net/qede/base: add mf-bit/API for FIP special mode Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 07/18] net/qede/base: add error handling for mutex allocation Mody, Rasesh
` (12 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Add the ecore_update_eth_rss_ind_table_entry() API to update an FW RSS
indirection table entry according to the new interface of FW 8.37.x.x.
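For illustration only (not part of this patch), a minimal caller-side sketch of
the intended usage, following the locking rule stated in the new API's doc
comment. The ecore_mcp_ind_table_lock()/ecore_mcp_ind_table_unlock() calls and
their single-argument form are assumptions taken from that comment, as is the
table size passed in.
/* Sketch: program a full indirection table one entry at a time while
 * holding the per-engine lock required by the API (assumed helpers).
 */
static enum _ecore_status_t
example_update_rss_table(struct ecore_hwfn *p_hwfn, u8 vport_id,
			 u16 *ind_table, u16 num_entries)
{
	enum _ecore_status_t rc = ECORE_SUCCESS;
	u16 i;
	ecore_mcp_ind_table_lock(p_hwfn);
	for (i = 0; i < num_entries; i++) {
		rc = ecore_update_eth_rss_ind_table_entry(p_hwfn, vport_id,
							  (u8)i,
							  ind_table[i]);
		if (rc != ECORE_SUCCESS)
			break;
	}
	ecore_mcp_ind_table_unlock(p_hwfn);
	return rc;
}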
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_l2.c | 52 ++++++++++++++++++++++++++++++++++
drivers/net/qede/base/ecore_l2_api.h | 24 ++++++++++++++++
2 files changed, 76 insertions(+)
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index ec40aac..d87ffda 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2323,3 +2323,55 @@ enum _ecore_status_t
return ecore_init_vport_rl(p_hwfn, p_ptt, vport, rate,
p_link->speed);
}
+
+#define RSS_TSTORM_UPDATE_STATUS_MAX_POLL_COUNT 100
+#define RSS_TSTORM_UPDATE_STATUS_POLL_PERIOD_US 1
+
+enum _ecore_status_t
+ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
+ u8 vport_id,
+ u8 ind_table_index,
+ u16 ind_table_value)
+{
+ struct eth_tstorm_rss_update_data update_data = { 0 };
+ void OSAL_IOMEM *addr = OSAL_NULL;
+ enum _ecore_status_t rc;
+ u8 abs_vport_id;
+ u32 cnt = 0;
+
+ OSAL_BUILD_BUG_ON(sizeof(update_data) != sizeof(u64));
+
+ rc = ecore_fw_vport(p_hwfn, vport_id, &abs_vport_id);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ addr = (u8 OSAL_IOMEM *)p_hwfn->regview +
+ GTT_BAR0_MAP_REG_TSDM_RAM +
+ TSTORM_ETH_RSS_UPDATE_OFFSET(p_hwfn->rel_pf_id);
+
+ *(u64 *)(&update_data) = DIRECT_REG_RD64(p_hwfn, addr);
+
+ for (cnt = 0; update_data.valid &&
+ cnt < RSS_TSTORM_UPDATE_STATUS_MAX_POLL_COUNT; cnt++) {
+ OSAL_UDELAY(RSS_TSTORM_UPDATE_STATUS_POLL_PERIOD_US);
+ *(u64 *)(&update_data) = DIRECT_REG_RD64(p_hwfn, addr);
+ }
+
+ if (update_data.valid) {
+ DP_NOTICE(p_hwfn, true,
+ "rss update valid status is not clear! valid=0x%x vport id=%d ind_Table_idx=%d ind_table_value=%d.\n",
+ update_data.valid, vport_id, ind_table_index,
+ ind_table_value);
+
+ return ECORE_AGAIN;
+ }
+
+ update_data.valid = 1;
+ update_data.ind_table_index = ind_table_index;
+ update_data.ind_table_value = ind_table_value;
+ update_data.vport_id = abs_vport_id;
+
+ DIRECT_REG_WR64(p_hwfn, addr, *(u64 *)(&update_data));
+
+ return ECORE_SUCCESS;
+}
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index bde825c..21595f3 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -470,4 +470,28 @@ enum _ecore_status_t
dma_addr_t p_addr, u16 length,
u16 qid, u8 vport_id,
bool b_is_add);
+
+/**
+ * @brief - ecore_update_eth_rss_ind_table_entry
+ *
+ * This function is used to update an RSS indirection table entry in FW RAM
+ * instead of using the SP vport update ramrod with rss params.
+ *
+ * Notice:
+ * This function supports only one outstanding command per engine. Ecore
+ * clients which use this function should call ecore_mcp_ind_table_lock() prior
+ * to it and ecore_mcp_ind_table_unlock() after it.
+ *
+ * @params p_hwfn
+ * @params vport_id
+ * @params ind_table_index
+ * @params ind_table_value
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t
+ecore_update_eth_rss_ind_table_entry(struct ecore_hwfn *p_hwfn,
+ u8 vport_id,
+ u8 ind_table_index,
+ u16 ind_table_value);
#endif
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 06/18] net/qede/base: add mf-bit/API for FIP special mode
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (3 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 04/18] net/qede/base: workaround to indicate SHMEM data ready Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 05/18] net/qede/base: add API to update FW RSS indirection table Mody, Rasesh
` (13 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Add mf-bit/API for FIP special mode.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore.h | 3 +++
drivers/net/qede/base/ecore_dev.c | 8 +++++++-
drivers/net/qede/base/ecore_dev_api.h | 9 +++++++++
3 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index 4607a80..b9f5993 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -543,6 +543,9 @@ enum ecore_mf_mode_bit {
/* Use stag for steering */
ECORE_MF_8021AD_TAGGING,
+
+ /* Allow FIP discovery fallback */
+ ECORE_MF_FIP_SPECIAL,
};
enum ecore_ufp_mode {
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 4558306..da312b4 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -3704,7 +3704,8 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
case NVM_CFG1_GLOB_MF_MODE_BD:
p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_OVLAN_CLSS |
1 << ECORE_MF_LLH_PROTO_CLSS |
- 1 << ECORE_MF_8021AD_TAGGING;
+ 1 << ECORE_MF_8021AD_TAGGING |
+ 1 << ECORE_MF_FIP_SPECIAL;
break;
case NVM_CFG1_GLOB_MF_MODE_NPAR1_0:
p_hwfn->p_dev->mf_bits = 1 << ECORE_MF_LLH_MAC_CLSS |
@@ -5804,3 +5805,8 @@ void ecore_set_fw_mac_addr(__le16 *fw_msb,
((u8 *)fw_lsb)[0] = mac[5];
((u8 *)fw_lsb)[1] = mac[4];
}
+
+bool ecore_is_mf_fip_special(struct ecore_dev *p_dev)
+{
+ return !!OSAL_TEST_BIT(ECORE_MF_FIP_SPECIAL, &p_dev->mf_bits);
+}
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index 7cba54c..ab80b52 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -704,4 +704,13 @@ enum _ecore_status_t
enum _ecore_status_t ecore_pglueb_set_pfid_enable(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
bool b_enable);
+
+/**
+ * @brief Whether FIP discovery fallback special mode is enabled or not.
+ *
+ * @param cdev
+ *
+ * @return true if device is in FIP special mode, false otherwise.
+ */
+bool ecore_is_mf_fip_special(struct ecore_dev *p_dev);
#endif
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 07/18] net/qede/base: add error handling for mutex allocation
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (5 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 05/18] net/qede/base: add API to update FW RSS indirection table Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 08/18] net/qede/base: adjust queue manager idx greater than max Mody, Rasesh
` (11 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Add error handling for mutex allocation failure.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_cxt.c | 11 +++++++----
drivers/net/qede/base/ecore_vf.c | 19 ++++++++++++++++++-
2 files changed, 25 insertions(+), 5 deletions(-)
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index bf36ce5..6bc6348 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -1133,6 +1133,9 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
return ECORE_NOMEM;
}
+ /* Set the cxt mangr pointer prior to further allocations */
+ p_hwfn->p_cxt_mngr = p_mngr;
+
/* Initialize ILT client registers */
clients = p_mngr->clients;
clients[ILT_CLI_CDUC].first.reg = ILT_CFG_REG(CDUC, FIRST_ILT);
@@ -1174,13 +1177,13 @@ enum _ecore_status_t ecore_cxt_mngr_alloc(struct ecore_hwfn *p_hwfn)
/* Initialize the dynamic ILT allocation mutex */
#ifdef CONFIG_ECORE_LOCK_ALLOC
- OSAL_MUTEX_ALLOC(p_hwfn, &p_mngr->mutex);
+ if (OSAL_MUTEX_ALLOC(p_hwfn, &p_mngr->mutex)) {
+ DP_NOTICE(p_hwfn, false, "Failed to alloc p_mngr->mutex\n");
+ return ECORE_NOMEM;
+ }
#endif
OSAL_MUTEX_INIT(&p_mngr->mutex);
- /* Set the cxt mangr pointer priori to further allocations */
- p_hwfn->p_cxt_mngr = p_mngr;
-
return ECORE_SUCCESS;
}
diff --git a/drivers/net/qede/base/ecore_vf.c b/drivers/net/qede/base/ecore_vf.c
index d2213f7..409b301 100644
--- a/drivers/net/qede/base/ecore_vf.c
+++ b/drivers/net/qede/base/ecore_vf.c
@@ -565,13 +565,20 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn)
phys,
p_iov->bulletin.
size);
+ if (!p_iov->bulletin.p_virt) {
+ DP_NOTICE(p_hwfn, false, "Failed to alloc bulletin memory\n");
+ goto free_pf2vf_reply;
+ }
DP_VERBOSE(p_hwfn, ECORE_MSG_IOV,
"VF's bulletin Board [%p virt 0x%lx phys 0x%08x bytes]\n",
p_iov->bulletin.p_virt, (unsigned long)p_iov->bulletin.phys,
p_iov->bulletin.size);
#ifdef CONFIG_ECORE_LOCK_ALLOC
- OSAL_MUTEX_ALLOC(p_hwfn, &p_iov->mutex);
+ if (OSAL_MUTEX_ALLOC(p_hwfn, &p_iov->mutex)) {
+ DP_NOTICE(p_hwfn, false, "Failed to allocate p_iov->mutex\n");
+ goto free_bulletin_mem;
+ }
#endif
OSAL_MUTEX_INIT(&p_iov->mutex);
@@ -609,6 +616,16 @@ enum _ecore_status_t ecore_vf_hw_prepare(struct ecore_hwfn *p_hwfn)
return rc;
+#ifdef CONFIG_ECORE_LOCK_ALLOC
+free_bulletin_mem:
+ OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, p_iov->bulletin.p_virt,
+ p_iov->bulletin.phys,
+ p_iov->bulletin.size);
+#endif
+free_pf2vf_reply:
+ OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, p_iov->pf2vf_reply,
+ p_iov->pf2vf_reply_phys,
+ sizeof(union pfvf_tlvs));
free_vf2pf_request:
OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, p_iov->vf2pf_request,
p_iov->vf2pf_request_phys,
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 08/18] net/qede/base: adjust queue manager idx greater than max
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (6 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 07/18] net/qede/base: add error handling for mutex allocation Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 09/18] net/qede/base: add pretend function for port/PF Mody, Rasesh
` (10 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Modify the queue manager getter APIs to wrap around within their range when the
index is higher than the max. This prevents out-of-bounds index access.
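For illustration only (not part of this patch), a small sketch of the new
wrap-around behaviour, assuming a configuration with 16 VF PQs.
/* Sketch: with max_vf == 16, an out-of-range index now wraps to the PQ
 * of VF 1 (17 % 16) instead of reading past the VF PQ range.
 */
static u16 example_vf_pq_wrap(struct ecore_hwfn *p_hwfn)
{
	return ecore_get_cm_pq_idx_vf(p_hwfn, 17);
}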
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_dev.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index da312b4..f0adf18 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -569,12 +569,11 @@ u16 ecore_init_qm_get_num_pf_rls(struct ecore_hwfn *p_hwfn)
{
u16 num_pf_rls, num_vfs = ecore_init_qm_get_num_vfs(p_hwfn);
- /* @DPDK */
/* num RLs can't exceed resource amount of rls or vports or the
* dcqcn qps
*/
num_pf_rls = (u16)OSAL_MIN_T(u32, RESC_NUM(p_hwfn, ECORE_RL),
- (u16)RESC_NUM(p_hwfn, ECORE_VPORT));
+ RESC_NUM(p_hwfn, ECORE_VPORT));
/* make sure after we reserve the default and VF rls we'll have
* something left
@@ -835,7 +834,7 @@ u16 ecore_get_cm_pq_idx_mcos(struct ecore_hwfn *p_hwfn, u8 tc)
if (tc > max_tc)
DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc);
- return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc;
+ return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + (tc % max_tc);
}
u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf)
@@ -845,17 +844,17 @@ u16 ecore_get_cm_pq_idx_vf(struct ecore_hwfn *p_hwfn, u16 vf)
if (vf > max_vf)
DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf);
- return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf;
+ return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + (vf % max_vf);
}
u16 ecore_get_cm_pq_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl)
{
u16 max_rl = ecore_init_qm_get_num_pf_rls(p_hwfn);
- if (rl > max_rl)
- DP_ERR(p_hwfn, "rl %d must be smaller than %d\n", rl, max_rl);
-
- return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS) + rl;
+ /* for rate limiters, it is okay to use the modulo behavior - no
+ * DP_ERR
+ */
+ return ecore_get_cm_pq_idx(p_hwfn, PQ_FLAGS_RLS) + (rl % max_rl);
}
u16 ecore_get_qm_vport_idx_rl(struct ecore_hwfn *p_hwfn, u16 rl)
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 09/18] net/qede/base: add pretend function for port/PF
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (7 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 08/18] net/qede/base: adjust queue manager idx greater than max Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 11/18] net/qede/base: add periodic Doorbell Recovery support Mody, Rasesh
` (9 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Add a pretend function for port/PF, to pretend to another port and another
function when accessing the PTT window.
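For illustration only (not part of this patch), a sketch of the expected
pretend/unpretend pairing around a register access; the register and the port
id used here are placeholders.
/* Sketch: read a register while pretending to another port/function,
 * then restore the PTT to its own identity.
 */
static u32 example_rd_as_other_fid(struct ecore_hwfn *p_hwfn,
				   struct ecore_ptt *p_ptt, u16 fid)
{
	u32 val;
	ecore_port_fid_pretend(p_hwfn, p_ptt, 1 /* port_id */, fid);
	val = ecore_rd(p_hwfn, p_ptt, NIG_REG_LLH_FUNC_FILTER_EN);
	ecore_port_unpretend(p_hwfn, p_ptt);
	return val;
}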
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_hw.c | 24 ++++++++++++++++++++++++
drivers/net/qede/base/ecore_hw.h | 12 ++++++++++++
2 files changed, 36 insertions(+)
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 51bba27..6cfbbab 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -407,6 +407,30 @@ void ecore_port_unpretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
*(u32 *)&p_ptt->pxp.pretend);
}
+void ecore_port_fid_pretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 port_id, u16 fid)
+{
+ u16 control = 0;
+
+ SET_FIELD(control, PXP_PRETEND_CMD_PORT, port_id);
+ SET_FIELD(control, PXP_PRETEND_CMD_USE_PORT, 1);
+ SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_PORT, 1);
+
+ SET_FIELD(control, PXP_PRETEND_CMD_IS_CONCRETE, 1);
+ SET_FIELD(control, PXP_PRETEND_CMD_PRETEND_FUNCTION, 1);
+
+ if (!GET_FIELD(fid, PXP_CONCRETE_FID_VFVALID))
+ fid = GET_FIELD(fid, PXP_CONCRETE_FID_PFID);
+
+ p_ptt->pxp.pretend.control = OSAL_CPU_TO_LE16(control);
+ p_ptt->pxp.pretend.fid.concrete_fid.fid = OSAL_CPU_TO_LE16(fid);
+
+ REG_WR(p_hwfn,
+ ecore_ptt_config_addr(p_ptt) +
+ OFFSETOF(struct pxp_ptt_entry, pretend),
+ *(u32 *)&p_ptt->pxp.pretend);
+}
+
u32 ecore_vfid_to_concrete(struct ecore_hwfn *p_hwfn, u8 vfid)
{
u32 concrete_fid = 0;
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index 394207e..a62ba39 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -223,6 +223,18 @@ void ecore_port_unpretend(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt);
/**
+ * @brief ecore_port_fid_pretend - pretend to another port and another function
+ * when accessing the ptt window
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param port_id - the port to pretend to
+ * @param fid - fid field of pxp_pretend structure. Can contain either pf / vf.
+ */
+void ecore_port_fid_pretend(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 port_id, u16 fid);
+
+/**
* @brief ecore_vfid_to_concrete - build a concrete FID for a
* given VF ID
*
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 10/18] net/qede/base: add support for SRIOV VF min rate
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (10 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 12/18] net/qede/base: get pre-negotiated OEM values Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 13/18] net/qede/base: enable control frame filtering Mody, Rasesh
` (6 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Add support for SRIOV VF min rate configuration.
Fix the return type of ecore_iov_get_vf_min_rate().
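For illustration only (not part of this patch), a trivial caller-side sketch;
the VF id and the rate are placeholder values.
/* Sketch: configure a 100 Mbps minimum TX rate for VF 3. */
static enum _ecore_status_t example_set_vf_min_rate(struct ecore_dev *p_dev)
{
	return ecore_iov_configure_min_tx_rate(p_dev, 3 /* vfid */,
					       100 /* Mbps */);
}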
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_iov_api.h | 10 ++++++++++
drivers/net/qede/base/ecore_sriov.c | 28 +++++++++++++++++++++++++++-
2 files changed, 37 insertions(+), 1 deletion(-)
diff --git a/drivers/net/qede/base/ecore_iov_api.h b/drivers/net/qede/base/ecore_iov_api.h
index d398478..55de708 100644
--- a/drivers/net/qede/base/ecore_iov_api.h
+++ b/drivers/net/qede/base/ecore_iov_api.h
@@ -702,6 +702,16 @@ bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn,
*/
int ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid);
+/**
+ * @brief - Configure min rate for VF's vport.
+ * @param p_dev
+ * @param vfid
+ * @param - rate in Mbps
+ *
+ * @return
+ */
+enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
+ int vfid, u32 rate);
#endif
/**
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 9e4a57b..9da4e41 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -4798,6 +4798,32 @@ enum _ecore_status_t ecore_iov_configure_tx_rate(struct ecore_hwfn *p_hwfn,
p_link->speed);
}
+enum _ecore_status_t ecore_iov_configure_min_tx_rate(struct ecore_dev *p_dev,
+ int vfid, u32 rate)
+{
+ struct ecore_vf_info *vf;
+ int i;
+
+ for_each_hwfn(p_dev, i) {
+ struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
+
+ if (!ecore_iov_pf_sanity_check(p_hwfn, vfid)) {
+ DP_NOTICE(p_hwfn, true,
+ "SR-IOV sanity check failed, can't set min rate\n");
+ return ECORE_INVAL;
+ }
+ }
+
+ vf = ecore_iov_get_vf_info(ECORE_LEADING_HWFN(p_dev), (u16)vfid, true);
+ if (!vf) {
+ DP_NOTICE(p_dev, true,
+ "Getting vf info failed, can't set min rate\n");
+ return ECORE_INVAL;
+ }
+
+ return ecore_configure_vport_wfq(p_dev, vf->vport_id, rate);
+}
+
enum _ecore_status_t ecore_iov_get_vf_stats(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
int vfid,
@@ -4908,7 +4934,7 @@ bool ecore_iov_is_vf_started(struct ecore_hwfn *p_hwfn,
return (p_vf->state != VF_FREE && p_vf->state != VF_STOPPED);
}
-enum _ecore_status_t
+int
ecore_iov_get_vf_min_rate(struct ecore_hwfn *p_hwfn, int vfid)
{
struct ecore_wfq_data *vf_vp_wfq;
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 11/18] net/qede/base: add periodic Doorbell Recovery support
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (8 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 09/18] net/qede/base: add pretend function for port/PF Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 12/18] net/qede/base: get pre-negotiated OEM values Mody, Rasesh
` (8 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Add support for periodic Doorbell Recovery.
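For illustration only (not part of this patch), a sketch of how a
periodic/slowpath context could drive the new handler. The
ecore_ptt_acquire()/ecore_ptt_release() pairing is assumed to follow the usual
ecore pattern.
/* Sketch: run doorbell recovery from a periodic context. On PF overflow
 * the handler flushes the DORQ and executes real recovery; otherwise it
 * runs a single-shot recovery.
 */
static void example_periodic_db_rec(struct ecore_hwfn *p_hwfn)
{
	struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
	if (!p_ptt)
		return;
	(void)ecore_db_rec_handler(p_hwfn, p_ptt);
	ecore_ptt_release(p_hwfn, p_ptt);
}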
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/bcm_osal.h | 1 +
drivers/net/qede/base/ecore_int.c | 37 +++++++++++++++++----------------
drivers/net/qede/base/ecore_int_api.h | 11 ++++++++++
3 files changed, 31 insertions(+), 18 deletions(-)
diff --git a/drivers/net/qede/base/bcm_osal.h b/drivers/net/qede/base/bcm_osal.h
index 70805f6..1abf44f 100644
--- a/drivers/net/qede/base/bcm_osal.h
+++ b/drivers/net/qede/base/bcm_osal.h
@@ -454,5 +454,6 @@ void qede_get_mcp_proto_stats(struct ecore_dev *, enum ecore_mcp_protocol_type,
#define OSAL_DIV_S64(a, b) ((a) / (b))
#define OSAL_LLDP_RX_TLVS(p_hwfn, tlv_buf, tlv_size) nothing
#define OSAL_DBG_ALLOC_USER_DATA(p_hwfn, user_data_ptr) (0)
+#define OSAL_DB_REC_OCCURRED(p_hwfn) nothing
#endif /* __BCM_OSAL_H */
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index c9acc72..fd8f657 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -428,13 +428,13 @@ static enum _ecore_status_t ecore_fw_assertion(struct ecore_hwfn *p_hwfn)
#define ECORE_DORQ_ATTENTION_SIZE_MASK (0x7f)
#define ECORE_DORQ_ATTENTION_SIZE_SHIFT (16)
-#define ECORE_DB_REC_COUNT 10
+#define ECORE_DB_REC_COUNT 1000
#define ECORE_DB_REC_INTERVAL 100
static enum _ecore_status_t ecore_db_rec_flush_queue(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
{
- u8 count = ECORE_DB_REC_COUNT;
+ u32 count = ECORE_DB_REC_COUNT;
u32 usage = 1;
/* wait for usage to zero or count to run out. This is necessary since
@@ -463,12 +463,19 @@ static enum _ecore_status_t ecore_db_rec_flush_queue(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-/* assumes sticky overflow indication was set for this PF */
-static enum _ecore_status_t ecore_db_rec_attn(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
+enum _ecore_status_t ecore_db_rec_handler(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
{
+ u32 overflow;
enum _ecore_status_t rc;
+ overflow = ecore_rd(p_hwfn, p_ptt, DORQ_REG_PF_OVFL_STICKY);
+ DP_NOTICE(p_hwfn, false, "PF Overflow sticky 0x%x\n", overflow);
+ if (!overflow) {
+ ecore_db_recovery_execute(p_hwfn, DB_REC_ONCE);
+ return ECORE_SUCCESS;
+ }
+
if (ecore_edpm_enabled(p_hwfn)) {
rc = ecore_db_rec_flush_queue(p_hwfn, p_ptt);
if (rc != ECORE_SUCCESS)
@@ -491,8 +498,7 @@ static enum _ecore_status_t ecore_db_rec_attn(struct ecore_hwfn *p_hwfn,
static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn)
{
- u32 int_sts, first_drop_reason, details, address, overflow,
- all_drops_reason;
+ u32 int_sts, first_drop_reason, details, address, all_drops_reason;
struct ecore_ptt *p_ptt = p_hwfn->p_dpc_ptt;
enum _ecore_status_t rc;
@@ -518,8 +524,6 @@ static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn)
DORQ_REG_DB_DROP_DETAILS);
address = ecore_rd(p_hwfn, p_ptt,
DORQ_REG_DB_DROP_DETAILS_ADDRESS);
- overflow = ecore_rd(p_hwfn, p_ptt,
- DORQ_REG_PF_OVFL_STICKY);
all_drops_reason = ecore_rd(p_hwfn, p_ptt,
DORQ_REG_DB_DROP_DETAILS_REASON);
@@ -530,19 +534,16 @@ static enum _ecore_status_t ecore_dorq_attn_cb(struct ecore_hwfn *p_hwfn)
"FID\t\t0x%04x\t\t(Opaque FID)\n"
"Size\t\t0x%04x\t\t(in bytes)\n"
"1st drop reason\t0x%08x\t(details on first drop since last handling)\n"
- "Sticky reasons\t0x%08x\t(all drop reasons since last handling)\n"
- "Overflow\t0x%x\t\t(a per PF indication)\n",
+ "Sticky reasons\t0x%08x\t(all drop reasons since last handling)\n",
address,
GET_FIELD(details, ECORE_DORQ_ATTENTION_OPAQUE),
GET_FIELD(details, ECORE_DORQ_ATTENTION_SIZE) * 4,
- first_drop_reason, all_drops_reason, overflow);
+ first_drop_reason, all_drops_reason);
- /* if this PF caused overflow, initiate recovery */
- if (overflow) {
- rc = ecore_db_rec_attn(p_hwfn, p_ptt);
- if (rc != ECORE_SUCCESS)
- return rc;
- }
+ rc = ecore_db_rec_handler(p_hwfn, p_ptt);
+ OSAL_DB_REC_OCCURRED(p_hwfn);
+ if (rc != ECORE_SUCCESS)
+ return rc;
/* clear the doorbell drop details and prepare for next drop */
ecore_wr(p_hwfn, p_ptt, DORQ_REG_DB_DROP_DETAILS_REL, 0);
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index aeaf469..5b9c31d 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -343,4 +343,15 @@ enum _ecore_status_t ecore_int_get_sb_dbg(struct ecore_hwfn *p_hwfn,
enum _ecore_status_t
ecore_int_igu_relocate_sb(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
u16 sb_id, bool b_to_vf);
+
+/**
+ * @brief - Doorbell Recovery handler.
+ * Run DB_REAL_DEAL doorbell recovery in case of PF overflow
+ * (and flush DORQ if needed), otherwise run DB_REC_ONCE.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+enum _ecore_status_t ecore_db_rec_handler(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
#endif
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 12/18] net/qede/base: get pre-negotiated OEM values
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (9 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 11/18] net/qede/base: add periodic Doorbell Recovery support Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 10/18] net/qede/base: add support for SRIOV VF min rate Mody, Rasesh
` (7 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Request the OEM values, which are negotiated prior to the driver load, from the
management FW by sending the GET_OEM_UPDATES command after both engines are
initialized.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_dev.c | 14 ++++++++++++++
drivers/net/qede/base/mcp_public.h | 8 ++++++++
2 files changed, 22 insertions(+)
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index f0adf18..30e12e9 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -2646,6 +2646,20 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
}
if (IS_PF(p_dev)) {
+ /* Get pre-negotiated values for stag, bandwidth etc. */
+ p_hwfn = ECORE_LEADING_HWFN(p_dev);
+ DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ,
+ "Sending GET_OEM_UPDATES command to trigger stag/bandwidth attention handling\n");
+ rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+ DRV_MSG_CODE_GET_OEM_UPDATES,
+ 1 << DRV_MB_PARAM_DUMMY_OEM_UPDATES_OFFSET,
+ &resp, ¶m);
+ if (rc != ECORE_SUCCESS)
+ DP_NOTICE(p_hwfn, false,
+ "Failed to send GET_OEM_UPDATES attention request\n");
+ }
+
+ if (IS_PF(p_dev)) {
p_hwfn = ECORE_LEADING_HWFN(p_dev);
drv_mb_param = STORM_FW_VERSION;
rc = ecore_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 2ee8ab5..46ec984 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1696,6 +1696,8 @@ struct public_drv_mb {
#define FW_MSG_CODE_RESOURCE_ALLOC_UNKNOWN 0x35000000
#define FW_MSG_CODE_RESOURCE_ALLOC_DEPRECATED 0x36000000
#define FW_MSG_CODE_RESOURCE_ALLOC_GEN_ERR 0x37000000
+#define FW_MSG_CODE_GET_OEM_UPDATES_DONE 0x41000000
+
#define FW_MSG_CODE_NIG_DRAIN_DONE 0x30000000
#define FW_MSG_CODE_VF_DISABLED_DONE 0xb0000000
#define FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE 0xb0010000
@@ -1804,6 +1806,12 @@ struct public_drv_mb {
#define FW_MB_PARAM_LOAD_DONE_DID_EFUSE_ERROR (1 << 0)
+#define FW_MB_PARAM_OEM_UPDATE_MASK 0xFF
+#define FW_MB_PARAM_OEM_UPDATE_OFFSET 0
+#define FW_MB_PARAM_OEM_UPDATE_BW 0x01
+#define FW_MB_PARAM_OEM_UPDATE_S_TAG 0x02
+#define FW_MB_PARAM_OEM_UPDATE_CFG 0x04
+
u32 drv_pulse_mb;
#define DRV_PULSE_SEQ_MASK 0x00007fff
#define DRV_PULSE_SYSTEM_TIME_MASK 0xffff0000
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 13/18] net/qede/base: enable control frame filtering
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (11 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 10/18] net/qede/base: add support for SRIOV VF min rate Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 14/18] net/qede/base: changes for 100G Mody, Rasesh
` (5 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Enable control frame filtering for non-trusted VFs.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_l2.c | 5 +++++
drivers/net/qede/base/ecore_l2_api.h | 5 ++++-
drivers/net/qede/base/ecore_sriov.c | 4 +++-
3 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index d87ffda..c17082e 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -786,6 +786,11 @@ enum _ecore_status_t
return rc;
}
+ if (p_params->update_ctl_frame_check) {
+ p_cmn->ctl_frame_mac_check_en = p_params->mac_chk_en;
+ p_cmn->ctl_frame_ethtype_check_en = p_params->ethtype_chk_en;
+ }
+
/* Update mcast bins for VFs, PF doesn't use this functionality */
ecore_sp_update_mcast_bin(p_ramrod, p_params);
diff --git a/drivers/net/qede/base/ecore_l2_api.h b/drivers/net/qede/base/ecore_l2_api.h
index 21595f3..004fb61 100644
--- a/drivers/net/qede/base/ecore_l2_api.h
+++ b/drivers/net/qede/base/ecore_l2_api.h
@@ -347,7 +347,10 @@ struct ecore_sp_vport_update_params {
/* MTU change - notice this requires the vport to be disabled.
* If non-zero, value would be used.
*/
- u16 mtu;
+ u16 mtu;
+ u8 update_ctl_frame_check;
+ u8 mac_chk_en;
+ u8 ethtype_chk_en;
};
/**
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 9da4e41..3ac1085 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -2158,7 +2158,9 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
params.vport_id = vf->vport_id;
params.max_buffers_per_cqe = start->max_buffers_per_cqe;
params.mtu = vf->mtu;
- params.check_mac = true;
+
+ /* Non trusted VFs should enable control frame filtering */
+ params.check_mac = !vf->p_vf_info.is_trusted_configured;
rc = ecore_sp_eth_vport_start(p_hwfn, ¶ms);
if (rc != ECORE_SUCCESS) {
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 15/18] net/qede/base: add RL update params
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (13 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 14/18] net/qede/base: changes for 100G Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 16/18] net/qede/base: add APIs for dscp priority map configuration Mody, Rasesh
` (3 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Add 'rl_bc_stage_th', 'rl_timer_stage_th' and 'dcqcn_reset_alpha_on_idle' to the
RL update params, as well as to the debug logs.
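For illustration only (not part of this patch), a sketch that fills the three
new fields before issuing the RL update ramrod; the numeric values and the id
range are placeholders, and any struct field not visible in this hunk is an
assumption.
/* Sketch: request an RL update with the new DCQCN-related knobs set. */
static enum _ecore_status_t example_rl_update(struct ecore_hwfn *p_hwfn)
{
	struct ecore_rl_update_params params = { 0 };
	params.rl_id_first = 0;
	params.rl_id_last = 0;
	params.rl_dc_qcn_flg = 1;
	params.dcqcn_reset_alpha_on_idle = 1;	/* new field */
	params.rl_bc_stage_th = 1;		/* new field */
	params.rl_timer_stage_th = 1;		/* new field */
	return ecore_sp_rl_update(p_hwfn, &params);
}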
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_sp_commands.c | 8 +++++++-
drivers/net/qede/base/ecore_sp_commands.h | 3 +++
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/net/qede/base/ecore_sp_commands.c b/drivers/net/qede/base/ecore_sp_commands.c
index b43baf9..49a5ff5 100644
--- a/drivers/net/qede/base/ecore_sp_commands.c
+++ b/drivers/net/qede/base/ecore_sp_commands.c
@@ -515,6 +515,10 @@ enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
rl_update->rl_id_first = params->rl_id_first;
rl_update->rl_id_last = params->rl_id_last;
rl_update->rl_dc_qcn_flg = params->rl_dc_qcn_flg;
+ rl_update->dcqcn_reset_alpha_on_idle =
+ params->dcqcn_reset_alpha_on_idle;
+ rl_update->rl_bc_stage_th = params->rl_bc_stage_th;
+ rl_update->rl_timer_stage_th = params->rl_timer_stage_th;
rl_update->rl_bc_rate = OSAL_CPU_TO_LE32(params->rl_bc_rate);
rl_update->rl_max_rate =
OSAL_CPU_TO_LE16(ecore_sp_rl_mb_to_qm(params->rl_max_rate));
@@ -529,12 +533,14 @@ enum _ecore_status_t ecore_sp_rl_update(struct ecore_hwfn *p_hwfn,
OSAL_CPU_TO_LE32(params->dcqcn_timeuot_us);
rl_update->qcn_timeuot_us = OSAL_CPU_TO_LE32(params->qcn_timeuot_us);
- DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "rl_params: qcn_update_param_flg %x, dcqcn_update_param_flg %x, rl_init_flg %x, rl_start_flg %x, rl_stop_flg %x, rl_id_first %x, rl_id_last %x, rl_dc_qcn_flg %x, rl_bc_rate %x, rl_max_rate %x, rl_r_ai %x, rl_r_hai %x, dcqcn_g %x, dcqcn_k_us %x, dcqcn_timeuot_us %x, qcn_timeuot_us %x\n",
+ DP_VERBOSE(p_hwfn, ECORE_MSG_SPQ, "rl_params: qcn_update_param_flg %x, dcqcn_update_param_flg %x, rl_init_flg %x, rl_start_flg %x, rl_stop_flg %x, rl_id_first %x, rl_id_last %x, rl_dc_qcn_flg %x,dcqcn_reset_alpha_on_idle %x, rl_bc_stage_th %x, rl_timer_stage_th %x, rl_bc_rate %x, rl_max_rate %x, rl_r_ai %x, rl_r_hai %x, dcqcn_g %x, dcqcn_k_us %x, dcqcn_timeuot_us %x, qcn_timeuot_us %x\n",
rl_update->qcn_update_param_flg,
rl_update->dcqcn_update_param_flg,
rl_update->rl_init_flg, rl_update->rl_start_flg,
rl_update->rl_stop_flg, rl_update->rl_id_first,
rl_update->rl_id_last, rl_update->rl_dc_qcn_flg,
+ rl_update->dcqcn_reset_alpha_on_idle,
+ rl_update->rl_bc_stage_th, rl_update->rl_timer_stage_th,
rl_update->rl_bc_rate, rl_update->rl_max_rate,
rl_update->rl_r_ai, rl_update->rl_r_hai,
rl_update->dcqcn_g, rl_update->dcqcn_k_us,
diff --git a/drivers/net/qede/base/ecore_sp_commands.h b/drivers/net/qede/base/ecore_sp_commands.h
index e57414c..524fe57 100644
--- a/drivers/net/qede/base/ecore_sp_commands.h
+++ b/drivers/net/qede/base/ecore_sp_commands.h
@@ -119,6 +119,9 @@ struct ecore_rl_update_params {
u8 rl_stop_flg;
u8 rl_id_first;
u8 rl_id_last;
+ u8 dcqcn_reset_alpha_on_idle;
+ u8 rl_bc_stage_th;
+ u8 rl_timer_stage_th;
u8 rl_dc_qcn_flg; /* If set, RL will used for DCQCN */
u32 rl_bc_rate; /* Byte Counter Limit */
u32 rl_max_rate; /* Maximum rate in Mbps resolution */
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 14/18] net/qede/base: changes for 100G
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (12 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 13/18] net/qede/base: enable control frame filtering Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 15/18] net/qede/base: add RL update params Mody, Rasesh
` (4 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Change details:
- Get engine affinity from the management FW and configure accordingly
- Add an LLH filter with the primary MAC address in QPAR/NPAR
- Move some of the LLH APIs around
- Add PPFID APIs
- Update all allocated ppfids with the same value for the
following PORT_PF registers:
NIG_REG_DSCP_TO_TC_MAP_ENABLE
- Add port_id, src_pfid and dst_pfid to DMA engine params
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore.h | 49 +-
drivers/net/qede/base/ecore_cxt.c | 4 +-
drivers/net/qede/base/ecore_dcbx.c | 11 +-
drivers/net/qede/base/ecore_dev.c | 1740 +++++++++++++++++++++++---------
drivers/net/qede/base/ecore_dev_api.h | 161 ++-
drivers/net/qede/base/ecore_hw.c | 103 +-
drivers/net/qede/base/ecore_hw.h | 28 +-
drivers/net/qede/base/ecore_init_ops.c | 14 +-
drivers/net/qede/base/ecore_int.c | 13 +-
drivers/net/qede/base/ecore_l2.c | 6 +-
drivers/net/qede/base/ecore_mcp.c | 69 ++
drivers/net/qede/base/ecore_mcp.h | 21 +-
drivers/net/qede/base/ecore_sriov.c | 4 +-
drivers/net/qede/base/mcp_public.h | 15 +
drivers/net/qede/base/reg_addr.h | 4 +
15 files changed, 1684 insertions(+), 558 deletions(-)
diff --git a/drivers/net/qede/base/ecore.h b/drivers/net/qede/base/ecore.h
index b9f5993..524a1dd 100644
--- a/drivers/net/qede/base/ecore.h
+++ b/drivers/net/qede/base/ecore.h
@@ -19,6 +19,7 @@
#include <zlib.h>
#endif
+#include "ecore_status.h"
#include "ecore_hsi_common.h"
#include "ecore_hsi_debug_tools.h"
#include "ecore_hsi_init_func.h"
@@ -207,6 +208,7 @@ enum DP_MODULE {
struct ecore_igu_info;
struct ecore_mcp_info;
struct ecore_dcbx_info;
+struct ecore_llh_info;
struct ecore_rt_data {
u32 *init_val;
@@ -743,6 +745,7 @@ struct ecore_dev {
#endif
#define ECORE_IS_AH(dev) ((dev)->type == ECORE_DEV_TYPE_AH)
#define ECORE_IS_K2(dev) ECORE_IS_AH(dev)
+#define ECORE_IS_E4(dev) (ECORE_IS_BB(dev) || ECORE_IS_AH(dev))
u16 vendor_id;
u16 device_id;
@@ -837,8 +840,26 @@ struct ecore_dev {
/* HW functions */
u8 num_hwfns;
struct ecore_hwfn hwfns[MAX_HWFNS_PER_DEVICE];
+#define ECORE_LEADING_HWFN(dev) (&dev->hwfns[0])
#define ECORE_IS_CMT(dev) ((dev)->num_hwfns > 1)
+ /* Engine affinity */
+ u8 l2_affin_hint;
+ u8 fir_affin;
+ u8 iwarp_affin;
+ /* Macro for getting the engine-affinitized hwfn for FCoE/iSCSI/RoCE */
+#define ECORE_FIR_AFFIN_HWFN(dev) (&dev->hwfns[dev->fir_affin])
+ /* Macro for getting the engine-affinitized hwfn for iWARP */
+#define ECORE_IWARP_AFFIN_HWFN(dev) (&dev->hwfns[dev->iwarp_affin])
+ /* Generic macro for getting the engine-affinitized hwfn */
+#define ECORE_AFFIN_HWFN(dev) \
+ (ECORE_IS_IWARP_PERSONALITY(ECORE_LEADING_HWFN(dev)) ? \
+ ECORE_IWARP_AFFIN_HWFN(dev) : \
+ ECORE_FIR_AFFIN_HWFN(dev))
+ /* Macro for getting the index (0/1) of the engine-affinitized hwfn */
+#define ECORE_AFFIN_HWFN_IDX(dev) \
+ (IS_LEAD_HWFN(ECORE_AFFIN_HWFN(dev)) ? 0 : 1)
+
/* SRIOV */
struct ecore_hw_sriov_info *p_iov_info;
#define IS_ECORE_SRIOV(p_dev) (!!(p_dev)->p_iov_info)
@@ -873,6 +894,9 @@ struct ecore_dev {
#ifndef ASIC_ONLY
bool b_is_emul_full;
#endif
+ /* LLH info */
+ u8 ppfid_bitmap;
+ struct ecore_llh_info *p_llh_info;
/* Indicates whether this PF serves a storage target */
bool b_is_target;
@@ -974,6 +998,29 @@ void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn,
u16 ecore_init_qm_get_num_vports(struct ecore_hwfn *p_hwfn);
u16 ecore_init_qm_get_num_pqs(struct ecore_hwfn *p_hwfn);
-#define ECORE_LEADING_HWFN(dev) (&dev->hwfns[0])
+#define MFW_PORT(_p_hwfn) ((_p_hwfn)->abs_pf_id % \
+ ecore_device_num_ports((_p_hwfn)->p_dev))
+
+/* The PFID<->PPFID calculation is based on the relative index of a PF on its
+ * port. In BB there is a bug in the LLH in which the PPFID is actually engine
+ * based, and thus it equals the PFID.
+ */
+#define ECORE_PFID_BY_PPFID(_p_hwfn, abs_ppfid) \
+ (ECORE_IS_BB((_p_hwfn)->p_dev) ? \
+ (abs_ppfid) : \
+ (abs_ppfid) * (_p_hwfn)->p_dev->num_ports_in_engine + \
+ MFW_PORT(_p_hwfn))
+#define ECORE_PPFID_BY_PFID(_p_hwfn) \
+ (ECORE_IS_BB((_p_hwfn)->p_dev) ? \
+ (_p_hwfn)->rel_pf_id : \
+ (_p_hwfn)->rel_pf_id / (_p_hwfn)->p_dev->num_ports_in_engine)
+
+enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt, u32 addr,
+ u32 val);
+
+/* Utility functions for dumping the content of the NIG LLH filters */
+enum _ecore_status_t ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid);
+enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev);
#endif /* __ECORE_H */
diff --git a/drivers/net/qede/base/ecore_cxt.c b/drivers/net/qede/base/ecore_cxt.c
index 6bc6348..5c3370e 100644
--- a/drivers/net/qede/base/ecore_cxt.c
+++ b/drivers/net/qede/base/ecore_cxt.c
@@ -2114,7 +2114,7 @@ enum _ecore_status_t
ecore_dmae_host2grc(p_hwfn, p_ptt, (u64)(osal_uintptr_t)&ilt_hw_entry,
reg_offset, sizeof(ilt_hw_entry) / sizeof(u32),
- 0 /* no flags */);
+ OSAL_NULL /* default parameters */);
if (elem_type == ECORE_ELEM_CXT) {
u32 last_cid_allocated = (1 + (iid / elems_per_p)) *
@@ -2221,7 +2221,7 @@ enum _ecore_status_t
(u64)(osal_uintptr_t)&ilt_hw_entry,
reg_offset,
sizeof(ilt_hw_entry) / sizeof(u32),
- 0 /* no flags */);
+ OSAL_NULL /* default parameters */);
}
ecore_ptt_release(p_hwfn, p_ptt);
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 9667874..7668ad6 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -893,12 +893,19 @@ enum _ecore_status_t
ecore_dcbx_get_params(p_hwfn, &p_hwfn->p_dcbx_info->get, type);
- /* Update the DSCP to TC mapping bit if required */
+ /* Update the DSCP to TC mapping enable bit if required */
if ((type == ECORE_DCBX_OPERATIONAL_MIB) &&
p_hwfn->p_dcbx_info->dscp_nig_update) {
u8 val = !!p_hwfn->p_dcbx_info->get.dscp.enabled;
+ u32 addr = NIG_REG_DSCP_TO_TC_MAP_ENABLE;
+
+ rc = ecore_all_ppfids_wr(p_hwfn, p_ptt, addr, val);
+ if (rc != ECORE_SUCCESS) {
+ DP_NOTICE(p_hwfn, false,
+ "Failed to update the DSCP to TC mapping enable bit\n");
+ return rc;
+ }
- ecore_wr(p_hwfn, p_ptt, NIG_REG_DSCP_TO_TC_MAP_ENABLE, val);
p_hwfn->p_dcbx_info->dscp_nig_update = false;
}
diff --git a/drivers/net/qede/base/ecore_dev.c b/drivers/net/qede/base/ecore_dev.c
index 30e12e9..cf454b1 100644
--- a/drivers/net/qede/base/ecore_dev.c
+++ b/drivers/net/qede/base/ecore_dev.c
@@ -352,6 +352,1189 @@ void ecore_db_recovery_execute(struct ecore_hwfn *p_hwfn,
}
/******************** Doorbell Recovery end ****************/
+/********************************** NIG LLH ***********************************/
+
+enum ecore_llh_filter_type {
+ ECORE_LLH_FILTER_TYPE_MAC,
+ ECORE_LLH_FILTER_TYPE_PROTOCOL,
+};
+
+struct ecore_llh_mac_filter {
+ u8 addr[ETH_ALEN];
+};
+
+struct ecore_llh_protocol_filter {
+ enum ecore_llh_prot_filter_type_t type;
+ u16 source_port_or_eth_type;
+ u16 dest_port;
+};
+
+union ecore_llh_filter {
+ struct ecore_llh_mac_filter mac;
+ struct ecore_llh_protocol_filter protocol;
+};
+
+struct ecore_llh_filter_info {
+ bool b_enabled;
+ u32 ref_cnt;
+ enum ecore_llh_filter_type type;
+ union ecore_llh_filter filter;
+};
+
+struct ecore_llh_info {
+ /* Number of LLH filters banks */
+ u8 num_ppfid;
+
+#define MAX_NUM_PPFID 8
+ u8 ppfid_array[MAX_NUM_PPFID];
+
+ /* Array of filters arrays:
+ * "num_ppfid" elements of filters banks, where each is an array of
+ * "NIG_REG_LLH_FUNC_FILTER_EN_SIZE" filters.
+ */
+ struct ecore_llh_filter_info **pp_filters;
+};
+
+static void ecore_llh_free(struct ecore_dev *p_dev)
+{
+ struct ecore_llh_info *p_llh_info = p_dev->p_llh_info;
+ u32 i;
+
+ if (p_llh_info != OSAL_NULL) {
+ if (p_llh_info->pp_filters != OSAL_NULL) {
+ for (i = 0; i < p_llh_info->num_ppfid; i++)
+ OSAL_FREE(p_dev, p_llh_info->pp_filters[i]);
+ }
+
+ OSAL_FREE(p_dev, p_llh_info->pp_filters);
+ }
+
+ OSAL_FREE(p_dev, p_llh_info);
+ p_dev->p_llh_info = OSAL_NULL;
+}
+
+static enum _ecore_status_t ecore_llh_alloc(struct ecore_dev *p_dev)
+{
+ struct ecore_llh_info *p_llh_info;
+ u32 size;
+ u8 i;
+
+ p_llh_info = OSAL_ZALLOC(p_dev, GFP_KERNEL, sizeof(*p_llh_info));
+ if (!p_llh_info)
+ return ECORE_NOMEM;
+ p_dev->p_llh_info = p_llh_info;
+
+ for (i = 0; i < MAX_NUM_PPFID; i++) {
+ if (!(p_dev->ppfid_bitmap & (0x1 << i)))
+ continue;
+
+ p_llh_info->ppfid_array[p_llh_info->num_ppfid] = i;
+ DP_VERBOSE(p_dev, ECORE_MSG_SP, "ppfid_array[%d] = %hhd\n",
+ p_llh_info->num_ppfid, i);
+ p_llh_info->num_ppfid++;
+ }
+
+ size = p_llh_info->num_ppfid * sizeof(*p_llh_info->pp_filters);
+ p_llh_info->pp_filters = OSAL_ZALLOC(p_dev, GFP_KERNEL, size);
+ if (!p_llh_info->pp_filters)
+ return ECORE_NOMEM;
+
+ size = NIG_REG_LLH_FUNC_FILTER_EN_SIZE *
+ sizeof(**p_llh_info->pp_filters);
+ for (i = 0; i < p_llh_info->num_ppfid; i++) {
+ p_llh_info->pp_filters[i] = OSAL_ZALLOC(p_dev, GFP_KERNEL,
+ size);
+ if (!p_llh_info->pp_filters[i])
+ return ECORE_NOMEM;
+ }
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_llh_shadow_sanity(struct ecore_dev *p_dev,
+ u8 ppfid, u8 filter_idx,
+ const char *action)
+{
+ struct ecore_llh_info *p_llh_info = p_dev->p_llh_info;
+
+ if (ppfid >= p_llh_info->num_ppfid) {
+ DP_NOTICE(p_dev, false,
+ "LLH shadow [%s]: using ppfid %d while only %d ppfids are available\n",
+ action, ppfid, p_llh_info->num_ppfid);
+ return ECORE_INVAL;
+ }
+
+ if (filter_idx >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE) {
+ DP_NOTICE(p_dev, false,
+ "LLH shadow [%s]: using filter_idx %d while only %d filters are available\n",
+ action, filter_idx, NIG_REG_LLH_FUNC_FILTER_EN_SIZE);
+ return ECORE_INVAL;
+ }
+
+ return ECORE_SUCCESS;
+}
+
+#define ECORE_LLH_INVALID_FILTER_IDX 0xff
+
+static enum _ecore_status_t
+ecore_llh_shadow_search_filter(struct ecore_dev *p_dev, u8 ppfid,
+ union ecore_llh_filter *p_filter,
+ u8 *p_filter_idx)
+{
+ struct ecore_llh_info *p_llh_info = p_dev->p_llh_info;
+ struct ecore_llh_filter_info *p_filters;
+ enum _ecore_status_t rc;
+ u8 i;
+
+ rc = ecore_llh_shadow_sanity(p_dev, ppfid, 0, "search");
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ *p_filter_idx = ECORE_LLH_INVALID_FILTER_IDX;
+
+ p_filters = p_llh_info->pp_filters[ppfid];
+ for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
+ if (!OSAL_MEMCMP(p_filter, &p_filters[i].filter,
+ sizeof(*p_filter))) {
+ *p_filter_idx = i;
+ break;
+ }
+ }
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_llh_shadow_get_free_idx(struct ecore_dev *p_dev, u8 ppfid,
+ u8 *p_filter_idx)
+{
+ struct ecore_llh_info *p_llh_info = p_dev->p_llh_info;
+ struct ecore_llh_filter_info *p_filters;
+ enum _ecore_status_t rc;
+ u8 i;
+
+ rc = ecore_llh_shadow_sanity(p_dev, ppfid, 0, "get_free_idx");
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ *p_filter_idx = ECORE_LLH_INVALID_FILTER_IDX;
+
+ p_filters = p_llh_info->pp_filters[ppfid];
+ for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
+ if (!p_filters[i].b_enabled) {
+ *p_filter_idx = i;
+ break;
+ }
+ }
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+__ecore_llh_shadow_add_filter(struct ecore_dev *p_dev, u8 ppfid, u8 filter_idx,
+ enum ecore_llh_filter_type type,
+ union ecore_llh_filter *p_filter, u32 *p_ref_cnt)
+{
+ struct ecore_llh_info *p_llh_info = p_dev->p_llh_info;
+ struct ecore_llh_filter_info *p_filters;
+ enum _ecore_status_t rc;
+
+ rc = ecore_llh_shadow_sanity(p_dev, ppfid, filter_idx, "add");
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ p_filters = p_llh_info->pp_filters[ppfid];
+ if (!p_filters[filter_idx].ref_cnt) {
+ p_filters[filter_idx].b_enabled = true;
+ p_filters[filter_idx].type = type;
+ OSAL_MEMCPY(&p_filters[filter_idx].filter, p_filter,
+ sizeof(p_filters[filter_idx].filter));
+ }
+
+ *p_ref_cnt = ++p_filters[filter_idx].ref_cnt;
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_llh_shadow_add_filter(struct ecore_dev *p_dev, u8 ppfid,
+ enum ecore_llh_filter_type type,
+ union ecore_llh_filter *p_filter,
+ u8 *p_filter_idx, u32 *p_ref_cnt)
+{
+ enum _ecore_status_t rc;
+
+ /* Check if the same filter already exists */
+ rc = ecore_llh_shadow_search_filter(p_dev, ppfid, p_filter,
+ p_filter_idx);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ /* Find a new entry in case of a new filter */
+ if (*p_filter_idx == ECORE_LLH_INVALID_FILTER_IDX) {
+ rc = ecore_llh_shadow_get_free_idx(p_dev, ppfid, p_filter_idx);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+ }
+
+ /* No free entry was found */
+ if (*p_filter_idx == ECORE_LLH_INVALID_FILTER_IDX) {
+ DP_NOTICE(p_dev, false,
+ "Failed to find an empty LLH filter to utilize [ppfid %d]\n",
+ ppfid);
+ return ECORE_NORESOURCES;
+ }
+
+ return __ecore_llh_shadow_add_filter(p_dev, ppfid, *p_filter_idx, type,
+ p_filter, p_ref_cnt);
+}
+
+static enum _ecore_status_t
+__ecore_llh_shadow_remove_filter(struct ecore_dev *p_dev, u8 ppfid,
+ u8 filter_idx, u32 *p_ref_cnt)
+{
+ struct ecore_llh_info *p_llh_info = p_dev->p_llh_info;
+ struct ecore_llh_filter_info *p_filters;
+ enum _ecore_status_t rc;
+
+ rc = ecore_llh_shadow_sanity(p_dev, ppfid, filter_idx, "remove");
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ p_filters = p_llh_info->pp_filters[ppfid];
+ if (!p_filters[filter_idx].ref_cnt) {
+ DP_NOTICE(p_dev, false,
+ "LLH shadow: trying to remove a filter with ref_cnt=0\n");
+ return ECORE_INVAL;
+ }
+
+ *p_ref_cnt = --p_filters[filter_idx].ref_cnt;
+ if (!p_filters[filter_idx].ref_cnt)
+ OSAL_MEM_ZERO(&p_filters[filter_idx],
+ sizeof(p_filters[filter_idx]));
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_llh_shadow_remove_filter(struct ecore_dev *p_dev, u8 ppfid,
+ union ecore_llh_filter *p_filter,
+ u8 *p_filter_idx, u32 *p_ref_cnt)
+{
+ enum _ecore_status_t rc;
+
+ rc = ecore_llh_shadow_search_filter(p_dev, ppfid, p_filter,
+ p_filter_idx);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ /* No matching filter was found */
+ if (*p_filter_idx == ECORE_LLH_INVALID_FILTER_IDX) {
+ DP_NOTICE(p_dev, false,
+ "Failed to find a filter in the LLH shadow\n");
+ return ECORE_INVAL;
+ }
+
+ return __ecore_llh_shadow_remove_filter(p_dev, ppfid, *p_filter_idx,
+ p_ref_cnt);
+}
+
+static enum _ecore_status_t
+ecore_llh_shadow_remove_all_filters(struct ecore_dev *p_dev, u8 ppfid)
+{
+ struct ecore_llh_info *p_llh_info = p_dev->p_llh_info;
+ struct ecore_llh_filter_info *p_filters;
+ enum _ecore_status_t rc;
+
+ rc = ecore_llh_shadow_sanity(p_dev, ppfid, 0, "remove_all");
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ p_filters = p_llh_info->pp_filters[ppfid];
+ OSAL_MEM_ZERO(p_filters,
+ NIG_REG_LLH_FUNC_FILTER_EN_SIZE * sizeof(*p_filters));
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t ecore_abs_ppfid(struct ecore_dev *p_dev,
+ u8 rel_ppfid, u8 *p_abs_ppfid)
+{
+ struct ecore_llh_info *p_llh_info = p_dev->p_llh_info;
+ u8 ppfids = p_llh_info->num_ppfid - 1;
+
+ if (rel_ppfid >= p_llh_info->num_ppfid) {
+ DP_NOTICE(p_dev, false,
+ "rel_ppfid %d is not valid, available indices are 0..%hhd\n",
+ rel_ppfid, ppfids);
+ return ECORE_INVAL;
+ }
+
+ *p_abs_ppfid = p_llh_info->ppfid_array[rel_ppfid];
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+__ecore_llh_set_engine_affin(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
+{
+ struct ecore_dev *p_dev = p_hwfn->p_dev;
+ enum ecore_eng eng;
+ u8 ppfid;
+ enum _ecore_status_t rc;
+
+ rc = ecore_mcp_get_engine_config(p_hwfn, p_ptt);
+ if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL) {
+ DP_NOTICE(p_hwfn, false,
+ "Failed to get the engine affinity configuration\n");
+ return rc;
+ }
+
+ /* RoCE PF is bound to a single engine */
+ if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
+ eng = p_dev->fir_affin ? ECORE_ENG1 : ECORE_ENG0;
+ rc = ecore_llh_set_roce_affinity(p_dev, eng);
+ if (rc != ECORE_SUCCESS) {
+ DP_NOTICE(p_dev, false,
+ "Failed to set the RoCE engine affinity\n");
+ return rc;
+ }
+
+ DP_VERBOSE(p_dev, ECORE_MSG_SP,
+ "LLH: Set the engine affinity of RoCE packets as %d\n",
+ eng);
+ }
+
+ /* Storage PF is bound to a single engine while L2 PF uses both */
+ if (ECORE_IS_FCOE_PERSONALITY(p_hwfn) ||
+ ECORE_IS_ISCSI_PERSONALITY(p_hwfn))
+ eng = p_dev->fir_affin ? ECORE_ENG1 : ECORE_ENG0;
+ else /* L2_PERSONALITY */
+ eng = ECORE_BOTH_ENG;
+
+ for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) {
+ rc = ecore_llh_set_ppfid_affinity(p_dev, ppfid, eng);
+ if (rc != ECORE_SUCCESS) {
+ DP_NOTICE(p_dev, false,
+ "Failed to set the engine affinity of ppfid %d\n",
+ ppfid);
+ return rc;
+ }
+ }
+
+ DP_VERBOSE(p_dev, ECORE_MSG_SP,
+ "LLH: Set the engine affinity of non-RoCE packets as %d\n",
+ eng);
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_llh_set_engine_affin(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ bool avoid_eng_affin)
+{
+ struct ecore_dev *p_dev = p_hwfn->p_dev;
+ enum _ecore_status_t rc;
+
+ /* Backwards compatible mode:
+ * - RoCE packets - Use engine 0.
+ * - Non-RoCE packets - Use connection based classification for L2 PFs,
+ * and engine 0 otherwise.
+ */
+ if (avoid_eng_affin) {
+ enum ecore_eng eng;
+ u8 ppfid;
+
+ if (ECORE_IS_ROCE_PERSONALITY(p_hwfn)) {
+ eng = ECORE_ENG0;
+ rc = ecore_llh_set_roce_affinity(p_dev, eng);
+ if (rc != ECORE_SUCCESS) {
+ DP_NOTICE(p_dev, false,
+ "Failed to set the RoCE engine affinity\n");
+ return rc;
+ }
+
+ DP_VERBOSE(p_dev, ECORE_MSG_SP,
+ "LLH [backwards compatible mode]: Set the engine affinity of RoCE packets as %d\n",
+ eng);
+ }
+
+ eng = (ECORE_IS_FCOE_PERSONALITY(p_hwfn) ||
+ ECORE_IS_ISCSI_PERSONALITY(p_hwfn)) ? ECORE_ENG0
+ : ECORE_BOTH_ENG;
+ for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) {
+ rc = ecore_llh_set_ppfid_affinity(p_dev, ppfid, eng);
+ if (rc != ECORE_SUCCESS) {
+ DP_NOTICE(p_dev, false,
+ "Failed to set the engine affinity of ppfid %d\n",
+ ppfid);
+ return rc;
+ }
+ }
+
+ DP_VERBOSE(p_dev, ECORE_MSG_SP,
+ "LLH [backwards compatible mode]: Set the engine affinity of non-RoCE packets as %d\n",
+ eng);
+
+ return ECORE_SUCCESS;
+ }
+
+ return __ecore_llh_set_engine_affin(p_hwfn, p_ptt);
+}
+
+static enum _ecore_status_t ecore_llh_hw_init_pf(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ bool avoid_eng_affin)
+{
+ struct ecore_dev *p_dev = p_hwfn->p_dev;
+ u8 ppfid, abs_ppfid;
+ enum _ecore_status_t rc;
+
+ for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) {
+ u32 addr;
+
+ rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ addr = NIG_REG_LLH_PPFID2PFID_TBL_0 + abs_ppfid * 0x4;
+ ecore_wr(p_hwfn, p_ptt, addr, p_hwfn->rel_pf_id);
+ }
+
+ if (OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits) &&
+ !ECORE_IS_FCOE_PERSONALITY(p_hwfn)) {
+ rc = ecore_llh_add_mac_filter(p_dev, 0,
+ p_hwfn->hw_info.hw_mac_addr);
+ if (rc != ECORE_SUCCESS)
+ DP_NOTICE(p_dev, false,
+ "Failed to add an LLH filter with the primary MAC\n");
+ }
+
+ if (ECORE_IS_CMT(p_dev)) {
+ rc = ecore_llh_set_engine_affin(p_hwfn, p_ptt, avoid_eng_affin);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+ }
+
+ return ECORE_SUCCESS;
+}
+
+u8 ecore_llh_get_num_ppfid(struct ecore_dev *p_dev)
+{
+ return p_dev->p_llh_info->num_ppfid;
+}
+
+enum ecore_eng ecore_llh_get_l2_affinity_hint(struct ecore_dev *p_dev)
+{
+ return p_dev->l2_affin_hint ? ECORE_ENG1 : ECORE_ENG0;
+}
+
+/* TBD - should be removed when these definitions are available in reg_addr.h */
+#define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_MASK 0x3
+#define NIG_REG_PPF_TO_ENGINE_SEL_ROCE_SHIFT 0
+#define NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE_MASK 0x3
+#define NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE_SHIFT 2
+
+enum _ecore_status_t ecore_llh_set_ppfid_affinity(struct ecore_dev *p_dev,
+ u8 ppfid, enum ecore_eng eng)
+{
+ struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+ struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+ u32 addr, val, eng_sel;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+ u8 abs_ppfid;
+
+ if (p_ptt == OSAL_NULL)
+ return ECORE_AGAIN;
+
+ if (!ECORE_IS_CMT(p_dev))
+ goto out;
+
+ rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
+ if (rc != ECORE_SUCCESS)
+ goto out;
+
+ switch (eng) {
+ case ECORE_ENG0:
+ eng_sel = 0;
+ break;
+ case ECORE_ENG1:
+ eng_sel = 1;
+ break;
+ case ECORE_BOTH_ENG:
+ eng_sel = 2;
+ break;
+ default:
+ DP_NOTICE(p_dev, false,
+ "Invalid affinity value for ppfid [%d]\n", eng);
+ rc = ECORE_INVAL;
+ goto out;
+ }
+
+ addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4;
+ val = ecore_rd(p_hwfn, p_ptt, addr);
+ SET_FIELD(val, NIG_REG_PPF_TO_ENGINE_SEL_NON_ROCE, eng_sel);
+ ecore_wr(p_hwfn, p_ptt, addr, val);
+
+ /* The iWARP affinity is set as the affinity of ppfid 0 */
+ if (!ppfid && ECORE_IS_IWARP_PERSONALITY(p_hwfn))
+ p_dev->iwarp_affin = (eng == ECORE_ENG1) ? 1 : 0;
+out:
+ ecore_ptt_release(p_hwfn, p_ptt);
+
+ return rc;
+}
+
+enum _ecore_status_t ecore_llh_set_roce_affinity(struct ecore_dev *p_dev,
+ enum ecore_eng eng)
+{
+ struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+ struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+ u32 addr, val, eng_sel;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+ u8 ppfid, abs_ppfid;
+
+ if (p_ptt == OSAL_NULL)
+ return ECORE_AGAIN;
+
+ if (!ECORE_IS_CMT(p_dev))
+ goto out;
+
+ switch (eng) {
+ case ECORE_ENG0:
+ eng_sel = 0;
+ break;
+ case ECORE_ENG1:
+ eng_sel = 1;
+ break;
+ case ECORE_BOTH_ENG:
+ eng_sel = 2;
+ ecore_wr(p_hwfn, p_ptt, NIG_REG_LLH_ENG_CLS_ROCE_QP_SEL,
+ 0xf /* QP bit 15 */);
+ break;
+ default:
+ DP_NOTICE(p_dev, false,
+ "Invalid affinity value for RoCE [%d]\n", eng);
+ rc = ECORE_INVAL;
+ goto out;
+ }
+
+ for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) {
+ rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
+ if (rc != ECORE_SUCCESS)
+ goto out;
+
+ addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4;
+ val = ecore_rd(p_hwfn, p_ptt, addr);
+ SET_FIELD(val, NIG_REG_PPF_TO_ENGINE_SEL_ROCE, eng_sel);
+ ecore_wr(p_hwfn, p_ptt, addr, val);
+ }
+out:
+ ecore_ptt_release(p_hwfn, p_ptt);
+
+ return rc;
+}
+
+struct ecore_llh_filter_e4_details {
+ u64 value;
+ u32 mode;
+ u32 protocol_type;
+ u32 hdr_sel;
+ u32 enable;
+};
+
+static enum _ecore_status_t
+ecore_llh_access_filter_e4(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx,
+ struct ecore_llh_filter_e4_details *p_details,
+ bool b_write_access)
+{
+ u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid);
+ struct ecore_dmae_params params;
+ enum _ecore_status_t rc;
+ u32 addr;
+
+ /* The NIG/LLH registers that are accessed in this function have only 16
+ * rows which are exposed to a PF, i.e. only the 16 filters of its
+ * default ppfid.
+ * Accessing filters of other ppfids requires pretending to other PFs,
+ * and thus the usage of the ecore_ppfid_rd/wr() functions.
+ */
+
+ /* Filter enable - should be done first when removing a filter */
+ if (b_write_access && !p_details->enable) {
+ addr = NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 + filter_idx * 0x4;
+ ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
+ p_details->enable);
+ }
+
+ /* Filter value */
+ addr = NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 + 2 * filter_idx * 0x4;
+ OSAL_MEMSET(&params, 0, sizeof(params));
+
+ if (b_write_access) {
+ params.flags = ECORE_DMAE_FLAG_PF_DST;
+ params.dst_pfid = pfid;
+ rc = ecore_dmae_host2grc(p_hwfn, p_ptt,
+ (u64)(osal_uintptr_t)&p_details->value,
+ addr, 2 /* size_in_dwords */, &params);
+ } else {
+ params.flags = ECORE_DMAE_FLAG_PF_SRC |
+ ECORE_DMAE_FLAG_COMPLETION_DST;
+ params.src_pfid = pfid;
+ rc = ecore_dmae_grc2host(p_hwfn, p_ptt, addr,
+ (u64)(osal_uintptr_t)&p_details->value,
+ 2 /* size_in_dwords */, &params);
+ }
+
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ /* Filter mode */
+ addr = NIG_REG_LLH_FUNC_FILTER_MODE_BB_K2 + filter_idx * 0x4;
+ if (b_write_access)
+ ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr, p_details->mode);
+ else
+ p_details->mode = ecore_ppfid_rd(p_hwfn, p_ptt, abs_ppfid,
+ addr);
+
+ /* Filter protocol type */
+ addr = NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_BB_K2 + filter_idx * 0x4;
+ if (b_write_access)
+ ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
+ p_details->protocol_type);
+ else
+ p_details->protocol_type = ecore_ppfid_rd(p_hwfn, p_ptt,
+ abs_ppfid, addr);
+
+ /* Filter header select */
+ addr = NIG_REG_LLH_FUNC_FILTER_HDR_SEL_BB_K2 + filter_idx * 0x4;
+ if (b_write_access)
+ ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
+ p_details->hdr_sel);
+ else
+ p_details->hdr_sel = ecore_ppfid_rd(p_hwfn, p_ptt, abs_ppfid,
+ addr);
+
+ /* Filter enable - should be done last when adding a filter */
+ if (!b_write_access || p_details->enable) {
+ addr = NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 + filter_idx * 0x4;
+ if (b_write_access)
+ ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr,
+ p_details->enable);
+ else
+ p_details->enable = ecore_ppfid_rd(p_hwfn, p_ptt,
+ abs_ppfid, addr);
+ }
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_llh_add_filter_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type,
+ u32 high, u32 low)
+{
+ struct ecore_llh_filter_e4_details filter_details;
+
+ filter_details.enable = 1;
+ filter_details.value = ((u64)high << 32) | low;
+ filter_details.hdr_sel =
+ OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits) ?
+ 1 : /* inner/encapsulated header */
+ 0; /* outer/tunnel header */
+ filter_details.protocol_type = filter_prot_type;
+ filter_details.mode = filter_prot_type ?
+ 1 : /* protocol-based classification */
+ 0; /* MAC-address based classification */
+
+ return ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+ &filter_details,
+ true /* write access */);
+}
+
+static enum _ecore_status_t
+ecore_llh_remove_filter_e4(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt, u8 abs_ppfid, u8 filter_idx)
+{
+ struct ecore_llh_filter_e4_details filter_details;
+
+ OSAL_MEMSET(&filter_details, 0, sizeof(filter_details));
+
+ return ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+ &filter_details,
+ true /* write access */);
+}
+
+static enum _ecore_status_t
+ecore_llh_add_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 abs_ppfid, u8 filter_idx, u8 filter_prot_type, u32 high,
+ u32 low)
+{
+ return ecore_llh_add_filter_e4(p_hwfn, p_ptt, abs_ppfid,
+ filter_idx, filter_prot_type,
+ high, low);
+}
+
+static enum _ecore_status_t
+ecore_llh_remove_filter(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 abs_ppfid, u8 filter_idx)
+{
+ return ecore_llh_remove_filter_e4(p_hwfn, p_ptt, abs_ppfid,
+ filter_idx);
+}
+
+enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
+ u8 mac_addr[ETH_ALEN])
+{
+ struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+ struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+ union ecore_llh_filter filter;
+ u8 filter_idx, abs_ppfid;
+ u32 high, low, ref_cnt;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+
+ if (p_ptt == OSAL_NULL)
+ return ECORE_AGAIN;
+
+ if (!OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
+ goto out;
+
+ OSAL_MEM_ZERO(&filter, sizeof(filter));
+ OSAL_MEMCPY(filter.mac.addr, mac_addr, ETH_ALEN);
+ rc = ecore_llh_shadow_add_filter(p_dev, ppfid,
+ ECORE_LLH_FILTER_TYPE_MAC,
+ &filter, &filter_idx, &ref_cnt);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ /* Configure the LLH only in case of a new filter */
+ if (ref_cnt == 1) {
+ high = mac_addr[1] | (mac_addr[0] << 8);
+ low = mac_addr[5] | (mac_addr[4] << 8) | (mac_addr[3] << 16) |
+ (mac_addr[2] << 24);
+ rc = ecore_llh_add_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+ 0, high, low);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+ }
+
+ DP_VERBOSE(p_dev, ECORE_MSG_SP,
+ "LLH: Added MAC filter [%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx] to ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
+ mac_addr[0], mac_addr[1], mac_addr[2], mac_addr[3],
+ mac_addr[4], mac_addr[5], ppfid, abs_ppfid, filter_idx,
+ ref_cnt);
+
+ goto out;
+
+err:
+ DP_NOTICE(p_dev, false,
+ "LLH: Failed to add MAC filter [%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx] to ppfid %hhd\n",
+ mac_addr[0], mac_addr[1], mac_addr[2], mac_addr[3],
+ mac_addr[4], mac_addr[5], ppfid);
+out:
+ ecore_ptt_release(p_hwfn, p_ptt);
+
+ return rc;
+}
+
+static enum _ecore_status_t
+ecore_llh_protocol_filter_stringify(struct ecore_dev *p_dev,
+ enum ecore_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type, u16 dest_port,
+ char *str, osal_size_t str_len)
+{
+ switch (type) {
+ case ECORE_LLH_FILTER_ETHERTYPE:
+ OSAL_SNPRINTF(str, str_len, "Ethertype 0x%04x",
+ source_port_or_eth_type);
+ break;
+ case ECORE_LLH_FILTER_TCP_SRC_PORT:
+ OSAL_SNPRINTF(str, str_len, "TCP src port 0x%04x",
+ source_port_or_eth_type);
+ break;
+ case ECORE_LLH_FILTER_UDP_SRC_PORT:
+ OSAL_SNPRINTF(str, str_len, "UDP src port 0x%04x",
+ source_port_or_eth_type);
+ break;
+ case ECORE_LLH_FILTER_TCP_DEST_PORT:
+ OSAL_SNPRINTF(str, str_len, "TCP dst port 0x%04x", dest_port);
+ break;
+ case ECORE_LLH_FILTER_UDP_DEST_PORT:
+ OSAL_SNPRINTF(str, str_len, "UDP dst port 0x%04x", dest_port);
+ break;
+ case ECORE_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
+ OSAL_SNPRINTF(str, str_len, "TCP src/dst ports 0x%04x/0x%04x",
+ source_port_or_eth_type, dest_port);
+ break;
+ case ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
+ OSAL_SNPRINTF(str, str_len, "UDP src/dst ports 0x%04x/0x%04x",
+ source_port_or_eth_type, dest_port);
+ break;
+ default:
+ DP_NOTICE(p_dev, true,
+ "Non valid LLH protocol filter type %d\n", type);
+ return ECORE_INVAL;
+ }
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_llh_protocol_filter_to_hilo(struct ecore_dev *p_dev,
+ enum ecore_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type, u16 dest_port,
+ u32 *p_high, u32 *p_low)
+{
+ *p_high = 0;
+ *p_low = 0;
+
+ switch (type) {
+ case ECORE_LLH_FILTER_ETHERTYPE:
+ *p_high = source_port_or_eth_type;
+ break;
+ case ECORE_LLH_FILTER_TCP_SRC_PORT:
+ case ECORE_LLH_FILTER_UDP_SRC_PORT:
+ *p_low = source_port_or_eth_type << 16;
+ break;
+ case ECORE_LLH_FILTER_TCP_DEST_PORT:
+ case ECORE_LLH_FILTER_UDP_DEST_PORT:
+ *p_low = dest_port;
+ break;
+ case ECORE_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
+ case ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
+ *p_low = (source_port_or_eth_type << 16) | dest_port;
+ break;
+ default:
+ DP_NOTICE(p_dev, true,
+ "Non valid LLH protocol filter type %d\n", type);
+ return ECORE_INVAL;
+ }
+
+ return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
+ enum ecore_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type, u16 dest_port)
+{
+ struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+ struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+ u8 filter_idx, abs_ppfid, type_bitmap;
+ char str[32];
+ union ecore_llh_filter filter;
+ u32 high, low, ref_cnt;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+
+ if (p_ptt == OSAL_NULL)
+ return ECORE_AGAIN;
+
+ if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits))
+ goto out;
+
+ rc = ecore_llh_protocol_filter_stringify(p_dev, type,
+ source_port_or_eth_type,
+ dest_port, str, sizeof(str));
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ OSAL_MEM_ZERO(&filter, sizeof(filter));
+ filter.protocol.type = type;
+ filter.protocol.source_port_or_eth_type = source_port_or_eth_type;
+ filter.protocol.dest_port = dest_port;
+ rc = ecore_llh_shadow_add_filter(p_dev, ppfid,
+ ECORE_LLH_FILTER_TYPE_PROTOCOL,
+ &filter, &filter_idx, &ref_cnt);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ /* Configure the LLH only in case of a new filter */
+ if (ref_cnt == 1) {
+ rc = ecore_llh_protocol_filter_to_hilo(p_dev, type,
+ source_port_or_eth_type,
+ dest_port, &high, &low);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ type_bitmap = 0x1 << type;
+ rc = ecore_llh_add_filter(p_hwfn, p_ptt, abs_ppfid, filter_idx,
+ type_bitmap, high, low);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+ }
+
+ DP_VERBOSE(p_dev, ECORE_MSG_SP,
+ "LLH: Added protocol filter [%s] to ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
+ str, ppfid, abs_ppfid, filter_idx, ref_cnt);
+
+ goto out;
+
+err:
+ DP_NOTICE(p_hwfn, false,
+ "LLH: Failed to add protocol filter [%s] to ppfid %hhd\n",
+ str, ppfid);
+out:
+ ecore_ptt_release(p_hwfn, p_ptt);
+
+ return rc;
+}
+
+void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
+ u8 mac_addr[ETH_ALEN])
+{
+ struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+ struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+ union ecore_llh_filter filter;
+ u8 filter_idx, abs_ppfid;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+ u32 ref_cnt;
+
+ if (p_ptt == OSAL_NULL)
+ return;
+
+ if (!OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
+ goto out;
+
+ OSAL_MEM_ZERO(&filter, sizeof(filter));
+ OSAL_MEMCPY(filter.mac.addr, mac_addr, ETH_ALEN);
+ rc = ecore_llh_shadow_remove_filter(p_dev, ppfid, &filter, &filter_idx,
+ &ref_cnt);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ /* Remove from the LLH in case the filter is not in use */
+ if (!ref_cnt) {
+ rc = ecore_llh_remove_filter(p_hwfn, p_ptt, abs_ppfid,
+ filter_idx);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+ }
+
+ DP_VERBOSE(p_dev, ECORE_MSG_SP,
+ "LLH: Removed MAC filter [%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx] from ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
+ mac_addr[0], mac_addr[1], mac_addr[2], mac_addr[3],
+ mac_addr[4], mac_addr[5], ppfid, abs_ppfid, filter_idx,
+ ref_cnt);
+
+ goto out;
+
+err:
+ DP_NOTICE(p_dev, false,
+ "LLH: Failed to remove MAC filter [%02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx] from ppfid %hhd\n",
+ mac_addr[0], mac_addr[1], mac_addr[2], mac_addr[3],
+ mac_addr[4], mac_addr[5], ppfid);
+out:
+ ecore_ptt_release(p_hwfn, p_ptt);
+}
+
+void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
+ enum ecore_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type,
+ u16 dest_port)
+{
+ struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+ struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+ u8 filter_idx, abs_ppfid;
+ char str[32];
+ union ecore_llh_filter filter;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+ u32 ref_cnt;
+
+ if (p_ptt == OSAL_NULL)
+ return;
+
+ if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits))
+ goto out;
+
+ rc = ecore_llh_protocol_filter_stringify(p_dev, type,
+ source_port_or_eth_type,
+ dest_port, str, sizeof(str));
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ OSAL_MEM_ZERO(&filter, sizeof(filter));
+ filter.protocol.type = type;
+ filter.protocol.source_port_or_eth_type = source_port_or_eth_type;
+ filter.protocol.dest_port = dest_port;
+ rc = ecore_llh_shadow_remove_filter(p_dev, ppfid, &filter, &filter_idx,
+ &ref_cnt);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+
+ /* Remove from the LLH in case the filter is not in use */
+ if (!ref_cnt) {
+ rc = ecore_llh_remove_filter(p_hwfn, p_ptt, abs_ppfid,
+ filter_idx);
+ if (rc != ECORE_SUCCESS)
+ goto err;
+ }
+
+ DP_VERBOSE(p_dev, ECORE_MSG_SP,
+ "LLH: Removed protocol filter [%s] from ppfid %hhd [abs %hhd] at idx %hhd [ref_cnt %d]\n",
+ str, ppfid, abs_ppfid, filter_idx, ref_cnt);
+
+ goto out;
+
+err:
+ DP_NOTICE(p_dev, false,
+ "LLH: Failed to remove protocol filter [%s] from ppfid %hhd\n",
+ str, ppfid);
+out:
+ ecore_ptt_release(p_hwfn, p_ptt);
+}
+
+void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid)
+{
+ struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+ struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+ u8 filter_idx, abs_ppfid;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
+
+ if (p_ptt == OSAL_NULL)
+ return;
+
+ if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) &&
+ !OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
+ goto out;
+
+ rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
+ if (rc != ECORE_SUCCESS)
+ goto out;
+
+ rc = ecore_llh_shadow_remove_all_filters(p_dev, ppfid);
+ if (rc != ECORE_SUCCESS)
+ goto out;
+
+ for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE;
+ filter_idx++) {
+ rc = ecore_llh_remove_filter_e4(p_hwfn, p_ptt,
+ abs_ppfid, filter_idx);
+ if (rc != ECORE_SUCCESS)
+ goto out;
+ }
+out:
+ ecore_ptt_release(p_hwfn, p_ptt);
+}
+
+void ecore_llh_clear_all_filters(struct ecore_dev *p_dev)
+{
+ u8 ppfid;
+
+ if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS, &p_dev->mf_bits) &&
+ !OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits))
+ return;
+
+ for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++)
+ ecore_llh_clear_ppfid_filters(p_dev, ppfid);
+}
+
+enum _ecore_status_t ecore_all_ppfids_wr(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt, u32 addr,
+ u32 val)
+{
+ struct ecore_dev *p_dev = p_hwfn->p_dev;
+ u8 ppfid, abs_ppfid;
+ enum _ecore_status_t rc;
+
+ for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) {
+ rc = ecore_abs_ppfid(p_dev, ppfid, &abs_ppfid);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ ecore_ppfid_wr(p_hwfn, p_ptt, abs_ppfid, addr, val);
+ }
+
+ return ECORE_SUCCESS;
+}
+
+static enum _ecore_status_t
+ecore_llh_dump_ppfid_e4(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 ppfid)
+{
+ struct ecore_llh_filter_e4_details filter_details;
+ u8 abs_ppfid, filter_idx;
+ u32 addr;
+ enum _ecore_status_t rc;
+
+ rc = ecore_abs_ppfid(p_hwfn->p_dev, ppfid, &abs_ppfid);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ addr = NIG_REG_PPF_TO_ENGINE_SEL + abs_ppfid * 0x4;
+ DP_NOTICE(p_hwfn, false,
+ "[rel_pf_id %hhd, ppfid={rel %hhd, abs %hhd}, engine_sel 0x%x]\n",
+ p_hwfn->rel_pf_id, ppfid, abs_ppfid,
+ ecore_rd(p_hwfn, p_ptt, addr));
+
+ for (filter_idx = 0; filter_idx < NIG_REG_LLH_FUNC_FILTER_EN_SIZE;
+ filter_idx++) {
+ OSAL_MEMSET(&filter_details, 0, sizeof(filter_details));
+ rc = ecore_llh_access_filter_e4(p_hwfn, p_ptt, abs_ppfid,
+ filter_idx, &filter_details,
+ false /* read access */);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ DP_NOTICE(p_hwfn, false,
+ "filter %2hhd: enable %d, value 0x%016lx, mode %d, protocol_type 0x%x, hdr_sel 0x%x\n",
+ filter_idx, filter_details.enable,
+ (unsigned long)filter_details.value,
+ filter_details.mode,
+ filter_details.protocol_type, filter_details.hdr_sel);
+ }
+
+ return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_llh_dump_ppfid(struct ecore_dev *p_dev, u8 ppfid)
+{
+ struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+ struct ecore_ptt *p_ptt = ecore_ptt_acquire(p_hwfn);
+ enum _ecore_status_t rc;
+
+ if (p_ptt == OSAL_NULL)
+ return ECORE_AGAIN;
+
+ rc = ecore_llh_dump_ppfid_e4(p_hwfn, p_ptt, ppfid);
+
+ ecore_ptt_release(p_hwfn, p_ptt);
+
+ return rc;
+}
+
+enum _ecore_status_t ecore_llh_dump_all(struct ecore_dev *p_dev)
+{
+ u8 ppfid;
+ enum _ecore_status_t rc;
+
+ for (ppfid = 0; ppfid < p_dev->p_llh_info->num_ppfid; ppfid++) {
+ rc = ecore_llh_dump_ppfid(p_dev, ppfid);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+ }
+
+ return ECORE_SUCCESS;
+}
+
+/******************************* NIG LLH - End ********************************/
+
/* Configurable */
#define ECORE_MIN_DPIS (4) /* The minimal num of DPIs required to
* load the driver. The number was
@@ -476,6 +1659,8 @@ void ecore_resc_free(struct ecore_dev *p_dev)
OSAL_FREE(p_dev, p_dev->reset_stats);
+ ecore_llh_free(p_dev);
+
for_each_hwfn(p_dev, i) {
struct ecore_hwfn *p_hwfn = &p_dev->hwfns[i];
@@ -1349,6 +2534,13 @@ enum _ecore_status_t ecore_resc_alloc(struct ecore_dev *p_dev)
}
} /* hwfn loop */
+ rc = ecore_llh_alloc(p_dev);
+ if (rc != ECORE_SUCCESS) {
+ DP_NOTICE(p_dev, true,
+ "Failed to allocate memory for the llh_info structure\n");
+ goto alloc_err;
+ }
+
p_dev->reset_stats = OSAL_ZALLOC(p_dev, GFP_KERNEL,
sizeof(*p_dev->reset_stats));
if (!p_dev->reset_stats) {
@@ -1489,8 +2681,7 @@ static enum _ecore_status_t ecore_calc_hw_mode(struct ecore_hwfn *p_hwfn)
return ECORE_INVAL;
}
- if (OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS,
- &p_hwfn->p_dev->mf_bits))
+ if (OSAL_TEST_BIT(ECORE_MF_OVLAN_CLSS, &p_hwfn->p_dev->mf_bits))
hw_mode |= 1 << MODE_MF_SD;
else
hw_mode |= 1 << MODE_MF_SI;
@@ -2094,17 +3285,7 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
int hw_mode)
{
- u32 ppf_to_eng_sel[NIG_REG_PPF_TO_ENGINE_SEL_RT_SIZE];
- u32 val;
enum _ecore_status_t rc = ECORE_SUCCESS;
- u8 i;
-
- /* In CMT for non-RoCE packets - use connection based classification */
- val = ECORE_IS_CMT(p_hwfn->p_dev) ? 0x8 : 0x0;
- for (i = 0; i < NIG_REG_PPF_TO_ENGINE_SEL_RT_SIZE; i++)
- ppf_to_eng_sel[i] = val;
- STORE_RT_REG_AGG(p_hwfn, NIG_REG_PPF_TO_ENGINE_SEL_RT_OFFSET,
- ppf_to_eng_sel);
/* In CMT the gate should be cleared by the 2nd hwfn */
if (!ECORE_IS_CMT(p_hwfn->p_dev) || !IS_LEAD_HWFN(p_hwfn))
@@ -2156,12 +3337,8 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
}
static enum _ecore_status_t
-ecore_hw_init_pf(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_tunnel_info *p_tunn,
- int hw_mode,
- bool b_hw_start,
- enum ecore_int_mode int_mode, bool allow_npar_tx_switch)
+ecore_hw_init_pf(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ int hw_mode, struct ecore_hw_init_params *p_params)
{
u8 rel_pf_id = p_hwfn->rel_pf_id;
u32 prs_reg;
@@ -2249,17 +3426,18 @@ static enum _ecore_status_t ecore_hw_init_port(struct ecore_hwfn *p_hwfn,
*/
rc = ecore_hw_init_pf_doorbell_bar(p_hwfn, p_ptt);
- if (rc)
+ if (rc != ECORE_SUCCESS)
return rc;
- if (b_hw_start) {
+
+ if (p_params->b_hw_start) {
/* enable interrupts */
- rc = ecore_int_igu_enable(p_hwfn, p_ptt, int_mode);
+ rc = ecore_int_igu_enable(p_hwfn, p_ptt, p_params->int_mode);
if (rc != ECORE_SUCCESS)
return rc;
/* send function start command */
- rc = ecore_sp_pf_start(p_hwfn, p_ptt, p_tunn,
- allow_npar_tx_switch);
+ rc = ecore_sp_pf_start(p_hwfn, p_ptt, p_params->p_tunn,
+ p_params->allow_npar_tx_switch);
if (rc) {
DP_NOTICE(p_hwfn, true,
"Function start ramrod failed\n");
@@ -2583,11 +3761,8 @@ enum _ecore_status_t ecore_hw_init(struct ecore_dev *p_dev,
/* Fall into */
case FW_MSG_CODE_DRV_LOAD_FUNCTION:
rc = ecore_hw_init_pf(p_hwfn, p_hwfn->p_main_ptt,
- p_params->p_tunn,
p_hwfn->hw_info.hw_mode,
- p_params->b_hw_start,
- p_params->int_mode,
- p_params->allow_npar_tx_switch);
+ p_params);
break;
default:
DP_NOTICE(p_hwfn, false,
@@ -2859,6 +4034,12 @@ enum _ecore_status_t ecore_hw_stop(struct ecore_dev *p_dev)
/* Need to wait 1ms to guarantee SBs are cleared */
OSAL_MSLEEP(1);
+ if (IS_LEAD_HWFN(p_hwfn) &&
+ OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS, &p_dev->mf_bits) &&
+ !ECORE_IS_FCOE_PERSONALITY(p_hwfn))
+ ecore_llh_remove_mac_filter(p_dev, 0,
+ p_hwfn->hw_info.hw_mac_addr);
+
if (!p_dev->recov_in_prog) {
ecore_verify_reg_val(p_hwfn, p_ptt,
QM_REG_USG_CNT_PF_TX, 0);
@@ -3372,6 +4553,59 @@ static enum _ecore_status_t ecore_hw_set_resc_info(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
+#define ECORE_NONUSED_PPFID_MASK_BB_4P_LO_PORTS 0xaa
+#define ECORE_NONUSED_PPFID_MASK_BB_4P_HI_PORTS 0x55
+#define ECORE_NONUSED_PPFID_MASK_AH_4P 0xf0
+
+static enum _ecore_status_t ecore_hw_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ u8 native_ppfid_idx = ECORE_PPFID_BY_PFID(p_hwfn), new_bitmap;
+ struct ecore_dev *p_dev = p_hwfn->p_dev;
+ enum _ecore_status_t rc;
+
+ rc = ecore_mcp_get_ppfid_bitmap(p_hwfn, p_ptt);
+ if (rc != ECORE_SUCCESS && rc != ECORE_NOTIMPL)
+ return rc;
+ else if (rc == ECORE_NOTIMPL)
+ p_dev->ppfid_bitmap = 0x1 << native_ppfid_idx;
+
+ /* 4-ports mode has limitations that should be enforced:
+ * - BB: the MFW can access only PPFIDs whose corresponding PFIDs
+ * belong to this port.
+ * - AH/E5: only 4 PPFIDs per port are available.
+ */
+ if (ecore_device_num_ports(p_dev) == 4) {
+ u8 mask;
+
+ if (ECORE_IS_BB(p_dev))
+ mask = MFW_PORT(p_hwfn) > 1 ?
+ ECORE_NONUSED_PPFID_MASK_BB_4P_HI_PORTS :
+ ECORE_NONUSED_PPFID_MASK_BB_4P_LO_PORTS;
+ else
+ mask = ECORE_NONUSED_PPFID_MASK_AH_4P;
+
+ if (p_dev->ppfid_bitmap & mask) {
+ new_bitmap = p_dev->ppfid_bitmap & ~mask;
+ DP_INFO(p_hwfn,
+ "Fix the PPFID bitmap for 4-ports mode: 0x%hhx -> 0x%hhx\n",
+ p_dev->ppfid_bitmap, new_bitmap);
+ p_dev->ppfid_bitmap = new_bitmap;
+ }
+ }
+
+ /* The native PPFID is expected to be part of the allocated bitmap */
+ if (!(p_dev->ppfid_bitmap & (0x1 << native_ppfid_idx))) {
+ new_bitmap = 0x1 << native_ppfid_idx;
+ DP_INFO(p_hwfn,
+ "Fix the PPFID bitmap to inculde the native PPFID: %hhd -> 0x%hhx\n",
+ p_dev->ppfid_bitmap, new_bitmap);
+ p_dev->ppfid_bitmap = new_bitmap;
+ }
+
+ return ECORE_SUCCESS;
+}
+
static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
bool drv_resc_alloc)
@@ -3446,6 +4680,13 @@ static enum _ecore_status_t ecore_hw_get_resc(struct ecore_hwfn *p_hwfn,
"Failed to release the resource lock for the resource allocation commands\n");
}
+ /* PPFID bitmap */
+ if (IS_LEAD_HWFN(p_hwfn)) {
+ rc = ecore_hw_get_ppfid_bitmap(p_hwfn, p_ptt);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+ }
+
#ifndef ASIC_ONLY
if (CHIP_REV_IS_SLOW(p_hwfn->p_dev)) {
/* Reduced build contains less PQs */
@@ -4236,9 +5477,8 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
#endif
static enum _ecore_status_t
-ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn,
- void OSAL_IOMEM * p_regview,
- void OSAL_IOMEM * p_doorbells,
+ecore_hw_prepare_single(struct ecore_hwfn *p_hwfn, void OSAL_IOMEM *p_regview,
+ void OSAL_IOMEM *p_doorbells, u64 db_phys_addr,
struct ecore_hw_prepare_params *p_params)
{
struct ecore_mdump_retain_data mdump_retain;
@@ -4249,6 +5489,7 @@ void ecore_prepare_hibernate(struct ecore_dev *p_dev)
/* Split PCI bars evenly between hwfns */
p_hwfn->regview = p_regview;
p_hwfn->doorbells = p_doorbells;
+ p_hwfn->db_phys_addr = db_phys_addr;
if (IS_VF(p_dev))
return ecore_vf_hw_prepare(p_hwfn);
@@ -4401,9 +5642,9 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
ecore_init_iro_array(p_dev);
/* Initialize the first hwfn - will learn number of hwfns */
- rc = ecore_hw_prepare_single(p_hwfn,
- p_dev->regview,
- p_dev->doorbells, p_params);
+ rc = ecore_hw_prepare_single(p_hwfn, p_dev->regview,
+ p_dev->doorbells, p_dev->db_phys_addr,
+ p_params);
if (rc != ECORE_SUCCESS)
return rc;
@@ -4413,24 +5654,26 @@ enum _ecore_status_t ecore_hw_prepare(struct ecore_dev *p_dev,
if (ECORE_IS_CMT(p_dev)) {
void OSAL_IOMEM *p_regview, *p_doorbell;
u8 OSAL_IOMEM *addr;
+ u64 db_phys_addr;
+ u32 offset;
/* adjust bar offset for second engine */
- addr = (u8 OSAL_IOMEM *)p_dev->regview +
- ecore_hw_bar_size(p_hwfn,
- p_hwfn->p_main_ptt,
- BAR_ID_0) / 2;
+ offset = ecore_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt,
+ BAR_ID_0) / 2;
+ addr = (u8 OSAL_IOMEM *)p_dev->regview + offset;
p_regview = (void OSAL_IOMEM *)addr;
- addr = (u8 OSAL_IOMEM *)p_dev->doorbells +
- ecore_hw_bar_size(p_hwfn,
- p_hwfn->p_main_ptt,
- BAR_ID_1) / 2;
+ offset = ecore_hw_bar_size(p_hwfn, p_hwfn->p_main_ptt,
+ BAR_ID_1) / 2;
+ addr = (u8 OSAL_IOMEM *)p_dev->doorbells + offset;
p_doorbell = (void OSAL_IOMEM *)addr;
+ db_phys_addr = p_dev->db_phys_addr + offset;
p_dev->hwfns[1].b_en_pacing = p_params->b_en_pacing;
/* prepare second hw function */
rc = ecore_hw_prepare_single(&p_dev->hwfns[1], p_regview,
- p_doorbell, p_params);
+ p_doorbell, db_phys_addr,
+ p_params);
/* in case of error, need to free the previously
* initiliazed hwfn 0.
@@ -4827,419 +6070,6 @@ enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-static enum _ecore_status_t
-ecore_llh_add_mac_filter_bb_ah(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 high, u32 low,
- u32 *p_entry_num)
-{
- u32 en;
- int i;
-
- /* Find a free entry and utilize it */
- for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
- en = ecore_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 +
- i * sizeof(u32));
- if (en)
- continue;
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- 2 * i * sizeof(u32), low);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- (2 * i + 1) * sizeof(u32), high);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_MODE_BB_K2 +
- i * sizeof(u32), 0);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_BB_K2 +
- i * sizeof(u32), 0);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 +
- i * sizeof(u32), 1);
- break;
- }
-
- if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE)
- return ECORE_NORESOURCES;
-
- *p_entry_num = i;
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u8 *p_filter)
-{
- u32 high, low, entry_num;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- if (!OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS,
- &p_hwfn->p_dev->mf_bits))
- return ECORE_SUCCESS;
-
- high = p_filter[1] | (p_filter[0] << 8);
- low = p_filter[5] | (p_filter[4] << 8) |
- (p_filter[3] << 16) | (p_filter[2] << 24);
-
- if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev))
- rc = ecore_llh_add_mac_filter_bb_ah(p_hwfn, p_ptt, high, low,
- &entry_num);
- if (rc != ECORE_SUCCESS) {
- DP_NOTICE(p_hwfn, false,
- "Failed to find an empty LLH filter to utilize\n");
- return rc;
- }
-
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "MAC: %02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx is added at %d\n",
- p_filter[0], p_filter[1], p_filter[2], p_filter[3],
- p_filter[4], p_filter[5], entry_num);
-
- return rc;
-}
-
-static enum _ecore_status_t
-ecore_llh_remove_mac_filter_bb_ah(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u32 high, u32 low,
- u32 *p_entry_num)
-{
- int i;
-
- /* Find the entry and clean it */
- for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
- if (ecore_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- 2 * i * sizeof(u32)) != low)
- continue;
- if (ecore_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- (2 * i + 1) * sizeof(u32)) != high)
- continue;
-
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 + i * sizeof(u32), 0);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- 2 * i * sizeof(u32), 0);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- (2 * i + 1) * sizeof(u32), 0);
- break;
- }
-
- if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE)
- return ECORE_INVAL;
-
- *p_entry_num = i;
-
- return ECORE_SUCCESS;
-}
-
-void ecore_llh_remove_mac_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt, u8 *p_filter)
-{
- u32 high, low, entry_num;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- if (!OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS,
- &p_hwfn->p_dev->mf_bits))
- return;
-
- high = p_filter[1] | (p_filter[0] << 8);
- low = p_filter[5] | (p_filter[4] << 8) |
- (p_filter[3] << 16) | (p_filter[2] << 24);
-
- if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev))
- rc = ecore_llh_remove_mac_filter_bb_ah(p_hwfn, p_ptt, high,
- low, &entry_num);
- if (rc != ECORE_SUCCESS) {
- DP_NOTICE(p_hwfn, false,
- "Tried to remove a non-configured filter\n");
- return;
- }
-
-
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "MAC: %02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx was removed from %d\n",
- p_filter[0], p_filter[1], p_filter[2], p_filter[3],
- p_filter[4], p_filter[5], entry_num);
-}
-
-static enum _ecore_status_t
-ecore_llh_add_protocol_filter_bb_ah(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum ecore_llh_port_filter_type_t type,
- u32 high, u32 low, u32 *p_entry_num)
-{
- u32 en;
- int i;
-
- /* Find a free entry and utilize it */
- for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
- en = ecore_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 +
- i * sizeof(u32));
- if (en)
- continue;
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- 2 * i * sizeof(u32), low);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- (2 * i + 1) * sizeof(u32), high);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_MODE_BB_K2 +
- i * sizeof(u32), 1);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_BB_K2 +
- i * sizeof(u32), 1 << type);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 + i * sizeof(u32), 1);
- break;
- }
-
- if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE)
- return ECORE_NORESOURCES;
-
- *p_entry_num = i;
-
- return ECORE_SUCCESS;
-}
-
-enum _ecore_status_t
-ecore_llh_add_protocol_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 source_port_or_eth_type,
- u16 dest_port,
- enum ecore_llh_port_filter_type_t type)
-{
- u32 high, low, entry_num;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS,
- &p_hwfn->p_dev->mf_bits))
- return rc;
-
- high = 0;
- low = 0;
-
- switch (type) {
- case ECORE_LLH_FILTER_ETHERTYPE:
- high = source_port_or_eth_type;
- break;
- case ECORE_LLH_FILTER_TCP_SRC_PORT:
- case ECORE_LLH_FILTER_UDP_SRC_PORT:
- low = source_port_or_eth_type << 16;
- break;
- case ECORE_LLH_FILTER_TCP_DEST_PORT:
- case ECORE_LLH_FILTER_UDP_DEST_PORT:
- low = dest_port;
- break;
- case ECORE_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
- case ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
- low = (source_port_or_eth_type << 16) | dest_port;
- break;
- default:
- DP_NOTICE(p_hwfn, true,
- "Non valid LLH protocol filter type %d\n", type);
- return ECORE_INVAL;
- }
-
- if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev))
- rc = ecore_llh_add_protocol_filter_bb_ah(p_hwfn, p_ptt, type,
- high, low, &entry_num);
- if (rc != ECORE_SUCCESS) {
- DP_NOTICE(p_hwfn, false,
- "Failed to find an empty LLH filter to utilize\n");
- return rc;
- }
- switch (type) {
- case ECORE_LLH_FILTER_ETHERTYPE:
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "ETH type %x is added at %d\n",
- source_port_or_eth_type, entry_num);
- break;
- case ECORE_LLH_FILTER_TCP_SRC_PORT:
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "TCP src port %x is added at %d\n",
- source_port_or_eth_type, entry_num);
- break;
- case ECORE_LLH_FILTER_UDP_SRC_PORT:
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "UDP src port %x is added at %d\n",
- source_port_or_eth_type, entry_num);
- break;
- case ECORE_LLH_FILTER_TCP_DEST_PORT:
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "TCP dst port %x is added at %d\n", dest_port,
- entry_num);
- break;
- case ECORE_LLH_FILTER_UDP_DEST_PORT:
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "UDP dst port %x is added at %d\n", dest_port,
- entry_num);
- break;
- case ECORE_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "TCP src/dst ports %x/%x are added at %d\n",
- source_port_or_eth_type, dest_port, entry_num);
- break;
- case ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "UDP src/dst ports %x/%x are added at %d\n",
- source_port_or_eth_type, dest_port, entry_num);
- break;
- }
-
- return rc;
-}
-
-static enum _ecore_status_t
-ecore_llh_remove_protocol_filter_bb_ah(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- enum ecore_llh_port_filter_type_t type,
- u32 high, u32 low, u32 *p_entry_num)
-{
- int i;
-
- /* Find the entry and clean it */
- for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
- if (!ecore_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 +
- i * sizeof(u32)))
- continue;
- if (!ecore_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_MODE_BB_K2 +
- i * sizeof(u32)))
- continue;
- if (!(ecore_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_BB_K2 +
- i * sizeof(u32)) & (1 << type)))
- continue;
- if (ecore_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- 2 * i * sizeof(u32)) != low)
- continue;
- if (ecore_rd(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- (2 * i + 1) * sizeof(u32)) != high)
- continue;
-
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 + i * sizeof(u32), 0);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_MODE_BB_K2 +
- i * sizeof(u32), 0);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_PROTOCOL_TYPE_BB_K2 +
- i * sizeof(u32), 0);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- 2 * i * sizeof(u32), 0);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- (2 * i + 1) * sizeof(u32), 0);
- break;
- }
-
- if (i >= NIG_REG_LLH_FUNC_FILTER_EN_SIZE)
- return ECORE_INVAL;
-
- *p_entry_num = i;
-
- return ECORE_SUCCESS;
-}
-
-void
-ecore_llh_remove_protocol_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 source_port_or_eth_type,
- u16 dest_port,
- enum ecore_llh_port_filter_type_t type)
-{
- u32 high, low, entry_num;
- enum _ecore_status_t rc = ECORE_SUCCESS;
-
- if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS,
- &p_hwfn->p_dev->mf_bits))
- return;
-
- high = 0;
- low = 0;
-
- switch (type) {
- case ECORE_LLH_FILTER_ETHERTYPE:
- high = source_port_or_eth_type;
- break;
- case ECORE_LLH_FILTER_TCP_SRC_PORT:
- case ECORE_LLH_FILTER_UDP_SRC_PORT:
- low = source_port_or_eth_type << 16;
- break;
- case ECORE_LLH_FILTER_TCP_DEST_PORT:
- case ECORE_LLH_FILTER_UDP_DEST_PORT:
- low = dest_port;
- break;
- case ECORE_LLH_FILTER_TCP_SRC_AND_DEST_PORT:
- case ECORE_LLH_FILTER_UDP_SRC_AND_DEST_PORT:
- low = (source_port_or_eth_type << 16) | dest_port;
- break;
- default:
- DP_NOTICE(p_hwfn, true,
- "Non valid LLH protocol filter type %d\n", type);
- return;
- }
-
- if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev))
- rc = ecore_llh_remove_protocol_filter_bb_ah(p_hwfn, p_ptt, type,
- high, low,
- &entry_num);
- if (rc != ECORE_SUCCESS) {
- DP_NOTICE(p_hwfn, false,
- "Tried to remove a non-configured filter [type %d, source_port_or_eth_type 0x%x, dest_port 0x%x]\n",
- type, source_port_or_eth_type, dest_port);
- return;
- }
-
- DP_VERBOSE(p_hwfn, ECORE_MSG_HW,
- "Protocol filter [type %d, source_port_or_eth_type 0x%x, dest_port 0x%x] was removed from %d\n",
- type, source_port_or_eth_type, dest_port, entry_num);
-}
-
-static void ecore_llh_clear_all_filters_bb_ah(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- int i;
-
- if (!(IS_MF_SI(p_hwfn) || IS_MF_DEFAULT(p_hwfn)))
- return;
-
- for (i = 0; i < NIG_REG_LLH_FUNC_FILTER_EN_SIZE; i++) {
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_EN_BB_K2 +
- i * sizeof(u32), 0);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- 2 * i * sizeof(u32), 0);
- ecore_wr(p_hwfn, p_ptt,
- NIG_REG_LLH_FUNC_FILTER_VALUE_BB_K2 +
- (2 * i + 1) * sizeof(u32), 0);
- }
-}
-
-void ecore_llh_clear_all_filters(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt)
-{
- if (!OSAL_TEST_BIT(ECORE_MF_LLH_PROTO_CLSS,
- &p_hwfn->p_dev->mf_bits) &&
- !OSAL_TEST_BIT(ECORE_MF_LLH_MAC_CLSS,
- &p_hwfn->p_dev->mf_bits))
- return;
-
- if (ECORE_IS_BB(p_hwfn->p_dev) || ECORE_IS_AH(p_hwfn->p_dev))
- ecore_llh_clear_all_filters_bb_ah(p_hwfn, p_ptt);
-}
-
enum _ecore_status_t
ecore_llh_set_function_as_default(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt)
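
With this rework the LLH filter API moves from a per-hwfn/ptt interface to a
per-device one that is indexed by a relative PPFID (filter bank), with the
shadow bookkeeping and reference counting handled internally. A hypothetical
caller (the function name, port number and trimmed error handling below are
made up for illustration; the ecore_llh_* calls match the prototypes added in
this patch) would look roughly like this:

    /* Program a MAC filter and a TCP destination-port filter on the
     * default filter bank (relative ppfid 0), then tear the bank down.
     */
    static void example_llh_usage(struct ecore_dev *p_dev, u8 mac[ETH_ALEN])
    {
            u8 ppfid = 0; /* relative index; '0' is the default bank */

            if (ecore_llh_add_mac_filter(p_dev, ppfid, mac) != ECORE_SUCCESS)
                    return;

            (void)ecore_llh_add_protocol_filter(p_dev, ppfid,
                                                ECORE_LLH_FILTER_TCP_DEST_PORT,
                                                0 /* unused */, 3260);

            /* ... */

            ecore_llh_clear_ppfid_filters(p_dev, ppfid);
    }
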
diff --git a/drivers/net/qede/base/ecore_dev_api.h b/drivers/net/qede/base/ecore_dev_api.h
index ab80b52..7308063 100644
--- a/drivers/net/qede/base/ecore_dev_api.h
+++ b/drivers/net/qede/base/ecore_dev_api.h
@@ -114,6 +114,9 @@ struct ecore_hw_init_params {
/* Driver load parameters */
struct ecore_drv_load_params *p_drv_load_params;
+ /* Avoid engine affinity for RoCE/storage in case of CMT mode */
+ bool avoid_eng_affin;
+
/* SPQ block timeout in msec */
u32 spq_timeout_ms;
};
@@ -428,11 +431,17 @@ enum ecore_dmae_address_type_t {
#define ECORE_DMAE_FLAG_VF_SRC 0x00000002
#define ECORE_DMAE_FLAG_VF_DST 0x00000004
#define ECORE_DMAE_FLAG_COMPLETION_DST 0x00000008
+#define ECORE_DMAE_FLAG_PORT 0x00000010
+#define ECORE_DMAE_FLAG_PF_SRC 0x00000020
+#define ECORE_DMAE_FLAG_PF_DST 0x00000040
struct ecore_dmae_params {
u32 flags; /* consists of ECORE_DMAE_FLAG_* values */
u8 src_vfid;
u8 dst_vfid;
+ u8 port_id;
+ u8 src_pfid;
+ u8 dst_pfid;
};
/**
@@ -444,7 +453,9 @@ struct ecore_dmae_params {
* @param source_addr
* @param grc_addr (dmae_data_offset)
* @param size_in_dwords
- * @param flags (one of the flags defined above)
+ * @param p_params (default parameters will be used in case of OSAL_NULL)
+ *
+ * @return enum _ecore_status_t
*/
enum _ecore_status_t
ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
@@ -452,7 +463,7 @@ enum _ecore_status_t
u64 source_addr,
u32 grc_addr,
u32 size_in_dwords,
- u32 flags);
+ struct ecore_dmae_params *p_params);
/**
* @brief ecore_dmae_grc2host - Read data from dmae data offset
@@ -462,7 +473,9 @@ enum _ecore_status_t
* @param grc_addr (dmae_data_offset)
* @param dest_addr
* @param size_in_dwords
- * @param flags - one of the flags defined above
+ * @param p_params (default parameters will be used in case of OSAL_NULL)
+ *
+ * @return enum _ecore_status_t
*/
enum _ecore_status_t
ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
@@ -470,7 +483,7 @@ enum _ecore_status_t
u32 grc_addr,
dma_addr_t dest_addr,
u32 size_in_dwords,
- u32 flags);
+ struct ecore_dmae_params *p_params);
/**
* @brief ecore_dmae_host2host - copy data from to source address
@@ -481,7 +494,9 @@ enum _ecore_status_t
* @param source_addr
* @param dest_addr
* @param size_in_dwords
- * @param params
+ * @param p_params (default parameters will be used in case of OSAL_NULL)
+ *
+ * @return enum _ecore_status_t
*/
enum _ecore_status_t
ecore_dmae_host2host(struct ecore_hwfn *p_hwfn,
@@ -562,28 +577,79 @@ enum _ecore_status_t ecore_fw_rss_eng(struct ecore_hwfn *p_hwfn,
u8 *dst_id);
/**
- * @brief ecore_llh_add_mac_filter - configures a MAC filter in llh
+ * @brief ecore_llh_get_num_ppfid - Return the number of LLH filter banks
+ * that are allocated to the PF.
*
- * @param p_hwfn
- * @param p_ptt
- * @param p_filter - MAC to add
+ * @param p_dev
+ *
+ * @return u8 - Number of LLH filter banks
*/
-enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u8 *p_filter);
+u8 ecore_llh_get_num_ppfid(struct ecore_dev *p_dev);
+
+enum ecore_eng {
+ ECORE_ENG0,
+ ECORE_ENG1,
+ ECORE_BOTH_ENG,
+};
/**
- * @brief ecore_llh_remove_mac_filter - removes a MAC filtre from llh
+ * @brief ecore_llh_get_l2_affinity_hint - Return the hint for the L2 affinity
*
- * @param p_hwfn
- * @param p_ptt
- * @param p_filter - MAC to remove
+ * @param p_dev
+ *
+ * @return enum ecore_eng - L2 affinity hint
+ */
+enum ecore_eng ecore_llh_get_l2_affinity_hint(struct ecore_dev *p_dev);
+
+/**
+ * @brief ecore_llh_set_ppfid_affinity - Set the engine affinity for the given
+ * LLH filter bank.
+ *
+ * @param p_dev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ * @param eng
+ *
+ * @return enum _ecore_status_t
*/
-void ecore_llh_remove_mac_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u8 *p_filter);
+enum _ecore_status_t ecore_llh_set_ppfid_affinity(struct ecore_dev *p_dev,
+ u8 ppfid, enum ecore_eng eng);
-enum ecore_llh_port_filter_type_t {
+/**
+ * @brief ecore_llh_set_roce_affinity - Set the RoCE engine affinity
+ *
+ * @param p_dev
+ * @param eng
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_llh_set_roce_affinity(struct ecore_dev *p_dev,
+ enum ecore_eng eng);
+
+/**
+ * @brief ecore_llh_add_mac_filter - Add an LLH MAC filter into the given filter
+ * bank.
+ *
+ * @param p_dev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ * @param mac_addr - MAC to add
+ *
+ * @return enum _ecore_status_t
+ */
+enum _ecore_status_t ecore_llh_add_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
+ u8 mac_addr[ETH_ALEN]);
+
+/**
+ * @brief ecore_llh_remove_mac_filter - Remove an LLH MAC filter from the given
+ * filter bank.
+ *
+ * @param p_dev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ * @param mac_addr - MAC to remove
+ */
+void ecore_llh_remove_mac_filter(struct ecore_dev *p_dev, u8 ppfid,
+ u8 mac_addr[ETH_ALEN]);
+
+enum ecore_llh_prot_filter_type_t {
ECORE_LLH_FILTER_ETHERTYPE,
ECORE_LLH_FILTER_TCP_SRC_PORT,
ECORE_LLH_FILTER_TCP_DEST_PORT,
@@ -594,45 +660,52 @@ enum ecore_llh_port_filter_type_t {
};
/**
- * @brief ecore_llh_add_protocol_filter - configures a protocol filter in llh
+ * @brief ecore_llh_add_protocol_filter - Add an LLH protocol filter into the
+ * given filter bank.
*
- * @param p_hwfn
- * @param p_ptt
+ * @param p_dev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ * @param type - type of filters and comparing
* @param source_port_or_eth_type - source port or ethertype to add
* @param dest_port - destination port to add
- * @param type - type of filters and comparing
+ *
+ * @return enum _ecore_status_t
*/
enum _ecore_status_t
-ecore_llh_add_protocol_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 source_port_or_eth_type,
- u16 dest_port,
- enum ecore_llh_port_filter_type_t type);
+ecore_llh_add_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
+ enum ecore_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type, u16 dest_port);
/**
- * @brief ecore_llh_remove_protocol_filter - remove a protocol filter in llh
+ * @brief ecore_llh_remove_protocol_filter - Remove an LLH protocol filter from
+ * the given filter bank.
*
- * @param p_hwfn
- * @param p_ptt
+ * @param p_dev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ * @param type - type of filters and comparing
* @param source_port_or_eth_type - source port or ethertype to add
* @param dest_port - destination port to add
- * @param type - type of filters and comparing
*/
-void
-ecore_llh_remove_protocol_filter(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u16 source_port_or_eth_type,
- u16 dest_port,
- enum ecore_llh_port_filter_type_t type);
+void ecore_llh_remove_protocol_filter(struct ecore_dev *p_dev, u8 ppfid,
+ enum ecore_llh_prot_filter_type_t type,
+ u16 source_port_or_eth_type,
+ u16 dest_port);
/**
- * @brief ecore_llh_clear_all_filters - removes all MAC filters from llh
+ * @brief ecore_llh_clear_ppfid_filters - Remove all LLH filters from the given
+ * filter bank.
*
- * @param p_hwfn
- * @param p_ptt
+ * @param p_dev
+ * @param ppfid - relative within the allocated ppfids ('0' is the default one).
+ */
+void ecore_llh_clear_ppfid_filters(struct ecore_dev *p_dev, u8 ppfid);
+
+/**
+ * @brief ecore_llh_clear_all_filters - Remove all LLH filters
+ *
+ * @param p_dev
*/
-void ecore_llh_clear_all_filters(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt);
+void ecore_llh_clear_all_filters(struct ecore_dev *p_dev);
/**
* @brief ecore_llh_set_function_as_default - set function as default per port
diff --git a/drivers/net/qede/base/ecore_hw.c b/drivers/net/qede/base/ecore_hw.c
index 6cfbbab..72cd7e9 100644
--- a/drivers/net/qede/base/ecore_hw.c
+++ b/drivers/net/qede/base/ecore_hw.c
@@ -450,14 +450,17 @@ u32 ecore_vfid_to_concrete(struct ecore_hwfn *p_hwfn, u8 vfid)
* If this changes, this needs to be revisited.
*/
-/* Ecore DMAE
- * =============
- */
+/* DMAE */
+
+#define ECORE_DMAE_FLAGS_IS_SET(params, flag) \
+ ((params) != OSAL_NULL && ((params)->flags & ECORE_DMAE_FLAG_##flag))
+
static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
const u8 is_src_type_grc,
const u8 is_dst_type_grc,
struct ecore_dmae_params *p_params)
{
+ u8 src_pfid, dst_pfid, port_id;
u16 opcode_b = 0;
u32 opcode = 0;
@@ -467,16 +470,20 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
*/
opcode |= (is_src_type_grc ? DMAE_CMD_SRC_MASK_GRC
: DMAE_CMD_SRC_MASK_PCIE) << DMAE_CMD_SRC_SHIFT;
- opcode |= (p_hwfn->rel_pf_id & DMAE_CMD_SRC_PF_ID_MASK) <<
- DMAE_CMD_SRC_PF_ID_SHIFT;
+ src_pfid = ECORE_DMAE_FLAGS_IS_SET(p_params, PF_SRC) ?
+ p_params->src_pfid : p_hwfn->rel_pf_id;
+ opcode |= (src_pfid & DMAE_CMD_SRC_PF_ID_MASK) <<
+ DMAE_CMD_SRC_PF_ID_SHIFT;
/* The destination of the DMA can be: 0-None 1-PCIe 2-GRC 3-None */
opcode |= (is_dst_type_grc ? DMAE_CMD_DST_MASK_GRC
: DMAE_CMD_DST_MASK_PCIE) << DMAE_CMD_DST_SHIFT;
- opcode |= (p_hwfn->rel_pf_id & DMAE_CMD_DST_PF_ID_MASK) <<
- DMAE_CMD_DST_PF_ID_SHIFT;
+ dst_pfid = ECORE_DMAE_FLAGS_IS_SET(p_params, PF_DST) ?
+ p_params->dst_pfid : p_hwfn->rel_pf_id;
+ opcode |= (dst_pfid & DMAE_CMD_DST_PF_ID_MASK) <<
+ DMAE_CMD_DST_PF_ID_SHIFT;
- /* DMAE_E4_TODO need to check which value to specifiy here. */
+ /* DMAE_E4_TODO need to check which value to specify here. */
/* opcode |= (!b_complete_to_host)<< DMAE_CMD_C_DST_SHIFT; */
/* Whether to write a completion word to the completion destination:
@@ -486,7 +493,7 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
opcode |= DMAE_CMD_COMP_WORD_EN_MASK << DMAE_CMD_COMP_WORD_EN_SHIFT;
opcode |= DMAE_CMD_SRC_ADDR_RESET_MASK << DMAE_CMD_SRC_ADDR_RESET_SHIFT;
- if (p_params->flags & ECORE_DMAE_FLAG_COMPLETION_DST)
+ if (ECORE_DMAE_FLAGS_IS_SET(p_params, COMPLETION_DST))
opcode |= 1 << DMAE_CMD_COMP_FUNC_SHIFT;
/* swapping mode 3 - big endian there should be a define ifdefed in
@@ -494,7 +501,9 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
*/
opcode |= DMAE_CMD_ENDIANITY << DMAE_CMD_ENDIANITY_MODE_SHIFT;
- opcode |= p_hwfn->port_id << DMAE_CMD_PORT_ID_SHIFT;
+ port_id = (ECORE_DMAE_FLAGS_IS_SET(p_params, PORT)) ?
+ p_params->port_id : p_hwfn->port_id;
+ opcode |= port_id << DMAE_CMD_PORT_ID_SHIFT;
/* reset source address in next go */
opcode |= DMAE_CMD_SRC_ADDR_RESET_MASK << DMAE_CMD_SRC_ADDR_RESET_SHIFT;
@@ -503,14 +512,14 @@ static void ecore_dmae_opcode(struct ecore_hwfn *p_hwfn,
opcode |= DMAE_CMD_DST_ADDR_RESET_MASK << DMAE_CMD_DST_ADDR_RESET_SHIFT;
/* SRC/DST VFID: all 1's - pf, otherwise VF id */
- if (p_params->flags & ECORE_DMAE_FLAG_VF_SRC) {
+ if (ECORE_DMAE_FLAGS_IS_SET(p_params, VF_SRC)) {
opcode |= (1 << DMAE_CMD_SRC_VF_ID_VALID_SHIFT);
opcode_b |= (p_params->src_vfid << DMAE_CMD_SRC_VF_ID_SHIFT);
} else {
opcode_b |= (DMAE_CMD_SRC_VF_ID_MASK <<
DMAE_CMD_SRC_VF_ID_SHIFT);
}
- if (p_params->flags & ECORE_DMAE_FLAG_VF_DST) {
+ if (ECORE_DMAE_FLAGS_IS_SET(p_params, VF_DST)) {
opcode |= 1 << DMAE_CMD_DST_VF_ID_VALID_SHIFT;
opcode_b |= p_params->dst_vfid << DMAE_CMD_DST_VF_ID_SHIFT;
} else {
@@ -855,7 +864,7 @@ static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
for (i = 0; i <= cnt_split; i++) {
offset = length_limit * i;
- if (!(p_params->flags & ECORE_DMAE_FLAG_RW_REPL_SRC)) {
+ if (!ECORE_DMAE_FLAGS_IS_SET(p_params, RW_REPL_SRC)) {
if (src_type == ECORE_DMAE_ADDRESS_GRC)
src_addr_split = src_addr + offset;
else
@@ -896,51 +905,45 @@ static enum _ecore_status_t ecore_dmae_operation_wait(struct ecore_hwfn *p_hwfn)
return ecore_status;
}
-enum _ecore_status_t
-ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u64 source_addr,
- u32 grc_addr, u32 size_in_dwords, u32 flags)
+enum _ecore_status_t ecore_dmae_host2grc(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u64 source_addr,
+ u32 grc_addr,
+ u32 size_in_dwords,
+ struct ecore_dmae_params *p_params)
{
u32 grc_addr_in_dw = grc_addr / sizeof(u32);
- struct ecore_dmae_params params;
enum _ecore_status_t rc;
- OSAL_MEMSET(&params, 0, sizeof(struct ecore_dmae_params));
- params.flags = flags;
-
OSAL_SPIN_LOCK(&p_hwfn->dmae_info.lock);
rc = ecore_dmae_execute_command(p_hwfn, p_ptt, source_addr,
grc_addr_in_dw,
ECORE_DMAE_ADDRESS_HOST_VIRT,
ECORE_DMAE_ADDRESS_GRC,
- size_in_dwords, &params);
+ size_in_dwords, p_params);
OSAL_SPIN_UNLOCK(&p_hwfn->dmae_info.lock);
return rc;
}
-enum _ecore_status_t
-ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- u32 grc_addr,
- dma_addr_t dest_addr, u32 size_in_dwords, u32 flags)
+enum _ecore_status_t ecore_dmae_grc2host(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ u32 grc_addr,
+ dma_addr_t dest_addr,
+ u32 size_in_dwords,
+ struct ecore_dmae_params *p_params)
{
u32 grc_addr_in_dw = grc_addr / sizeof(u32);
- struct ecore_dmae_params params;
enum _ecore_status_t rc;
- OSAL_MEMSET(&params, 0, sizeof(struct ecore_dmae_params));
- params.flags = flags;
-
OSAL_SPIN_LOCK(&p_hwfn->dmae_info.lock);
rc = ecore_dmae_execute_command(p_hwfn, p_ptt, grc_addr_in_dw,
dest_addr, ECORE_DMAE_ADDRESS_GRC,
ECORE_DMAE_ADDRESS_HOST_VIRT,
- size_in_dwords, &params);
+ size_in_dwords, p_params);
OSAL_SPIN_UNLOCK(&p_hwfn->dmae_info.lock);
@@ -989,7 +992,6 @@ enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn,
const char *phase)
{
u32 size = OSAL_PAGE_SIZE / 2, val;
- struct ecore_dmae_params params;
enum _ecore_status_t rc = ECORE_SUCCESS;
dma_addr_t p_phys;
void *p_virt;
@@ -1021,9 +1023,9 @@ enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn,
(unsigned long)(p_phys + size),
(u8 *)p_virt + size, size);
- OSAL_MEMSET(&params, 0, sizeof(params));
rc = ecore_dmae_host2host(p_hwfn, p_ptt, p_phys, p_phys + size,
- size / 4 /* size_in_dwords */, &params);
+ size / 4 /* size_in_dwords */,
+ OSAL_NULL /* default parameters */);
if (rc != ECORE_SUCCESS) {
DP_NOTICE(p_hwfn, false,
"DMAE sanity [%s]: ecore_dmae_host2host() failed. rc = %d.\n",
@@ -1054,3 +1056,32 @@ enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn,
OSAL_DMA_FREE_COHERENT(p_hwfn->p_dev, p_virt, p_phys, 2 * size);
return rc;
}
+
+void ecore_ppfid_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 abs_ppfid, u32 hw_addr, u32 val)
+{
+ u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid);
+
+ ecore_fid_pretend(p_hwfn, p_ptt,
+ pfid << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
+ ecore_wr(p_hwfn, p_ptt, hw_addr, val);
+ ecore_fid_pretend(p_hwfn, p_ptt,
+ p_hwfn->rel_pf_id <<
+ PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
+}
+
+u32 ecore_ppfid_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 abs_ppfid, u32 hw_addr)
+{
+ u8 pfid = ECORE_PFID_BY_PPFID(p_hwfn, abs_ppfid);
+ u32 val;
+
+ ecore_fid_pretend(p_hwfn, p_ptt,
+ pfid << PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
+ val = ecore_rd(p_hwfn, p_ptt, hw_addr);
+ ecore_fid_pretend(p_hwfn, p_ptt,
+ p_hwfn->rel_pf_id <<
+ PXP_PRETEND_CONCRETE_FID_PFID_SHIFT);
+
+ return val;
+}
diff --git a/drivers/net/qede/base/ecore_hw.h b/drivers/net/qede/base/ecore_hw.h
index a62ba39..0b5b40c 100644
--- a/drivers/net/qede/base/ecore_hw.h
+++ b/drivers/net/qede/base/ecore_hw.h
@@ -134,8 +134,8 @@ struct ecore_ptt *ecore_get_reserved_ptt(struct ecore_hwfn *p_hwfn,
*
* @param p_hwfn
* @param p_ptt
- * @param val
* @param hw_addr
+ * @param val
*/
void ecore_wr(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
@@ -147,7 +147,6 @@ void ecore_wr(struct ecore_hwfn *p_hwfn,
*
* @param p_hwfn
* @param p_ptt
- * @param val
* @param hw_addr
*/
u32 ecore_rd(struct ecore_hwfn *p_hwfn,
@@ -269,4 +268,29 @@ enum _ecore_status_t ecore_dmae_sanity(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
const char *phase);
+/**
+ * @brief ecore_ppfid_wr - Write value to BAR using the given ptt while
+ * pretending to a PF to which the given PPFID pertains.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param abs_ppfid
+ * @param hw_addr
+ * @param val
+ */
+void ecore_ppfid_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 abs_ppfid, u32 hw_addr, u32 val);
+
+/**
+ * @brief ecore_ppfid_rd - Read value from BAR using the given ptt while
+ * pretending to a PF to which the given PPFID pertains.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param abs_ppfid
+ * @param hw_addr
+ */
+u32 ecore_ppfid_rd(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 abs_ppfid, u32 hw_addr);
+
#endif /* __ECORE_HW_H__ */
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index b7636f3..47c1be2 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -101,7 +101,8 @@ static enum _ecore_status_t ecore_init_rt(struct ecore_hwfn *p_hwfn,
rc = ecore_dmae_host2grc(p_hwfn, p_ptt,
(osal_uintptr_t)(p_init_val + i),
- addr + (i << 2), segment, 0);
+ addr + (i << 2), segment,
+ OSAL_NULL /* default parameters */);
if (rc != ECORE_SUCCESS)
return rc;
@@ -165,8 +166,9 @@ static enum _ecore_status_t ecore_init_array_dmae(struct ecore_hwfn *p_hwfn,
} else {
rc = ecore_dmae_host2grc(p_hwfn, p_ptt,
(osal_uintptr_t)(p_buf +
- dmae_data_offset),
- addr, size, 0);
+ dmae_data_offset),
+ addr, size,
+ OSAL_NULL /* default parameters */);
}
return rc;
@@ -177,13 +179,15 @@ static enum _ecore_status_t ecore_init_fill_dmae(struct ecore_hwfn *p_hwfn,
u32 addr, u32 fill_count)
{
static u32 zero_buffer[DMAE_MAX_RW_SIZE];
+ struct ecore_dmae_params params;
OSAL_MEMSET(zero_buffer, 0, sizeof(u32) * DMAE_MAX_RW_SIZE);
+ OSAL_MEMSET(&params, 0, sizeof(params));
+ params.flags = ECORE_DMAE_FLAG_RW_REPL_SRC;
return ecore_dmae_host2grc(p_hwfn, p_ptt,
(osal_uintptr_t)&zero_buffer[0],
- addr, fill_count,
- ECORE_DMAE_FLAG_RW_REPL_SRC);
+ addr, fill_count, &params);
}
static void ecore_init_fill(struct ecore_hwfn *p_hwfn,
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index fd8f657..e48a7bc 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -1561,11 +1561,13 @@ void ecore_int_cau_conf_sb(struct ecore_hwfn *p_hwfn,
ecore_dmae_host2grc(p_hwfn, p_ptt,
(u64)(osal_uintptr_t)&phys_addr,
CAU_REG_SB_ADDR_MEMORY +
- igu_sb_id * sizeof(u64), 2, 0);
+ igu_sb_id * sizeof(u64), 2,
+ OSAL_NULL /* default parameters */);
ecore_dmae_host2grc(p_hwfn, p_ptt,
(u64)(osal_uintptr_t)&sb_entry,
CAU_REG_SB_VAR_MEMORY +
- igu_sb_id * sizeof(u64), 2, 0);
+ igu_sb_id * sizeof(u64), 2,
+ OSAL_NULL /* default parameters */);
} else {
/* Initialize Status Block Address */
STORE_RT_REG_AGG(p_hwfn,
@@ -2646,7 +2648,8 @@ enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn,
rc = ecore_dmae_grc2host(p_hwfn, p_ptt, CAU_REG_SB_VAR_MEMORY +
sb_id * sizeof(u64),
- (u64)(osal_uintptr_t)&sb_entry, 2, 0);
+ (u64)(osal_uintptr_t)&sb_entry, 2,
+ OSAL_NULL /* default parameters */);
if (rc != ECORE_SUCCESS) {
DP_ERR(p_hwfn, "dmae_grc2host failed %d\n", rc);
return rc;
@@ -2659,8 +2662,8 @@ enum _ecore_status_t ecore_int_set_timer_res(struct ecore_hwfn *p_hwfn,
rc = ecore_dmae_host2grc(p_hwfn, p_ptt,
(u64)(osal_uintptr_t)&sb_entry,
- CAU_REG_SB_VAR_MEMORY +
- sb_id * sizeof(u64), 2, 0);
+ CAU_REG_SB_VAR_MEMORY + sb_id * sizeof(u64), 2,
+ OSAL_NULL /* default parameters */);
if (rc != ECORE_SUCCESS) {
DP_ERR(p_hwfn, "dmae_host2grc failed %d\n", rc);
return rc;
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index c17082e..5a0905e 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2217,7 +2217,8 @@ int ecore_get_rxq_coalesce(struct ecore_hwfn *p_hwfn,
rc = ecore_dmae_grc2host(p_hwfn, p_ptt, CAU_REG_SB_VAR_MEMORY +
p_cid->sb_igu_id * sizeof(u64),
- (u64)(osal_uintptr_t)&sb_entry, 2, 0);
+ (u64)(osal_uintptr_t)&sb_entry, 2,
+ OSAL_NULL /* default parameters */);
if (rc != ECORE_SUCCESS) {
DP_ERR(p_hwfn, "dmae_grc2host failed %d\n", rc);
return rc;
@@ -2251,7 +2252,8 @@ int ecore_get_txq_coalesce(struct ecore_hwfn *p_hwfn,
rc = ecore_dmae_grc2host(p_hwfn, p_ptt, CAU_REG_SB_VAR_MEMORY +
p_cid->sb_igu_id * sizeof(u64),
- (u64)(osal_uintptr_t)&sb_entry, 2, 0);
+ (u64)(osal_uintptr_t)&sb_entry, 2,
+ OSAL_NULL /* default parameters */);
if (rc != ECORE_SUCCESS) {
DP_ERR(p_hwfn, "dmae_grc2host failed %d\n", rc);
return rc;
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 3811d27..202db13 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -4144,6 +4144,75 @@ enum _ecore_status_t
return ECORE_SUCCESS;
}
+enum _ecore_status_t ecore_mcp_get_engine_config(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ struct ecore_dev *p_dev = p_hwfn->p_dev;
+ struct ecore_mcp_mb_params mb_params;
+ u8 fir_valid, l2_valid;
+ enum _ecore_status_t rc;
+
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = DRV_MSG_CODE_GET_ENGINE_CONFIG;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+ DP_INFO(p_hwfn,
+ "The get_engine_config command is unsupported by the MFW\n");
+ return ECORE_NOTIMPL;
+ }
+
+ fir_valid = GET_MFW_FIELD(mb_params.mcp_param,
+ FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALID);
+ if (fir_valid)
+ p_dev->fir_affin =
+ GET_MFW_FIELD(mb_params.mcp_param,
+ FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALUE);
+
+ l2_valid = GET_MFW_FIELD(mb_params.mcp_param,
+ FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALID);
+ if (l2_valid)
+ p_dev->l2_affin_hint =
+ GET_MFW_FIELD(mb_params.mcp_param,
+ FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALUE);
+
+ DP_INFO(p_hwfn,
+ "Engine affinity config: FIR={valid %hhd, value %hhd}, L2_hint={valid %hhd, value %hhd}\n",
+ fir_valid, p_dev->fir_affin, l2_valid, p_dev->l2_affin_hint);
+
+ return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t ecore_mcp_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt)
+{
+ struct ecore_dev *p_dev = p_hwfn->p_dev;
+ struct ecore_mcp_mb_params mb_params;
+ enum _ecore_status_t rc;
+
+ OSAL_MEM_ZERO(&mb_params, sizeof(mb_params));
+ mb_params.cmd = DRV_MSG_CODE_GET_PPFID_BITMAP;
+ rc = ecore_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ if (rc != ECORE_SUCCESS)
+ return rc;
+
+ if (mb_params.mcp_resp == FW_MSG_CODE_UNSUPPORTED) {
+ DP_INFO(p_hwfn,
+ "The get_ppfid_bitmap command is unsupported by the MFW\n");
+ return ECORE_NOTIMPL;
+ }
+
+ p_dev->ppfid_bitmap = GET_MFW_FIELD(mb_params.mcp_param,
+ FW_MB_PARAM_PPFID_BITMAP);
+
+ DP_VERBOSE(p_hwfn, ECORE_MSG_SP, "PPFID bitmap 0x%hhx\n",
+ p_dev->ppfid_bitmap);
+
+ return ECORE_SUCCESS;
+}
+
void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
u32 offset, u32 val)
{
diff --git a/drivers/net/qede/base/ecore_mcp.h b/drivers/net/qede/base/ecore_mcp.h
index 8e12531..2c052b7 100644
--- a/drivers/net/qede/base/ecore_mcp.h
+++ b/drivers/net/qede/base/ecore_mcp.h
@@ -25,9 +25,6 @@
rel_pfid)
#define MCP_PF_ID(p_hwfn) MCP_PF_ID_BY_REL(p_hwfn, (p_hwfn)->rel_pf_id)
-#define MFW_PORT(_p_hwfn) ((_p_hwfn)->abs_pf_id % \
- ecore_device_num_ports((_p_hwfn)->p_dev))
-
struct ecore_mcp_info {
/* List for mailbox commands which were sent and wait for a response */
osal_list_t cmd_list;
@@ -566,4 +563,22 @@ enum _ecore_status_t
void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
u32 offset, u32 val);
+/**
+ * @brief Get the engine affinity configuration.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+enum _ecore_status_t ecore_mcp_get_engine_config(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
+
+/**
+ * @brief Get the PPFID bitmap.
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+enum _ecore_status_t ecore_mcp_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt);
+
#endif /* __ECORE_MCP_H__ */
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 3ac1085..9e937e2 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -979,10 +979,12 @@ static u8 ecore_iov_alloc_vf_igu_sbs(struct ecore_hwfn *p_hwfn,
ecore_init_cau_sb_entry(p_hwfn, &sb_entry,
p_hwfn->rel_pf_id,
vf->abs_vf_id, 1);
+
ecore_dmae_host2grc(p_hwfn, p_ptt,
(u64)(osal_uintptr_t)&sb_entry,
CAU_REG_SB_VAR_MEMORY +
- p_block->igu_sb_id * sizeof(u64), 2, 0);
+ p_block->igu_sb_id * sizeof(u64), 2,
+ OSAL_NULL /* default parameters */);
}
vf->num_sbs = (u8)num_rx_queues;
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 46ec984..13c2e2d 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -1267,6 +1267,8 @@ struct public_drv_mb {
#define DRV_MSG_CODE_OEM_RESET_TO_DEFAULT 0x3f000000
#define DRV_MSG_CODE_OV_GET_CURR_CFG 0x40000000
#define DRV_MSG_CODE_GET_OEM_UPDATES 0x41000000
+/* params [31:8] - reserved, [7:0] - bitmap */
+#define DRV_MSG_CODE_GET_PPFID_BITMAP 0x43000000
/*deprecated don't use*/
#define DRV_MSG_CODE_INITIATE_FLR_DEPRECATED 0x02000000
@@ -1476,6 +1478,7 @@ struct public_drv_mb {
/* Param: Password len. Union: Plain Password */
#define DRV_MSG_CODE_ENCRYPT_PASSWORD 0x00360000
+#define DRV_MSG_CODE_GET_ENGINE_CONFIG 0x00370000 /* Param: None */
#define DRV_MSG_SEQ_NUMBER_MASK 0x0000ffff
@@ -1812,6 +1815,18 @@ struct public_drv_mb {
#define FW_MB_PARAM_OEM_UPDATE_S_TAG 0x02
#define FW_MB_PARAM_OEM_UPDATE_CFG 0x04
+#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALID_MASK 0x00000001
+#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALID_OFFSET 0
+#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALUE_MASK 0x00000002
+#define FW_MB_PARAM_ENG_CFG_FIR_AFFIN_VALUE_OFFSET 1
+#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALID_MASK 0x00000004
+#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALID_OFFSET 2
+#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALUE_MASK 0x00000008
+#define FW_MB_PARAM_ENG_CFG_L2_AFFIN_VALUE_OFFSET 3
+
+#define FW_MB_PARAM_PPFID_BITMAP_MASK 0xFF
+#define FW_MB_PARAM_PPFID_BITMAP_OFFSET 0
+
u32 drv_pulse_mb;
#define DRV_PULSE_SEQ_MASK 0x00007fff
#define DRV_PULSE_SYSTEM_TIME_MASK 0xffff0000
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index c3e0bd2..612337f 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1235,3 +1235,7 @@
#define DORQ_REG_TAG1_OVRD_MODE 0x1008b4UL
#define DORQ_REG_PF_EXT_VID_BB_K2 0x1008c8UL
#define PRS_REG_SEARCH_NON_IP_AS_GFT 0x1f11c0UL
+#define NIG_REG_LLH_PPFID2PFID_TBL_0 0x501970UL
+#define NIG_REG_PPF_TO_ENGINE_SEL 0x508900UL
+#define NIG_REG_LLH_ENG_CLS_ROCE_QP_SEL 0x501b98UL
+#define NIG_REG_LLH_FUNC_FILTER_HDR_SEL_BB_K2 0x501b40UL
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
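For reference, a minimal caller-side sketch of the reworked DMAE parameter
handling in the hunks above (the wrapper function and its arguments are
illustrative assumptions; the flag usage mirrors the converted
ecore_init_fill_dmae() caller, and callers that want the defaults simply
pass OSAL_NULL instead of a params pointer):

static enum _ecore_status_t
example_host2grc_fill(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
                      u64 host_phys, u32 grc_addr, u32 size_in_dwords)
{
        struct ecore_dmae_params params;

        /* Zero the params struct and set only the non-default flags */
        OSAL_MEMSET(&params, 0, sizeof(params));
        params.flags = ECORE_DMAE_FLAG_RW_REPL_SRC;

        /* The reworked signature takes a params pointer, not a u32 flags */
        return ecore_dmae_host2grc(p_hwfn, p_ptt, host_phys, grc_addr,
                                   size_in_dwords, &params);
}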
* [dpdk-dev] [PATCH 16/18] net/qede/base: add APIs for dscp priority map configuration
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (14 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 15/18] net/qede/base: add RL update params Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 17/18] net/qede/base: semantic changes Mody, Rasesh
` (2 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Add APIs for DSCP priority map configuration: ecore_dcbx_get_dscp_priority()
and ecore_dcbx_set_dscp_priority(). These base driver APIs can be used to
query and configure the DSCP-to-priority map.
Also configure the doorbell queue (DORQ) to use the VLAN ID/priority.
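A minimal caller-side sketch of the new APIs (illustrative only; the
hw-function, the acquired PTT and the DSCP/priority values below are
assumptions, not part of this patch):

static void example_remap_dscp(struct ecore_hwfn *p_hwfn,
                               struct ecore_ptt *p_ptt)
{
        u8 dscp_index = 24, pri = 0;

        /* Read the priority currently mapped to DSCP value 24 */
        if (ecore_dcbx_get_dscp_priority(p_hwfn, dscp_index, &pri) !=
            ECORE_SUCCESS)
                return;

        /* Remap DSCP 24 to priority 5 unless it is already set */
        if (pri != 5)
                (void)ecore_dcbx_set_dscp_priority(p_hwfn, p_ptt,
                                                   dscp_index, 5);
}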
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_dcbx.c | 85 +++++++++++++++++++++++++++-----
drivers/net/qede/base/ecore_dcbx_api.h | 10 ++++
drivers/net/qede/base/reg_addr.h | 1 +
3 files changed, 85 insertions(+), 11 deletions(-)
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 7668ad6..7981c42 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -129,7 +129,7 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
static void
ecore_dcbx_set_params(struct ecore_dcbx_results *p_data,
- struct ecore_hwfn *p_hwfn,
+ struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
bool enable, u8 prio, u8 tc,
enum dcbx_protocol_type type,
enum ecore_pci_personality personality)
@@ -154,12 +154,19 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
/* QM reconf data */
if (p_hwfn->hw_info.personality == personality)
p_hwfn->hw_info.offload_tc = tc;
+
+ /* Configure dcbx vlan priority in doorbell block for roce EDPM */
+ if (OSAL_TEST_BIT(ECORE_MF_UFP_SPECIFIC, &p_hwfn->p_dev->mf_bits) &&
+ (type == DCBX_PROTOCOL_ROCE)) {
+ ecore_wr(p_hwfn, p_ptt, DORQ_REG_TAG1_OVRD_MODE, 1);
+ ecore_wr(p_hwfn, p_ptt, DORQ_REG_PF_PCP_BB_K2, prio << 1);
+ }
}
/* Update app protocol data and hw_info fields with the TLV info */
static void
ecore_dcbx_update_app_info(struct ecore_dcbx_results *p_data,
- struct ecore_hwfn *p_hwfn,
+ struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
bool enable, u8 prio, u8 tc,
enum dcbx_protocol_type type)
{
@@ -175,7 +182,7 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
personality = ecore_dcbx_app_update[i].personality;
- ecore_dcbx_set_params(p_data, p_hwfn, enable,
+ ecore_dcbx_set_params(p_data, p_hwfn, p_ptt, enable,
prio, tc, type, personality);
}
}
@@ -231,7 +238,7 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
* reconfiguring QM. Get protocol specific data for PF update ramrod command.
*/
static enum _ecore_status_t
-ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn,
+ecore_dcbx_process_tlv(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
struct ecore_dcbx_results *p_data,
struct dcbx_app_priority_entry *p_tbl, u32 pri_tc_tbl,
int count, u8 dcbx_version)
@@ -280,8 +287,8 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
enable = true;
}
- ecore_dcbx_update_app_info(p_data, p_hwfn, enable,
- priority, tc, type);
+ ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt,
+ enable, priority, tc, type);
}
}
@@ -302,8 +309,8 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
if (p_data->arr[type].update)
continue;
- enable = (type == DCBX_PROTOCOL_ETH) ? false : !!dcbx_version;
- ecore_dcbx_update_app_info(p_data, p_hwfn, enable,
+ /* if no app tlv was present, don't override in FW */
+ ecore_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false,
priority, tc, type);
}
@@ -314,7 +321,7 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
* reconfiguring QM. Get protocol specific data for PF update ramrod command.
*/
static enum _ecore_status_t
-ecore_dcbx_process_mib_info(struct ecore_hwfn *p_hwfn)
+ecore_dcbx_process_mib_info(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt)
{
struct dcbx_app_priority_feature *p_app;
struct dcbx_app_priority_entry *p_tbl;
@@ -338,7 +345,7 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
p_info = &p_hwfn->hw_info;
num_entries = GET_MFW_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES);
- rc = ecore_dcbx_process_tlv(p_hwfn, &data, p_tbl, pri_tc_tbl,
+ rc = ecore_dcbx_process_tlv(p_hwfn, p_ptt, &data, p_tbl, pri_tc_tbl,
num_entries, dcbx_version);
if (rc != ECORE_SUCCESS)
return rc;
@@ -879,7 +886,7 @@ enum _ecore_status_t
if (type == ECORE_DCBX_OPERATIONAL_MIB) {
ecore_dcbx_get_dscp_params(p_hwfn, &p_hwfn->p_dcbx_info->get);
- rc = ecore_dcbx_process_mib_info(p_hwfn);
+ rc = ecore_dcbx_process_mib_info(p_hwfn, p_ptt);
if (!rc) {
/* reconfigure tcs of QM queues according
* to negotiation results
@@ -1540,3 +1547,59 @@ enum _ecore_status_t
return rc;
}
+
+enum _ecore_status_t
+ecore_dcbx_get_dscp_priority(struct ecore_hwfn *p_hwfn,
+ u8 dscp_index, u8 *p_dscp_pri)
+{
+ struct ecore_dcbx_get *p_dcbx_info;
+ enum _ecore_status_t rc;
+
+ if (dscp_index >= ECORE_DCBX_DSCP_SIZE) {
+ DP_ERR(p_hwfn, "Invalid dscp index %d\n", dscp_index);
+ return ECORE_INVAL;
+ }
+
+ p_dcbx_info = OSAL_ALLOC(p_hwfn->p_dev, GFP_KERNEL,
+ sizeof(*p_dcbx_info));
+ if (!p_dcbx_info)
+ return ECORE_NOMEM;
+
+ OSAL_MEMSET(p_dcbx_info, 0, sizeof(*p_dcbx_info));
+ rc = ecore_dcbx_query_params(p_hwfn, p_dcbx_info,
+ ECORE_DCBX_OPERATIONAL_MIB);
+ if (rc) {
+ OSAL_FREE(p_hwfn->p_dev, p_dcbx_info);
+ return rc;
+ }
+
+ *p_dscp_pri = p_dcbx_info->dscp.dscp_pri_map[dscp_index];
+ OSAL_FREE(p_hwfn->p_dev, p_dcbx_info);
+
+ return ECORE_SUCCESS;
+}
+
+enum _ecore_status_t
+ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 dscp_index, u8 pri_val)
+{
+ struct ecore_dcbx_set dcbx_set;
+ enum _ecore_status_t rc;
+
+ if (dscp_index >= ECORE_DCBX_DSCP_SIZE ||
+ pri_val >= ECORE_MAX_PFC_PRIORITIES) {
+ DP_ERR(p_hwfn, "Invalid dscp params: index = %d pri = %d\n",
+ dscp_index, pri_val);
+ return ECORE_INVAL;
+ }
+
+ OSAL_MEMSET(&dcbx_set, 0, sizeof(dcbx_set));
+ rc = ecore_dcbx_get_config_params(p_hwfn, &dcbx_set);
+ if (rc)
+ return rc;
+
+ dcbx_set.override_flags = ECORE_DCBX_OVERRIDE_DSCP_CFG;
+ dcbx_set.dscp.dscp_pri_map[dscp_index] = pri_val;
+
+ return ecore_dcbx_config_params(p_hwfn, p_ptt, &dcbx_set, 1);
+}
diff --git a/drivers/net/qede/base/ecore_dcbx_api.h b/drivers/net/qede/base/ecore_dcbx_api.h
index eaf8e08..6fad2ec 100644
--- a/drivers/net/qede/base/ecore_dcbx_api.h
+++ b/drivers/net/qede/base/ecore_dcbx_api.h
@@ -228,6 +228,16 @@ enum _ecore_status_t
ecore_lldp_set_system_tlvs(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
struct ecore_lldp_sys_tlvs *p_params);
+/* Returns priority value for a given dscp index */
+enum _ecore_status_t
+ecore_dcbx_get_dscp_priority(struct ecore_hwfn *p_hwfn,
+ u8 dscp_index, u8 *p_dscp_pri);
+
+/* Sets priority value for a given dscp index */
+enum _ecore_status_t
+ecore_dcbx_set_dscp_priority(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
+ u8 dscp_index, u8 pri_val);
+
static const struct ecore_dcbx_app_metadata ecore_dcbx_app_update[] = {
{DCBX_PROTOCOL_ISCSI, "ISCSI", ECORE_PCI_ISCSI},
{DCBX_PROTOCOL_FCOE, "FCOE", ECORE_PCI_FCOE},
diff --git a/drivers/net/qede/base/reg_addr.h b/drivers/net/qede/base/reg_addr.h
index 612337f..be59f77 100644
--- a/drivers/net/qede/base/reg_addr.h
+++ b/drivers/net/qede/base/reg_addr.h
@@ -1233,6 +1233,7 @@
#define NIG_REG_LLH_FUNC_TAG_EN 0x5019b0UL
#define NIG_REG_LLH_FUNC_TAG_VALUE 0x5019d0UL
#define DORQ_REG_TAG1_OVRD_MODE 0x1008b4UL
+#define DORQ_REG_PF_PCP_BB_K2 0x1008c4UL
#define DORQ_REG_PF_EXT_VID_BB_K2 0x1008c8UL
#define PRS_REG_SEARCH_NON_IP_AS_GFT 0x1f11c0UL
#define NIG_REG_LLH_PPFID2PFID_TBL_0 0x501970UL
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 17/18] net/qede/base: semantic changes
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (15 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 16/18] net/qede/base: add APIs for dscp priority map configuration Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-09-29 8:14 ` [dpdk-dev] [PATCH 18/18] net/qede: bump PMD version to 2.10.0.1 Mody, Rasesh
2018-10-03 10:32 ` [dpdk-dev] [PATCH 00/18] net/qede: base driver update Ferruh Yigit
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
This patch consists of semantic/formatting changes.
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/base/ecore_dcbx.c | 3 ++-
drivers/net/qede/base/ecore_init_ops.c | 12 ++++-----
drivers/net/qede/base/ecore_int.c | 3 ++-
drivers/net/qede/base/ecore_int_api.h | 3 ++-
drivers/net/qede/base/ecore_l2.c | 16 ++++++------
drivers/net/qede/base/ecore_mcp.c | 11 ++++----
drivers/net/qede/base/ecore_spq.c | 44 ++++++++++++++++++--------------
drivers/net/qede/base/ecore_sriov.c | 12 +++++----
8 files changed, 58 insertions(+), 46 deletions(-)
diff --git a/drivers/net/qede/base/ecore_dcbx.c b/drivers/net/qede/base/ecore_dcbx.c
index 7981c42..cbc69cd 100644
--- a/drivers/net/qede/base/ecore_dcbx.c
+++ b/drivers/net/qede/base/ecore_dcbx.c
@@ -325,7 +325,7 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
{
struct dcbx_app_priority_feature *p_app;
struct dcbx_app_priority_entry *p_tbl;
- struct ecore_dcbx_results data = { 0 };
+ struct ecore_dcbx_results data;
struct dcbx_ets_feature *p_ets;
struct ecore_hw_info *p_info;
u32 pri_tc_tbl, flags;
@@ -345,6 +345,7 @@ u8 ecore_dcbx_get_dscp_value(struct ecore_hwfn *p_hwfn, u8 pri)
p_info = &p_hwfn->hw_info;
num_entries = GET_MFW_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES);
+ OSAL_MEMSET(&data, 0, sizeof(struct ecore_dcbx_results));
rc = ecore_dcbx_process_tlv(p_hwfn, p_ptt, &data, p_tbl, pri_tc_tbl,
num_entries, dcbx_version);
if (rc != ECORE_SUCCESS)
diff --git a/drivers/net/qede/base/ecore_init_ops.c b/drivers/net/qede/base/ecore_init_ops.c
index 47c1be2..044308b 100644
--- a/drivers/net/qede/base/ecore_init_ops.c
+++ b/drivers/net/qede/base/ecore_init_ops.c
@@ -420,11 +420,11 @@ static u8 ecore_init_cmd_mode_match(struct ecore_hwfn *p_hwfn,
u16 *p_offset, int modes)
{
struct ecore_dev *p_dev = p_hwfn->p_dev;
- const u8 *modes_tree_buf;
u8 arg1, arg2, tree_val;
+ const u8 *modes_tree;
- modes_tree_buf = p_dev->fw_data->modes_tree_buf;
- tree_val = modes_tree_buf[(*p_offset)++];
+ modes_tree = p_dev->fw_data->modes_tree_buf;
+ tree_val = modes_tree[(*p_offset)++];
switch (tree_val) {
case INIT_MODE_OP_NOT:
return ecore_init_cmd_mode_match(p_hwfn, p_offset, modes) ^ 1;
@@ -474,12 +474,12 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
{
struct ecore_dev *p_dev = p_hwfn->p_dev;
u32 cmd_num, num_init_ops;
- union init_op *init_ops;
+ union init_op *init;
bool b_dmae = false;
enum _ecore_status_t rc = ECORE_SUCCESS;
num_init_ops = p_dev->fw_data->init_ops_size;
- init_ops = p_dev->fw_data->init_ops;
+ init = p_dev->fw_data->init_ops;
#ifdef CONFIG_ECORE_ZIPPED_FW
p_hwfn->unzip_buf = OSAL_ZALLOC(p_hwfn->p_dev, GFP_ATOMIC,
@@ -491,7 +491,7 @@ enum _ecore_status_t ecore_init_run(struct ecore_hwfn *p_hwfn,
#endif
for (cmd_num = 0; cmd_num < num_init_ops; cmd_num++) {
- union init_op *cmd = &init_ops[cmd_num];
+ union init_op *cmd = &init[cmd_num];
u32 data = OSAL_LE32_TO_CPU(cmd->raw.op_data);
switch (GET_FIELD(data, INIT_CALLBACK_OP_OP)) {
diff --git a/drivers/net/qede/base/ecore_int.c b/drivers/net/qede/base/ecore_int.c
index e48a7bc..7368d55 100644
--- a/drivers/net/qede/base/ecore_int.c
+++ b/drivers/net/qede/base/ecore_int.c
@@ -1224,8 +1224,9 @@ static enum _ecore_status_t ecore_int_attentions(struct ecore_hwfn *p_hwfn)
static void ecore_sb_ack_attn(struct ecore_hwfn *p_hwfn,
void OSAL_IOMEM *igu_addr, u32 ack_cons)
{
- struct igu_prod_cons_update igu_ack = { 0 };
+ struct igu_prod_cons_update igu_ack;
+ OSAL_MEMSET(&igu_ack, 0, sizeof(struct igu_prod_cons_update));
igu_ack.sb_id_and_flags =
((ack_cons << IGU_PROD_CONS_UPDATE_SB_INDEX_SHIFT) |
(1 << IGU_PROD_CONS_UPDATE_UPDATE_FLAG_SHIFT) |
diff --git a/drivers/net/qede/base/ecore_int_api.h b/drivers/net/qede/base/ecore_int_api.h
index 5b9c31d..42538a4 100644
--- a/drivers/net/qede/base/ecore_int_api.h
+++ b/drivers/net/qede/base/ecore_int_api.h
@@ -92,8 +92,9 @@ static OSAL_INLINE u16 ecore_sb_update_sb_idx(struct ecore_sb_info *sb_info)
static OSAL_INLINE void ecore_sb_ack(struct ecore_sb_info *sb_info,
enum igu_int_cmd int_cmd, u8 upd_flg)
{
- struct igu_prod_cons_update igu_ack = { 0 };
+ struct igu_prod_cons_update igu_ack;
+ OSAL_MEMSET(&igu_ack, 0, sizeof(struct igu_prod_cons_update));
igu_ack.sb_id_and_flags =
((sb_info->sb_ack << IGU_PROD_CONS_UPDATE_SB_INDEX_SHIFT) |
(upd_flg << IGU_PROD_CONS_UPDATE_UPDATE_FLAG_SHIFT) |
diff --git a/drivers/net/qede/base/ecore_l2.c b/drivers/net/qede/base/ecore_l2.c
index 5a0905e..8b9817e 100644
--- a/drivers/net/qede/base/ecore_l2.c
+++ b/drivers/net/qede/base/ecore_l2.c
@@ -2205,10 +2205,10 @@ enum _ecore_status_t
return ecore_spq_post(p_hwfn, p_ent, OSAL_NULL);
}
-int ecore_get_rxq_coalesce(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_queue_cid *p_cid,
- u16 *p_rx_coal)
+enum _ecore_status_t ecore_get_rxq_coalesce(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_queue_cid *p_cid,
+ u16 *p_rx_coal)
{
u32 coalesce, address, is_valid;
struct cau_sb_entry sb_entry;
@@ -2240,10 +2240,10 @@ int ecore_get_rxq_coalesce(struct ecore_hwfn *p_hwfn,
return ECORE_SUCCESS;
}
-int ecore_get_txq_coalesce(struct ecore_hwfn *p_hwfn,
- struct ecore_ptt *p_ptt,
- struct ecore_queue_cid *p_cid,
- u16 *p_tx_coal)
+enum _ecore_status_t ecore_get_txq_coalesce(struct ecore_hwfn *p_hwfn,
+ struct ecore_ptt *p_ptt,
+ struct ecore_queue_cid *p_cid,
+ u16 *p_tx_coal)
{
u32 coalesce, address, is_valid;
struct cau_sb_entry sb_entry;
diff --git a/drivers/net/qede/base/ecore_mcp.c b/drivers/net/qede/base/ecore_mcp.c
index 202db13..6c65606 100644
--- a/drivers/net/qede/base/ecore_mcp.c
+++ b/drivers/net/qede/base/ecore_mcp.c
@@ -3084,7 +3084,7 @@ enum _ecore_status_t ecore_mcp_phy_read(struct ecore_dev *p_dev, u32 cmd,
{
struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
struct ecore_ptt *p_ptt;
- u32 resp, param;
+ u32 resp = 0, param;
enum _ecore_status_t rc;
p_ptt = ecore_ptt_acquire(p_hwfn);
@@ -3124,7 +3124,7 @@ enum _ecore_status_t ecore_mcp_nvm_del_file(struct ecore_dev *p_dev, u32 addr)
{
struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
struct ecore_ptt *p_ptt;
- u32 resp, param;
+ u32 resp = 0, param;
enum _ecore_status_t rc;
p_ptt = ecore_ptt_acquire(p_hwfn);
@@ -3143,7 +3143,7 @@ enum _ecore_status_t ecore_mcp_nvm_put_file_begin(struct ecore_dev *p_dev,
{
struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
struct ecore_ptt *p_ptt;
- u32 resp, param;
+ u32 resp = 0, param;
enum _ecore_status_t rc;
p_ptt = ecore_ptt_acquire(p_hwfn);
@@ -3237,8 +3237,8 @@ enum _ecore_status_t ecore_mcp_phy_write(struct ecore_dev *p_dev, u32 cmd,
u32 addr, u8 *p_buf, u32 len)
{
struct ecore_hwfn *p_hwfn = ECORE_LEADING_HWFN(p_dev);
+ u32 resp = 0, param, nvm_cmd;
struct ecore_ptt *p_ptt;
- u32 resp, param, nvm_cmd;
enum _ecore_status_t rc;
p_ptt = ecore_ptt_acquire(p_hwfn);
@@ -4216,10 +4216,11 @@ enum _ecore_status_t ecore_mcp_get_ppfid_bitmap(struct ecore_hwfn *p_hwfn,
void ecore_mcp_wol_wr(struct ecore_hwfn *p_hwfn, struct ecore_ptt *p_ptt,
u32 offset, u32 val)
{
- struct ecore_mcp_mb_params mb_params = {0};
enum _ecore_status_t rc = ECORE_SUCCESS;
u32 dword = val;
+ struct ecore_mcp_mb_params mb_params;
+ OSAL_MEMSET(&mb_params, 0, sizeof(struct ecore_mcp_mb_params));
mb_params.cmd = DRV_MSG_CODE_WRITE_WOL_REG;
mb_params.param = offset;
mb_params.p_data_src = &dword;
diff --git a/drivers/net/qede/base/ecore_spq.c b/drivers/net/qede/base/ecore_spq.c
index 1a02ba2..88ad961 100644
--- a/drivers/net/qede/base/ecore_spq.c
+++ b/drivers/net/qede/base/ecore_spq.c
@@ -282,6 +282,7 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
struct event_ring_entry *p_eqe)
{
ecore_spq_async_comp_cb cb;
+ enum _ecore_status_t rc;
if (p_eqe->protocol_id >= MAX_PROTOCOL_TYPE) {
DP_ERR(p_hwfn, "Wrong protocol: %d\n", p_eqe->protocol_id);
@@ -289,15 +290,22 @@ static enum _ecore_status_t ecore_spq_hw_post(struct ecore_hwfn *p_hwfn,
}
cb = p_hwfn->p_spq->async_comp_cb[p_eqe->protocol_id];
- if (cb) {
- return cb(p_hwfn, p_eqe->opcode, p_eqe->echo,
- &p_eqe->data, p_eqe->fw_return_code);
- } else {
+ if (!cb) {
DP_NOTICE(p_hwfn,
true, "Unknown Async completion for protocol: %d\n",
p_eqe->protocol_id);
return ECORE_INVAL;
}
+
+ rc = cb(p_hwfn, p_eqe->opcode, p_eqe->echo,
+ &p_eqe->data, p_eqe->fw_return_code);
+ if (rc != ECORE_SUCCESS)
+ DP_NOTICE(p_hwfn, true,
+ "Async completion callback failed, rc = %d [opcode %x, echo %x, fw_return_code %x]",
+ rc, p_eqe->opcode, p_eqe->echo,
+ p_eqe->fw_return_code);
+
+ return rc;
}
enum _ecore_status_t
@@ -342,7 +350,7 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
struct ecore_eq *p_eq = cookie;
struct ecore_chain *p_chain = &p_eq->chain;
u16 fw_cons_idx = 0;
- enum _ecore_status_t rc = 0;
+ enum _ecore_status_t rc = ECORE_SUCCESS;
if (!p_hwfn->p_spq) {
DP_ERR(p_hwfn, "Unexpected NULL p_spq\n");
@@ -366,7 +374,8 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
while (fw_cons_idx != ecore_chain_get_cons_idx(p_chain)) {
struct event_ring_entry *p_eqe = ecore_chain_consume(p_chain);
if (!p_eqe) {
- rc = ECORE_INVAL;
+ DP_ERR(p_hwfn,
+ "Unexpected NULL chain consumer entry\n");
break;
}
@@ -382,15 +391,13 @@ enum _ecore_status_t ecore_eq_completion(struct ecore_hwfn *p_hwfn,
*/
p_eqe->flags);
- if (GET_FIELD(p_eqe->flags, EVENT_RING_ENTRY_ASYNC)) {
- if (ecore_async_event_completion(p_hwfn, p_eqe))
- rc = ECORE_INVAL;
- } else if (ecore_spq_completion(p_hwfn,
- p_eqe->echo,
- p_eqe->fw_return_code,
- &p_eqe->data)) {
- rc = ECORE_INVAL;
- }
+ if (GET_FIELD(p_eqe->flags, EVENT_RING_ENTRY_ASYNC))
+ ecore_async_event_completion(p_hwfn, p_eqe);
+ else
+ ecore_spq_completion(p_hwfn,
+ p_eqe->echo,
+ p_eqe->fw_return_code,
+ &p_eqe->data);
ecore_chain_recycle_consumed(p_chain);
}
@@ -936,12 +943,11 @@ enum _ecore_status_t ecore_spq_completion(struct ecore_hwfn *p_hwfn,
struct ecore_spq_entry *found = OSAL_NULL;
enum _ecore_status_t rc;
- if (!p_hwfn)
- return ECORE_INVAL;
-
p_spq = p_hwfn->p_spq;
- if (!p_spq)
+ if (!p_spq) {
+ DP_ERR(p_hwfn, "Unexpected NULL p_spq\n");
return ECORE_INVAL;
+ }
OSAL_SPIN_LOCK(&p_spq->lock);
OSAL_LIST_FOR_EACH_ENTRY_SAFE(p_ent,
diff --git a/drivers/net/qede/base/ecore_sriov.c b/drivers/net/qede/base/ecore_sriov.c
index 9e937e2..db929f0 100644
--- a/drivers/net/qede/base/ecore_sriov.c
+++ b/drivers/net/qede/base/ecore_sriov.c
@@ -218,7 +218,7 @@ struct ecore_vf_info *ecore_iov_get_vf_info(struct ecore_hwfn *p_hwfn,
static struct ecore_queue_cid *
ecore_iov_get_vf_rx_queue_cid(struct ecore_vf_queue *p_queue)
{
- int i;
+ u32 i;
for (i = 0; i < MAX_QUEUES_PER_QZONE; i++) {
if (p_queue->cids[i].p_cid &&
@@ -240,7 +240,7 @@ static bool ecore_iov_validate_queue_mode(struct ecore_vf_info *p_vf,
enum ecore_iov_validate_q_mode mode,
bool b_is_tx)
{
- int i;
+ u32 i;
if (mode == ECORE_IOV_VALIDATE_Q_NA)
return true;
@@ -2089,8 +2089,8 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
struct ecore_ptt *p_ptt,
struct ecore_vf_info *vf)
{
- struct ecore_sp_vport_start_params params = { 0 };
struct ecore_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct ecore_sp_vport_start_params params;
struct vfpf_vport_start_tlv *start;
u8 status = PFVF_STATUS_SUCCESS;
struct ecore_vf_info *vf_info;
@@ -2141,6 +2141,7 @@ static void ecore_iov_vf_mbx_start_vport(struct ecore_hwfn *p_hwfn,
*p_bitmap |= 1 << VFPF_BULLETIN_UNTAGGED_DEFAULT;
}
+ OSAL_MEMSET(&params, 0, sizeof(struct ecore_sp_vport_start_params));
params.tpa_mode = start->tpa_mode;
params.remove_inner_vlan = start->inner_vlan_removal;
params.tx_switching = true;
@@ -3668,7 +3669,7 @@ static void ecore_iov_vf_pf_set_coalesce(struct ecore_hwfn *p_hwfn,
struct ecore_queue_cid *p_cid;
u16 rx_coal, tx_coal;
u16 qid;
- int i;
+ u32 i;
req = &mbx->req_virt->update_coalesce;
@@ -3748,7 +3749,8 @@ enum _ecore_status_t
struct ecore_queue_cid *p_cid;
struct ecore_vf_info *vf;
struct ecore_ptt *p_ptt;
- int i, rc = 0;
+ int rc = 0;
+ u32 i;
if (!ecore_iov_is_valid_vfid(p_hwfn, vf_id, true, true)) {
DP_NOTICE(p_hwfn, true,
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* [dpdk-dev] [PATCH 18/18] net/qede: bump PMD version to 2.10.0.1
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (16 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 17/18] net/qede/base: semantic changes Mody, Rasesh
@ 2018-09-29 8:14 ` Mody, Rasesh
2018-10-03 10:32 ` [dpdk-dev] [PATCH 00/18] net/qede: base driver update Ferruh Yigit
18 siblings, 0 replies; 20+ messages in thread
From: Mody, Rasesh @ 2018-09-29 8:14 UTC (permalink / raw)
To: dev; +Cc: Mody, Rasesh, ferruh.yigit, Dept-Eng DPDK Dev
Signed-off-by: Rasesh Mody <rasesh.mody@cavium.com>
---
drivers/net/qede/qede_ethdev.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/qede/qede_ethdev.h b/drivers/net/qede/qede_ethdev.h
index 8a9df98..622bd01 100644
--- a/drivers/net/qede/qede_ethdev.h
+++ b/drivers/net/qede/qede_ethdev.h
@@ -44,7 +44,7 @@
/* Driver versions */
#define QEDE_PMD_VER_PREFIX "QEDE PMD"
#define QEDE_PMD_VERSION_MAJOR 2
-#define QEDE_PMD_VERSION_MINOR 9
+#define QEDE_PMD_VERSION_MINOR 10
#define QEDE_PMD_VERSION_REVISION 0
#define QEDE_PMD_VERSION_PATCH 1
--
1.7.10.3
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [dpdk-dev] [PATCH 00/18] net/qede: base driver update
2018-09-29 8:13 [dpdk-dev] [PATCH 00/18] net/qede: base driver update Mody, Rasesh
` (17 preceding siblings ...)
2018-09-29 8:14 ` [dpdk-dev] [PATCH 18/18] net/qede: bump PMD version to 2.10.0.1 Mody, Rasesh
@ 2018-10-03 10:32 ` Ferruh Yigit
18 siblings, 0 replies; 20+ messages in thread
From: Ferruh Yigit @ 2018-10-03 10:32 UTC (permalink / raw)
To: Mody, Rasesh, dev; +Cc: Dept-Eng DPDK Dev
On 9/29/2018 9:13 AM, Mody, Rasesh wrote:
> This patch set updates the base driver to use FW 8.37.7.0 and adds
> support for other base driver functionalities. It also updates the
> PMD version to 2.10.0.1.
>
> Rasesh Mody (18):
> net/qede/base: upgrade to FW 8.37.7.0
> net/qede/base: check for EDPM enabled in DB recovery
> net/qede/base: add DPC sync after PF stop
> net/qede/base: workaround to indicate SHMEM data ready
> net/qede/base: add API to update FW RSS indirection table
> net/qede/base: add mf-bit/API for FIP special mode
> net/qede/base: add error handling for mutex allocation
> net/qede/base: adjust queue manager idx greater than max
> net/qede/base: add pretend function for port/PF
> net/qede/base: add support for SRIOV VF min rate
> net/qede/base: add periodic Doorbell Recovery support
> net/qede/base: get pre-negotiated OEM values
> net/qede/base: enable control frame filtering
> net/qede/base: changes for 100G
> net/qede/base: add RL update params
> net/qede/base: add APIs for dscp priority map configuration
> net/qede/base: semantic changes
> net/qede: bump PMD version to 2.10.0.1
Series applied to dpdk-next-net/master, thanks.
^ permalink raw reply [flat|nested] 20+ messages in thread