DPDK patches and discussions
* [dpdk-dev] [PATCH 00/49] shared code update
@ 2019-06-04  5:41 Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 01/49] net/ice/base: add macro for rounding up Leyi Rong
                   ` (50 more replies)
  0 siblings, 51 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:41 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong

Main changes:
1. Advanced switch rule support.
2. Add more APIs for tunnel management.
3. Add some minor features.
4. Code cleanup and bug fixes.

Leyi Rong (49):
  net/ice/base: add macro for rounding up
  net/ice/base: update standard extr seq to include DIR flag
  net/ice/base: add API to configure MIB
  net/ice/base: add more recipe commands
  net/ice/base: add funcs to create new switch recipe
  net/ice/base: programming a new switch recipe
  net/ice/base: replay advanced rule after reset
  net/ice/base: code for removing advanced rule
  net/ice/base: add lock around profile map list
  net/ice/base: save and post reset replay q bandwidth
  net/ice/base: rollback AVF RSS configurations
  net/ice/base: move RSS replay list
  net/ice/base: cache the data of set PHY cfg AQ in SW
  net/ice/base: refactor HW table init function
  net/ice/base: add compatibility check for package version
  net/ice/base: add API to init FW logging
  net/ice/base: use macro instead of magic 8
  net/ice/base: move and redefine ice debug cq API
  net/ice/base: separate out control queue lock creation
  net/ice/base: add helper functions for PHY caching
  net/ice/base: added sibling head to parse nodes
  net/ice/base: add and fix debuglogs
  net/ice/base: add support for reading REPC statistics
  net/ice/base: move VSI to VSI group
  net/ice/base: forbid VSI to remove unassociated ucast filter
  net/ice/base: add some minor features
  net/ice/base: call out dev/func caps when printing
  net/ice/base: add some minor features
  net/ice/base: cleanup update link info
  net/ice/base: add rd64 support
  net/ice/base: track HW stat registers past rollover
  net/ice/base: implement LLDP persistent settings
  net/ice/base: check new FD filter duplicate location
  net/ice/base: correct UDP/TCP PTYPE assignments
  net/ice/base: calculate rate limit burst size correctly
  net/ice/base: add lock around profile map list
  net/ice/base: fix Flow Director VSI count
  net/ice/base: use more efficient structures
  net/ice/base: slightly code update
  net/ice/base: code clean up
  net/ice/base: cleanup ice flex pipe files
  net/ice/base: change how VMDq capability is wrapped
  net/ice/base: refactor VSI node sched code
  net/ice/base: add some minor new defines
  net/ice/base: add 16-byte Flex Rx Descriptor
  net/ice/base: add vxlan/generic tunnel management
  net/ice/base: enable additional switch rules
  net/ice/base: allow forward to Q groups in switch rule
  net/ice/base: changes for reducing ice add adv rule time

 drivers/net/ice/base/ice_adminq_cmd.h    |  128 +-
 drivers/net/ice/base/ice_common.c        |  618 ++++--
 drivers/net/ice/base/ice_common.h        |   32 +-
 drivers/net/ice/base/ice_controlq.c      |  247 ++-
 drivers/net/ice/base/ice_controlq.h      |    4 +-
 drivers/net/ice/base/ice_dcb.c           |   74 +-
 drivers/net/ice/base/ice_dcb.h           |   12 +-
 drivers/net/ice/base/ice_fdir.c          |   11 +-
 drivers/net/ice/base/ice_fdir.h          |    1 -
 drivers/net/ice/base/ice_flex_pipe.c     | 1251 ++++++------
 drivers/net/ice/base/ice_flex_pipe.h     |   74 +-
 drivers/net/ice/base/ice_flex_type.h     |   54 +-
 drivers/net/ice/base/ice_flow.c          |  447 +++-
 drivers/net/ice/base/ice_flow.h          |   26 +-
 drivers/net/ice/base/ice_lan_tx_rx.h     |   31 +-
 drivers/net/ice/base/ice_nvm.c           |   18 +-
 drivers/net/ice/base/ice_osdep.h         |   23 +
 drivers/net/ice/base/ice_protocol_type.h |    7 +-
 drivers/net/ice/base/ice_sched.c         |  214 +-
 drivers/net/ice/base/ice_sched.h         |   24 +-
 drivers/net/ice/base/ice_switch.c        | 2382 +++++++++++++++++++++-
 drivers/net/ice/base/ice_switch.h        |   64 +-
 drivers/net/ice/base/ice_type.h          |   95 +-
 drivers/net/ice/ice_ethdev.c             |    4 +-
 24 files changed, 4478 insertions(+), 1363 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 01/49] net/ice/base: add macro for rounding up
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 02/49] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
                   ` (49 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Paul M Stillwell Jr

Add macro ROUND_UP for rounding up to an arbitrary multiple.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_type.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index e4979b832..d994ea3d2 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -18,6 +18,18 @@
 #define ICE_BYTES_PER_DWORD	4
 #define ICE_MAX_TRAFFIC_CLASS	8
 
+#ifndef ROUND_UP
+/**
+ * ROUND_UP - round up to next arbitrary multiple (not a power of 2)
+ * @a: value to round up
+ * @b: arbitrary multiple
+ *
+ * Round up to the next multiple of the arbitrary b.
+ * Note, when b is a power of 2 use ICE_ALIGN() instead.
+ */
+#define ROUND_UP(a, b)	((b) * DIVIDE_AND_ROUND_UP((a), (b)))
+#endif
+
 #ifndef MIN_T
 #define MIN_T(_t, _a, _b)	min((_t)(_a), (_t)(_b))
 #endif
-- 
2.17.1



* [dpdk-dev] [PATCH 02/49] net/ice/base: update standard extr seq to include DIR flag
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 01/49] net/ice/base: add macro for rounding up Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04 17:06   ` Maxime Coquelin
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 03/49] net/ice/base: add API to configure MIB Leyi Rong
                   ` (48 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

Once upon a time, the ice_flow_create_xtrct_seq() function in ice_flow.c
extracted only protocol fields explicitly specified by the caller of the
ice_flow_add_prof() function via its struct ice_flow_seg_info instances.
However, to support different ingress and egress flow profiles with the
same matching criteria, it would be necessary to also match on the packet
Direction metadata. The primary reason was that there could not be more
than one HW profile with the same CDID, PTG, and VSIG, and the Direction
metadata was not a parameter used to select HW profile IDs.

Thus, for ACL, the direction flag would need to be added to the extraction
sequence. This information will be used later as one criterion for ACL
scenario entry matching.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 43 +++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index be819e0e9..f1bf5b5e7 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -495,6 +495,42 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 	return ICE_SUCCESS;
 }
 
+/**
+ * ice_flow_xtract_pkt_flags - Create an extr sequence entry for packet flags
+ * @hw: pointer to the HW struct
+ * @params: information about the flow to be processed
+ * @flags: The value of pkt_flags[x:x] in RX/TX MDID metadata.
+ *
+ * This function allocates an extraction sequence entry for a DWORD-size
+ * chunk of the packet flags.
+ */
+static enum ice_status
+ice_flow_xtract_pkt_flags(struct ice_hw *hw,
+			  struct ice_flow_prof_params *params,
+			  enum ice_flex_mdid_pkt_flags flags)
+{
+	u8 fv_words = hw->blk[params->blk].es.fvw;
+	u8 idx;
+
+	/* Make sure the number of extraction sequence entries required does not
+	 * exceed the block's capacity.
+	 */
+	if (params->es_cnt >= fv_words)
+		return ICE_ERR_MAX_LIMIT;
+
+	/* some blocks require a reversed field vector layout */
+	if (hw->blk[params->blk].es.reverse)
+		idx = fv_words - params->es_cnt - 1;
+	else
+		idx = params->es_cnt;
+
+	params->es[idx].prot_id = ICE_PROT_META_ID;
+	params->es[idx].off = flags;
+	params->es_cnt++;
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_flow_xtract_fld - Create an extraction sequence entry for the given field
  * @hw: pointer to the HW struct
@@ -744,6 +780,13 @@ ice_flow_create_xtrct_seq(struct ice_hw *hw,
 	enum ice_status status = ICE_SUCCESS;
 	u8 i;
 
+	/* For ACL, we also need to extract the direction bit (Rx,Tx) data from
+	 * packet flags
+	 */
+	if (params->blk == ICE_BLK_ACL)
+		ice_flow_xtract_pkt_flags(hw, params,
+					  ICE_RX_MDID_PKT_FLAGS_15_0);
+
 	for (i = 0; i < params->prof->segs_cnt; i++) {
 		u64 match = params->prof->segs[i].match;
 		u16 j;
-- 
2.17.1



* [dpdk-dev] [PATCH 03/49] net/ice/base: add API to configure MIB
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 01/49] net/ice/base: add macro for rounding up Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 02/49] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04 17:14   ` Maxime Coquelin
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 04/49] net/ice/base: add more recipe commands Leyi Rong
                   ` (47 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Dave Ertman, Paul M Stillwell Jr

Add ice_cfg_lldp_mib_change and treat DCBx state NOT_STARTED as valid.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 41 +++++++++++++++++++++++++++++-----
 drivers/net/ice/base/ice_dcb.h |  3 ++-
 2 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index a7810578d..100c4bb0f 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -927,10 +927,11 @@ enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
 /**
  * ice_init_dcb
  * @hw: pointer to the HW struct
+ * @enable_mib_change: enable MIB change event
  *
  * Update DCB configuration from the Firmware
  */
-enum ice_status ice_init_dcb(struct ice_hw *hw)
+enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
 {
 	struct ice_port_info *pi = hw->port_info;
 	enum ice_status ret = ICE_SUCCESS;
@@ -944,7 +945,8 @@ enum ice_status ice_init_dcb(struct ice_hw *hw)
 	pi->dcbx_status = ice_get_dcbx_status(hw);
 
 	if (pi->dcbx_status == ICE_DCBX_STATUS_DONE ||
-	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS) {
+	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS ||
+	    pi->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {
 		/* Get current DCBX configuration */
 		ret = ice_get_dcb_cfg(pi);
 		pi->is_sw_lldp = (hw->adminq.sq_last_status == ICE_AQ_RC_EPERM);
@@ -952,13 +954,42 @@ enum ice_status ice_init_dcb(struct ice_hw *hw)
 			return ret;
 	} else if (pi->dcbx_status == ICE_DCBX_STATUS_DIS) {
 		return ICE_ERR_NOT_READY;
-	} else if (pi->dcbx_status == ICE_DCBX_STATUS_MULTIPLE_PEERS) {
 	}
 
 	/* Configure the LLDP MIB change event */
-	ret = ice_aq_cfg_lldp_mib_change(hw, true, NULL);
+	if (enable_mib_change) {
+		ret = ice_aq_cfg_lldp_mib_change(hw, true, NULL);
+		if (!ret)
+			pi->is_sw_lldp = false;
+	}
+
+	return ret;
+}
+
+/**
+ * ice_cfg_lldp_mib_change
+ * @hw: pointer to the HW struct
+ * @ena_mib: enable/disable MIB change event
+ *
+ * Configure (disable/enable) MIB
+ */
+enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status ret;
+
+	if (!hw->func_caps.common_cap.dcb)
+		return ICE_ERR_NOT_SUPPORTED;
+
+	/* Get DCBX status */
+	pi->dcbx_status = ice_get_dcbx_status(hw);
+
+	if (pi->dcbx_status == ICE_DCBX_STATUS_DIS)
+		return ICE_ERR_NOT_READY;
+
+	ret = ice_aq_cfg_lldp_mib_change(hw, ena_mib, NULL);
 	if (!ret)
-		pi->is_sw_lldp = false;
+		pi->is_sw_lldp = !ena_mib;
 
 	return ret;
 }
diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h
index d922c8a29..65d2bafef 100644
--- a/drivers/net/ice/base/ice_dcb.h
+++ b/drivers/net/ice/base/ice_dcb.h
@@ -197,7 +197,7 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 		   struct ice_dcbx_cfg *dcbcfg);
 enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi);
 enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi);
-enum ice_status ice_init_dcb(struct ice_hw *hw);
+enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change);
 void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg);
 enum ice_status
 ice_query_port_ets(struct ice_port_info *pi,
@@ -217,6 +217,7 @@ enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd);
 enum ice_status
 ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
 		       bool *dcbx_agent_status, struct ice_sq_cd *cd);
+enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib);
 enum ice_status
 ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
 			   struct ice_sq_cd *cd);
-- 
2.17.1



* [dpdk-dev] [PATCH 04/49] net/ice/base: add more recipe commands
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (2 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 03/49] net/ice/base: add API to configure MIB Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 05/49] net/ice/base: add funcs to create new switch recipe Leyi Rong
                   ` (46 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Lev Faerman, Paul M Stillwell Jr

Add the Add Recipe (0x0290), Recipe to Profile (0x0291), Get Recipe
(0x0292) and Get Recipe to Profile (0x0293) Commands.

Signed-off-by: Lev Faerman <lev.faerman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 73 +++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index bbdca83fc..7b0aa8aaa 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -696,6 +696,72 @@ struct ice_aqc_storm_cfg {
 
 #define ICE_MAX_NUM_RECIPES 64
 
+/* Add/Get Recipe (indirect 0x0290/0x0292)*/
+struct ice_aqc_add_get_recipe {
+	__le16 num_sub_recipes;	/* Input in Add cmd, Output in Get cmd */
+	__le16 return_index;	/* Input, used for Get cmd only */
+	u8 reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct ice_aqc_recipe_content {
+	u8 rid;
+#define ICE_AQ_RECIPE_ID_S		0
+#define ICE_AQ_RECIPE_ID_M		(0x3F << ICE_AQ_RECIPE_ID_S)
+#define ICE_AQ_RECIPE_ID_IS_ROOT	BIT(7)
+	u8 lkup_indx[5];
+#define ICE_AQ_RECIPE_LKUP_DATA_S	0
+#define ICE_AQ_RECIPE_LKUP_DATA_M	(0x3F << ICE_AQ_RECIPE_LKUP_DATA_S)
+#define ICE_AQ_RECIPE_LKUP_IGNORE	BIT(7)
+#define ICE_AQ_SW_ID_LKUP_MASK		0x00FF
+	__le16 mask[5];
+	u8 result_indx;
+#define ICE_AQ_RECIPE_RESULT_DATA_S	0
+#define ICE_AQ_RECIPE_RESULT_DATA_M	(0x3F << ICE_AQ_RECIPE_RESULT_DATA_S)
+#define ICE_AQ_RECIPE_RESULT_EN		BIT(7)
+	u8 rsvd0[3];
+	u8 act_ctrl_join_priority;
+	u8 act_ctrl_fwd_priority;
+#define ICE_AQ_RECIPE_FWD_PRIORITY_S	0
+#define ICE_AQ_RECIPE_FWD_PRIORITY_M	(0xF << ICE_AQ_RECIPE_FWD_PRIORITY_S)
+	u8 act_ctrl;
+#define ICE_AQ_RECIPE_ACT_NEED_PASS_L2	BIT(0)
+#define ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2	BIT(1)
+#define ICE_AQ_RECIPE_ACT_INV_ACT	BIT(2)
+#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_S	4
+#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_M	(0x3 << ICE_AQ_RECIPE_ACT_PRUNE_INDX_S)
+	u8 rsvd1;
+	__le32 dflt_act;
+#define ICE_AQ_RECIPE_DFLT_ACT_S	0
+#define ICE_AQ_RECIPE_DFLT_ACT_M	(0x7FFFF << ICE_AQ_RECIPE_DFLT_ACT_S)
+#define ICE_AQ_RECIPE_DFLT_ACT_VALID	BIT(31)
+};
+
+struct ice_aqc_recipe_data_elem {
+	u8 recipe_indx;
+	u8 resp_bits;
+#define ICE_AQ_RECIPE_WAS_UPDATED	BIT(0)
+	u8 rsvd0[2];
+	u8 recipe_bitmap[8];
+	u8 rsvd1[4];
+	struct ice_aqc_recipe_content content;
+	u8 rsvd2[20];
+};
+
+/* This struct contains a number of entries as per the
+ * num_sub_recipes in the command
+ */
+struct ice_aqc_add_get_recipe_data {
+	struct ice_aqc_recipe_data_elem recipe[1];
+};
+
+/* Set/Get Recipes to Profile Association (direct 0x0291/0x0293) */
+struct ice_aqc_recipe_to_profile {
+	__le16 profile_id;
+	u8 rsvd[6];
+	ice_declare_bitmap(recipe_assoc, ICE_MAX_NUM_RECIPES);
+};
 
 /* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
  */
@@ -2210,6 +2276,8 @@ struct ice_aq_desc {
 		struct ice_aqc_get_sw_cfg get_sw_conf;
 		struct ice_aqc_sw_rules sw_rules;
 		struct ice_aqc_storm_cfg storm_conf;
+		struct ice_aqc_add_get_recipe add_get_recipe;
+		struct ice_aqc_recipe_to_profile recipe_to_profile;
 		struct ice_aqc_get_topo get_topo;
 		struct ice_aqc_sched_elem_cmd sched_elem_cmd;
 		struct ice_aqc_query_txsched_res query_sched_res;
@@ -2369,6 +2437,11 @@ enum ice_adminq_opc {
 	ice_aqc_opc_set_storm_cfg			= 0x0280,
 	ice_aqc_opc_get_storm_cfg			= 0x0281,
 
+	/* recipe commands */
+	ice_aqc_opc_add_recipe				= 0x0290,
+	ice_aqc_opc_recipe_to_profile			= 0x0291,
+	ice_aqc_opc_get_recipe				= 0x0292,
+	ice_aqc_opc_get_recipe_to_profile		= 0x0293,
 
 	/* switch rules population commands */
 	ice_aqc_opc_add_sw_rules			= 0x02A0,
-- 
2.17.1



* [dpdk-dev] [PATCH 05/49] net/ice/base: add funcs to create new switch recipe
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (3 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 04/49] net/ice/base: add more recipe commands Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04 17:27   ` Maxime Coquelin
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 06/49] net/ice/base: programming a " Leyi Rong
                   ` (45 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Grishma Kotecha, Paul M Stillwell Jr

Add functions to support the following admin queue commands:
1. 0x0208: allocate resource to hold a switch recipe. This is needed
when a new switch recipe needs to be created.
2. 0x0290: create a recipe with protocol header information and
other details that determine how this recipe's filter works.
3. 0x0292: get details of an existing recipe.
4. 0x0291: associate a switch recipe to a profile.

Signed-off-by: Grishma Kotecha <grishma.kotecha@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 132 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h |  12 +++
 2 files changed, 144 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index a1c29d606..b84a07459 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -914,6 +914,138 @@ ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz,
 	return ice_aq_send_cmd(hw, &desc, rule_list, rule_list_sz, cd);
 }
 
+/**
+ * ice_aq_add_recipe - add switch recipe
+ * @hw: pointer to the HW struct
+ * @s_recipe_list: pointer to switch rule population list
+ * @num_recipes: number of switch recipes in the list
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add(0x0290)
+ */
+enum ice_status
+ice_aq_add_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 num_recipes, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_get_recipe *cmd;
+	struct ice_aq_desc desc;
+	u16 buf_size;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_recipe");
+	cmd = &desc.params.add_get_recipe;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_recipe);
+
+	cmd->num_sub_recipes = CPU_TO_LE16(num_recipes);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	buf_size = num_recipes * sizeof(*s_recipe_list);
+
+	return ice_aq_send_cmd(hw, &desc, s_recipe_list, buf_size, cd);
+}
+
+/**
+ * ice_aq_get_recipe - get switch recipe
+ * @hw: pointer to the HW struct
+ * @s_recipe_list: pointer to switch rule population list
+ * @num_recipes: pointer to the number of recipes (input and output)
+ * @recipe_root: root recipe number of recipe(s) to retrieve
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get(0x0292)
+ *
+ * On input, *num_recipes should equal the number of entries in s_recipe_list.
+ * On output, *num_recipes will equal the number of entries returned in
+ * s_recipe_list.
+ *
+ * The caller must supply enough space in s_recipe_list to hold all possible
+ * recipes and *num_recipes must equal ICE_MAX_NUM_RECIPES.
+ */
+enum ice_status
+ice_aq_get_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 *num_recipes, u16 recipe_root, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_get_recipe *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 buf_size;
+
+	if (*num_recipes != ICE_MAX_NUM_RECIPES)
+		return ICE_ERR_PARAM;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_get_recipe");
+	cmd = &desc.params.add_get_recipe;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_recipe);
+
+	cmd->return_index = CPU_TO_LE16(recipe_root);
+	cmd->num_sub_recipes = 0;
+
+	buf_size = *num_recipes * sizeof(*s_recipe_list);
+
+	status = ice_aq_send_cmd(hw, &desc, s_recipe_list, buf_size, cd);
+	/* cppcheck-suppress constArgument */
+	*num_recipes = LE16_TO_CPU(cmd->num_sub_recipes);
+
+	return status;
+}
+
+/**
+ * ice_aq_map_recipe_to_profile - Map recipe to packet profile
+ * @hw: pointer to the HW struct
+ * @profile_id: package profile ID to associate the recipe with
+ * @r_bitmap: Recipe bitmap filled in and need to be returned as response
+ * @cd: pointer to command details structure or NULL
+ * Recipe to profile association (0x0291)
+ */
+enum ice_status
+ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd)
+{
+	struct ice_aqc_recipe_to_profile *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_assoc_recipe_to_prof");
+	cmd = &desc.params.recipe_to_profile;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_recipe_to_profile);
+	cmd->profile_id = CPU_TO_LE16(profile_id);
+	/* Set the recipe ID bit in the bitmask to let the device know which
+	 * profile we are associating the recipe to
+	 */
+	ice_memcpy(cmd->recipe_assoc, r_bitmap, sizeof(cmd->recipe_assoc),
+		   ICE_NONDMA_TO_NONDMA);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_alloc_recipe - add recipe resource
+ * @hw: pointer to the hardware structure
+ * @rid: recipe ID returned as response to AQ call
+ */
+enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *rid)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	enum ice_status status;
+	u16 buf_len;
+
+	buf_len = sizeof(*sw_buf);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+
+	sw_buf->num_elems = CPU_TO_LE16(1);
+	sw_buf->res_type = CPU_TO_LE16((ICE_AQC_RES_TYPE_RECIPE <<
+					ICE_AQC_RES_TYPE_S) |
+					ICE_AQC_RES_TYPE_FLAG_SHARED);
+	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len,
+				       ice_aqc_opc_alloc_res, NULL);
+	if (!status)
+		*rid = LE16_TO_CPU(sw_buf->elem[0].e.sw_resp);
+	ice_free(hw, sw_buf);
+
+	return status;
+}
 
 /* ice_init_port_info - Initialize port_info with switch configuration data
  * @pi: pointer to port_info
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 13525d8d0..fd61c0eea 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -408,8 +408,20 @@ enum ice_status
 ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
 			 u16 *vid);
 
+enum ice_status
+ice_aq_add_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 num_recipes, struct ice_sq_cd *cd);
 
+enum ice_status
+ice_aq_get_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 *num_recipes, u16 recipe_root, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd);
 
+enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *recipe_id);
 enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
 
 enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
-- 
2.17.1



* [dpdk-dev] [PATCH 06/49] net/ice/base: programming a new switch recipe
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (4 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 05/49] net/ice/base: add funcs to create new switch recipe Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset Leyi Rong
                   ` (44 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Grishma Kotecha, Paul M Stillwell Jr

1. Added an interface to support adding advanced switch rules.
2. Advanced rules are provided in the form of protocol headers and values
to match, in addition to actions (only limited actions are currently supported).
3. Retrieve field vectors for ICE configuration package to determine
extracted fields and extracted locations for recipe creation.
4. Chain multiple recipes together to match multiple protocol headers.
5. Add structure to manage the dynamic recipes.

Signed-off-by: Grishma Kotecha <grishma.kotecha@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c |   33 +-
 drivers/net/ice/base/ice_flex_pipe.h |    7 +-
 drivers/net/ice/base/ice_switch.c    | 1640 ++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h    |   21 +
 4 files changed, 1698 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 14e632fab..babad94f8 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -734,7 +734,7 @@ static void ice_release_global_cfg_lock(struct ice_hw *hw)
  *
  * This function will request ownership of the change lock.
  */
-static enum ice_status
+enum ice_status
 ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access)
 {
 	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_change_lock");
@@ -749,7 +749,7 @@ ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access)
  *
  * This function will release the change lock using the proper Admin Command.
  */
-static void ice_release_change_lock(struct ice_hw *hw)
+void ice_release_change_lock(struct ice_hw *hw)
 {
 	ice_debug(hw, ICE_DBG_TRACE, "ice_release_change_lock");
 
@@ -1801,6 +1801,35 @@ void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
 	ice_free(hw, bld);
 }
 
+/**
+ * ice_find_prot_off - find prot ID and offset pair, based on prof and FV index
+ * @hw: pointer to the hardware structure
+ * @blk: hardware block
+ * @prof: profile ID
+ * @fv_idx: field vector word index
+ * @prot: variable to receive the protocol ID
+ * @off: variable to receive the protocol offset
+ */
+enum ice_status
+ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
+		  u8 *prot, u16 *off)
+{
+	struct ice_fv_word *fv_ext;
+
+	if (prof >= hw->blk[blk].es.count)
+		return ICE_ERR_PARAM;
+
+	if (fv_idx >= hw->blk[blk].es.fvw)
+		return ICE_ERR_PARAM;
+
+	fv_ext = hw->blk[blk].es.t + (prof * hw->blk[blk].es.fvw);
+
+	*prot = fv_ext[fv_idx].prot_id;
+	*off = fv_ext[fv_idx].off;
+
+	return ICE_SUCCESS;
+}
+
 /* PTG Management */
 
 /**
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 00c2b6682..2710dded6 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -15,7 +15,12 @@
 
 enum ice_status
 ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count);
-
+enum ice_status
+ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access);
+void ice_release_change_lock(struct ice_hw *hw);
+enum ice_status
+ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
+		  u8 *prot, u16 *off);
 struct ice_generic_seg_hdr *
 ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
 		    struct ice_pkg_hdr *pkg_hdr);
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index b84a07459..c53021aed 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -53,6 +53,210 @@ static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
 	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
 	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
 
+static const
+u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,		/* Ether ends */
+			  0x45, 0, 0, 0x3E,	/* IP starts */
+			  0, 0, 0, 0,
+			  0, 0x2F, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* IP ends */
+			  0x80, 0, 0x65, 0x58,	/* GRE starts */
+			  0, 0, 0, 0,		/* GRE ends */
+			  0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,		/* Ether ends */
+			  0x45, 0, 0, 0x14,	/* IP starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0		/* IP ends */
+			};
+
+static const u8
+dummy_udp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,		/* Ether ends */
+			  0x45, 0, 0, 0x32,	/* IP starts */
+			  0, 0, 0, 0,
+			  0, 0x11, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* IP ends */
+			  0, 0, 0x12, 0xB5,	/* UDP start*/
+			  0, 0x1E, 0, 0,	/* UDP end*/
+			  0, 0, 0, 0,		/* VXLAN start */
+			  0, 0, 0, 0,		/* VXLAN end*/
+			  0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0			/* Ether ends */
+			};
+
+static const u8
+dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,              /* Ether ends */
+			  0x45, 0, 0, 0x28,     /* IP starts */
+			  0, 0x01, 0, 0,
+			  0x40, 0x06, 0xF5, 0x69,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,   /* IP ends */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x50, 0x02, 0x20,
+			  0, 0x9, 0x79, 0, 0,
+			  0, 0 /* 2 bytes padding for 4 byte alignment*/
+			};
+
+/* this is a recipe to profile bitmap association */
+static ice_declare_bitmap(recipe_to_profile[ICE_MAX_NUM_RECIPES],
+			  ICE_MAX_NUM_PROFILES);
+static ice_declare_bitmap(available_result_ids, ICE_CHAIN_FV_INDEX_START + 1);
+
+/**
+ * ice_get_recp_frm_fw - update SW bookkeeping from FW recipe entries
+ * @hw: pointer to hardware structure
+ * @recps: struct that we need to populate
+ * @rid: recipe ID that we are populating
+ *
+ * This function is used to populate all the necessary entries into our
+ * bookkeeping so that we have a current list of all the recipes that are
+ * programmed in the firmware.
+ */
+static enum ice_status
+ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
+{
+	u16 i, sub_recps, fv_word_idx = 0, result_idx = 0;
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_PROFILES);
+	u16 result_idxs[ICE_MAX_CHAIN_RECIPE] = { 0 };
+	struct ice_aqc_recipe_data_elem *tmp;
+	u16 num_recps = ICE_MAX_NUM_RECIPES;
+	struct ice_prot_lkup_ext *lkup_exts;
+	enum ice_status status;
+
+	/* we need a buffer big enough to accommodate all the recipes */
+	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
+		ICE_MAX_NUM_RECIPES, sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	tmp[0].recipe_indx = rid;
+	status = ice_aq_get_recipe(hw, tmp, &num_recps, rid, NULL);
+	/* A non-zero status means the recipe doesn't exist */
+	if (status)
+		goto err_unroll;
+	lkup_exts = &recps[rid].lkup_exts;
+	/* start populating all the entries for recps[rid] based on lkups from
+	 * firmware
+	 */
+	for (sub_recps = 0; sub_recps < num_recps; sub_recps++) {
+		struct ice_aqc_recipe_data_elem root_bufs = tmp[sub_recps];
+		struct ice_recp_grp_entry *rg_entry;
+		u8 prof_id, prot = 0;
+		u16 off = 0;
+
+		rg_entry = (struct ice_recp_grp_entry *)
+			ice_malloc(hw, sizeof(*rg_entry));
+		if (!rg_entry) {
+			status = ICE_ERR_NO_MEMORY;
+			goto err_unroll;
+		}
+		/* Avoid the 8th bit since it's the result enable bit */
+		result_idxs[result_idx] = root_bufs.content.result_indx &
+			~ICE_AQ_RECIPE_RESULT_EN;
+		/* Check if result enable bit is set */
+		if (root_bufs.content.result_indx & ICE_AQ_RECIPE_RESULT_EN)
+			ice_clear_bit(ICE_CHAIN_FV_INDEX_START -
+				      result_idxs[result_idx++],
+				      available_result_ids);
+		ice_memcpy(r_bitmap,
+			   recipe_to_profile[tmp[sub_recps].recipe_indx],
+			   sizeof(r_bitmap), ICE_NONDMA_TO_NONDMA);
+		/* get the first profile that is associated with rid */
+		prof_id = ice_find_first_bit(r_bitmap, ICE_MAX_NUM_PROFILES);
+		for (i = 0; i < ICE_NUM_WORDS_RECIPE; i++) {
+			u8 lkup_indx = root_bufs.content.lkup_indx[i + 1];
+
+			rg_entry->fv_idx[i] = lkup_indx;
+			/* If the recipe is a chained recipe, then each of its
+			 * child recipes' results will have a result index.
+			 * When filling fv_words we should not use those result
+			 * indices; we only need the protocol IDs and offsets.
+			 * Skip any fv_idx that stores a result index, as well
+			 * as any fv_idx equal to ICE_AQ_RECIPE_LKUP_IGNORE or
+			 * 0, since those are not valid offset values.
+			 */
+			if (result_idxs[0] == rg_entry->fv_idx[i] ||
+			    result_idxs[1] == rg_entry->fv_idx[i] ||
+			    result_idxs[2] == rg_entry->fv_idx[i] ||
+			    result_idxs[3] == rg_entry->fv_idx[i] ||
+			    result_idxs[4] == rg_entry->fv_idx[i] ||
+			    rg_entry->fv_idx[i] == ICE_AQ_RECIPE_LKUP_IGNORE ||
+			    rg_entry->fv_idx[i] == 0)
+				continue;
+
+			ice_find_prot_off(hw, ICE_BLK_SW, prof_id,
+					  rg_entry->fv_idx[i], &prot, &off);
+			lkup_exts->fv_words[fv_word_idx].prot_id = prot;
+			lkup_exts->fv_words[fv_word_idx].off = off;
+			fv_word_idx++;
+		}
+		/* populate rg_list with the data from the child entry of this
+		 * recipe
+		 */
+		LIST_ADD(&rg_entry->l_entry, &recps[rid].rg_list);
+	}
+	lkup_exts->n_val_words = fv_word_idx;
+	recps[rid].n_grp_count = num_recps;
+	recps[rid].root_buf = (struct ice_aqc_recipe_data_elem *)
+		ice_calloc(hw, recps[rid].n_grp_count,
+			   sizeof(struct ice_aqc_recipe_data_elem));
+	if (!recps[rid].root_buf) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_unroll;
+	}
+
+	ice_memcpy(recps[rid].root_buf, tmp, recps[rid].n_grp_count *
+		   sizeof(*recps[rid].root_buf), ICE_NONDMA_TO_NONDMA);
+	recps[rid].recp_created = true;
+	if (tmp[sub_recps].content.rid & ICE_AQ_RECIPE_ID_IS_ROOT)
+		recps[rid].root_rid = rid;
+err_unroll:
+	ice_free(hw, tmp);
+	return status;
+}
+
+/**
+ * ice_get_recp_to_prof_map - updates recipe to profile mapping
+ * @hw: pointer to hardware structure
+ *
+ * This function populates the recipe_to_profile matrix: the index into the
+ * array is the recipe ID and each element is the bitmap of profiles that the
+ * recipe is mapped to.
+ */
+static void
+ice_get_recp_to_prof_map(struct ice_hw *hw)
+{
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+	u16 i;
+
+	for (i = 0; i < ICE_MAX_NUM_PROFILES; i++) {
+		u16 j;
+
+		ice_zero_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+		if (ice_aq_get_recipe_to_profile(hw, i, (u8 *)r_bitmap, NULL))
+			continue;
+
+		for (j = 0; j < ICE_MAX_NUM_RECIPES; j++)
+			if (ice_is_bit_set(r_bitmap, j))
+				ice_set_bit(i, recipe_to_profile[j]);
+	}
+}
 
 /**
  * ice_init_def_sw_recp - initialize the recipe book keeping tables
@@ -1018,6 +1222,35 @@ ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
+/**
+ * ice_aq_get_recipe_to_profile - get recipe to profile associations
+ * @hw: pointer to the HW struct
+ * @profile_id: package profile ID whose recipe associations are requested
+ * @r_bitmap: recipe bitmap filled in by this function as the response
+ * @cd: pointer to command details structure or NULL
+ * Get the recipes associated with the given profile ID (0x0293)
+ */
+enum ice_status
+ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd)
+{
+	struct ice_aqc_recipe_to_profile *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_get_recipe_to_prof");
+	cmd = &desc.params.recipe_to_profile;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_recipe_to_profile);
+	cmd->profile_id = CPU_TO_LE16(profile_id);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	if (!status)
+		ice_memcpy(r_bitmap, cmd->recipe_assoc,
+			   sizeof(cmd->recipe_assoc), ICE_NONDMA_TO_NONDMA);
+
+	return status;
+}
+
 /**
  * ice_alloc_recipe - add recipe resource
  * @hw: pointer to the hardware structure
@@ -3899,6 +4132,1413 @@ ice_add_mac_with_counter(struct ice_hw *hw, struct ice_fltr_info *f_info)
 	return ret;
 }
 
+/* This mapping table maps every word within a given protocol structure to
+ * the real byte offset as per the specification of that protocol header.
+ * For example, the dst address in the Ethernet header is 3 words, located at
+ * byte offsets 0, 2 and 4 in the actual packet header, and the src address
+ * is at offsets 6, 8 and 10.
+ * IMPORTANT: Every structure that is part of the "ice_prot_hdr" union should
+ * have a matching entry describing its fields. This needs to be updated if a
+ * new structure is added to that union.
+ */
+static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
+	{ ICE_MAC_OFOS,		{ 0, 2, 4, 6, 8, 10, 12 } },
+	{ ICE_MAC_IL,		{ 0, 2, 4, 6, 8, 10, 12 } },
+	{ ICE_IPV4_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 } },
+	{ ICE_IPV4_IL,		{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 } },
+	{ ICE_IPV6_IL,		{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
+				 26, 28, 30, 32, 34, 36, 38 } },
+	{ ICE_IPV6_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
+				 26, 28, 30, 32, 34, 36, 38 } },
+	{ ICE_TCP_IL,		{ 0, 2 } },
+	{ ICE_UDP_ILOS,		{ 0, 2 } },
+	{ ICE_SCTP_IL,		{ 0, 2 } },
+	{ ICE_VXLAN,		{ 8, 10, 12 } },
+	{ ICE_GENEVE,		{ 8, 10, 12 } },
+	{ ICE_VXLAN_GPE,	{ 0, 2, 4 } },
+	{ ICE_NVGRE,		{ 0, 2 } },
+	{ ICE_PROTOCOL_LAST,	{ 0 } }
+};
+
+/* The following table describes preferred grouping of recipes.
+ * If a recipe that needs to be programmed is a superset or matches one of the
+ * following combinations, then the recipe needs to be chained as per the
+ * following policy.
+ */
+static const struct ice_pref_recipe_group ice_recipe_pack[] = {
+	{3, { { ICE_MAC_OFOS_HW, 0, 0 }, { ICE_MAC_OFOS_HW, 2, 0 },
+	      { ICE_MAC_OFOS_HW, 4, 0 } } },
+	{4, { { ICE_MAC_IL_HW, 0, 0 }, { ICE_MAC_IL_HW, 2, 0 },
+	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } } },
+	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } } },
+	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } } },
+};
+
+static const struct ice_protocol_entry ice_prot_id_tbl[] = {
+	{ ICE_MAC_OFOS,		ICE_MAC_OFOS_HW },
+	{ ICE_MAC_IL,		ICE_MAC_IL_HW },
+	{ ICE_IPV4_OFOS,	ICE_IPV4_OFOS_HW },
+	{ ICE_IPV4_IL,		ICE_IPV4_IL_HW },
+	{ ICE_IPV6_OFOS,	ICE_IPV6_OFOS_HW },
+	{ ICE_IPV6_IL,		ICE_IPV6_IL_HW },
+	{ ICE_TCP_IL,		ICE_TCP_IL_HW },
+	{ ICE_UDP_ILOS,		ICE_UDP_ILOS_HW },
+	{ ICE_SCTP_IL,		ICE_SCTP_IL_HW },
+	{ ICE_VXLAN,		ICE_UDP_OF_HW },
+	{ ICE_GENEVE,		ICE_UDP_OF_HW },
+	{ ICE_VXLAN_GPE,	ICE_UDP_OF_HW },
+	{ ICE_NVGRE,		ICE_GRE_OF_HW },
+	{ ICE_PROTOCOL_LAST,	0 }
+};
+
+/**
+ * ice_find_recp - find a recipe
+ * @hw: pointer to the hardware structure
+ * @lkup_exts: extension sequence to match
+ *
+ * Returns index of matching recipe, or ICE_MAX_NUM_RECIPES if not found.
+ */
+static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
+{
+	struct ice_sw_recipe *recp;
+	u16 i;
+
+	ice_get_recp_to_prof_map(hw);
+	/* Initialize available_result_ids which tracks available result idx */
+	for (i = 0; i <= ICE_CHAIN_FV_INDEX_START; i++)
+		ice_set_bit(ICE_CHAIN_FV_INDEX_START - i,
+			    available_result_ids);
+
+	/* Walk through existing recipes to find a match */
+	recp = hw->switch_info->recp_list;
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		/* If a recipe for this ID has not been created in SW
+		 * bookkeeping, check whether FW has an entry for it. If FW
+		 * does, update our SW bookkeeping and continue with the
+		 * matching.
+		 */
+		if (!recp[i].recp_created)
+			if (ice_get_recp_frm_fw(hw,
+						hw->switch_info->recp_list, i))
+				continue;
+
+		/* Compare only if the number of valid words matches */
+		if (lkup_exts->n_val_words == recp[i].lkup_exts.n_val_words) {
+			struct ice_fv_word *a = lkup_exts->fv_words;
+			struct ice_fv_word *b = recp[i].lkup_exts.fv_words;
+			bool found = true;
+			u8 p, q;
+
+			for (p = 0; p < lkup_exts->n_val_words; p++) {
+				for (q = 0; q < recp[i].lkup_exts.n_val_words;
+				     q++) {
+					if (a[p].off == b[q].off &&
+					    a[p].prot_id == b[q].prot_id)
+						/* Found the "p"th word in the
+						 * given recipe
+						 */
+						break;
+				}
+				/* After walking through all the words in the
+				 * "i"th recipe, if the "p"th word was not
+				 * found, then this recipe is not the one we
+				 * are looking for. Break out of this loop and
+				 * try the next recipe.
+				 */
+				if (q >= recp[i].lkup_exts.n_val_words) {
+					found = false;
+					break;
+				}
+			}
+			/* If "found" was never set to false for the "i"th
+			 * recipe, we have found our match
+			 */
+			if (found)
+				return i; /* Return the recipe ID */
+		}
+	}
+	return ICE_MAX_NUM_RECIPES;
+}
+
+/**
+ * ice_prot_type_to_id - get protocol ID from protocol type
+ * @type: protocol type
+ * @id: pointer to variable that will receive the ID
+ *
+ * Returns true if found, false otherwise
+ */
+static bool ice_prot_type_to_id(enum ice_protocol_type type, u16 *id)
+{
+	u16 i;
+
+	for (i = 0; ice_prot_id_tbl[i].type != ICE_PROTOCOL_LAST; i++)
+		if (ice_prot_id_tbl[i].type == type) {
+			*id = ice_prot_id_tbl[i].protocol_id;
+			return true;
+		}
+	return false;
+}
+
+/**
+ * ice_fill_valid_words - calculate valid words in a lookup rule
+ * @rule: advanced rule with lookup information
+ * @lkup_exts: byte offset extractions of the words that are valid
+ *
+ * Calculate the valid words in a lookup rule using the mask value
+ */
+static u16
+ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
+		     struct ice_prot_lkup_ext *lkup_exts)
+{
+	u16 j, word = 0;
+	u16 prot_id;
+	u16 ret_val;
+
+	if (!ice_prot_type_to_id(rule->type, &prot_id))
+		return 0;
+
+	word = lkup_exts->n_val_words;
+
+	for (j = 0; j < sizeof(rule->m_u) / sizeof(u16); j++)
+		if (((u16 *)&rule->m_u)[j] == 0xffff &&
+		    rule->type < ARRAY_SIZE(ice_prot_ext)) {
+			/* No more space to accommodate */
+			if (word >= ICE_MAX_CHAIN_WORDS)
+				return 0;
+			lkup_exts->fv_words[word].off =
+				ice_prot_ext[rule->type].offs[j];
+			lkup_exts->fv_words[word].prot_id =
+				ice_prot_id_tbl[rule->type].protocol_id;
+			word++;
+		}
+
+	ret_val = word - lkup_exts->n_val_words;
+	lkup_exts->n_val_words = word;
+
+	return ret_val;
+}
+
+/**
+ * ice_find_prot_off_ind - check for specific ID and offset in rule
+ * @lkup_exts: an array of protocol header extractions
+ * @prot_type: protocol type to check
+ * @off: expected offset of the extraction
+ *
+ * Check if the prot_ext has given protocol ID and offset
+ */
+static u8
+ice_find_prot_off_ind(struct ice_prot_lkup_ext *lkup_exts, u8 prot_type,
+		      u16 off)
+{
+	u8 j;
+
+	for (j = 0; j < lkup_exts->n_val_words; j++)
+		if (lkup_exts->fv_words[j].off == off &&
+		    lkup_exts->fv_words[j].prot_id == prot_type)
+			return j;
+
+	return ICE_MAX_CHAIN_WORDS;
+}
+
+/**
+ * ice_is_recipe_subset - check if recipe group policy is a subset of lookup
+ * @lkup_exts: an array of protocol header extractions
+ * @r_policy: preferred recipe grouping policy
+ *
+ * Helper function to check if a given recipe group is a subset of the lookup:
+ * all of the words described by the recipe group must exist in the advanced
+ * rule lookup information
+ */
+static bool
+ice_is_recipe_subset(struct ice_prot_lkup_ext *lkup_exts,
+		     const struct ice_pref_recipe_group *r_policy)
+{
+	u8 ind[ICE_NUM_WORDS_RECIPE];
+	u8 count = 0;
+	u8 i;
+
+	/* check if everything in the r_policy is part of the entire rule */
+	for (i = 0; i < r_policy->n_val_pairs; i++) {
+		u8 j;
+
+		j = ice_find_prot_off_ind(lkup_exts, r_policy->pairs[i].prot_id,
+					  r_policy->pairs[i].off);
+		if (j >= ICE_MAX_CHAIN_WORDS)
+			return false;
+
+		/* Store the indexes found by the find function; these will
+		 * be used to mark the words as 'done'
+		 */
+		ind[count++] = j;
+	}
+
+	/* If the entire policy recipe was a true match, then mark the fields
+	 * that are covered by the recipe as 'done', meaning that these words
+	 * will be clumped together in one recipe.
+	 * "Done" here means that if a certain recipe group matches or is a
+	 * subset of the given rule, we mark all the corresponding offsets as
+	 * found, so the remaining recipes are created with whatever words are
+	 * left.
+	 */
+	for (i = 0; i < count; i++) {
+		u8 in = ind[i];
+
+		ice_set_bit(in, lkup_exts->done);
+	}
+	return true;
+}
+
+/**
+ * ice_create_first_fit_recp_def - Create a recipe grouping
+ * @hw: pointer to the hardware structure
+ * @lkup_exts: an array of protocol header extractions
+ * @rg_list: pointer to a list that stores new recipe groups
+ * @recp_cnt: pointer to a variable that stores returned number of recipe groups
+ *
+ * Using first fit algorithm, take all the words that are still not done
+ * and start grouping them in 4-word groups. Each group makes up one
+ * recipe.
+ */
+static enum ice_status
+ice_create_first_fit_recp_def(struct ice_hw *hw,
+			      struct ice_prot_lkup_ext *lkup_exts,
+			      struct LIST_HEAD_TYPE *rg_list,
+			      u8 *recp_cnt)
+{
+	struct ice_pref_recipe_group *grp = NULL;
+	u8 j;
+
+	*recp_cnt = 0;
+
+	/* Walk through every word in the rule to check if it is not done. If so
+	 * then this word needs to be part of a new recipe.
+	 */
+	for (j = 0; j < lkup_exts->n_val_words; j++)
+		if (!ice_is_bit_set(lkup_exts->done, j)) {
+			if (!grp ||
+			    grp->n_val_pairs == ICE_NUM_WORDS_RECIPE) {
+				struct ice_recp_grp_entry *entry;
+
+				entry = (struct ice_recp_grp_entry *)
+					ice_malloc(hw, sizeof(*entry));
+				if (!entry)
+					return ICE_ERR_NO_MEMORY;
+				LIST_ADD(&entry->l_entry, rg_list);
+				grp = &entry->r_group;
+				(*recp_cnt)++;
+			}
+
+			grp->pairs[grp->n_val_pairs].prot_id =
+				lkup_exts->fv_words[j].prot_id;
+			grp->pairs[grp->n_val_pairs].off =
+				lkup_exts->fv_words[j].off;
+			grp->n_val_pairs++;
+		}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_fill_fv_word_index - fill in the field vector indices for a recipe group
+ * @hw: pointer to the hardware structure
+ * @fv_list: field vector with the extraction sequence information
+ * @rg_list: recipe groupings with protocol-offset pairs
+ *
+ * Helper function to fill in the field vector indices for protocol-offset
+ * pairs. These indexes are then ultimately programmed into a recipe.
+ */
+static void
+ice_fill_fv_word_index(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list,
+		       struct LIST_HEAD_TYPE *rg_list)
+{
+	struct ice_sw_fv_list_entry *fv;
+	struct ice_recp_grp_entry *rg;
+	struct ice_fv_word *fv_ext;
+
+	if (LIST_EMPTY(fv_list))
+		return;
+
+	fv = LIST_FIRST_ENTRY(fv_list, struct ice_sw_fv_list_entry, list_entry);
+	fv_ext = fv->fv_ptr->ew;
+
+	LIST_FOR_EACH_ENTRY(rg, rg_list, ice_recp_grp_entry, l_entry) {
+		u8 i;
+
+		for (i = 0; i < rg->r_group.n_val_pairs; i++) {
+			struct ice_fv_word *pr;
+			u8 j;
+
+			pr = &rg->r_group.pairs[i];
+			for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
+				if (fv_ext[j].prot_id == pr->prot_id &&
+				    fv_ext[j].off == pr->off) {
+					/* Store index of field vector */
+					rg->fv_idx[i] = j;
+					break;
+				}
+		}
+	}
+}
+
+/**
+ * ice_add_sw_recipe - issue AQ calls to create a switch recipe
+ * @hw: pointer to hardware structure
+ * @rm: recipe management list entry
+ * @match_tun: true if the field vector index for the tunnel needs programming
+ */
+static enum ice_status
+ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
+		  bool match_tun)
+{
+	struct ice_aqc_recipe_data_elem *tmp;
+	struct ice_aqc_recipe_data_elem *buf;
+	struct ice_recp_grp_entry *entry;
+	enum ice_status status;
+	u16 recipe_count;
+	u8 chain_idx;
+	u8 recps = 0;
+
+	/* When more than one recipe is required, another recipe is needed to
+	 * chain them together. Matching a tunnel metadata ID takes up one of
+	 * the match fields in the chaining recipe, reducing the number of
+	 * chained recipes by one.
+	 */
+	if (rm->n_grp_count > 1)
+		rm->n_grp_count++;
+	if (rm->n_grp_count > ICE_MAX_CHAIN_RECIPE ||
+	    (match_tun && rm->n_grp_count > (ICE_MAX_CHAIN_RECIPE - 1)))
+		return ICE_ERR_MAX_LIMIT;
+
+	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
+							    ICE_MAX_NUM_RECIPES,
+							    sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	buf = (struct ice_aqc_recipe_data_elem *)
+		ice_calloc(hw, rm->n_grp_count, sizeof(*buf));
+	if (!buf) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_mem;
+	}
+
+	ice_zero_bitmap(rm->r_bitmap, ICE_MAX_NUM_RECIPES);
+	recipe_count = ICE_MAX_NUM_RECIPES;
+	status = ice_aq_get_recipe(hw, tmp, &recipe_count, ICE_SW_LKUP_MAC,
+				   NULL);
+	if (status || recipe_count == 0)
+		goto err_unroll;
+
+	/* Allocate the recipe resources, and configure them according to the
+	 * match fields from protocol headers and extracted field vectors.
+	 */
+	chain_idx = ICE_CHAIN_FV_INDEX_START -
+		ice_find_first_bit(available_result_ids,
+				   ICE_CHAIN_FV_INDEX_START + 1);
+	LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry, l_entry) {
+		u8 i;
+
+		status = ice_alloc_recipe(hw, &entry->rid);
+		if (status)
+			goto err_unroll;
+
+		/* Clear the result index of the located recipe, as this will be
+		 * updated, if needed, later in the recipe creation process.
+		 */
+		tmp[0].content.result_indx = 0;
+
+		buf[recps] = tmp[0];
+		buf[recps].recipe_indx = (u8)entry->rid;
+		/* if the recipe is a non-root recipe RID should be programmed
+		 * as 0 for the rules to be applied correctly.
+		 */
+		buf[recps].content.rid = 0;
+		ice_memset(&buf[recps].content.lkup_indx, 0,
+			   sizeof(buf[recps].content.lkup_indx),
+			   ICE_NONDMA_MEM);
+
+		/* All recipes use look-up field index 0 to match switch ID. */
+		buf[recps].content.lkup_indx[0] = 0;
+		buf[recps].content.mask[0] =
+			CPU_TO_LE16(ICE_AQ_SW_ID_LKUP_MASK);
+		/* Setup lkup_indx 1..4 to INVALID/ignore and set the mask
+		 * to be 0
+		 */
+		for (i = 1; i <= ICE_NUM_WORDS_RECIPE; i++) {
+			buf[recps].content.lkup_indx[i] = 0x80;
+			buf[recps].content.mask[i] = 0;
+		}
+
+		for (i = 0; i < entry->r_group.n_val_pairs; i++) {
+			buf[recps].content.lkup_indx[i + 1] = entry->fv_idx[i];
+			buf[recps].content.mask[i + 1] = CPU_TO_LE16(0xFFFF);
+		}
+
+		if (rm->n_grp_count > 1) {
+			entry->chain_idx = chain_idx;
+			buf[recps].content.result_indx =
+				ICE_AQ_RECIPE_RESULT_EN |
+				((chain_idx << ICE_AQ_RECIPE_RESULT_DATA_S) &
+				 ICE_AQ_RECIPE_RESULT_DATA_M);
+			ice_clear_bit(ICE_CHAIN_FV_INDEX_START - chain_idx,
+				      available_result_ids);
+			chain_idx = ICE_CHAIN_FV_INDEX_START -
+				ice_find_first_bit(available_result_ids,
+						   ICE_CHAIN_FV_INDEX_START +
+						   1);
+		}
+
+		/* fill recipe dependencies */
+		ice_zero_bitmap((ice_bitmap_t *)buf[recps].recipe_bitmap,
+				ICE_MAX_NUM_RECIPES);
+		ice_set_bit(buf[recps].recipe_indx,
+			    (ice_bitmap_t *)buf[recps].recipe_bitmap);
+		buf[recps].content.act_ctrl_fwd_priority = rm->priority;
+		recps++;
+	}
+
+	if (rm->n_grp_count == 1) {
+		rm->root_rid = buf[0].recipe_indx;
+		ice_set_bit(buf[0].recipe_indx, rm->r_bitmap);
+		buf[0].content.rid = rm->root_rid | ICE_AQ_RECIPE_ID_IS_ROOT;
+		if (sizeof(buf[0].recipe_bitmap) >= sizeof(rm->r_bitmap)) {
+			ice_memcpy(buf[0].recipe_bitmap, rm->r_bitmap,
+				   sizeof(buf[0].recipe_bitmap),
+				   ICE_NONDMA_TO_NONDMA);
+		} else {
+			status = ICE_ERR_BAD_PTR;
+			goto err_unroll;
+		}
+		/* Applicable only for ROOT_RECIPE: set the fwd_priority of
+		 * the recipe being created, if specified by the user. Any
+		 * advanced switch filter that results in a new extraction
+		 * sequence ends up creating a new recipe of type ROOT, and
+		 * recipes are usually associated with profiles. A switch rule
+		 * referring to the newly created recipe needs either a 'fwd'
+		 * or a 'join' priority, otherwise switch rule evaluation will
+		 * not happen correctly. In other words, if the switch rule is
+		 * to be evaluated on a priority basis, the recipe needs a
+		 * priority; otherwise it will be evaluated last.
+		 */
+		buf[0].content.act_ctrl_fwd_priority = rm->priority;
+	} else {
+		struct ice_recp_grp_entry *last_chain_entry;
+		u16 rid, i = 0;
+
+		/* Allocate the last recipe that will chain the outcomes of the
+		 * other recipes together
+		 */
+		status = ice_alloc_recipe(hw, &rid);
+		if (status)
+			goto err_unroll;
+
+		buf[recps].recipe_indx = (u8)rid;
+		buf[recps].content.rid = (u8)rid;
+		buf[recps].content.rid |= ICE_AQ_RECIPE_ID_IS_ROOT;
+		/* The newly created entry should also be part of rg_list to
+		 * make sure we have a complete recipe
+		 */
+		last_chain_entry = (struct ice_recp_grp_entry *)ice_malloc(hw,
+			sizeof(*last_chain_entry));
+		if (!last_chain_entry) {
+			status = ICE_ERR_NO_MEMORY;
+			goto err_unroll;
+		}
+		last_chain_entry->rid = rid;
+		ice_memset(&buf[recps].content.lkup_indx, 0,
+			   sizeof(buf[recps].content.lkup_indx),
+			   ICE_NONDMA_MEM);
+		buf[recps].content.lkup_indx[i] = hw->port_info->sw_id;
+		buf[recps].content.mask[i] =
+			CPU_TO_LE16(ICE_AQ_SW_ID_LKUP_MASK);
+		for (i = 1; i <= ICE_NUM_WORDS_RECIPE; i++) {
+			buf[recps].content.lkup_indx[i] =
+				ICE_AQ_RECIPE_LKUP_IGNORE;
+			buf[recps].content.mask[i] = 0;
+		}
+
+		i = 1;
+		/* update r_bitmap with the recp that is used for chaining */
+		ice_set_bit(rid, rm->r_bitmap);
+		/* This is the recipe that chains all the other recipes, so it
+		 * should not itself have a chaining index; ICE_INVAL_CHAIN_IND
+		 * indicates that.
+		 */
+		last_chain_entry->chain_idx = ICE_INVAL_CHAIN_IND;
+		LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry,
+				    l_entry) {
+			last_chain_entry->fv_idx[i] = entry->chain_idx;
+			buf[recps].content.lkup_indx[i] = entry->chain_idx;
+			buf[recps].content.mask[i++] = CPU_TO_LE16(0xFFFF);
+			ice_set_bit(entry->rid, rm->r_bitmap);
+		}
+		LIST_ADD(&last_chain_entry->l_entry, &rm->rg_list);
+		if (sizeof(buf[recps].recipe_bitmap) >=
+		    sizeof(rm->r_bitmap)) {
+			ice_memcpy(buf[recps].recipe_bitmap, rm->r_bitmap,
+				   sizeof(buf[recps].recipe_bitmap),
+				   ICE_NONDMA_TO_NONDMA);
+		} else {
+			status = ICE_ERR_BAD_PTR;
+			goto err_unroll;
+		}
+		buf[recps].content.act_ctrl_fwd_priority = rm->priority;
+
+		/* To differentiate among different UDP tunnels, a metadata ID
+		 * flag is used.
+		 */
+		if (match_tun) {
+			buf[recps].content.lkup_indx[i] = ICE_TUN_FLAG_FV_IND;
+			buf[recps].content.mask[i] =
+				CPU_TO_LE16(ICE_TUN_FLAG_MASK);
+		}
+
+		recps++;
+		rm->root_rid = (u8)rid;
+	}
+	status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
+	if (status)
+		goto err_unroll;
+
+	status = ice_aq_add_recipe(hw, buf, rm->n_grp_count, NULL);
+	ice_release_change_lock(hw);
+	if (status)
+		goto err_unroll;
+
+	/* Add every recipe that was just created to the recipe bookkeeping
+	 * list
+	 */
+	LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry, l_entry) {
+		struct ice_switch_info *sw = hw->switch_info;
+		struct ice_sw_recipe *recp;
+
+		recp = &sw->recp_list[entry->rid];
+		recp->root_rid = entry->rid;
+		ice_memcpy(&recp->ext_words, entry->r_group.pairs,
+			   entry->r_group.n_val_pairs *
+			   sizeof(struct ice_fv_word),
+			   ICE_NONDMA_TO_NONDMA);
+
+		recp->n_ext_words = entry->r_group.n_val_pairs;
+		recp->chain_idx = entry->chain_idx;
+		recp->recp_created = true;
+		recp->big_recp = false;
+	}
+	rm->root_buf = buf;
+	ice_free(hw, tmp);
+	return status;
+
+err_unroll:
+err_mem:
+	ice_free(hw, tmp);
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_create_recipe_group - creates recipe group
+ * @hw: pointer to hardware structure
+ * @rm: recipe management list entry
+ * @lkup_exts: lookup elements
+ */
+static enum ice_status
+ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm,
+			struct ice_prot_lkup_ext *lkup_exts)
+{
+	struct ice_recp_grp_entry *entry;
+	struct ice_recp_grp_entry *tmp;
+	enum ice_status status;
+	u8 recp_count = 0;
+	u16 groups, i;
+
+	rm->n_grp_count = 0;
+
+	/* Each switch recipe can match up to 5 words or metadata. One word in
+	 * each recipe is used to match the switch ID. Four words are left for
+	 * matching other values. If the new advanced recipe requires more than
+	 * 4 words, it needs to be split into multiple recipes which are chained
+	 * together using the intermediate result that each produces as input to
+	 * the other recipes in the sequence.
+	 */
+	groups = ARRAY_SIZE(ice_recipe_pack);
+
+	/* Check if any of the preferred recipes from the grouping policy
+	 * matches.
+	 */
+	for (i = 0; i < groups; i++)
+		/* Check if the recipe from the preferred grouping matches
+		 * or is a subset of the fields that needs to be looked up.
+		 */
+		if (ice_is_recipe_subset(lkup_exts, &ice_recipe_pack[i])) {
+			/* This recipe can be used by itself or grouped with
+			 * other recipes.
+			 */
+			entry = (struct ice_recp_grp_entry *)
+				ice_malloc(hw, sizeof(*entry));
+			if (!entry) {
+				status = ICE_ERR_NO_MEMORY;
+				goto err_unroll;
+			}
+			entry->r_group = ice_recipe_pack[i];
+			LIST_ADD(&entry->l_entry, &rm->rg_list);
+			rm->n_grp_count++;
+		}
+
+	/* Create recipes for words that are marked not done by packing them
+	 * as best fit.
+	 */
+	status = ice_create_first_fit_recp_def(hw, lkup_exts,
+					       &rm->rg_list, &recp_count);
+	if (!status) {
+		rm->n_grp_count += recp_count;
+		rm->n_ext_words = lkup_exts->n_val_words;
+		ice_memcpy(&rm->ext_words, lkup_exts->fv_words,
+			   sizeof(rm->ext_words), ICE_NONDMA_TO_NONDMA);
+		goto out;
+	}
+
+err_unroll:
+	LIST_FOR_EACH_ENTRY_SAFE(entry, tmp, &rm->rg_list, ice_recp_grp_entry,
+				 l_entry) {
+		LIST_DEL(&entry->l_entry);
+		ice_free(hw, entry);
+	}
+
+out:
+	return status;
+}
+
+/**
+ * ice_get_fv - get field vectors/extraction sequences for specified lookups
+ * @hw: pointer to hardware structure
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @fv_list: pointer to a list that holds the returned field vectors
+ */
+static enum ice_status
+ice_get_fv(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
+	   struct LIST_HEAD_TYPE *fv_list)
+{
+	enum ice_status status;
+	u16 *prot_ids;
+	u16 i;
+
+	prot_ids = (u16 *)ice_calloc(hw, lkups_cnt, sizeof(*prot_ids));
+	if (!prot_ids)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < lkups_cnt; i++)
+		if (!ice_prot_type_to_id(lkups[i].type, &prot_ids[i])) {
+			status = ICE_ERR_CFG;
+			goto free_mem;
+		}
+
+	/* Find field vectors that include all specified protocol types */
+	status = ice_get_sw_fv_list(hw, prot_ids, lkups_cnt, fv_list);
+
+free_mem:
+	ice_free(hw, prot_ids);
+	return status;
+}
+
+/**
+ * ice_add_adv_recipe - Add an advanced recipe that is not a default recipe
+ * @hw: pointer to hardware structure
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *  structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @rinfo: other information regarding the rule e.g. priority and action info
+ * @rid: return the recipe ID of the recipe created
+ */
+static enum ice_status
+ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		   u16 lkups_cnt, struct ice_adv_rule_info *rinfo, u16 *rid)
+{
+	struct ice_prot_lkup_ext *lkup_exts;
+	struct ice_recp_grp_entry *r_entry;
+	struct ice_sw_fv_list_entry *fvit;
+	struct ice_recp_grp_entry *r_tmp;
+	struct ice_sw_fv_list_entry *tmp;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_sw_recipe *rm;
+	bool match_tun = false;
+	u8 i;
+
+	if (!lkups_cnt)
+		return ICE_ERR_PARAM;
+
+	lkup_exts = (struct ice_prot_lkup_ext *)
+		ice_malloc(hw, sizeof(*lkup_exts));
+	if (!lkup_exts)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Determine the number of words to be matched and check whether it
+	 * exceeds the recipe's restrictions
+	 */
+	for (i = 0; i < lkups_cnt; i++) {
+		u16 count;
+
+		if (lkups[i].type >= ICE_PROTOCOL_LAST) {
+			status = ICE_ERR_CFG;
+			goto err_free_lkup_exts;
+		}
+
+		count = ice_fill_valid_words(&lkups[i], lkup_exts);
+		if (!count) {
+			status = ICE_ERR_CFG;
+			goto err_free_lkup_exts;
+		}
+	}
+
+	*rid = ice_find_recp(hw, lkup_exts);
+	if (*rid < ICE_MAX_NUM_RECIPES)
+		/* Success: found an existing recipe that matches the criteria */
+		goto err_free_lkup_exts;
+
+	/* Recipe we need does not exist, add a recipe */
+
+	rm = (struct ice_sw_recipe *)ice_malloc(hw, sizeof(*rm));
+	if (!rm) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_free_lkup_exts;
+	}
+
+	/* Get field vectors that contain fields extracted from all the protocol
+	 * headers being programmed.
+	 */
+	INIT_LIST_HEAD(&rm->fv_list);
+	INIT_LIST_HEAD(&rm->rg_list);
+
+	status = ice_get_fv(hw, lkups, lkups_cnt, &rm->fv_list);
+	if (status)
+		goto err_unroll;
+
+	/* Group match words into recipes using preferred recipe grouping
+	 * criteria.
+	 */
+	status = ice_create_recipe_group(hw, rm, lkup_exts);
+	if (status)
+		goto err_unroll;
+
+	/* There is only one profile for UDP tunnels, so it is necessary to
+	 * use a metadata ID flag to differentiate different tunnel types. A
+	 * separate recipe needs to be used for the metadata.
+	 */
+	if ((rinfo->tun_type == ICE_SW_TUN_VXLAN_GPE ||
+	     rinfo->tun_type == ICE_SW_TUN_GENEVE ||
+	     rinfo->tun_type == ICE_SW_TUN_VXLAN) && rm->n_grp_count > 1)
+		match_tun = true;
+
+	/* set the recipe priority if specified */
+	rm->priority = rinfo->priority ? rinfo->priority : 0;
+
+	/* Find offsets from the field vector. Pick the first one for all the
+	 * recipes.
+	 */
+	ice_fill_fv_word_index(hw, &rm->fv_list, &rm->rg_list);
+	status = ice_add_sw_recipe(hw, rm, match_tun);
+	if (status)
+		goto err_unroll;
+
+	/* Associate all the recipes created with all the profiles in the
+	 * common field vector.
+	 */
+	LIST_FOR_EACH_ENTRY(fvit, &rm->fv_list, ice_sw_fv_list_entry,
+			    list_entry) {
+		ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+
+		status = ice_aq_get_recipe_to_profile(hw, fvit->profile_id,
+						      (u8 *)r_bitmap, NULL);
+		if (status)
+			goto err_unroll;
+
+		ice_or_bitmap(rm->r_bitmap, r_bitmap, rm->r_bitmap,
+			      ICE_MAX_NUM_RECIPES);
+		status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
+		if (status)
+			goto err_unroll;
+
+		status = ice_aq_map_recipe_to_profile(hw, fvit->profile_id,
+						      (u8 *)rm->r_bitmap,
+						      NULL);
+		ice_release_change_lock(hw);
+
+		if (status)
+			goto err_unroll;
+	}
+
+	*rid = rm->root_rid;
+	ice_memcpy(&hw->switch_info->recp_list[*rid].lkup_exts,
+		   lkup_exts, sizeof(*lkup_exts), ICE_NONDMA_TO_NONDMA);
+err_unroll:
+	LIST_FOR_EACH_ENTRY_SAFE(r_entry, r_tmp, &rm->rg_list,
+				 ice_recp_grp_entry, l_entry) {
+		LIST_DEL(&r_entry->l_entry);
+		ice_free(hw, r_entry);
+	}
+
+	LIST_FOR_EACH_ENTRY_SAFE(fvit, tmp, &rm->fv_list, ice_sw_fv_list_entry,
+				 list_entry) {
+		LIST_DEL(&fvit->list_entry);
+		ice_free(hw, fvit);
+	}
+
+	if (rm->root_buf)
+		ice_free(hw, rm->root_buf);
+
+	ice_free(hw, rm);
+
+err_free_lkup_exts:
+	ice_free(hw, lkup_exts);
+
+	return status;
+}
+
+#define ICE_MAC_HDR_OFFSET	0
+#define ICE_IP_HDR_OFFSET	14
+#define ICE_GRE_HDR_OFFSET	34
+#define ICE_MAC_IL_HDR_OFFSET	42
+#define ICE_IP_IL_HDR_OFFSET	56
+#define ICE_L4_HDR_OFFSET	34
+#define ICE_UDP_TUN_HDR_OFFSET	42
+
+/**
+ * ice_find_dummy_packet - find dummy packet with given match criteria
+ *
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @tun_type: tunnel type from the match criteria
+ * @pkt: dummy packet to fill according to filter match criteria
+ * @pkt_len: packet length of dummy packet
+ */
+static void
+ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
+		      enum ice_sw_tunnel_type tun_type, const u8 **pkt,
+		      u16 *pkt_len)
+{
+	u16 i;
+
+	if (tun_type == ICE_SW_TUN_NVGRE || tun_type == ICE_ALL_TUNNELS) {
+		*pkt = dummy_gre_packet;
+		*pkt_len = sizeof(dummy_gre_packet);
+		return;
+	}
+
+	if (tun_type == ICE_SW_TUN_VXLAN || tun_type == ICE_SW_TUN_GENEVE ||
+	    tun_type == ICE_SW_TUN_VXLAN_GPE) {
+		*pkt = dummy_udp_tun_packet;
+		*pkt_len = sizeof(dummy_udp_tun_packet);
+		return;
+	}
+
+	for (i = 0; i < lkups_cnt; i++) {
+		if (lkups[i].type == ICE_UDP_ILOS) {
+			*pkt = dummy_udp_tun_packet;
+			*pkt_len = sizeof(dummy_udp_tun_packet);
+			return;
+		}
+	}
+
+	*pkt = dummy_tcp_tun_packet;
+	*pkt_len = sizeof(dummy_tcp_tun_packet);
+}
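The template-selection order above (explicit tunnel type first, then a scan of the lookups for a UDP header, then the TCP template as the default) can be sketched as a standalone function. The enum values and template names below are illustrative stand-ins, not the real ice definitions:

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-ins for the ice tunnel and protocol enums. */
enum tun_type { TUN_NONE, TUN_NVGRE, TUN_VXLAN, TUN_GENEVE, TUN_ALL };
enum proto { PROTO_MAC, PROTO_IPV4, PROTO_TCP, PROTO_UDP };

/* Pick a dummy packet template name by the same precedence as
 * ice_find_dummy_packet(): tunnel type wins, then any UDP lookup,
 * then the TCP tunnel template as the fallback.
 */
static const char *pick_dummy_packet(enum tun_type tun,
				     const enum proto *lkups, int cnt)
{
	int i;

	if (tun == TUN_NVGRE || tun == TUN_ALL)
		return "dummy_gre_packet";
	if (tun == TUN_VXLAN || tun == TUN_GENEVE)
		return "dummy_udp_tun_packet";
	for (i = 0; i < cnt; i++)
		if (lkups[i] == PROTO_UDP)
			return "dummy_udp_tun_packet";
	return "dummy_tcp_tun_packet";
}
```

Note the real function also handles ICE_SW_TUN_VXLAN_GPE alongside VXLAN/GENEVE; it is omitted here only to keep the sketch short.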
+
+/**
+ * ice_fill_adv_dummy_packet - fill a dummy packet with given match criteria
+ *
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @tun_type: to know if the dummy packet is supposed to be tunnel packet
+ * @s_rule: stores rule information from the match criteria
+ * @dummy_pkt: dummy packet to fill according to filter match criteria
+ * @pkt_len: packet length of dummy packet
+ */
+static void
+ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
+			  enum ice_sw_tunnel_type tun_type,
+			  struct ice_aqc_sw_rules_elem *s_rule,
+			  const u8 *dummy_pkt, u16 pkt_len)
+{
+	u8 *pkt;
+	u16 i;
+
+	/* Start with a packet with a pre-defined/dummy content. Then, fill
+	 * in the header values to be looked up or matched.
+	 */
+	pkt = s_rule->pdata.lkup_tx_rx.hdr;
+
+	ice_memcpy(pkt, dummy_pkt, pkt_len, ICE_NONDMA_TO_NONDMA);
+
+	for (i = 0; i < lkups_cnt; i++) {
+		u32 len, pkt_off, hdr_size, field_off;
+
+		switch (lkups[i].type) {
+		case ICE_MAC_OFOS:
+		case ICE_MAC_IL:
+			pkt_off = offsetof(struct ice_ether_hdr, dst_addr) +
+				((lkups[i].type == ICE_MAC_IL) ?
+				 ICE_MAC_IL_HDR_OFFSET : 0);
+			len = sizeof(lkups[i].h_u.eth_hdr.dst_addr);
+			if ((tun_type == ICE_SW_TUN_VXLAN ||
+			     tun_type == ICE_SW_TUN_GENEVE ||
+			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
+			     lkups[i].type == ICE_MAC_IL) {
+				pkt_off += sizeof(struct ice_udp_tnl_hdr);
+			}
+
+			ice_memcpy(&pkt[pkt_off],
+				   &lkups[i].h_u.eth_hdr.dst_addr, len,
+				   ICE_NONDMA_TO_NONDMA);
+			pkt_off = offsetof(struct ice_ether_hdr, src_addr) +
+				((lkups[i].type == ICE_MAC_IL) ?
+				 ICE_MAC_IL_HDR_OFFSET : 0);
+			len = sizeof(lkups[i].h_u.eth_hdr.src_addr);
+			if ((tun_type == ICE_SW_TUN_VXLAN ||
+			     tun_type == ICE_SW_TUN_GENEVE ||
+			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
+			     lkups[i].type == ICE_MAC_IL) {
+				pkt_off += sizeof(struct ice_udp_tnl_hdr);
+			}
+			ice_memcpy(&pkt[pkt_off],
+				   &lkups[i].h_u.eth_hdr.src_addr, len,
+				   ICE_NONDMA_TO_NONDMA);
+			if (lkups[i].h_u.eth_hdr.ethtype_id) {
+				pkt_off = offsetof(struct ice_ether_hdr,
+						   ethtype_id) +
+					((lkups[i].type == ICE_MAC_IL) ?
+					 ICE_MAC_IL_HDR_OFFSET : 0);
+				len = sizeof(lkups[i].h_u.eth_hdr.ethtype_id);
+				if ((tun_type == ICE_SW_TUN_VXLAN ||
+				     tun_type == ICE_SW_TUN_GENEVE ||
+				     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
+				     lkups[i].type == ICE_MAC_IL) {
+					pkt_off +=
+						sizeof(struct ice_udp_tnl_hdr);
+				}
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.eth_hdr.ethtype_id,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			break;
+		case ICE_IPV4_OFOS:
+			hdr_size = sizeof(struct ice_ipv4_hdr);
+			if (lkups[i].h_u.ipv4_hdr.dst_addr) {
+				pkt_off = ICE_IP_HDR_OFFSET +
+					   offsetof(struct ice_ipv4_hdr,
+						    dst_addr);
+				field_off = offsetof(struct ice_ipv4_hdr,
+						     dst_addr);
+				len = hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.ipv4_hdr.dst_addr,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			if (lkups[i].h_u.ipv4_hdr.src_addr) {
+				pkt_off = ICE_IP_HDR_OFFSET +
+					   offsetof(struct ice_ipv4_hdr,
+						    src_addr);
+				field_off = offsetof(struct ice_ipv4_hdr,
+						     src_addr);
+				len = hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.ipv4_hdr.src_addr,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			break;
+		case ICE_IPV4_IL:
+			break;
+		case ICE_TCP_IL:
+		case ICE_UDP_ILOS:
+		case ICE_SCTP_IL:
+			hdr_size = sizeof(struct ice_udp_tnl_hdr);
+			if (lkups[i].h_u.l4_hdr.dst_port) {
+				pkt_off = ICE_L4_HDR_OFFSET +
+					   offsetof(struct ice_l4_hdr,
+						    dst_port);
+				field_off = offsetof(struct ice_l4_hdr,
+						     dst_port);
+				len =  hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.l4_hdr.dst_port,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			if (lkups[i].h_u.l4_hdr.src_port) {
+				pkt_off = ICE_L4_HDR_OFFSET +
+					offsetof(struct ice_l4_hdr, src_port);
+				field_off = offsetof(struct ice_l4_hdr,
+						     src_port);
+				len =  hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.l4_hdr.src_port,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			break;
+		case ICE_VXLAN:
+		case ICE_GENEVE:
+		case ICE_VXLAN_GPE:
+			pkt_off = ICE_UDP_TUN_HDR_OFFSET +
+				   offsetof(struct ice_udp_tnl_hdr, vni);
+			field_off = offsetof(struct ice_udp_tnl_hdr, vni);
+			len =  sizeof(struct ice_udp_tnl_hdr) - field_off;
+			ice_memcpy(&pkt[pkt_off], &lkups[i].h_u.tnl_hdr.vni,
+				   len, ICE_NONDMA_TO_NONDMA);
+			break;
+		default:
+			break;
+		}
+	}
+	s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(pkt_len);
+}
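Every case in the switch above uses the same patching pattern: the header's offset within the frame plus offsetof() of the field within the header, then a memcpy() of the caller's value over the dummy bytes. A minimal sketch of that pattern with a hypothetical flat L4 header (the real code uses struct ice_l4_hdr and the ICE_*_HDR_OFFSET constants):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical L4 header layout standing in for struct ice_l4_hdr. */
struct l4_hdr {
	uint16_t src_port;
	uint16_t dst_port;
};

#define L4_HDR_OFFSET 34	/* offset of the L4 header in the frame */

/* Overwrite the destination port inside a pre-built dummy frame.
 * The value is taken as-is (network byte order), as in the lookups.
 */
static void patch_dst_port(uint8_t *pkt, uint16_t dst_port_be)
{
	size_t off = L4_HDR_OFFSET + offsetof(struct l4_hdr, dst_port);

	memcpy(&pkt[off], &dst_port_be, sizeof(dst_port_be));
}
```

The same frame-offset + field-offset arithmetic generalizes to the MAC, IPv4 and tunnel-header cases in the function above.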
+
+/**
+ * ice_find_adv_rule_entry - Search a rule entry
+ * @hw: pointer to the hardware structure
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @recp_id: recipe ID for which we are finding the rule
+ * @rinfo: other information regarding the rule e.g. priority and action info
+ *
+ * Helper function to search for a given advanced rule entry
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_adv_fltr_mgmt_list_entry *
+ice_find_adv_rule_entry(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+			u16 lkups_cnt, u8 recp_id,
+			struct ice_adv_rule_info *rinfo)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_itr;
+	struct ice_switch_info *sw = hw->switch_info;
+	int i;
+
+	LIST_FOR_EACH_ENTRY(list_itr, &sw->recp_list[recp_id].filt_rules,
+			    ice_adv_fltr_mgmt_list_entry, list_entry) {
+		bool lkups_matched = true;
+
+		if (lkups_cnt != list_itr->lkups_cnt)
+			continue;
+		for (i = 0; i < list_itr->lkups_cnt; i++)
+			if (memcmp(&list_itr->lkups[i], &lkups[i],
+				   sizeof(*lkups))) {
+				lkups_matched = false;
+				break;
+			}
+		if (rinfo->sw_act.flag == list_itr->rule_info.sw_act.flag &&
+		    rinfo->tun_type == list_itr->rule_info.tun_type &&
+		    lkups_matched)
+			return list_itr;
+	}
+	return NULL;
+}
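The duplicate test above requires the lookup count, every lookup element (compared byte-for-byte), the action flag and the tunnel type to all agree. The same logic in a self-contained form, with a flat all-u16 element standing in for struct ice_adv_lkup_elem so memcmp() sees no padding:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Flat stand-in for struct ice_adv_lkup_elem (type + header + mask). */
struct lkup {
	uint16_t type;
	uint16_t hdr[2];
	uint16_t mask[2];
};

/* Mirror of the matching criteria in ice_find_adv_rule_entry(). */
static bool rules_match(const struct lkup *a, int a_cnt, int a_flag,
			int a_tun, const struct lkup *b, int b_cnt,
			int b_flag, int b_tun)
{
	int i;

	if (a_cnt != b_cnt || a_flag != b_flag || a_tun != b_tun)
		return false;
	for (i = 0; i < a_cnt; i++)
		if (memcmp(&a[i], &b[i], sizeof(a[i])))
			return false;
	return true;
}
```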
+
+/**
+ * ice_adv_add_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @m_entry: pointer to current adv filter management list entry
+ * @cur_fltr: filter information from the book keeping entry
+ * @new_fltr: filter information with the new VSI to be added
+ *
+ * Call AQ command to add or update previously created VSI list with new VSI.
+ *
+ * Helper function to do book keeping associated with adding filter information
+ * The algorithm for the book keeping is as follows:
+ * When a VSI needs to subscribe to a given advanced filter
+ *	if only one VSI has been added till now
+ *		Allocate a new VSI list and add two VSIs
+ *		to this list using switch rule command
+ *		Update the previously created switch rule with the
+ *		newly created VSI list ID
+ *	if a VSI list was previously created
+ *		Add the new VSI to the previously created VSI list set
+ *		using the update switch rule command
+ */
+static enum ice_status
+ice_adv_add_update_vsi_list(struct ice_hw *hw,
+			    struct ice_adv_fltr_mgmt_list_entry *m_entry,
+			    struct ice_adv_rule_info *cur_fltr,
+			    struct ice_adv_rule_info *new_fltr)
+{
+	enum ice_status status;
+	u16 vsi_list_id = 0;
+
+	if (cur_fltr->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	    cur_fltr->sw_act.fltr_act == ICE_FWD_TO_QGRP)
+		return ICE_ERR_NOT_IMPL;
+
+	if (cur_fltr->sw_act.fltr_act == ICE_DROP_PACKET &&
+	    new_fltr->sw_act.fltr_act == ICE_DROP_PACKET)
+		return ICE_ERR_ALREADY_EXISTS;
+
+	if ((new_fltr->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	     new_fltr->sw_act.fltr_act == ICE_FWD_TO_QGRP) &&
+	    (cur_fltr->sw_act.fltr_act == ICE_FWD_TO_VSI ||
+	     cur_fltr->sw_act.fltr_act == ICE_FWD_TO_VSI_LIST))
+		return ICE_ERR_NOT_IMPL;
+
+	if (m_entry->vsi_count < 2 && !m_entry->vsi_list_info) {
+		 /* Only one entry existed in the mapping and it was not already
+		  * a part of a VSI list. So, create a VSI list with the old and
+		  * new VSIs.
+		  */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_fltr->sw_act.fwd_id.hw_vsi_id ==
+		    new_fltr->sw_act.fwd_id.hw_vsi_id)
+			return ICE_ERR_ALREADY_EXISTS;
+
+		vsi_handle_arr[0] = cur_fltr->sw_act.vsi_handle;
+		vsi_handle_arr[1] = new_fltr->sw_act.vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id,
+						  ICE_SW_LKUP_LAST);
+		if (status)
+			return status;
+
+		tmp_fltr.fltr_rule_id = cur_fltr->fltr_rule_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		/* Update the previous switch rule of "forward to VSI" to
+		 * "fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			return status;
+
+		cur_fltr->sw_act.fwd_id.vsi_list_id = vsi_list_id;
+		cur_fltr->sw_act.fltr_act = ICE_FWD_TO_VSI_LIST;
+		m_entry->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+	} else {
+		u16 vsi_handle = new_fltr->sw_act.vsi_handle;
+
+		if (!m_entry->vsi_list_info)
+			return ICE_ERR_CFG;
+
+		/* A rule already exists with the new VSI being added */
+		if (ice_is_bit_set(m_entry->vsi_list_info->vsi_map, vsi_handle))
+			return ICE_SUCCESS;
+
+		/* Update the previously created VSI list set with
+		 * the new VSI ID passed in
+		 */
+		vsi_list_id = cur_fltr->sw_act.fwd_id.vsi_list_id;
+
+		status = ice_update_vsi_list_rule(hw, &vsi_handle, 1,
+						  vsi_list_id, false,
+						  ice_aqc_opc_update_sw_rules,
+						  ICE_SW_LKUP_LAST);
+		/* update VSI list mapping info with new VSI ID */
+		if (!status)
+			ice_set_bit(vsi_handle,
+				    m_entry->vsi_list_info->vsi_map);
+	}
+	if (!status)
+		m_entry->vsi_count++;
+	return status;
+}
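The bookkeeping above can be modeled with a toy rule entry (nothing here uses the real ice types, AQ commands, or status codes): a single subscriber keeps a direct forward-to-VSI rule, the second subscriber promotes the rule to a shared VSI list containing both VSIs, and later subscribers simply join the list.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_VSI 8

/* Toy filter entry: a direct rule until a second VSI subscribes. */
struct rule {
	bool has_list;		/* promoted to a shared VSI list? */
	bool member[MAX_VSI];	/* VSI list membership bitmap */
	int vsi_count;		/* number of subscribed VSIs */
	int direct_vsi;		/* forwarding target while !has_list */
};

/* Returns 0 on success, -1 if the rule already targets this VSI
 * directly (modeling ICE_ERR_ALREADY_EXISTS).
 */
static int rule_add_vsi(struct rule *r, int vsi)
{
	if (!r->has_list) {
		if (r->direct_vsi == vsi)
			return -1;	/* duplicate of the direct rule */
		/* Promote: old target and new VSI both go into a list. */
		r->has_list = true;
		r->member[r->direct_vsi] = true;
	} else if (r->member[vsi]) {
		return 0;	/* already subscribed: nothing to do */
	}
	r->member[vsi] = true;
	r->vsi_count++;
	return 0;
}
```

In the real driver the promotion step also rewrites the existing switch rule from "forward to VSI" to "forward to VSI list" via an AQ update command; the model only tracks the resulting state.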
+
+/**
+ * ice_add_adv_rule - create an advanced switch rule
+ * @hw: pointer to the hardware structure
+ * @lkups: information on the words that needs to be looked up. All words
+ * together makes one recipe
+ * @lkups_cnt: num of entries in the lkups array
+ * @rinfo: other information related to the rule that needs to be programmed
+ * @added_entry: this will return recipe_id, rule_id and vsi_handle. Should be
+ *               ignored in case of error.
+ *
+ * This function can program only one rule at a time. The lkups parameter is
+ * used to describe all the words that form the "lookup" portion of the
+ * recipe. These words can span multiple protocols. Callers to this function
+ * need to pass in a list of protocol headers with lookup information, along
+ * with a mask that determines which words are valid in the given protocol
+ * header.
+ * rinfo describes other information related to this rule such as forwarding
+ * IDs, priority of this rule, etc.
+ */
+enum ice_status
+ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
+		 struct ice_rule_query_data *added_entry)
+{
+	struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL;
+	u16 rid = 0, i, pkt_len, rule_buf_sz, vsi_handle;
+	struct ice_aqc_sw_rules_elem *s_rule;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_switch_info *sw;
+	enum ice_status status;
+	const u8 *pkt = NULL;
+	u32 act = 0;
+
+	if (!lkups_cnt)
+		return ICE_ERR_PARAM;
+
+	for (i = 0; i < lkups_cnt; i++) {
+		u16 j, *ptr;
+
+		/* Validate match masks to make sure they match complete 16-bit
+		 * words.
+		 */
+		ptr = (u16 *)&lkups[i].m_u;
+		for (j = 0; j < sizeof(lkups[i].m_u) / sizeof(u16); j++)
+			if (ptr[j] != 0 && ptr[j] != 0xffff)
+				return ICE_ERR_PARAM;
+	}
+
+	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
+	      rinfo->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	      rinfo->sw_act.fltr_act == ICE_DROP_PACKET))
+		return ICE_ERR_CFG;
+
+	vsi_handle = rinfo->sw_act.vsi_handle;
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI)
+		rinfo->sw_act.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, vsi_handle);
+	if (rinfo->sw_act.flag & ICE_FLTR_TX)
+		rinfo->sw_act.src = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	status = ice_add_adv_recipe(hw, lkups, lkups_cnt, rinfo, &rid);
+	if (status)
+		return status;
+	m_entry = ice_find_adv_rule_entry(hw, lkups, lkups_cnt, rid, rinfo);
+	if (m_entry) {
+		/* The rule already exists. Check whether it is already
+		 * subscribed with the same VSI; if not, add the VSI to the
+		 * rule's VSI list, creating the list first (with the existing
+		 * VSI ID and the new VSI ID) when the rule currently forwards
+		 * to a single VSI, and increment vsi_count.
+		 */
+		status = ice_adv_add_update_vsi_list(hw, m_entry,
+						     &m_entry->rule_info,
+						     rinfo);
+		if (added_entry) {
+			added_entry->rid = rid;
+			added_entry->rule_id = m_entry->rule_info.fltr_rule_id;
+			added_entry->vsi_handle = rinfo->sw_act.vsi_handle;
+		}
+		return status;
+	}
+	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
+			      &pkt_len);
+	rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rule_buf_sz);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	act |= ICE_SINGLE_ACT_LB_ENABLE | ICE_SINGLE_ACT_LAN_ENABLE;
+	switch (rinfo->sw_act.fltr_act) {
+	case ICE_FWD_TO_VSI:
+		act |= (rinfo->sw_act.fwd_id.hw_vsi_id <<
+			ICE_SINGLE_ACT_VSI_ID_S) & ICE_SINGLE_ACT_VSI_ID_M;
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_Q:
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+		       ICE_SINGLE_ACT_Q_INDEX_M;
+		break;
+	case ICE_DROP_PACKET:
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
+		       ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	default:
+		status = ICE_ERR_CFG;
+		goto err_ice_add_adv_rule;
+	}
+
+	/* Set the rule LOOKUP type based on the caller-specified 'rx' flag
+	 * instead of hardcoding it to LOOKUP_TX or LOOKUP_RX.
+	 *
+	 * For 'RX', set the source to the port number.
+	 * For 'TX', set the source to the source HW VSI number (determined
+	 * by the caller).
+	 */
+	if (rinfo->rx) {
+		s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX);
+		s_rule->pdata.lkup_tx_rx.src =
+			CPU_TO_LE16(hw->port_info->lport);
+	} else {
+		s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX);
+		s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(rinfo->sw_act.src);
+	}
+
+	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid);
+	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
+
+	ice_fill_adv_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, s_rule,
+				  pkt, pkt_len);
+
+	status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
+				 rule_buf_sz, 1, ice_aqc_opc_add_sw_rules,
+				 NULL);
+	if (status)
+		goto err_ice_add_adv_rule;
+	adv_fltr = (struct ice_adv_fltr_mgmt_list_entry *)
+		ice_malloc(hw, sizeof(struct ice_adv_fltr_mgmt_list_entry));
+	if (!adv_fltr) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_ice_add_adv_rule;
+	}
+
+	adv_fltr->lkups = (struct ice_adv_lkup_elem *)
+		ice_memdup(hw, lkups, lkups_cnt * sizeof(*lkups),
+			   ICE_NONDMA_TO_NONDMA);
+	if (!adv_fltr->lkups) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_ice_add_adv_rule;
+	}
+
+	adv_fltr->lkups_cnt = lkups_cnt;
+	adv_fltr->rule_info = *rinfo;
+	adv_fltr->rule_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+	sw = hw->switch_info;
+	sw->recp_list[rid].adv_rule = true;
+	rule_head = &sw->recp_list[rid].filt_rules;
+
+	if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI) {
+		struct ice_fltr_info tmp_fltr;
+
+		tmp_fltr.fltr_rule_id =
+			LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, vsi_handle);
+		tmp_fltr.vsi_handle = vsi_handle;
+		/* Update the newly created switch rule so that it forwards
+		 * to the VSI
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			goto err_ice_add_adv_rule;
+		adv_fltr->vsi_count = 1;
+	}
+
+	/* Add rule entry to book keeping list */
+	LIST_ADD(&adv_fltr->list_entry, rule_head);
+	if (added_entry) {
+		added_entry->rid = rid;
+		added_entry->rule_id = adv_fltr->rule_info.fltr_rule_id;
+		added_entry->vsi_handle = rinfo->sw_act.vsi_handle;
+	}
+err_ice_add_adv_rule:
+	if (status && adv_fltr) {
+		ice_free(hw, adv_fltr->lkups);
+		ice_free(hw, adv_fltr);
+	}
+
+	ice_free(hw, s_rule);
+
+	return status;
+}
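The parameter validation at the top of ice_add_adv_rule() rejects partial-word masks: every 16-bit word of each lookup mask must be all-zeros (don't care) or all-ones (exact match). That check, lifted out as a standalone helper:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* True when every mask word covers a complete 16-bit word, i.e. is
 * either 0x0000 (ignore) or 0xffff (match) -- the same constraint
 * enforced before a rule is programmed.
 */
static bool masks_are_word_aligned(const uint16_t *mask, size_t n_words)
{
	size_t i;

	for (i = 0; i < n_words; i++)
		if (mask[i] != 0 && mask[i] != 0xffff)
			return false;
	return true;
}
```

Masks such as 0x00ff would require sub-word extraction, which the recipe word-matching scheme does not support, hence the ICE_ERR_PARAM return in the driver.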
 /**
  * ice_replay_fltr - Replay all the filters stored by a specific list head
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index fd61c0eea..890df13dd 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -172,11 +172,21 @@ struct ice_sw_act_ctrl {
 	u8 qgrp_size;
 };
 
+struct ice_rule_query_data {
+	/* Recipe ID for which the requested rule was added */
+	u16 rid;
+	/* Rule ID that was added or is supposed to be removed */
+	u16 rule_id;
+	/* vsi_handle for which Rule was added or is supposed to be removed */
+	u16 vsi_handle;
+};
+
 struct ice_adv_rule_info {
 	enum ice_sw_tunnel_type tun_type;
 	struct ice_sw_act_ctrl sw_act;
 	u32 priority;
 	u8 rx; /* true means LOOKUP_RX otherwise LOOKUP_TX */
+	u16 fltr_rule_id;
 };
 
 /* A collection of one or more four word recipe */
@@ -222,6 +232,7 @@ struct ice_sw_recipe {
 	/* Profiles this recipe should be associated with */
 	struct LIST_HEAD_TYPE fv_list;
 
+#define ICE_MAX_NUM_PROFILES 256
 	/* Profiles this recipe is associated with */
 	u8 num_profs, *prof_ids;
 
@@ -281,6 +292,8 @@ struct ice_adv_fltr_mgmt_list_entry {
 	struct ice_adv_lkup_elem *lkups;
 	struct ice_adv_rule_info rule_info;
 	u16 lkups_cnt;
+	struct ice_vsi_list_map_info *vsi_list_info;
+	u16 vsi_count;
 };
 
 enum ice_promisc_flags {
@@ -421,7 +434,15 @@ enum ice_status
 ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
 			     struct ice_sq_cd *cd);
 
+enum ice_status
+ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd);
+
 enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *recipe_id);
+enum ice_status
+ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
+		 struct ice_rule_query_data *added_entry);
 enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
 
 enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (5 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 06/49] net/ice/base: programming a " Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-05  8:58   ` Maxime Coquelin
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 08/49] net/ice/base: code for removing advanced rule Leyi Rong
                   ` (43 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Victor Raj, Paul M Stillwell Jr

Add code to replay advanced rules on a per-VSI basis and to remove the
advanced rule information from the shared code recipe list.

Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 81 ++++++++++++++++++++++++++-----
 1 file changed, 69 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index c53021aed..ca0497ca7 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -3033,6 +3033,27 @@ ice_rem_sw_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
 	}
 }
 
+/**
+ * ice_rem_adv_rule_info
+ * @hw: pointer to the hardware structure
+ * @rule_head: pointer to the switch list structure that we want to delete
+ */
+static void
+ice_rem_adv_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
+{
+	struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
+	struct ice_adv_fltr_mgmt_list_entry *lst_itr;
+
+	if (LIST_EMPTY(rule_head))
+		return;
+
+	LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry, rule_head,
+				 ice_adv_fltr_mgmt_list_entry, list_entry) {
+		LIST_DEL(&lst_itr->list_entry);
+		ice_free(hw, lst_itr->lkups);
+		ice_free(hw, lst_itr);
+	}
+}
 
 /**
  * ice_rem_all_sw_rules_info
@@ -3049,6 +3070,8 @@ void ice_rem_all_sw_rules_info(struct ice_hw *hw)
 		rule_head = &sw->recp_list[i].filt_rules;
 		if (!sw->recp_list[i].adv_rule)
 			ice_rem_sw_rule_info(hw, rule_head);
+		else
+			ice_rem_adv_rule_info(hw, rule_head);
 	}
 }
 
@@ -5687,6 +5710,38 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
 	return status;
 }
 
+/**
+ * ice_replay_vsi_adv_rule - Replay advanced rule for requested VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver VSI handle
+ * @list_head: list for which filters need to be replayed
+ *
+ * Replay the advanced rule for the given VSI.
+ */
+static enum ice_status
+ice_replay_vsi_adv_rule(struct ice_hw *hw, u16 vsi_handle,
+			struct LIST_HEAD_TYPE *list_head)
+{
+	struct ice_rule_query_data added_entry = { 0 };
+	struct ice_adv_fltr_mgmt_list_entry *adv_fltr;
+	enum ice_status status = ICE_SUCCESS;
+
+	if (LIST_EMPTY(list_head))
+		return status;
+	LIST_FOR_EACH_ENTRY(adv_fltr, list_head, ice_adv_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_adv_rule_info *rinfo = &adv_fltr->rule_info;
+		u16 lk_cnt = adv_fltr->lkups_cnt;
+
+		if (vsi_handle != rinfo->sw_act.vsi_handle)
+			continue;
+		status = ice_add_adv_rule(hw, adv_fltr->lkups, lk_cnt, rinfo,
+					  &added_entry);
+		if (status)
+			break;
+	}
+	return status;
+}
 
 /**
  * ice_replay_vsi_all_fltr - replay all filters stored in bookkeeping lists
@@ -5698,23 +5753,23 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
 enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle)
 {
 	struct ice_switch_info *sw = hw->switch_info;
-	enum ice_status status = ICE_SUCCESS;
+	enum ice_status status;
 	u8 i;
 
+	/* Update the recipes that were created */
 	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
-		/* Update the default recipe lines and ones that were created */
-		if (i < ICE_MAX_NUM_RECIPES || sw->recp_list[i].recp_created) {
-			struct LIST_HEAD_TYPE *head;
+		struct LIST_HEAD_TYPE *head;
 
-			head = &sw->recp_list[i].filt_replay_rules;
-			if (!sw->recp_list[i].adv_rule)
-				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
-							     head);
-			if (status != ICE_SUCCESS)
-				return status;
-		}
+		head = &sw->recp_list[i].filt_replay_rules;
+		if (!sw->recp_list[i].adv_rule)
+			status = ice_replay_vsi_fltr(hw, vsi_handle, i, head);
+		else
+			status = ice_replay_vsi_adv_rule(hw, vsi_handle, head);
+		if (status != ICE_SUCCESS)
+			return status;
 	}
-	return status;
+
+	return ICE_SUCCESS;
 }
 
 /**
@@ -5738,6 +5793,8 @@ void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw)
 			l_head = &sw->recp_list[i].filt_replay_rules;
 			if (!sw->recp_list[i].adv_rule)
 				ice_rem_sw_rule_info(hw, l_head);
+			else
+				ice_rem_adv_rule_info(hw, l_head);
 		}
 	}
 }
-- 
2.17.1



* [dpdk-dev] [PATCH 08/49] net/ice/base: code for removing advanced rule
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (6 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-05  9:07   ` Maxime Coquelin
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 09/49] net/ice/base: add lock around profile map list Leyi Rong
                   ` (42 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Shivanshu Shukla, Paul M Stillwell Jr

This patch adds the ice_rem_adv_rule function to remove existing
advanced rules. It also handles the case where multiple VSIs use the
same rule, using the following helper function:

ice_adv_rem_update_vsi_list - remove a VSI from the VSI list of an
advanced rule.
Signed-off-by: Shivanshu Shukla <shivanshu.shukla@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 309 +++++++++++++++++++++++++++++-
 drivers/net/ice/base/ice_switch.h |   9 +
 2 files changed, 310 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index ca0497ca7..3719ac4bb 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2217,17 +2217,38 @@ ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
 {
 	struct ice_vsi_list_map_info *map_info = NULL;
 	struct ice_switch_info *sw = hw->switch_info;
-	struct ice_fltr_mgmt_list_entry *list_itr;
 	struct LIST_HEAD_TYPE *list_head;
 
 	list_head = &sw->recp_list[recp_id].filt_rules;
-	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
-			    list_entry) {
-		if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) {
-			map_info = list_itr->vsi_list_info;
-			if (ice_is_bit_set(map_info->vsi_map, vsi_handle)) {
-				*vsi_list_id = map_info->vsi_list_id;
-				return map_info;
+	if (sw->recp_list[recp_id].adv_rule) {
+		struct ice_adv_fltr_mgmt_list_entry *list_itr;
+
+		LIST_FOR_EACH_ENTRY(list_itr, list_head,
+				    ice_adv_fltr_mgmt_list_entry,
+				    list_entry) {
+			if (list_itr->vsi_list_info) {
+				map_info = list_itr->vsi_list_info;
+				if (ice_is_bit_set(map_info->vsi_map,
+						   vsi_handle)) {
+					*vsi_list_id = map_info->vsi_list_id;
+					return map_info;
+				}
+			}
+		}
+	} else {
+		struct ice_fltr_mgmt_list_entry *list_itr;
+
+		LIST_FOR_EACH_ENTRY(list_itr, list_head,
+				    ice_fltr_mgmt_list_entry,
+				    list_entry) {
+			if (list_itr->vsi_count == 1 &&
+			    list_itr->vsi_list_info) {
+				map_info = list_itr->vsi_list_info;
+				if (ice_is_bit_set(map_info->vsi_map,
+						   vsi_handle)) {
+					*vsi_list_id = map_info->vsi_list_id;
+					return map_info;
+				}
 			}
 		}
 	}
@@ -5562,6 +5583,278 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 
 	return status;
 }
+
+/**
+ * ice_adv_rem_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle of the VSI to remove
+ * @fm_list: filter management entry for which the VSI list management needs to
+ *	     be done
+ */
+static enum ice_status
+ice_adv_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
+			    struct ice_adv_fltr_mgmt_list_entry *fm_list)
+{
+	struct ice_vsi_list_map_info *vsi_list_info;
+	enum ice_sw_lkup_type lkup_type;
+	enum ice_status status;
+	u16 vsi_list_id;
+
+	if (fm_list->rule_info.sw_act.fltr_act != ICE_FWD_TO_VSI_LIST ||
+	    fm_list->vsi_count == 0)
+		return ICE_ERR_PARAM;
+
+	/* A rule with the VSI being removed does not exist */
+	if (!ice_is_bit_set(fm_list->vsi_list_info->vsi_map, vsi_handle))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	lkup_type = ICE_SW_LKUP_LAST;
+	vsi_list_id = fm_list->rule_info.sw_act.fwd_id.vsi_list_id;
+	status = ice_update_vsi_list_rule(hw, &vsi_handle, 1, vsi_list_id, true,
+					  ice_aqc_opc_update_sw_rules,
+					  lkup_type);
+	if (status)
+		return status;
+
+	fm_list->vsi_count--;
+	ice_clear_bit(vsi_handle, fm_list->vsi_list_info->vsi_map);
+	vsi_list_info = fm_list->vsi_list_info;
+	if (fm_list->vsi_count == 1) {
+		struct ice_fltr_info tmp_fltr;
+		u16 rem_vsi_handle;
+
+		rem_vsi_handle = ice_find_first_bit(vsi_list_info->vsi_map,
+						    ICE_MAX_VSI);
+		if (!ice_is_vsi_valid(hw, rem_vsi_handle))
+			return ICE_ERR_OUT_OF_RANGE;
+
+		/* Make sure VSI list is empty before removing it below */
+		status = ice_update_vsi_list_rule(hw, &rem_vsi_handle, 1,
+						  vsi_list_id, true,
+						  ice_aqc_opc_update_sw_rules,
+						  lkup_type);
+		if (status)
+			return status;
+		tmp_fltr.fltr_rule_id = fm_list->rule_info.fltr_rule_id;
+		fm_list->rule_info.sw_act.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+		fm_list->rule_info.sw_act.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+
+		/* Update the previous switch rule of "forward to VSI list"
+		 * to "forward to VSI"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
+				  tmp_fltr.fwd_id.hw_vsi_id, status);
+			return status;
+		}
+	}
+
+	if (fm_list->vsi_count == 1) {
+		/* Remove the VSI list since it is no longer used */
+		status = ice_remove_vsi_list_rule(hw, vsi_list_id, lkup_type);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to remove VSI list %d, error %d\n",
+				  vsi_list_id, status);
+			return status;
+		}
+
+		LIST_DEL(&vsi_list_info->list_entry);
+		ice_free(hw, vsi_list_info);
+		fm_list->vsi_list_info = NULL;
+	}
+
+	return status;
+}
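The demotion logic above can be modeled outside the driver: a rule forwards to a list of VSIs tracked as a bitmap; removing a member decrements the count, and once a single member remains, the rule is rewritten to forward directly to that VSI and the list is retired. This is a minimal sketch with hypothetical names (`vsi_rule`, `rule_remove_vsi`), not the driver's API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum fwd_act { FWD_TO_VSI, FWD_TO_VSI_LIST };

struct vsi_rule {
	enum fwd_act act;
	uint64_t vsi_map;       /* bitmap of member VSI handles */
	unsigned int vsi_count;
	unsigned int fwd_vsi;   /* valid when act == FWD_TO_VSI */
};

/* Remove one VSI from a forward-to-list rule; demote the rule to a
 * plain forward-to-VSI action when only one member remains. Returns
 * false if the VSI was not a member of the list. */
static bool rule_remove_vsi(struct vsi_rule *r, unsigned int vsi)
{
	if (r->act != FWD_TO_VSI_LIST || !(r->vsi_map & (1ULL << vsi)))
		return false;

	r->vsi_map &= ~(1ULL << vsi);
	r->vsi_count--;

	if (r->vsi_count == 1) {
		/* find the remaining member (first set bit) */
		unsigned int rem = 0;

		while (!(r->vsi_map & (1ULL << rem)))
			rem++;
		r->act = FWD_TO_VSI;
		r->fwd_vsi = rem;
		r->vsi_map = 0; /* list no longer used */
	}
	return true;
}
```

The same "demote when one member remains" shape appears in `ice_adv_rem_update_vsi_list` above, where the demotion is an AQ rule update followed by freeing the VSI list.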
+
+/**
+ * ice_rem_adv_rule - removes existing advanced switch rule
+ * @hw: pointer to the hardware structure
+ * @lkups: information on the words that need to be looked up. All words
+ *         together make one recipe
+ * @lkups_cnt: number of entries in the lkups array
+ * @rinfo: pointer to the rule information for this rule
+ *
+ * This function can be used to remove 1 rule at a time. The lkups array
+ * describes all the words that form the "lookup" portion of the rule.
+ * These words can span multiple protocols. Callers need to pass in a list
+ * of protocol headers with lookup information along with a mask that
+ * determines which words are valid from the given protocol header. rinfo
+ * describes other information related to this rule such as forwarding
+ * IDs, priority of this rule, etc.
+ */
+enum ice_status
+ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_elem;
+	struct ice_prot_lkup_ext lkup_exts;
+	u16 rule_buf_sz, pkt_len, i, rid;
+	enum ice_status status = ICE_SUCCESS;
+	bool remove_rule = false;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	const u8 *pkt = NULL;
+	u16 vsi_handle;
+
+	ice_memset(&lkup_exts, 0, sizeof(lkup_exts), ICE_NONDMA_MEM);
+	for (i = 0; i < lkups_cnt; i++) {
+		u16 count;
+
+		if (lkups[i].type >= ICE_PROTOCOL_LAST)
+			return ICE_ERR_CFG;
+
+		count = ice_fill_valid_words(&lkups[i], &lkup_exts);
+		if (!count)
+			return ICE_ERR_CFG;
+	}
+	rid = ice_find_recp(hw, &lkup_exts);
+	/* If we did not find a recipe that matches the existing criteria */
+	if (rid == ICE_MAX_NUM_RECIPES)
+		return ICE_ERR_PARAM;
+
+	rule_lock = &hw->switch_info->recp_list[rid].filt_rule_lock;
+	list_elem = ice_find_adv_rule_entry(hw, lkups, lkups_cnt, rid, rinfo);
+	/* the rule is already removed */
+	if (!list_elem)
+		return ICE_SUCCESS;
+	ice_acquire_lock(rule_lock);
+	if (list_elem->rule_info.sw_act.fltr_act != ICE_FWD_TO_VSI_LIST) {
+		remove_rule = true;
+	} else if (list_elem->vsi_count > 1) {
+		list_elem->vsi_list_info->ref_cnt--;
+		remove_rule = false;
+		vsi_handle = rinfo->sw_act.vsi_handle;
+		status = ice_adv_rem_update_vsi_list(hw, vsi_handle, list_elem);
+	} else {
+		vsi_handle = rinfo->sw_act.vsi_handle;
+		status = ice_adv_rem_update_vsi_list(hw, vsi_handle, list_elem);
+		if (status) {
+			ice_release_lock(rule_lock);
+			return status;
+		}
+		if (list_elem->vsi_count == 0)
+			remove_rule = true;
+	}
+	ice_release_lock(rule_lock);
+	if (remove_rule) {
+		struct ice_aqc_sw_rules_elem *s_rule;
+
+		ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
+				      &pkt_len);
+		rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
+		s_rule =
+			(struct ice_aqc_sw_rules_elem *)ice_malloc(hw,
+								   rule_buf_sz);
+		if (!s_rule)
+			return ICE_ERR_NO_MEMORY;
+		s_rule->pdata.lkup_tx_rx.act = 0;
+		s_rule->pdata.lkup_tx_rx.index =
+			CPU_TO_LE16(list_elem->rule_info.fltr_rule_id);
+		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
+					 rule_buf_sz, 1,
+					 ice_aqc_opc_remove_sw_rules, NULL);
+		if (status == ICE_SUCCESS) {
+			ice_acquire_lock(rule_lock);
+			LIST_DEL(&list_elem->list_entry);
+			ice_free(hw, list_elem->lkups);
+			ice_free(hw, list_elem);
+			ice_release_lock(rule_lock);
+		}
+		ice_free(hw, s_rule);
+	}
+	return status;
+}
+
+/**
+ * ice_rem_adv_rule_by_id - removes existing advanced switch rule by ID
+ * @hw: pointer to the hardware structure
+ * @remove_entry: data struct which holds rule_id, VSI handle and recipe ID
+ *
+ * This function is used to remove 1 rule at a time. The removal is based on
+ * the remove_entry parameter. This function will remove the rule for a given
+ * vsi_handle with a given rule_id that is passed in via remove_entry.
+ */
+enum ice_status
+ice_rem_adv_rule_by_id(struct ice_hw *hw,
+		       struct ice_rule_query_data *remove_entry)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+	struct ice_adv_rule_info rinfo;
+	struct ice_switch_info *sw;
+
+	sw = hw->switch_info;
+	if (!sw->recp_list[remove_entry->rid].recp_created)
+		return ICE_ERR_PARAM;
+	list_head = &sw->recp_list[remove_entry->rid].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_adv_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (list_itr->rule_info.fltr_rule_id ==
+		    remove_entry->rule_id) {
+			rinfo = list_itr->rule_info;
+			rinfo.sw_act.vsi_handle = remove_entry->vsi_handle;
+			return ice_rem_adv_rule(hw, list_itr->lkups,
+						list_itr->lkups_cnt, &rinfo);
+		}
+	}
+	return ICE_ERR_PARAM;
+}
+
+/**
+ * ice_rem_adv_rule_for_vsi - removes existing advanced switch rules for a
+ *                            given VSI handle
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle for which we are supposed to remove all the rules
+ *
+ * This function removes all the advanced rules for a given VSI. As soon as
+ * removing a rule fails, it returns immediately with the error code;
+ * otherwise it returns ICE_SUCCESS.
+ */
+enum ice_status
+ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_itr;
+	struct ice_vsi_list_map_info *map_info;
+	struct LIST_HEAD_TYPE *list_head;
+	struct ice_adv_rule_info rinfo;
+	struct ice_switch_info *sw;
+	enum ice_status status;
+	u16 vsi_list_id = 0;
+	u8 rid;
+
+	sw = hw->switch_info;
+	for (rid = 0; rid < ICE_MAX_NUM_RECIPES; rid++) {
+		if (!sw->recp_list[rid].recp_created)
+			continue;
+		if (!sw->recp_list[rid].adv_rule)
+			continue;
+		list_head = &sw->recp_list[rid].filt_rules;
+		map_info = NULL;
+		LIST_FOR_EACH_ENTRY(list_itr, list_head,
+				    ice_adv_fltr_mgmt_list_entry, list_entry) {
+			map_info = ice_find_vsi_list_entry(hw, rid, vsi_handle,
+							   &vsi_list_id);
+			if (!map_info)
+				continue;
+			rinfo = list_itr->rule_info;
+			rinfo.sw_act.vsi_handle = vsi_handle;
+			status = ice_rem_adv_rule(hw, list_itr->lkups,
+						  list_itr->lkups_cnt, &rinfo);
+			if (status)
+				return status;
+			map_info = NULL;
+		}
+	}
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_replay_fltr - Replay all the filters stored by a specific list head
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 890df13dd..a6e17e861 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -443,6 +443,15 @@ enum ice_status
 ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
 		 struct ice_rule_query_data *added_entry);
+enum ice_status
+ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle);
+enum ice_status
+ice_rem_adv_rule_by_id(struct ice_hw *hw,
+		       struct ice_rule_query_data *remove_entry);
+enum ice_status
+ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo);
+
 enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
 
 enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 09/49] net/ice/base: add lock around profile map list
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (7 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 08/49] net/ice/base: code for removing advanced rule Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 10/49] net/ice/base: save and post reset replay q bandwidth Leyi Rong
                   ` (41 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add a locking mechanism around the profile map list.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 29 ++++++++++++++++++++++++----
 drivers/net/ice/base/ice_flex_type.h |  5 +++--
 2 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index babad94f8..8f0b513f4 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -3919,15 +3919,16 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 }
 
 /**
- * ice_search_prof_id - Search for a profile tracking ID
+ * ice_search_prof_id_low - Search for a profile tracking ID low level
  * @hw: pointer to the HW struct
  * @blk: hardware block
  * @id: profile tracking ID
  *
- * This will search for a profile tracking ID which was previously added.
+ * This will search for a profile tracking ID which was previously added. This
+ * version assumes that the caller has already acquired the prof map lock.
  */
-struct ice_prof_map *
-ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
+static struct ice_prof_map *
+ice_search_prof_id_low(struct ice_hw *hw, enum ice_block blk, u64 id)
 {
 	struct ice_prof_map *entry = NULL;
 	struct ice_prof_map *map;
@@ -3943,6 +3944,26 @@ ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
 	return entry;
 }
 
+/**
+ * ice_search_prof_id - Search for a profile tracking ID
+ * @hw: pointer to the HW struct
+ * @blk: hardware block
+ * @id: profile tracking ID
+ *
+ * This will search for a profile tracking ID which was previously added.
+ */
+struct ice_prof_map *
+ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
+{
+	struct ice_prof_map *entry;
+
+	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
+	entry = ice_search_prof_id_low(hw, blk, id);
+	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
+
+	return entry;
+}
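The `_low`/wrapper split introduced here is a common pattern for non-recursive locks: internal callers that already hold the lock call the low-level routine directly, while external callers go through a wrapper that acquires and releases it, avoiding self-deadlock and double-locking. A generic sketch with a pthread mutex and hypothetical names (`prof_map`, `search_prof_low`), assuming nothing about the driver's lock implementation:

```c
#include <pthread.h>
#include <stddef.h>

struct prof_map {
	pthread_mutex_t lock;   /* protects ids[] */
	int ids[8];
	size_t count;
};

/* Low-level search: caller must already hold map->lock. */
static int *search_prof_low(struct prof_map *map, int id)
{
	for (size_t i = 0; i < map->count; i++)
		if (map->ids[i] == id)
			return &map->ids[i];
	return NULL;
}

/* Public search: takes the lock around the low-level routine so
 * external callers never race with concurrent list updates. */
static int *search_prof(struct prof_map *map, int id)
{
	int *entry;

	pthread_mutex_lock(&map->lock);
	entry = search_prof_low(map, id);
	pthread_mutex_unlock(&map->lock);
	return entry;
}
```

This mirrors how `ice_search_prof_id` now wraps `ice_search_prof_id_low` with `prof_map_lock` held.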
+
 /**
  * ice_set_prof_context - Set context for a given profile
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index f2a5f27e7..892c94b1f 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -503,10 +503,11 @@ struct ice_es {
 	u16 count;
 	u16 fvw;
 	u16 *ref_count;
-	u8 *written;
-	u8 reverse; /* set to true to reverse FV order */
 	struct LIST_HEAD_TYPE prof_map;
 	struct ice_fv_word *t;
+	struct ice_lock prof_map_lock;	/* protect access to profiles list */
+	u8 *written;
+	u8 reverse; /* set to true to reverse FV order */
 };
 
 /* PTYPE Group management */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 10/49] net/ice/base: save and post reset replay q bandwidth
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (8 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 09/49] net/ice/base: add lock around profile map list Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 11/49] net/ice/base: rollback AVF RSS configurations Leyi Rong
                   ` (40 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Tarun Singh, Paul M Stillwell Jr

Add code to save the queue bandwidth information when it is applied, and
replay it when the queue is re-enabled. The previously saved value is
used for the replay.
Add vsi_handle, tc, and q_handle arguments to ice_cfg_q_bw_lmt and
ice_cfg_q_bw_dflt_lmt.

Signed-off-by: Tarun Singh <tarun.k.singh@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c |  7 ++-
 drivers/net/ice/base/ice_common.h |  4 ++
 drivers/net/ice/base/ice_sched.c  | 91 ++++++++++++++++++++++++++-----
 drivers/net/ice/base/ice_sched.h  |  8 +--
 drivers/net/ice/base/ice_switch.h |  5 --
 drivers/net/ice/base/ice_type.h   |  8 +++
 6 files changed, 98 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index c74e4e1d4..09296ead2 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -3606,7 +3606,7 @@ ice_get_ctx(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
  * @tc: TC number
  * @q_handle: software queue handle
  */
-static struct ice_q_ctx *
+struct ice_q_ctx *
 ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle)
 {
 	struct ice_vsi_ctx *vsi;
@@ -3703,9 +3703,12 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle,
 	node.node_teid = buf->txqs[0].q_teid;
 	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
 	q_ctx->q_handle = q_handle;
+	q_ctx->q_teid = LE32_TO_CPU(node.node_teid);
 
-	/* add a leaf node into schduler tree queue layer */
+	/* add a leaf node into scheduler tree queue layer */
 	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+	if (!status)
+		status = ice_sched_replay_q_bw(pi, q_ctx);
 
 ena_txq_exit:
 	ice_release_lock(&pi->sched_lock);
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 58c66fdc0..aee754b85 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -186,6 +186,10 @@ void ice_sched_replay_agg(struct ice_hw *hw);
 enum ice_status ice_sched_replay_tc_node_bw(struct ice_hw *hw);
 enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status
+ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
+struct ice_q_ctx *
+ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle);
+enum ice_status
 ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
 			 enum ice_rl_type rl_type, u8 bw_alloc);
 enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 8773e62a9..855e3848c 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4326,27 +4326,61 @@ ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer)
 	return ICE_ERR_CFG;
 }
 
+/**
+ * ice_sched_save_q_bw - save queue node's BW information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, u32 bw)
+{
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&q_ctx->bw_t_info, bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&q_ctx->bw_t_info, bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&q_ctx->bw_t_info, bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_sched_set_q_bw_lmt - sets queue BW limit
  * @pi: port information structure
- * @q_id: queue ID (leaf node TEID)
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
  * @rl_type: min, max, or shared
  * @bw: bandwidth in Kbps
  *
  * This function sets BW limit of queue scheduling node.
  */
 static enum ice_status
-ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
-		       enum ice_rl_type rl_type, u32 bw)
+ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		       u16 q_handle, enum ice_rl_type rl_type, u32 bw)
 {
 	enum ice_status status = ICE_ERR_PARAM;
 	struct ice_sched_node *node;
+	struct ice_q_ctx *q_ctx;
 
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
 	ice_acquire_lock(&pi->sched_lock);
-
-	node = ice_sched_find_node_by_teid(pi->root, q_id);
+	q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+	if (!q_ctx)
+		goto exit_q_bw_lmt;
+	node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
 	if (!node) {
-		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_id\n");
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
 		goto exit_q_bw_lmt;
 	}
 
@@ -4374,6 +4408,9 @@ ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
 	else
 		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
 
+	if (!status)
+		status = ice_sched_save_q_bw(q_ctx, rl_type, bw);
+
 exit_q_bw_lmt:
 	ice_release_lock(&pi->sched_lock);
 	return status;
@@ -4382,32 +4419,38 @@ ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
 /**
  * ice_cfg_q_bw_lmt - configure queue BW limit
  * @pi: port information structure
- * @q_id: queue ID (leaf node TEID)
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
  * @rl_type: min, max, or shared
  * @bw: bandwidth in Kbps
  *
  * This function configures BW limit of queue scheduling node.
  */
 enum ice_status
-ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
-		 u32 bw)
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		 u16 q_handle, enum ice_rl_type rl_type, u32 bw)
 {
-	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, bw);
+	return ice_sched_set_q_bw_lmt(pi, vsi_handle, tc, q_handle, rl_type,
+				      bw);
 }
 
 /**
  * ice_cfg_q_bw_dflt_lmt - configure queue BW default limit
  * @pi: port information structure
- * @q_id: queue ID (leaf node TEID)
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
  * @rl_type: min, max, or shared
  *
  * This function configures BW default limit of queue scheduling node.
  */
 enum ice_status
-ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
-		      enum ice_rl_type rl_type)
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      u16 q_handle, enum ice_rl_type rl_type)
 {
-	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, ICE_SCHED_DFLT_BW);
+	return ice_sched_set_q_bw_lmt(pi, vsi_handle, tc, q_handle, rl_type,
+				      ICE_SCHED_DFLT_BW);
 }
 
 /**
@@ -5421,3 +5464,23 @@ ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
 	ice_release_lock(&pi->sched_lock);
 	return status;
 }
+
+/**
+ * ice_sched_replay_q_bw - replay queue type node BW
+ * @pi: port information structure
+ * @q_ctx: queue context structure
+ *
+ * This function replays queue type node bandwidth. This function needs to be
+ * called with scheduler lock held.
+ */
+enum ice_status
+ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx)
+{
+	struct ice_sched_node *q_node;
+
+	/* Following also checks the presence of node in tree */
+	q_node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+	if (!q_node)
+		return ICE_ERR_PARAM;
+	return ice_sched_replay_node_bw(pi->hw, q_node, &q_ctx->bw_t_info);
+}
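The save/replay pair can be modeled generically: each configuration write records the requested limit per rate-limit type in a software queue context, and after a reset the recorded values are pushed back onto the rebuilt scheduler node. A minimal sketch with hypothetical names (`cfg_q_bw`, `replay_q_bw`), assuming three rate-limit types as in the patch and using 0 to mean "unset":

```c
#include <stdint.h>

enum rl_type { RL_MIN, RL_MAX, RL_SHARED, RL_NUM };

struct q_ctx {
	uint32_t bw[RL_NUM];   /* saved limits in Kbps, 0 = unset */
};

struct sched_node {
	uint32_t bw[RL_NUM];   /* what "hardware" currently has */
};

/* Apply a limit to the node and save it for post-reset replay. */
static void cfg_q_bw(struct sched_node *node, struct q_ctx *ctx,
		     enum rl_type type, uint32_t bw_kbps)
{
	node->bw[type] = bw_kbps;
	ctx->bw[type] = bw_kbps;
}

/* Replay saved limits onto a freshly re-added node. */
static void replay_q_bw(struct sched_node *node, const struct q_ctx *ctx)
{
	for (int t = 0; t < RL_NUM; t++)
		if (ctx->bw[t])
			node->bw[t] = ctx->bw[t];
}
```

In the patch, the save half is `ice_sched_save_q_bw` (called from `ice_sched_set_q_bw_lmt`) and the replay half is `ice_sched_replay_q_bw` (called from `ice_ena_vsi_txq` once the leaf node is re-added).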
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 92377a82e..56f9977ab 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -122,11 +122,11 @@ ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
 		    u8 tc_bitmap);
 enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id);
 enum ice_status
-ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
-		 u32 bw);
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		 u16 q_handle, enum ice_rl_type rl_type, u32 bw);
 enum ice_status
-ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
-		      enum ice_rl_type rl_type);
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      u16 q_handle, enum ice_rl_type rl_type);
 enum ice_status
 ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
 		       enum ice_rl_type rl_type, u32 bw);
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index a6e17e861..e3fb0434d 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -21,11 +21,6 @@
 #define ICE_VSI_INVAL_ID 0xFFFF
 #define ICE_INVAL_Q_HANDLE 0xFFFF
 
-/* VSI queue context structure */
-struct ice_q_ctx {
-	u16  q_handle;
-};
-
 /* VSI context structure for add/get/update/free operations */
 struct ice_vsi_ctx {
 	u16 vsi_num;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index d994ea3d2..b229be158 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -569,6 +569,14 @@ struct ice_bw_type_info {
 	u32 shared_bw;
 };
 
+/* VSI queue context structure for given TC */
+struct ice_q_ctx {
+	u16  q_handle;
+	u32  q_teid;
+	/* bw_t_info saves queue BW information */
+	struct ice_bw_type_info bw_t_info;
+};
+
 /* VSI type list entry to locate corresponding VSI/aggregator nodes */
 struct ice_sched_vsi_info {
 	struct ice_sched_node *vsi_node[ICE_MAX_TRAFFIC_CLASS];
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 11/49] net/ice/base: rollback AVF RSS configurations
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (9 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 10/49] net/ice/base: save and post reset replay q bandwidth Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 12/49] net/ice/base: move RSS replay list Leyi Rong
                   ` (39 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

Add support to remove the RSS configurations that were added prior to a
failure case in AVF.

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 128 ++++++++++++++++++++++++++++++++
 1 file changed, 128 insertions(+)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index f1bf5b5e7..d97fe1fc7 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1915,6 +1915,134 @@ ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	return status;
 }
 
+/* Mapping of AVF hash bit fields to an L3-L4 hash combination.
+ * As the ice_flow_avf_hdr_field represent individual bit shifts in a hash,
+ * convert its values to their appropriate flow L3, L4 values.
+ */
+#define ICE_FLOW_AVF_RSS_IPV4_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_OTHER) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_FRAG_IPV4))
+#define ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_TCP_SYN_NO_ACK) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_TCP))
+#define ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_UNICAST_IPV4_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_MULTICAST_IPV4_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_UDP))
+#define ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS \
+	(ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS | ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS | \
+	 ICE_FLOW_AVF_RSS_IPV4_MASKS | BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_SCTP))
+
+#define ICE_FLOW_AVF_RSS_IPV6_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_OTHER) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_FRAG_IPV6))
+#define ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_UNICAST_IPV6_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_MULTICAST_IPV6_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_UDP))
+#define ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_TCP_SYN_NO_ACK) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_TCP))
+#define ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS \
+	(ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS | ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS | \
+	 ICE_FLOW_AVF_RSS_IPV6_MASKS | BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_SCTP))
+
+#define ICE_FLOW_MAX_CFG	10
+
+/**
+ * ice_add_avf_rss_cfg - add an RSS configuration for AVF driver
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @avf_hash: hash bit fields (ICE_AVF_FLOW_FIELD_*) to configure
+ *
+ * This function will take the hash bitmap provided by the AVF driver via a
+ * message, convert it to ICE-compatible values, and configure RSS flow
+ * profiles.
+ */
+enum ice_status
+ice_add_avf_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 avf_hash)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u64 hash_flds;
+
+	if (avf_hash == ICE_AVF_FLOW_FIELD_INVALID ||
+	    !ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Make sure no unsupported bits are specified */
+	if (avf_hash & ~(ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS |
+			 ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS))
+		return ICE_ERR_CFG;
+
+	hash_flds = avf_hash;
+
+	/* Always create an L3 RSS configuration for any L4 RSS configuration */
+	if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS)
+		hash_flds |= ICE_FLOW_AVF_RSS_IPV4_MASKS;
+
+	if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS)
+		hash_flds |= ICE_FLOW_AVF_RSS_IPV6_MASKS;
+
+	/* Create the corresponding RSS configuration for each valid hash bit */
+	while (hash_flds) {
+		u64 rss_hash = ICE_HASH_INVALID;
+
+		if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS) {
+			if (hash_flds & ICE_FLOW_AVF_RSS_IPV4_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV4;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_IPV4_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV4 |
+					ICE_FLOW_HASH_TCP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV4 |
+					ICE_FLOW_HASH_UDP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS;
+			} else if (hash_flds &
+				   BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_SCTP)) {
+				rss_hash = ICE_FLOW_HASH_IPV4 |
+					ICE_FLOW_HASH_SCTP_PORT;
+				hash_flds &=
+					~BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_SCTP);
+			}
+		} else if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS) {
+			if (hash_flds & ICE_FLOW_AVF_RSS_IPV6_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV6;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_IPV6_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV6 |
+					ICE_FLOW_HASH_TCP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV6 |
+					ICE_FLOW_HASH_UDP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS;
+			} else if (hash_flds &
+				   BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_SCTP)) {
+				rss_hash = ICE_FLOW_HASH_IPV6 |
+					ICE_FLOW_HASH_SCTP_PORT;
+				hash_flds &=
+					~BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_SCTP);
+			}
+		}
+
+		if (rss_hash == ICE_HASH_INVALID)
+			return ICE_ERR_OUT_OF_RANGE;
+
+		status = ice_add_rss_cfg(hw, vsi_handle, rss_hash,
+					 ICE_FLOW_SEG_HDR_NONE);
+		if (status)
+			break;
+	}
+
+	return status;
+}
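The conversion loop above follows a general pattern: treat the request as a bitmask, and on each iteration pick the highest-priority group of bits still set, emit one combined configuration for it, and clear those bits, terminating when the mask is empty (or an unknown bit is hit). A simplified standalone model, with hypothetical bit definitions rather than the real AVF field layout:

```c
#include <stddef.h>
#include <stdint.h>

#define H_IP4      (1u << 0)
#define H_IP4_TCP  (1u << 1)
#define H_IP4_UDP  (1u << 2)

#define CFG_IP4      0x10u
#define CFG_IP4_TCP  0x20u
#define CFG_IP4_UDP  0x40u

/* Decompose a hash-field bitmask into combined configurations, one per
 * iteration; returns the number of configurations written to out[]. */
static size_t decompose(uint32_t hash, uint32_t *out, size_t cap)
{
	size_t n = 0;

	/* an L4 hash implies the corresponding L3 hash, as in the patch */
	if (hash & (H_IP4_TCP | H_IP4_UDP))
		hash |= H_IP4;

	while (hash && n < cap) {
		if (hash & H_IP4) {
			out[n++] = CFG_IP4;
			hash &= ~H_IP4;
		} else if (hash & H_IP4_TCP) {
			out[n++] = CFG_IP4 | CFG_IP4_TCP;
			hash &= ~H_IP4_TCP;
		} else if (hash & H_IP4_UDP) {
			out[n++] = CFG_IP4 | CFG_IP4_UDP;
			hash &= ~H_IP4_UDP;
		} else {
			break; /* unknown bit left: stop, akin to
				* ICE_ERR_OUT_OF_RANGE in the patch */
		}
	}
	return n;
}
```

`ice_add_avf_rss_cfg` does the same thing at larger scale, clearing whole mask groups per iteration and calling `ice_add_rss_cfg` for each derived combination.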
+
 /**
  * ice_rem_rss_cfg - remove an existing RSS config with matching hashed fields
  * @hw: pointer to the hardware structure
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 12/49] net/ice/base: move RSS replay list
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (10 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 11/49] net/ice/base: rollback AVF RSS configurations Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 13/49] net/ice/base: cache the data of set PHY cfg AQ in SW Leyi Rong
                   ` (38 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang
  Cc: dev, Leyi Rong, Vignesh Sridhar, Henry Tieman, Paul M Stillwell Jr

1. Move the RSS list pointer and lock from the VSI context to the ice_hw
structure. This ensures that RSS configurations added to the list prior
to a reset are maintained until the PF is unloaded, and that the
configuration list is unaffected by VFRs that would destroy the VSI
context. This allows the replay of RSS entries for a VF VSI, as opposed
to the current method of re-adding default configurations, and also
eliminates the need to re-allocate the RSS list and lock post-VFR.
2. Align RSS flow functions to the new position of the RSS list and lock.
3. Adding bitmap for flow type status.

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Henry Tieman <henry.w.tieman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c   | 100 +++++++++++++++++-------------
 drivers/net/ice/base/ice_flow.h   |   4 +-
 drivers/net/ice/base/ice_switch.c |   6 +-
 drivers/net/ice/base/ice_switch.h |   2 -
 drivers/net/ice/base/ice_type.h   |   3 +
 5 files changed, 63 insertions(+), 52 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index d97fe1fc7..dccd7d3c7 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1605,27 +1605,32 @@ ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u64 hash_fields,
 }
 
 /**
- * ice_rem_all_rss_vsi_ctx - remove all RSS configurations from VSI context
+ * ice_rem_vsi_rss_list - remove VSI from RSS list
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  *
+ * Remove the VSI from all RSS configurations in the list.
  */
-void ice_rem_all_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle)
 {
 	struct ice_rss_cfg *r, *tmp;
 
-	if (!ice_is_vsi_valid(hw, vsi_handle) ||
-	    LIST_EMPTY(&hw->vsi_ctx[vsi_handle]->rss_list_head))
+	if (LIST_EMPTY(&hw->rss_list_head))
 		return;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
-	LIST_FOR_EACH_ENTRY_SAFE(r, tmp,
-				 &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	ice_acquire_lock(&hw->rss_locks);
+	LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head,
 				 ice_rss_cfg, l_entry) {
-		LIST_DEL(&r->l_entry);
-		ice_free(hw, r);
+		if (ice_is_bit_set(r->vsis, vsi_handle)) {
+			ice_clear_bit(vsi_handle, r->vsis);
+
+			if (!ice_is_any_bit_set(r->vsis, ICE_MAX_VSI)) {
+				LIST_DEL(&r->l_entry);
+				ice_free(hw, r);
+			}
+		}
 	}
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 }
 
 /**
@@ -1667,7 +1672,7 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 }
 
 /**
- * ice_rem_rss_cfg_vsi_ctx - remove RSS configuration from VSI context
+ * ice_rem_rss_list - remove RSS configuration from list
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  * @prof: pointer to flow profile
@@ -1675,8 +1680,7 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
  * Assumption: lock has already been acquired for RSS list
  */
 static void
-ice_rem_rss_cfg_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
-			struct ice_flow_prof *prof)
+ice_rem_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 {
 	struct ice_rss_cfg *r, *tmp;
 
@@ -1684,20 +1688,22 @@ ice_rem_rss_cfg_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
 	 * hash configurations associated to the flow profile. If found
 	 * remove from the RSS entry list of the VSI context and delete entry.
 	 */
-	LIST_FOR_EACH_ENTRY_SAFE(r, tmp,
-				 &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head,
 				 ice_rss_cfg, l_entry) {
 		if (r->hashed_flds == prof->segs[prof->segs_cnt - 1].match &&
 		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs) {
-			LIST_DEL(&r->l_entry);
-			ice_free(hw, r);
+			ice_clear_bit(vsi_handle, r->vsis);
+			if (!ice_is_any_bit_set(r->vsis, ICE_MAX_VSI)) {
+				LIST_DEL(&r->l_entry);
+				ice_free(hw, r);
+			}
 			return;
 		}
 	}
 }
 
 /**
- * ice_add_rss_vsi_ctx - add RSS configuration to VSI context
+ * ice_add_rss_list - add RSS configuration to list
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  * @prof: pointer to flow profile
@@ -1705,16 +1711,17 @@ ice_rem_rss_cfg_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
  * Assumption: lock has already been acquired for RSS list
  */
 static enum ice_status
-ice_add_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
-		    struct ice_flow_prof *prof)
+ice_add_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 {
 	struct ice_rss_cfg *r, *rss_cfg;
 
-	LIST_FOR_EACH_ENTRY(r, &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry)
 		if (r->hashed_flds == prof->segs[prof->segs_cnt - 1].match &&
-		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs)
+		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs) {
+			ice_set_bit(vsi_handle, r->vsis);
 			return ICE_SUCCESS;
+		}
 
 	rss_cfg = (struct ice_rss_cfg *)ice_malloc(hw, sizeof(*rss_cfg));
 	if (!rss_cfg)
@@ -1722,8 +1729,9 @@ ice_add_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
 
 	rss_cfg->hashed_flds = prof->segs[prof->segs_cnt - 1].match;
 	rss_cfg->packet_hdr = prof->segs[prof->segs_cnt - 1].hdrs;
-	LIST_ADD_TAIL(&rss_cfg->l_entry,
-		      &hw->vsi_ctx[vsi_handle]->rss_list_head);
+	ice_set_bit(vsi_handle, rss_cfg->vsis);
+
+	LIST_ADD_TAIL(&rss_cfg->l_entry, &hw->rss_list_head);
 
 	return ICE_SUCCESS;
 }
@@ -1785,7 +1793,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	if (prof) {
 		status = ice_flow_disassoc_prof(hw, blk, prof, vsi_handle);
 		if (!status)
-			ice_rem_rss_cfg_vsi_ctx(hw, vsi_handle, prof);
+			ice_rem_rss_list(hw, vsi_handle, prof);
 		else
 			goto exit;
 
@@ -1806,7 +1814,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	if (prof) {
 		status = ice_flow_assoc_prof(hw, blk, prof, vsi_handle);
 		if (!status)
-			status = ice_add_rss_vsi_ctx(hw, vsi_handle, prof);
+			status = ice_add_rss_list(hw, vsi_handle, prof);
 		goto exit;
 	}
 
@@ -1828,7 +1836,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 		goto exit;
 	}
 
-	status = ice_add_rss_vsi_ctx(hw, vsi_handle, prof);
+	status = ice_add_rss_list(hw, vsi_handle, prof);
 
 exit:
 	ice_free(hw, segs);
@@ -1856,9 +1864,9 @@ ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	    !ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_acquire_lock(&hw->rss_locks);
 	status = ice_add_rss_cfg_sync(hw, vsi_handle, hashed_flds, addl_hdrs);
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return status;
 }
@@ -1905,7 +1913,7 @@ ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	/* Remove RSS configuration from VSI context before deleting
 	 * the flow profile.
 	 */
-	ice_rem_rss_cfg_vsi_ctx(hw, vsi_handle, prof);
+	ice_rem_rss_list(hw, vsi_handle, prof);
 
 	if (!ice_is_any_bit_set(prof->vsis, ICE_MAX_VSI))
 		status = ice_flow_rem_prof_sync(hw, blk, prof);
@@ -2066,15 +2074,15 @@ ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	    !ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_acquire_lock(&hw->rss_locks);
 	status = ice_rem_rss_cfg_sync(hw, vsi_handle, hashed_flds, addl_hdrs);
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return status;
 }
 
 /**
- * ice_replay_rss_cfg - remove RSS configurations associated with VSI
+ * ice_replay_rss_cfg - replay RSS configurations associated with VSI
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  */
@@ -2086,15 +2094,18 @@ enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 	if (!ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
-	LIST_FOR_EACH_ENTRY(r, &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	ice_acquire_lock(&hw->rss_locks);
+	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry) {
-		status = ice_add_rss_cfg_sync(hw, vsi_handle, r->hashed_flds,
-					      r->packet_hdr);
-		if (status)
-			break;
+		if (ice_is_bit_set(r->vsis, vsi_handle)) {
+			status = ice_add_rss_cfg_sync(hw, vsi_handle,
+						      r->hashed_flds,
+						      r->packet_hdr);
+			if (status)
+				break;
+		}
 	}
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return status;
 }
@@ -2116,14 +2127,15 @@ u64 ice_get_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u32 hdrs)
 	if (hdrs == ICE_FLOW_SEG_HDR_NONE || !ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_HASH_INVALID;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
-	LIST_FOR_EACH_ENTRY(r, &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	ice_acquire_lock(&hw->rss_locks);
+	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry)
-		if (r->packet_hdr == hdrs) {
+		if (ice_is_bit_set(r->vsis, vsi_handle) &&
+		    r->packet_hdr == hdrs) {
 			rss_cfg = r;
 			break;
 		}
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return rss_cfg ? rss_cfg->hashed_flds : ICE_HASH_INVALID;
 }
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index f0c74a348..4fa13064e 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -270,6 +270,8 @@ struct ice_flow_prof {
 
 struct ice_rss_cfg {
 	struct LIST_ENTRY_TYPE l_entry;
+	/* bitmap of VSIs added to the RSS entry */
+	ice_declare_bitmap(vsis, ICE_MAX_VSI);
 	u64 hashed_flds;
 	u32 packet_hdr;
 };
@@ -338,7 +340,7 @@ ice_flow_set_fld_prefix(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
 void
 ice_flow_add_fld_raw(struct ice_flow_seg_info *seg, u16 off, u8 len,
 		     u16 val_loc, u16 mask_loc);
-void ice_rem_all_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
+void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status
 ice_add_avf_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds);
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 3719ac4bb..7cccaf4d3 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -686,10 +686,7 @@ static void ice_clear_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
 
 	vsi = ice_get_vsi_ctx(hw, vsi_handle);
 	if (vsi) {
-		if (!LIST_EMPTY(&vsi->rss_list_head))
-			ice_rem_all_rss_vsi_ctx(hw, vsi_handle);
 		ice_clear_vsi_q_ctx(hw, vsi_handle);
-		ice_destroy_lock(&vsi->rss_locks);
 		ice_free(hw, vsi);
 		hw->vsi_ctx[vsi_handle] = NULL;
 	}
@@ -740,8 +737,7 @@ ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
 			return ICE_ERR_NO_MEMORY;
 		}
 		*tmp_vsi_ctx = *vsi_ctx;
-		ice_init_lock(&tmp_vsi_ctx->rss_locks);
-		INIT_LIST_HEAD(&tmp_vsi_ctx->rss_list_head);
+
 		ice_save_vsi_ctx(hw, vsi_handle, tmp_vsi_ctx);
 	} else {
 		/* update with new HW VSI num */
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index e3fb0434d..2f140a86d 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -32,8 +32,6 @@ struct ice_vsi_ctx {
 	u8 alloc_from_pool;
 	u16 num_lan_q_entries[ICE_MAX_TRAFFIC_CLASS];
 	struct ice_q_ctx *lan_q_ctx[ICE_MAX_TRAFFIC_CLASS];
-	struct ice_lock rss_locks;	/* protect rss config in VSI ctx */
-	struct LIST_HEAD_TYPE rss_list_head;
 };
 
 /* This is to be used by add/update mirror rule Admin Queue command */
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index b229be158..45b0b3c05 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -817,6 +817,9 @@ struct ice_hw {
 	u16 fdir_fltr_cnt[ICE_FLTR_PTYPE_MAX];
 
 	struct ice_fd_hw_prof **fdir_prof;
+	ice_declare_bitmap(fdir_perfect_fltr, ICE_FLTR_PTYPE_MAX);
+	struct ice_lock rss_locks;	/* protect RSS configuration */
+	struct LIST_HEAD_TYPE rss_list_head;
 };
 
 /* Statistics collected by each port, VSI, VEB, and S-channel */
-- 
2.17.1
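The core of the change above is reference counting via a per-entry VSI bitmap: each RSS config now lives in a single hw-level list and records which VSIs use it, and an entry is freed only once no VSI bit remains set. Below is a minimal standalone sketch of that pattern, with hypothetical names: a plain singly-linked list stands in for the driver's LIST_* macros, and a u64 stands in for its ice_declare_bitmap().

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct rss_cfg {
	struct rss_cfg *next;
	uint64_t vsis;		/* bitmap of VSIs referencing this entry */
	uint64_t hashed_flds;
	uint32_t packet_hdr;
};

static struct rss_cfg *rss_list;	/* single hw-level list */

/* Add a config for a VSI: set the bit on a matching entry, else
 * allocate a new entry (mirrors what ice_add_rss_list() does). */
static void add_rss(uint16_t vsi, uint64_t flds, uint32_t hdr)
{
	struct rss_cfg *r;

	for (r = rss_list; r; r = r->next)
		if (r->hashed_flds == flds && r->packet_hdr == hdr) {
			r->vsis |= 1ULL << vsi;
			return;
		}

	r = calloc(1, sizeof(*r));
	if (!r)
		return;
	r->hashed_flds = flds;
	r->packet_hdr = hdr;
	r->vsis = 1ULL << vsi;
	r->next = rss_list;
	rss_list = r;
}

/* Drop a VSI from every entry and free entries that no VSI references
 * any more (mirrors what ice_rem_vsi_rss_list() does). */
static void rem_vsi(uint16_t vsi)
{
	struct rss_cfg **pp = &rss_list;

	while (*pp) {
		struct rss_cfg *r = *pp;

		r->vsis &= ~(1ULL << vsi);
		if (!r->vsis) {
			*pp = r->next;
			free(r);
		} else {
			pp = &r->next;
		}
	}
}
```

With VSIs 1 and 2 sharing one config, removing VSI 1 keeps the entry alive while removing VSI 2 then frees it, which is the sharing behavior the per-VSI lists could not express.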


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 13/49] net/ice/base: cache the data of set PHY cfg AQ in SW
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (11 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 12/49] net/ice/base: move RSS replay list Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 14/49] net/ice/base: refactor HW table init function Leyi Rong
                   ` (37 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

After a transition from cable-unplug to cable-plug events, FW clears
the set-phy-cfg data previously sent by the user, so we need to cache
this info in SW:
1. The data submitted when set-phy-cfg is called. This info will be
used later to check whether FW has cleared the PHY info requested by
the user.
2. The FC, FEC and link speed requested by the user. This info will be
used later, by the device driver, to construct the new input data for
the set-phy-cfg AQ command.
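The caching step added to ice_aq_set_phy_cfg() boils down to: send the AQ command, and only on success copy the submitted config into the port's SW cache. A simplified sketch of that pattern follows; the types are hypothetical stand-ins with only the fields needed to show the idea, and aq_send() replaces the real AQ transport:

```c
#include <assert.h>

/* Hypothetical, reduced stand-ins for the driver's structures. */
struct phy_cfg {
	unsigned long long phy_type_low;
	unsigned char caps;
};

struct port_info {
	struct phy_cfg curr_user_phy_cfg;	/* SW cache of last accepted cfg */
};

/* Pretend AQ transport: returns 0 on success. */
static int aq_send(const struct phy_cfg *cfg)
{
	(void)cfg;
	return 0;
}

/* Cache the config only when FW accepts it, so the driver can later
 * compare the cache against what FW reports and detect that FW
 * dropped the user's settings after a cable replug. */
static int set_phy_cfg(struct port_info *pi, const struct phy_cfg *cfg)
{
	int status = aq_send(cfg);

	if (!status)
		pi->curr_user_phy_cfg = *cfg;
	return status;
}
```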

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 119 +++++++++++++++++++++++-------
 drivers/net/ice/base/ice_common.h |   2 +-
 drivers/net/ice/base/ice_type.h   |  31 ++++++--
 drivers/net/ice/ice_ethdev.c      |   2 +-
 4 files changed, 122 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 09296ead2..a0ab25aef 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -270,21 +270,23 @@ enum ice_status
 ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
 		     struct ice_link_status *link, struct ice_sq_cd *cd)
 {
-	struct ice_link_status *hw_link_info_old, *hw_link_info;
 	struct ice_aqc_get_link_status_data link_data = { 0 };
 	struct ice_aqc_get_link_status *resp;
+	struct ice_link_status *li_old, *li;
 	enum ice_media_type *hw_media_type;
 	struct ice_fc_info *hw_fc_info;
 	bool tx_pause, rx_pause;
 	struct ice_aq_desc desc;
 	enum ice_status status;
+	struct ice_hw *hw;
 	u16 cmd_flags;
 
 	if (!pi)
 		return ICE_ERR_PARAM;
-	hw_link_info_old = &pi->phy.link_info_old;
+	hw = pi->hw;
+	li_old = &pi->phy.link_info_old;
 	hw_media_type = &pi->phy.media_type;
-	hw_link_info = &pi->phy.link_info;
+	li = &pi->phy.link_info;
 	hw_fc_info = &pi->fc;
 
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
@@ -293,27 +295,27 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
 	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
 	resp->lport_num = pi->lport;
 
-	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
-				 cd);
+	status = ice_aq_send_cmd(hw, &desc, &link_data, sizeof(link_data), cd);
 
 	if (status != ICE_SUCCESS)
 		return status;
 
 	/* save off old link status information */
-	*hw_link_info_old = *hw_link_info;
+	*li_old = *li;
 
 	/* update current link status information */
-	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
-	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
-	hw_link_info->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
+	li->link_speed = LE16_TO_CPU(link_data.link_speed);
+	li->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
+	li->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
 	*hw_media_type = ice_get_media_type(pi);
-	hw_link_info->link_info = link_data.link_info;
-	hw_link_info->an_info = link_data.an_info;
-	hw_link_info->ext_info = link_data.ext_info;
-	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
-	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
-	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
-	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
+	li->link_info = link_data.link_info;
+	li->an_info = link_data.an_info;
+	li->ext_info = link_data.ext_info;
+	li->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
+	li->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+	li->topo_media_conflict = link_data.topo_media_conflict;
+	li->pacing = link_data.cfg & (ICE_AQ_CFG_PACING_M |
+				      ICE_AQ_CFG_PACING_TYPE_M);
 
 	/* update fc info */
 	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
@@ -327,13 +329,24 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
 	else
 		hw_fc_info->current_mode = ICE_FC_NONE;
 
-	hw_link_info->lse_ena =
-		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
-
+	li->lse_ena = !!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
+
+	ice_debug(hw, ICE_DBG_LINK, "link_speed = 0x%x\n", li->link_speed);
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_low = 0x%llx\n",
+		  (unsigned long long)li->phy_type_low);
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_high = 0x%llx\n",
+		  (unsigned long long)li->phy_type_high);
+	ice_debug(hw, ICE_DBG_LINK, "media_type = 0x%x\n", *hw_media_type);
+	ice_debug(hw, ICE_DBG_LINK, "link_info = 0x%x\n", li->link_info);
+	ice_debug(hw, ICE_DBG_LINK, "an_info = 0x%x\n", li->an_info);
+	ice_debug(hw, ICE_DBG_LINK, "ext_info = 0x%x\n", li->ext_info);
+	ice_debug(hw, ICE_DBG_LINK, "lse_ena = 0x%x\n", li->lse_ena);
+	ice_debug(hw, ICE_DBG_LINK, "max_frame = 0x%x\n", li->max_frame_size);
+	ice_debug(hw, ICE_DBG_LINK, "pacing = 0x%x\n", li->pacing);
 
 	/* save link status information */
 	if (link)
-		*link = *hw_link_info;
+		*link = *li;
 
 	/* flag cleared so calling functions don't call AQ again */
 	pi->phy.get_link_info = false;
@@ -2412,7 +2425,7 @@ ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
 /**
  * ice_aq_set_phy_cfg
  * @hw: pointer to the HW struct
- * @lport: logical port number
+ * @pi: port info structure of the interested logical port
  * @cfg: structure with PHY configuration data to be set
  * @cd: pointer to command details structure or NULL
  *
@@ -2422,10 +2435,11 @@ ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
  * parameters. This status will be indicated by the command response (0x0601).
  */
 enum ice_status
-ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
 		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
 {
 	struct ice_aq_desc desc;
+	enum ice_status status;
 
 	if (!cfg)
 		return ICE_ERR_PARAM;
@@ -2440,10 +2454,26 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
 	}
 
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
-	desc.params.set_phy.lport_num = lport;
+	desc.params.set_phy.lport_num = pi->lport;
 	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
 
-	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_low = 0x%llx\n",
+		  (unsigned long long)LE64_TO_CPU(cfg->phy_type_low));
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_high = 0x%llx\n",
+		  (unsigned long long)LE64_TO_CPU(cfg->phy_type_high));
+	ice_debug(hw, ICE_DBG_LINK, "caps = 0x%x\n", cfg->caps);
+	ice_debug(hw, ICE_DBG_LINK, "low_power_ctrl = 0x%x\n",
+		  cfg->low_power_ctrl);
+	ice_debug(hw, ICE_DBG_LINK, "eee_cap = 0x%x\n", cfg->eee_cap);
+	ice_debug(hw, ICE_DBG_LINK, "eeer_value = 0x%x\n", cfg->eeer_value);
+	ice_debug(hw, ICE_DBG_LINK, "link_fec_opt = 0x%x\n", cfg->link_fec_opt);
+
+	status = ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+
+	if (!status)
+		pi->phy.curr_user_phy_cfg = *cfg;
+
+	return status;
 }
 
 /**
@@ -2487,6 +2517,38 @@ enum ice_status ice_update_link_info(struct ice_port_info *pi)
 	return status;
 }
 
+/**
+ * ice_cache_phy_user_req
+ * @pi: port information structure
+ * @cache_data: PHY logging data
+ * @cache_mode: PHY logging mode
+ *
+ * Log the user request on (FC, FEC, SPEED) for later use.
+ */
+static void
+ice_cache_phy_user_req(struct ice_port_info *pi,
+		       struct ice_phy_cache_mode_data cache_data,
+		       enum ice_phy_cache_mode cache_mode)
+{
+	if (!pi)
+		return;
+
+	switch (cache_mode) {
+	case ICE_FC_MODE:
+		pi->phy.curr_user_fc_req = cache_data.data.curr_user_fc_req;
+		break;
+	case ICE_SPEED_MODE:
+		pi->phy.curr_user_speed_req =
+			cache_data.data.curr_user_speed_req;
+		break;
+	case ICE_FEC_MODE:
+		pi->phy.curr_user_fec_req = cache_data.data.curr_user_fec_req;
+		break;
+	default:
+		break;
+	}
+}
+
 /**
  * ice_set_fc
  * @pi: port information structure
@@ -2499,6 +2561,7 @@ enum ice_status
 ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 {
 	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+	struct ice_phy_cache_mode_data cache_data;
 	struct ice_aqc_get_phy_caps_data *pcaps;
 	enum ice_status status;
 	u8 pause_mask = 0x0;
@@ -2509,6 +2572,10 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 	hw = pi->hw;
 	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
 
+	/* Cache user FC request */
+	cache_data.data.curr_user_fc_req = pi->fc.req_mode;
+	ice_cache_phy_user_req(pi, cache_data, ICE_FC_MODE);
+
 	switch (pi->fc.req_mode) {
 	case ICE_FC_FULL:
 		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
@@ -2540,8 +2607,10 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 	/* clear the old pause settings */
 	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
 				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
+
 	/* set the new capabilities */
 	cfg.caps |= pause_mask;
+
 	/* If the capabilities have changed, then set the new config */
 	if (cfg.caps != pcaps->caps) {
 		int retry_count, retry_max = 10;
@@ -2557,7 +2626,7 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 		cfg.eeer_value = pcaps->eeer_value;
 		cfg.link_fec_opt = pcaps->link_fec_options;
 
-		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+		status = ice_aq_set_phy_cfg(hw, pi, &cfg, NULL);
 		if (status) {
 			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
 			goto out;
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index aee754b85..cccb5f009 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -134,7 +134,7 @@ ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
 
 enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
 enum ice_status
-ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
 		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
 enum ice_status
 ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 45b0b3c05..5da267f1b 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -148,6 +148,12 @@ enum ice_fc_mode {
 	ICE_FC_DFLT
 };
 
+enum ice_phy_cache_mode {
+	ICE_FC_MODE = 0,
+	ICE_SPEED_MODE,
+	ICE_FEC_MODE
+};
+
 enum ice_fec_mode {
 	ICE_FEC_NONE = 0,
 	ICE_FEC_RS,
@@ -155,6 +161,14 @@ enum ice_fec_mode {
 	ICE_FEC_AUTO
 };
 
+struct ice_phy_cache_mode_data {
+	union {
+		enum ice_fec_mode curr_user_fec_req;
+		enum ice_fc_mode curr_user_fc_req;
+		u16 curr_user_speed_req;
+	} data;
+};
+
 enum ice_set_fc_aq_failures {
 	ICE_SET_FC_AQ_FAIL_NONE = 0,
 	ICE_SET_FC_AQ_FAIL_GET,
@@ -232,6 +246,13 @@ struct ice_phy_info {
 	u64 phy_type_high;
 	enum ice_media_type media_type;
 	u8 get_link_info;
+	/* Please refer to struct ice_aqc_get_link_status_data to get
+	 * detail of enable bit in curr_user_speed_req
+	 */
+	u16 curr_user_speed_req;
+	enum ice_fec_mode curr_user_fec_req;
+	enum ice_fc_mode curr_user_fc_req;
+	struct ice_aqc_set_phy_cfg_data curr_user_phy_cfg;
 };
 
 #define ICE_MAX_NUM_MIRROR_RULES	64
@@ -648,6 +669,8 @@ struct ice_port_info {
 	u8 port_state;
 #define ICE_SCHED_PORT_STATE_INIT	0x0
 #define ICE_SCHED_PORT_STATE_READY	0x1
+	u8 lport;
+#define ICE_LPORT_MASK			0xff
 	u16 dflt_tx_vsi_rule_id;
 	u16 dflt_tx_vsi_num;
 	u16 dflt_rx_vsi_rule_id;
@@ -663,11 +686,9 @@ struct ice_port_info {
 	struct ice_dcbx_cfg remote_dcbx_cfg;	/* Peer Cfg */
 	struct ice_dcbx_cfg desired_dcbx_cfg;	/* CEE Desired Cfg */
 	/* LLDP/DCBX Status */
-	u8 dcbx_status;
-	u8 is_sw_lldp;
-	u8 lport;
-#define ICE_LPORT_MASK		0xff
-	u8 is_vf;
+	u8 dcbx_status:3;		/* see ICE_DCBX_STATUS_DIS */
+	u8 is_sw_lldp:1;
+	u8 is_vf:1;
 };
 
 struct ice_switch_info {
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index bdbceb411..962d506a1 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2295,7 +2295,7 @@ ice_force_phys_link_state(struct ice_hw *hw, bool link_up)
 	else
 		cfg.caps &= ~ICE_AQ_PHY_ENA_LINK;
 
-	status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+	status = ice_aq_set_phy_cfg(hw, pi, &cfg, NULL);
 
 out:
 	ice_free(hw, pcaps);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 14/49] net/ice/base: refactor HW table init function
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (12 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 13/49] net/ice/base: cache the data of set PHY cfg AQ in SW Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-05 10:35   ` Maxime Coquelin
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 15/49] net/ice/base: add compatibility check for package version Leyi Rong
                   ` (36 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

1. Separate the calls that initialize and allocate the HW XLT tables
from the call that fills them. This allows ice_init_hw_tbls to be
called prior to package download so that all HW structures are
correctly initialized, avoiding invalid memory references when the
driver is unloaded after a failed package download.
2. Fill the HW tables with package content after a successful package
download.
3. Free HW table and flow profile allocations when unloading the
driver.
4. Add a flag to the block structure indicating whether the lists in
the block are initialized. This avoids NULL references when releasing
flow profiles that may have been freed by earlier calls to free the
tables.
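The resulting lifecycle, allocate and flag the tables before package download, fill them only after a successful download, and guard teardown with the init flag, can be sketched as below. The types are hypothetical and heavily reduced, not the driver's real structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Stripped-down block state: one table plus the init flag the patch
 * adds, so teardown after a failed download stays safe. */
struct blk {
	bool is_list_init;	/* table/list state valid only when set */
	int *tbl;
	int filled;
};

/* Step 1: allocate before package download. */
static int init_tbls(struct blk *b)
{
	b->tbl = calloc(16, sizeof(*b->tbl));
	if (!b->tbl)
		return -1;
	b->is_list_init = true;
	return 0;
}

/* Step 2: fill only after a successful download. */
static void fill_tbls(struct blk *b)
{
	int i;

	for (i = 0; i < 16; i++)
		b->tbl[i] = i;	/* stands in for copying package sections */
	b->filled = 1;
}

/* Step 3/4: free on unload; the flag makes repeated or early calls
 * harmless (no double free, no NULL list references). */
static void free_tbls(struct blk *b)
{
	if (b->is_list_init) {
		free(b->tbl);
		b->tbl = NULL;
		b->is_list_init = false;
	}
}
```

If the download fails, the driver can call free_tbls() unconditionally: the flag check turns a second (or premature) teardown into a no-op.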

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c    |   6 +-
 drivers/net/ice/base/ice_flex_pipe.c | 284 ++++++++++++++-------------
 drivers/net/ice/base/ice_flex_pipe.h |   1 +
 drivers/net/ice/base/ice_flex_type.h |   1 +
 4 files changed, 151 insertions(+), 141 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index a0ab25aef..62c7fad0d 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -916,12 +916,13 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 
 	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
 	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
-
 	/* Obtain counter base index which would be used by flow director */
 	status = ice_alloc_fd_res_cntr(hw, &hw->fd_ctr_base);
 	if (status)
 		goto err_unroll_fltr_mgmt_struct;
-
+	status = ice_init_hw_tbls(hw);
+	if (status)
+		goto err_unroll_fltr_mgmt_struct;
 	return ICE_SUCCESS;
 
 err_unroll_fltr_mgmt_struct:
@@ -952,6 +953,7 @@ void ice_deinit_hw(struct ice_hw *hw)
 	ice_sched_cleanup_all(hw);
 	ice_sched_clear_agg(hw);
 	ice_free_seg(hw);
+	ice_free_hw_tbls(hw);
 
 	if (hw->port_info) {
 		ice_free(hw, hw->port_info);
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 8f0b513f4..93e056853 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1375,10 +1375,12 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 
 	if (!status) {
 		hw->seg = seg;
-		/* on successful package download, update other required
-		 * registers to support the package
+		/* on successful package download update other required
+		 * registers to support the package and fill HW tables
+		 * with package content.
 		 */
 		ice_init_pkg_regs(hw);
+		ice_fill_blk_tbls(hw);
 	} else {
 		ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n",
 			  status);
@@ -2755,6 +2757,65 @@ static const u32 ice_blk_sids[ICE_BLK_COUNT][ICE_SID_OFF_COUNT] = {
 	}
 };
 
+/**
+ * ice_init_sw_xlt1_db - init software XLT1 database from HW tables
+ * @hw: pointer to the hardware structure
+ * @blk: the HW block to initialize
+ */
+static
+void ice_init_sw_xlt1_db(struct ice_hw *hw, enum ice_block blk)
+{
+	u16 pt;
+
+	for (pt = 0; pt < hw->blk[blk].xlt1.count; pt++) {
+		u8 ptg;
+
+		ptg = hw->blk[blk].xlt1.t[pt];
+		if (ptg != ICE_DEFAULT_PTG) {
+			ice_ptg_alloc_val(hw, blk, ptg);
+			ice_ptg_add_mv_ptype(hw, blk, pt, ptg);
+		}
+	}
+}
+
+/**
+ * ice_init_sw_xlt2_db - init software XLT2 database from HW tables
+ * @hw: pointer to the hardware structure
+ * @blk: the HW block to initialize
+ */
+static void ice_init_sw_xlt2_db(struct ice_hw *hw, enum ice_block blk)
+{
+	u16 vsi;
+
+	for (vsi = 0; vsi < hw->blk[blk].xlt2.count; vsi++) {
+		u16 vsig;
+
+		vsig = hw->blk[blk].xlt2.t[vsi];
+		if (vsig) {
+			ice_vsig_alloc_val(hw, blk, vsig);
+			ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
+			/* no changes at this time, since this has been
+			 * initialized from the original package
+			 */
+			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
+		}
+	}
+}
+
+/**
+ * ice_init_sw_db - init software database from HW tables
+ * @hw: pointer to the hardware structure
+ */
+static void ice_init_sw_db(struct ice_hw *hw)
+{
+	u16 i;
+
+	for (i = 0; i < ICE_BLK_COUNT; i++) {
+		ice_init_sw_xlt1_db(hw, (enum ice_block)i);
+		ice_init_sw_xlt2_db(hw, (enum ice_block)i);
+	}
+}
+
 /**
  * ice_fill_tbl - Reads content of a single table type into database
  * @hw: pointer to the hardware structure
@@ -2853,12 +2914,12 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
 		case ICE_SID_FLD_VEC_PE:
 			es = (struct ice_sw_fv_section *)sect;
 			src = (u8 *)es->fv;
-			sect_len = LE16_TO_CPU(es->count) *
-				hw->blk[block_id].es.fvw *
+			sect_len = (u32)(LE16_TO_CPU(es->count) *
+					 hw->blk[block_id].es.fvw) *
 				sizeof(*hw->blk[block_id].es.t);
 			dst = (u8 *)hw->blk[block_id].es.t;
-			dst_len = hw->blk[block_id].es.count *
-				hw->blk[block_id].es.fvw *
+			dst_len = (u32)(hw->blk[block_id].es.count *
+					hw->blk[block_id].es.fvw) *
 				sizeof(*hw->blk[block_id].es.t);
 			break;
 		default:
@@ -2886,75 +2947,61 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
 }
 
 /**
- * ice_fill_blk_tbls - Read package content for tables of a block
+ * ice_fill_blk_tbls - Read package content for tables
  * @hw: pointer to the hardware structure
- * @block_id: The block ID which contains the tables to be copied
  *
  * Reads the current package contents and populates the driver
- * database with the data it contains to allow for advanced driver
- * features.
- */
-static void ice_fill_blk_tbls(struct ice_hw *hw, enum ice_block block_id)
-{
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].xlt1.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].xlt2.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].prof.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].prof_redir.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].es.sid);
-}
-
-/**
- * ice_free_flow_profs - free flow profile entries
- * @hw: pointer to the hardware structure
+ * database with the data iteratively for all advanced feature
+ * blocks. Assume that the HW tables have been allocated.
  */
-static void ice_free_flow_profs(struct ice_hw *hw)
+void ice_fill_blk_tbls(struct ice_hw *hw)
 {
 	u8 i;
 
 	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		struct ice_flow_prof *p, *tmp;
-
-		if (!&hw->fl_profs[i])
-			continue;
-
-		/* This call is being made as part of resource deallocation
-		 * during unload. Lock acquire and release will not be
-		 * necessary here.
-		 */
-		LIST_FOR_EACH_ENTRY_SAFE(p, tmp, &hw->fl_profs[i],
-					 ice_flow_prof, l_entry) {
-			struct ice_flow_entry *e, *t;
-
-			LIST_FOR_EACH_ENTRY_SAFE(e, t, &p->entries,
-						 ice_flow_entry, l_entry)
-				ice_flow_rem_entry(hw, ICE_FLOW_ENTRY_HNDL(e));
-
-			LIST_DEL(&p->l_entry);
-			if (p->acts)
-				ice_free(hw, p->acts);
-			ice_free(hw, p);
-		}
+		enum ice_block blk_id = (enum ice_block)i;
 
-		ice_destroy_lock(&hw->fl_profs_locks[i]);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].xlt1.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].xlt2.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].prof.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].prof_redir.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].es.sid);
 	}
+
+	ice_init_sw_db(hw);
 }
 
 /**
- * ice_free_prof_map - frees the profile map
+ * ice_free_flow_profs - free flow profile entries
  * @hw: pointer to the hardware structure
- * @blk: the HW block which contains the profile map to be freed
+ * @blk_idx: HW block index
  */
-static void ice_free_prof_map(struct ice_hw *hw, enum ice_block blk)
+static void ice_free_flow_profs(struct ice_hw *hw, u8 blk_idx)
 {
-	struct ice_prof_map *del, *tmp;
+	struct ice_flow_prof *p, *tmp;
 
-	if (LIST_EMPTY(&hw->blk[blk].es.prof_map))
-		return;
+	/* This call is being made as part of resource deallocation
+	 * during unload. Lock acquire and release will not be
+	 * necessary here.
+	 */
+	LIST_FOR_EACH_ENTRY_SAFE(p, tmp, &hw->fl_profs[blk_idx],
+				 ice_flow_prof, l_entry) {
+		struct ice_flow_entry *e, *t;
+
+		LIST_FOR_EACH_ENTRY_SAFE(e, t, &p->entries,
+					 ice_flow_entry, l_entry)
+			ice_flow_rem_entry(hw, ICE_FLOW_ENTRY_HNDL(e));
 
-	LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &hw->blk[blk].es.prof_map,
-				 ice_prof_map, list) {
-		ice_rem_prof(hw, blk, del->profile_cookie);
+		LIST_DEL(&p->l_entry);
+		if (p->acts)
+			ice_free(hw, p->acts);
+		ice_free(hw, p);
 	}
+
+	/* if driver is in reset and tables are being cleared
+	 * re-initialize the flow profile list heads
+	 */
+	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
 /**
@@ -2980,10 +3027,25 @@ static void ice_free_vsig_tbl(struct ice_hw *hw, enum ice_block blk)
  */
 void ice_free_hw_tbls(struct ice_hw *hw)
 {
+	struct ice_rss_cfg *r, *rt;
 	u8 i;
 
 	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		ice_free_prof_map(hw, (enum ice_block)i);
+		if (hw->blk[i].is_list_init) {
+			struct ice_es *es = &hw->blk[i].es;
+			struct ice_prof_map *del, *tmp;
+
+			LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &es->prof_map,
+						 ice_prof_map, list) {
+				LIST_DEL(&del->list);
+				ice_free(hw, del);
+			}
+			ice_destroy_lock(&es->prof_map_lock);
+
+			ice_free_flow_profs(hw, i);
+			ice_destroy_lock(&hw->fl_profs_locks[i]);
+			hw->blk[i].is_list_init = false;
+		}
 		ice_free_vsig_tbl(hw, (enum ice_block)i);
 		ice_free(hw, hw->blk[i].xlt1.ptypes);
 		ice_free(hw, hw->blk[i].xlt1.ptg_tbl);
@@ -2998,84 +3060,24 @@ void ice_free_hw_tbls(struct ice_hw *hw)
 		ice_free(hw, hw->blk[i].es.written);
 	}
 
+	LIST_FOR_EACH_ENTRY_SAFE(r, rt, &hw->rss_list_head,
+				 ice_rss_cfg, l_entry) {
+		LIST_DEL(&r->l_entry);
+		ice_free(hw, r);
+	}
+	ice_destroy_lock(&hw->rss_locks);
 	ice_memset(hw->blk, 0, sizeof(hw->blk), ICE_NONDMA_MEM);
-
-	ice_free_flow_profs(hw);
 }
 
 /**
  * ice_init_flow_profs - init flow profile locks and list heads
  * @hw: pointer to the hardware structure
+ * @blk_idx: HW block index
  */
-static void ice_init_flow_profs(struct ice_hw *hw)
+static void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx)
 {
-	u8 i;
-
-	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		ice_init_lock(&hw->fl_profs_locks[i]);
-		INIT_LIST_HEAD(&hw->fl_profs[i]);
-	}
-}
-
-/**
- * ice_init_sw_xlt1_db - init software XLT1 database from HW tables
- * @hw: pointer to the hardware structure
- * @blk: the HW block to initialize
- */
-static
-void ice_init_sw_xlt1_db(struct ice_hw *hw, enum ice_block blk)
-{
-	u16 pt;
-
-	for (pt = 0; pt < hw->blk[blk].xlt1.count; pt++) {
-		u8 ptg;
-
-		ptg = hw->blk[blk].xlt1.t[pt];
-		if (ptg != ICE_DEFAULT_PTG) {
-			ice_ptg_alloc_val(hw, blk, ptg);
-			ice_ptg_add_mv_ptype(hw, blk, pt, ptg);
-		}
-	}
-}
-
-/**
- * ice_init_sw_xlt2_db - init software XLT2 database from HW tables
- * @hw: pointer to the hardware structure
- * @blk: the HW block to initialize
- */
-static
-void ice_init_sw_xlt2_db(struct ice_hw *hw, enum ice_block blk)
-{
-	u16 vsi;
-
-	for (vsi = 0; vsi < hw->blk[blk].xlt2.count; vsi++) {
-		u16 vsig;
-
-		vsig = hw->blk[blk].xlt2.t[vsi];
-		if (vsig) {
-			ice_vsig_alloc_val(hw, blk, vsig);
-			ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
-			/* no changes at this time, since this has been
-			 * initialized from the original package
-			 */
-			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
-		}
-	}
-}
-
-/**
- * ice_init_sw_db - init software database from HW tables
- * @hw: pointer to the hardware structure
- */
-static
-void ice_init_sw_db(struct ice_hw *hw)
-{
-	u16 i;
-
-	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		ice_init_sw_xlt1_db(hw, (enum ice_block)i);
-		ice_init_sw_xlt2_db(hw, (enum ice_block)i);
-	}
+	ice_init_lock(&hw->fl_profs_locks[blk_idx]);
+	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
 /**
@@ -3086,14 +3088,23 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 {
 	u8 i;
 
-	ice_init_flow_profs(hw);
-
+	ice_init_lock(&hw->rss_locks);
+	INIT_LIST_HEAD(&hw->rss_list_head);
 	for (i = 0; i < ICE_BLK_COUNT; i++) {
 		struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
 		struct ice_prof_tcam *prof = &hw->blk[i].prof;
 		struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
 		struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
 		struct ice_es *es = &hw->blk[i].es;
+		u16 j;
+
+		if (hw->blk[i].is_list_init)
+			continue;
+
+		ice_init_flow_profs(hw, i);
+		ice_init_lock(&es->prof_map_lock);
+		INIT_LIST_HEAD(&es->prof_map);
+		hw->blk[i].is_list_init = true;
 
 		hw->blk[i].overwrite = blk_sizes[i].overwrite;
 		es->reverse = blk_sizes[i].reverse;
@@ -3131,6 +3142,9 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 		if (!xlt2->vsig_tbl)
 			goto err;
 
+		for (j = 0; j < xlt2->count; j++)
+			INIT_LIST_HEAD(&xlt2->vsig_tbl[j].prop_lst);
+
 		xlt2->t = (u16 *)ice_calloc(hw, xlt2->count, sizeof(*xlt2->t));
 		if (!xlt2->t)
 			goto err;
@@ -3157,8 +3171,8 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 		es->count = blk_sizes[i].es;
 		es->fvw = blk_sizes[i].fvw;
 		es->t = (struct ice_fv_word *)
-			ice_calloc(hw, es->count * es->fvw, sizeof(*es->t));
-
+			ice_calloc(hw, (u32)(es->count * es->fvw),
+				   sizeof(*es->t));
 		if (!es->t)
 			goto err;
 
@@ -3170,15 +3184,7 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 
 		if (!es->ref_count)
 			goto err;
-
-		INIT_LIST_HEAD(&es->prof_map);
-
-		/* Now that tables are allocated, read in package data */
-		ice_fill_blk_tbls(hw, (enum ice_block)i);
 	}
-
-	ice_init_sw_db(hw);
-
 	return ICE_SUCCESS;
 
 err:
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 2710dded6..e8cc9cef3 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -98,6 +98,7 @@ enum ice_status
 ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
 enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
 void ice_free_seg(struct ice_hw *hw);
+void ice_fill_blk_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
 enum ice_status
 ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index 892c94b1f..7133983ff 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -676,6 +676,7 @@ struct ice_blk_info {
 	struct ice_prof_redir prof_redir;
 	struct ice_es es;
 	u8 overwrite; /* set to true to allow overwrite of table entries */
+	u8 is_list_init;
 };
 
 enum ice_chg_type {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 15/49] net/ice/base: add compatibility check for package version
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (13 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 14/49] net/ice/base: refactor HW table init function Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 16/49] net/ice/base: add API to init FW logging Leyi Rong
                   ` (35 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

1. Perform a check against the package version to make sure that
it will be compatible with the shared code implementation. There
will be points in time when the shared code and package will need
to be changed in lock step; the mechanism added here is meant to
deal with those situations.
2. Support package tunnel labels owned by the PF. The VXLAN and GENEVE
tunnel label names in the package are changing to incorporate
the PF that owns them.
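For illustration, the prefix-plus-PF-digit match described above can be sketched in isolation. This is a standalone sketch, not the driver code: `label_matches_pf` is a hypothetical helper, while the real logic lives in ice_init_pkg_hints() below.

```c
#include <string.h>

/* Hypothetical helper (illustration only): return 1 when a package
 * label such as "TNL_VXLAN_PF0" starts with the given prefix and its
 * trailing PF character ('0'-'7') matches the given PF id.
 */
static int
label_matches_pf(const char *label, const char *prefix, unsigned int pf_id)
{
	size_t len = strlen(prefix);

	/* Look for a matching label start before checking the PF digit */
	if (strncmp(label, prefix, len))
		return 0;

	/* The PF character sits where the prefix's NUL terminator is */
	return (unsigned int)(label[len] - '0') == pf_id;
}
```

With prefix "TNL_VXLAN_PF", the label "TNL_VXLAN_PF2" matches only on the PF whose pf_id is 2.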

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 96 ++++++++++++++++++++++------
 drivers/net/ice/base/ice_flex_pipe.h |  8 +++
 drivers/net/ice/base/ice_flex_type.h | 10 ---
 3 files changed, 85 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 93e056853..5faee6d52 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -7,19 +7,12 @@
 #include "ice_protocol_type.h"
 #include "ice_flow.h"
 
+/* To support tunneling entries by PF, the package will append the PF number to
+ * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc.
+ */
 static const struct ice_tunnel_type_scan tnls[] = {
-	{ TNL_VXLAN,		"TNL_VXLAN" },
-	{ TNL_GTPC,		"TNL_GTPC" },
-	{ TNL_GTPC_TEID,	"TNL_GTPC_TEID" },
-	{ TNL_GTPU,		"TNL_GTPC" },
-	{ TNL_GTPU_TEID,	"TNL_GTPU_TEID" },
-	{ TNL_VXLAN_GPE,	"TNL_VXLAN_GPE" },
-	{ TNL_GENEVE,		"TNL_GENEVE" },
-	{ TNL_NAT,		"TNL_NAT" },
-	{ TNL_ROCE_V2,		"TNL_ROCE_V2" },
-	{ TNL_MPLSO_UDP,	"TNL_MPLSO_UDP" },
-	{ TNL_UDP2_END,		"TNL_UDP2_END" },
-	{ TNL_UPD_END,		"TNL_UPD_END" },
+	{ TNL_VXLAN,		"TNL_VXLAN_PF" },
+	{ TNL_GENEVE,		"TNL_GENEVE_PF" },
 	{ TNL_LAST,		"" }
 };
 
@@ -485,8 +478,17 @@ void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
 
 	while (label_name && hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) {
 		for (i = 0; tnls[i].type != TNL_LAST; i++) {
-			if (!strncmp(label_name, tnls[i].label_prefix,
-				     strlen(tnls[i].label_prefix))) {
+			size_t len = strlen(tnls[i].label_prefix);
+
+			/* Look for matching label start, before continuing */
+			if (strncmp(label_name, tnls[i].label_prefix, len))
+				continue;
+
+			/* Make sure this label matches our PF. Note that the PF
+			 * character ('0' - '7') will be located where our
+			 * prefix string's null terminator is located.
+			 */
+			if ((label_name[len] - '0') == hw->pf_id) {
 				hw->tnl.tbl[hw->tnl.count].type = tnls[i].type;
 				hw->tnl.tbl[hw->tnl.count].valid = false;
 				hw->tnl.tbl[hw->tnl.count].in_use = false;
@@ -1083,12 +1085,8 @@ enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 {
-	struct ice_aqc_get_pkg_info_resp *pkg_info;
 	struct ice_global_metadata_seg *meta_seg;
 	struct ice_generic_seg_hdr *seg_hdr;
-	enum ice_status status;
-	u16 size;
-	u32 i;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 	if (!pkg_hdr)
@@ -1127,7 +1125,25 @@ ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 		return ICE_ERR_CFG;
 	}
 
-#define ICE_PKG_CNT	4
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_get_pkg_info
+ * @hw: pointer to the hardware structure
+ *
+ * Store details of the package currently loaded in HW into the HW structure.
+ */
+enum ice_status
+ice_get_pkg_info(struct ice_hw *hw)
+{
+	struct ice_aqc_get_pkg_info_resp *pkg_info;
+	enum ice_status status;
+	u16 size;
+	u32 i;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_pkg_info\n");
+
 	size = sizeof(*pkg_info) + (sizeof(pkg_info->pkg_info[0]) *
 				    (ICE_PKG_CNT - 1));
 	pkg_info = (struct ice_aqc_get_pkg_info_resp *)ice_malloc(hw, size);
@@ -1310,6 +1326,32 @@ static void ice_init_pkg_regs(struct ice_hw *hw)
 	ice_init_fd_mask_regs(hw);
 }
 
+/**
+ * ice_chk_pkg_version - check package version for compatibility with driver
+ * @hw: pointer to the hardware structure
+ * @pkg_ver: pointer to a version structure to check
+ *
+ * Check to make sure that the package about to be downloaded is compatible with
+ * the driver. To be compatible, the major and minor components of the package
+ * version must match our ICE_PKG_SUPP_VER_MAJ and ICE_PKG_SUPP_VER_MNR
+ * definitions.
+ */
+static enum ice_status
+ice_chk_pkg_version(struct ice_hw *hw, struct ice_pkg_ver *pkg_ver)
+{
+	if (pkg_ver->major != ICE_PKG_SUPP_VER_MAJ ||
+	    pkg_ver->minor != ICE_PKG_SUPP_VER_MNR) {
+		ice_info(hw, "ERROR: Incompatible package: %d.%d.%d.%d - requires package version: %d.%d.*.*\n",
+			 pkg_ver->major, pkg_ver->minor, pkg_ver->update,
+			 pkg_ver->draft, ICE_PKG_SUPP_VER_MAJ,
+			 ICE_PKG_SUPP_VER_MNR);
+
+		return ICE_ERR_NOT_SUPPORTED;
+	}
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_init_pkg - initialize/download package
  * @hw: pointer to the hardware structure
@@ -1357,6 +1399,13 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 	if (status)
 		return status;
 
+	/* before downloading the package, check package version for
+	 * compatibility with driver
+	 */
+	status = ice_chk_pkg_version(hw, &hw->pkg_ver);
+	if (status)
+		return status;
+
 	/* find segment in given package */
 	seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg);
 	if (!seg) {
@@ -1373,6 +1422,15 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 		status = ICE_SUCCESS;
 	}
 
+	/* Get information on the package currently loaded in HW, then make sure
+	 * the driver is compatible with this version.
+	 */
+	if (!status) {
+		status = ice_get_pkg_info(hw);
+		if (!status)
+			status = ice_chk_pkg_version(hw, &hw->active_pkg_ver);
+	}
+
 	if (!status) {
 		hw->seg = seg;
 		/* on successful package download update other required
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index e8cc9cef3..375758c8d 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -7,12 +7,18 @@
 
 #include "ice_type.h"
 
+/* Package minimal version supported */
+#define ICE_PKG_SUPP_VER_MAJ	1
+#define ICE_PKG_SUPP_VER_MNR	2
+
 /* Package format version */
 #define ICE_PKG_FMT_VER_MAJ	1
 #define ICE_PKG_FMT_VER_MNR	0
 #define ICE_PKG_FMT_VER_UPD	0
 #define ICE_PKG_FMT_VER_DFT	0
 
+#define ICE_PKG_CNT 4
+
 enum ice_status
 ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count);
 enum ice_status
@@ -28,6 +34,8 @@ enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg);
 
 enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_header);
+enum ice_status
+ice_get_pkg_info(struct ice_hw *hw);
 
 void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg);
 
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index 7133983ff..d23b2ae82 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -455,17 +455,7 @@ struct ice_pkg_enum {
 
 enum ice_tunnel_type {
 	TNL_VXLAN = 0,
-	TNL_GTPC,
-	TNL_GTPC_TEID,
-	TNL_GTPU,
-	TNL_GTPU_TEID,
-	TNL_VXLAN_GPE,
 	TNL_GENEVE,
-	TNL_NAT,
-	TNL_ROCE_V2,
-	TNL_MPLSO_UDP,
-	TNL_UDP2_END,
-	TNL_UPD_END,
 	TNL_LAST = 0xFF,
 	TNL_ALL = 0xFF,
 };
-- 
2.17.1



* [dpdk-dev] [PATCH 16/49] net/ice/base: add API to init FW logging
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (14 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 15/49] net/ice/base: add compatibility check for package version Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 17/49] net/ice/base: use macro instead of magic 8 Leyi Rong
                   ` (34 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

To initialize the current status of FW logging, the API
ice_get_fw_log_cfg is added. The function retrieves the
current FW logging settings from HW and updates the
ice_hw structure accordingly.
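The decode step in the new function splits each 16-bit response entry into a module id and its enable flags. A minimal sketch of that split follows; the mask and shift values here are made up for illustration, whereas the real ICE_AQC_FW_LOG_ID_*/ICE_AQC_FW_LOG_EN_* definitions come from ice_adminq_cmd.h.

```c
#include <stdint.h>

/* Illustrative mask/shift values only; the real definitions in
 * ice_adminq_cmd.h may differ.
 */
#define FW_LOG_ID_S	0
#define FW_LOG_ID_M	(0xFFF << FW_LOG_ID_S)
#define FW_LOG_EN_S	12
#define FW_LOG_EN_M	(0xF << FW_LOG_EN_S)

/* Split one 16-bit config entry into module id and enable flags */
static void
decode_fw_log_entry(uint16_t entry, uint16_t *module, uint16_t *flags)
{
	*module = (entry & FW_LOG_ID_M) >> FW_LOG_ID_S;
	*flags = (entry & FW_LOG_EN_M) >> FW_LOG_EN_S;
}
```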

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h |  1 +
 drivers/net/ice/base/ice_common.c     | 48 +++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 7b0aa8aaa..739f79e88 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -2196,6 +2196,7 @@ enum ice_aqc_fw_logging_mod {
 	ICE_AQC_FW_LOG_ID_WATCHDOG,
 	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
 	ICE_AQC_FW_LOG_ID_MNG,
+	ICE_AQC_FW_LOG_ID_SYNCE,
 	ICE_AQC_FW_LOG_ID_MAX,
 };
 
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 62c7fad0d..7093ee4f4 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -582,6 +582,49 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
 #define ICE_FW_LOG_DESC_SIZE_MAX	\
 	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
 
+/**
+ * ice_get_fw_log_cfg - get FW logging configuration
+ * @hw: pointer to the HW struct
+ */
+static enum ice_status ice_get_fw_log_cfg(struct ice_hw *hw)
+{
+	struct ice_aqc_fw_logging_data *config;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 size;
+
+	size = ICE_FW_LOG_DESC_SIZE_MAX;
+	config = (struct ice_aqc_fw_logging_data *)ice_malloc(hw, size);
+	if (!config)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging_info);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, config, size, NULL);
+	if (!status) {
+		u16 i;
+
+		/* Save fw logging information into the HW structure */
+		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+			u16 v, m, flgs;
+
+			v = LE16_TO_CPU(config->entry[i]);
+			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+			flgs = (v & ICE_AQC_FW_LOG_EN_M) >> ICE_AQC_FW_LOG_EN_S;
+
+			if (m < ICE_AQC_FW_LOG_ID_MAX)
+				hw->fw_log.evnts[m].cur = flgs;
+		}
+	}
+
+	ice_free(hw, config);
+
+	return status;
+}
+
 /**
  * ice_cfg_fw_log - configure FW logging
  * @hw: pointer to the HW struct
@@ -636,6 +679,11 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
 	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
 		return ICE_SUCCESS;
 
+	/* Get current FW log settings */
+	status = ice_get_fw_log_cfg(hw);
+	if (status)
+		return status;
+
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
 	cmd = &desc.params.fw_logging;
 
-- 
2.17.1



* [dpdk-dev] [PATCH 17/49] net/ice/base: use macro instead of magic 8
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (15 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 16/49] net/ice/base: add API to init FW logging Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-05 10:39   ` Maxime Coquelin
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 18/49] net/ice/base: move and redefine ice debug cq API Leyi Rong
                   ` (33 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Paul M Stillwell Jr

Replace the use of the magic number 8 with BITS_PER_BYTE when
calculating the number of bits from the number of bytes.
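The ICE_FLOW_FLD_INFO macro introduced in this patch lets call sites pass byte offsets and sizes while the table stores bits. The idea can be shown in isolation with a minimal sketch; `field_info` and `FLD_INFO` below are stand-ins for the driver's ice_flow_field_info and ICE_FLOW_FLD_INFO, not the driver definitions themselves.

```c
#ifndef BITS_PER_BYTE
#define BITS_PER_BYTE 8
#endif

/* Stand-in for struct ice_flow_field_info: offsets/sizes kept in bits */
struct field_info {
	unsigned int off;	/* offset in bits */
	unsigned int size;	/* size in bits */
};

/* Stand-in for ICE_FLOW_FLD_INFO: callers pass byte values and the
 * macro converts to bits, instead of repeating "* 8" at every use site.
 */
#define FLD_INFO(_off_bytes, _size_bytes) { \
	.off = (_off_bytes) * BITS_PER_BYTE, \
	.size = (_size_bytes) * BITS_PER_BYTE, \
}

/* IPv4 source address: byte offset 12, 4 bytes -> bit offset 96, 32 bits */
static const struct field_info ipv4_sa = FLD_INFO(12, 4);
```

This keeps the byte-to-bit conversion in one place, which is exactly what the diff below does for the ice_flds_info[] table.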

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c |  4 +-
 drivers/net/ice/base/ice_flow.c      | 74 +++++++++++++++-------------
 2 files changed, 43 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 5faee6d52..b569b91a7 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -3862,7 +3862,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 
 			idx = (j * 4) + k;
 			if (used[idx])
-				raw_entry |= used[idx] << (k * 8);
+				raw_entry |= used[idx] << (k * BITS_PER_BYTE);
 		}
 
 		/* write the appropriate register set, based on HW block */
@@ -3955,7 +3955,7 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 				u16 ptype;
 				u8 m;
 
-				ptype = byte * 8 + bit;
+				ptype = byte * BITS_PER_BYTE + bit;
 				if (ptype < ICE_FLOW_PTYPE_MAX) {
 					prof->ptype[prof->ptype_count] = ptype;
 
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index dccd7d3c7..9f2a794bc 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -26,8 +26,8 @@
  * protocol headers. Displacement values are expressed in number of bits.
  */
 #define ICE_FLOW_FLD_IPV6_TTL_DSCP_DISP	(-4)
-#define ICE_FLOW_FLD_IPV6_TTL_PROT_DISP	((-2) * 8)
-#define ICE_FLOW_FLD_IPV6_TTL_TTL_DISP	((-1) * 8)
+#define ICE_FLOW_FLD_IPV6_TTL_PROT_DISP	((-2) * BITS_PER_BYTE)
+#define ICE_FLOW_FLD_IPV6_TTL_TTL_DISP	((-1) * BITS_PER_BYTE)
 
 /* Describe properties of a protocol header field */
 struct ice_flow_field_info {
@@ -36,70 +36,76 @@ struct ice_flow_field_info {
 	u16 size;	/* Size of fields in bits */
 };
 
+#define ICE_FLOW_FLD_INFO(_hdr, _offset_bytes, _size_bytes) { \
+	.hdr = _hdr, \
+	.off = _offset_bytes * BITS_PER_BYTE, \
+	.size = _size_bytes * BITS_PER_BYTE, \
+}
+
 /* Table containing properties of supported protocol header fields */
 static const
 struct ice_flow_field_info ice_flds_info[ICE_FLOW_FIELD_IDX_MAX] = {
 	/* Ether */
 	/* ICE_FLOW_FIELD_IDX_ETH_DA */
-	{ ICE_FLOW_SEG_HDR_ETH, 0, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ETH, 0, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_ETH_SA */
-	{ ICE_FLOW_SEG_HDR_ETH, ETH_ALEN * 8, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ETH, ETH_ALEN, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_S_VLAN */
-	{ ICE_FLOW_SEG_HDR_VLAN, 12 * 8, ICE_FLOW_FLD_SZ_VLAN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_VLAN, 12, ICE_FLOW_FLD_SZ_VLAN),
 	/* ICE_FLOW_FIELD_IDX_C_VLAN */
-	{ ICE_FLOW_SEG_HDR_VLAN, 14 * 8, ICE_FLOW_FLD_SZ_VLAN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_VLAN, 14, ICE_FLOW_FLD_SZ_VLAN),
 	/* ICE_FLOW_FIELD_IDX_ETH_TYPE */
-	{ ICE_FLOW_SEG_HDR_ETH, 12 * 8, ICE_FLOW_FLD_SZ_ETH_TYPE * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ETH, 12, ICE_FLOW_FLD_SZ_ETH_TYPE),
 	/* IPv4 */
 	/* ICE_FLOW_FIELD_IDX_IP_DSCP */
-	{ ICE_FLOW_SEG_HDR_IPV4, 1 * 8, 1 * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV4, 1, 1),
 	/* ICE_FLOW_FIELD_IDX_IP_TTL */
-	{ ICE_FLOW_SEG_HDR_NONE, 8 * 8, 1 * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_NONE, 8, 1),
 	/* ICE_FLOW_FIELD_IDX_IP_PROT */
-	{ ICE_FLOW_SEG_HDR_NONE, 9 * 8, ICE_FLOW_FLD_SZ_IP_PROT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_NONE, 9, ICE_FLOW_FLD_SZ_IP_PROT),
 	/* ICE_FLOW_FIELD_IDX_IPV4_SA */
-	{ ICE_FLOW_SEG_HDR_IPV4, 12 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV4, 12, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* ICE_FLOW_FIELD_IDX_IPV4_DA */
-	{ ICE_FLOW_SEG_HDR_IPV4, 16 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV4, 16, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* IPv6 */
 	/* ICE_FLOW_FIELD_IDX_IPV6_SA */
-	{ ICE_FLOW_SEG_HDR_IPV6, 8 * 8, ICE_FLOW_FLD_SZ_IPV6_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV6, 8, ICE_FLOW_FLD_SZ_IPV6_ADDR),
 	/* ICE_FLOW_FIELD_IDX_IPV6_DA */
-	{ ICE_FLOW_SEG_HDR_IPV6, 24 * 8, ICE_FLOW_FLD_SZ_IPV6_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV6, 24, ICE_FLOW_FLD_SZ_IPV6_ADDR),
 	/* Transport */
 	/* ICE_FLOW_FIELD_IDX_TCP_SRC_PORT */
-	{ ICE_FLOW_SEG_HDR_TCP, 0 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_TCP, 0, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_TCP_DST_PORT */
-	{ ICE_FLOW_SEG_HDR_TCP, 2 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_TCP, 2, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_UDP_SRC_PORT */
-	{ ICE_FLOW_SEG_HDR_UDP, 0 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_UDP, 0, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_UDP_DST_PORT */
-	{ ICE_FLOW_SEG_HDR_UDP, 2 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_UDP, 2, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT */
-	{ ICE_FLOW_SEG_HDR_SCTP, 0 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_SCTP, 0, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_SCTP_DST_PORT */
-	{ ICE_FLOW_SEG_HDR_SCTP, 2 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_SCTP, 2, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_TCP_FLAGS */
-	{ ICE_FLOW_SEG_HDR_TCP, 13 * 8, ICE_FLOW_FLD_SZ_TCP_FLAGS * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_TCP, 13, ICE_FLOW_FLD_SZ_TCP_FLAGS),
 	/* ARP */
 	/* ICE_FLOW_FIELD_IDX_ARP_SIP */
-	{ ICE_FLOW_SEG_HDR_ARP, 14 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 14, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* ICE_FLOW_FIELD_IDX_ARP_DIP */
-	{ ICE_FLOW_SEG_HDR_ARP, 24 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 24, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* ICE_FLOW_FIELD_IDX_ARP_SHA */
-	{ ICE_FLOW_SEG_HDR_ARP, 8 * 8, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 8, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_ARP_DHA */
-	{ ICE_FLOW_SEG_HDR_ARP, 18 * 8, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 18, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_ARP_OP */
-	{ ICE_FLOW_SEG_HDR_ARP, 6 * 8, ICE_FLOW_FLD_SZ_ARP_OPER * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 6, ICE_FLOW_FLD_SZ_ARP_OPER),
 	/* ICMP */
 	/* ICE_FLOW_FIELD_IDX_ICMP_TYPE */
-	{ ICE_FLOW_SEG_HDR_ICMP, 0 * 8, ICE_FLOW_FLD_SZ_ICMP_TYPE * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ICMP, 0, ICE_FLOW_FLD_SZ_ICMP_TYPE),
 	/* ICE_FLOW_FIELD_IDX_ICMP_CODE */
-	{ ICE_FLOW_SEG_HDR_ICMP, 1 * 8, ICE_FLOW_FLD_SZ_ICMP_CODE * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ICMP, 1, ICE_FLOW_FLD_SZ_ICMP_CODE),
 	/* GRE */
 	/* ICE_FLOW_FIELD_IDX_GRE_KEYID */
-	{ ICE_FLOW_SEG_HDR_GRE, 12 * 8, ICE_FLOW_FLD_SZ_GRE_KEYID * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_GRE, 12, ICE_FLOW_FLD_SZ_GRE_KEYID),
 };
 
 /* Bitmaps indicating relevant packet types for a particular protocol header
@@ -644,7 +650,7 @@ ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
 	/* Each extraction sequence entry is a word in size, and extracts a
 	 * word-aligned offset from a protocol header.
 	 */
-	ese_bits = ICE_FLOW_FV_EXTRACT_SZ * 8;
+	ese_bits = ICE_FLOW_FV_EXTRACT_SZ * BITS_PER_BYTE;
 
 	flds[fld].xtrct.prot_id = prot_id;
 	flds[fld].xtrct.off = (ice_flds_info[fld].off / ese_bits) *
@@ -737,15 +743,17 @@ ice_flow_xtract_raws(struct ice_hw *hw, struct ice_flow_prof_params *params,
 		raw->info.xtrct.prot_id = ICE_PROT_PAY;
 		raw->info.xtrct.off = (off / ICE_FLOW_FV_EXTRACT_SZ) *
 			ICE_FLOW_FV_EXTRACT_SZ;
-		raw->info.xtrct.disp = (off % ICE_FLOW_FV_EXTRACT_SZ) * 8;
+		raw->info.xtrct.disp = (off % ICE_FLOW_FV_EXTRACT_SZ) *
+			BITS_PER_BYTE;
 		raw->info.xtrct.idx = params->es_cnt;
 
 		/* Determine the number of field vector entries this raw field
 		 * consumes.
 		 */
 		cnt = DIVIDE_AND_ROUND_UP(raw->info.xtrct.disp +
-					  (raw->info.src.last * 8),
-					  ICE_FLOW_FV_EXTRACT_SZ * 8);
+					  (raw->info.src.last * BITS_PER_BYTE),
+					  (ICE_FLOW_FV_EXTRACT_SZ *
+					   BITS_PER_BYTE));
 		off = raw->info.xtrct.off;
 		for (j = 0; j < cnt; j++) {
 			/* Make sure the number of extraction sequence required
-- 
2.17.1



* [dpdk-dev] [PATCH 18/49] net/ice/base: move and redefine ice debug cq API
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (16 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 17/49] net/ice/base: use macro instead of magic 8 Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 19/49] net/ice/base: separate out control queue lock creation Leyi Rong
                   ` (32 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

The ice_debug_cq function is called only from ice_controlq.c and has
no callers outside that file. Move it there and mark it static to
avoid namespace pollution.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c   | 47 -------------------------
 drivers/net/ice/base/ice_common.h   |  2 --
 drivers/net/ice/base/ice_controlq.c | 54 +++++++++++++++++++++++++++--
 3 files changed, 51 insertions(+), 52 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 7093ee4f4..c1af24322 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1474,53 +1474,6 @@ ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
 }
 #endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
 
-/**
- * ice_debug_cq
- * @hw: pointer to the hardware structure
- * @mask: debug mask
- * @desc: pointer to control queue descriptor
- * @buf: pointer to command buffer
- * @buf_len: max length of buf
- *
- * Dumps debug log about control command with descriptor contents.
- */
-void
-ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len)
-{
-	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
-	u16 len;
-
-	if (!(mask & hw->debug_mask))
-		return;
-
-	if (!desc)
-		return;
-
-	len = LE16_TO_CPU(cq_desc->datalen);
-
-	ice_debug(hw, mask,
-		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
-		  LE16_TO_CPU(cq_desc->opcode),
-		  LE16_TO_CPU(cq_desc->flags),
-		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
-	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
-		  LE32_TO_CPU(cq_desc->cookie_high),
-		  LE32_TO_CPU(cq_desc->cookie_low));
-	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
-		  LE32_TO_CPU(cq_desc->params.generic.param0),
-		  LE32_TO_CPU(cq_desc->params.generic.param1));
-	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
-		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
-		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
-	if (buf && cq_desc->datalen != 0) {
-		ice_debug(hw, mask, "Buffer:\n");
-		if (buf_len < len)
-			len = buf_len;
-
-		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
-	}
-}
-
 
 /* FW Admin Queue command wrappers */
 
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index cccb5f009..58f22b0d3 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -20,8 +20,6 @@ enum ice_fw_modes {
 
 enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
 
-void
-ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
 enum ice_status ice_init_hw(struct ice_hw *hw);
 void ice_deinit_hw(struct ice_hw *hw);
 enum ice_status ice_check_reset(struct ice_hw *hw);
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index f3404023a..90dec0156 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -727,6 +727,54 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 	return ICE_CTL_Q_DESC_UNUSED(sq);
 }
 
+/**
+ * ice_debug_cq
+ * @hw: pointer to the hardware structure
+ * @desc: pointer to control queue descriptor
+ * @buf: pointer to command buffer
+ * @buf_len: max length of buf
+ *
+ * Dumps debug log about control command with descriptor contents.
+ */
+static void ice_debug_cq(struct ice_hw *hw, void *desc, void *buf, u16 buf_len)
+{
+	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
+	u16 datalen, flags;
+
+	if (!((ICE_DBG_AQ_DESC | ICE_DBG_AQ_DESC_BUF) & hw->debug_mask))
+		return;
+
+	if (!desc)
+		return;
+
+	datalen = LE16_TO_CPU(cq_desc->datalen);
+	flags = LE16_TO_CPU(cq_desc->flags);
+
+	ice_debug(hw, ICE_DBG_AQ_DESC,
+		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		  LE16_TO_CPU(cq_desc->opcode), flags, datalen,
+		  LE16_TO_CPU(cq_desc->retval));
+	ice_debug(hw, ICE_DBG_AQ_DESC, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->cookie_high),
+		  LE32_TO_CPU(cq_desc->cookie_low));
+	ice_debug(hw, ICE_DBG_AQ_DESC, "\tparam (0,1)  0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.param0),
+		  LE32_TO_CPU(cq_desc->params.generic.param1));
+	ice_debug(hw, ICE_DBG_AQ_DESC, "\taddr (h,l)   0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
+		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
+	/* Dump buffer iff 1) one exists and 2) is either a response indicated
+	 * by the DD and/or CMP flag set or a command with the RD flag set.
+	 */
+	if (buf && cq_desc->datalen != 0 &&
+	    (flags & (ICE_AQ_FLAG_DD | ICE_AQ_FLAG_CMP) ||
+	     flags & ICE_AQ_FLAG_RD)) {
+		ice_debug(hw, ICE_DBG_AQ_DESC_BUF, "Buffer:\n");
+		ice_debug_array(hw, ICE_DBG_AQ_DESC_BUF, 16, 1, (u8 *)buf,
+				min(buf_len, datalen));
+	}
+}
+
 /**
  * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ)
  * @hw: pointer to the HW struct
@@ -886,7 +934,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	ice_debug(hw, ICE_DBG_AQ_MSG,
 		  "ATQ: Control Send queue desc and buffer:\n");
 
-	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc_on_ring, buf, buf_size);
+	ice_debug_cq(hw, (void *)desc_on_ring, buf, buf_size);
 
 
 	(cq->sq.next_to_use)++;
@@ -950,7 +998,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	ice_debug(hw, ICE_DBG_AQ_MSG,
 		  "ATQ: desc and buffer writeback:\n");
 
-	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, buf, buf_size);
+	ice_debug_cq(hw, (void *)desc, buf, buf_size);
 
 
 	/* save writeback AQ if requested */
@@ -1055,7 +1103,7 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 
 	ice_debug(hw, ICE_DBG_AQ_MSG, "ARQ: desc and buffer:\n");
 
-	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, e->msg_buf,
+	ice_debug_cq(hw, (void *)desc, e->msg_buf,
 		     cq->rq_buf_size);
 
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 19/49] net/ice/base: separate out control queue lock creation
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (17 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 18/49] net/ice/base: move and redefine ice debug cq API Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 20/49] net/ice/base: add helper functions for PHY caching Leyi Rong
                   ` (31 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

The ice_init_all_ctrlq and ice_shutdown_all_ctrlq functions currently
create and destroy the locks used to protect the send and receive
process of each control queue. This is problematic when the control
queues need to be shut down and re-initialized at runtime, such as in
response to a reset. Separate lock creation and destruction into new
ice_create_all_ctrlq and ice_destroy_all_ctrlq functions, so that
ice_init_all_ctrlq and ice_shutdown_all_ctrlq can safely be called at
runtime without touching the locks.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c   |   6 +-
 drivers/net/ice/base/ice_common.h   |   2 +
 drivers/net/ice/base/ice_controlq.c | 112 +++++++++++++++++++++-------
 3 files changed, 91 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index c1af24322..5b4a13a41 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -853,7 +853,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	ice_get_itr_intrl_gran(hw);
 
 
-	status = ice_init_all_ctrlq(hw);
+	status = ice_create_all_ctrlq(hw);
 	if (status)
 		goto err_unroll_cqinit;
 
@@ -981,7 +981,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	ice_free(hw, hw->port_info);
 	hw->port_info = NULL;
 err_unroll_cqinit:
-	ice_shutdown_all_ctrlq(hw);
+	ice_destroy_all_ctrlq(hw);
 	return status;
 }
 
@@ -1010,7 +1010,7 @@ void ice_deinit_hw(struct ice_hw *hw)
 
 	/* Attempt to disable FW logging before shutting down control queues */
 	ice_cfg_fw_log(hw, false);
-	ice_shutdown_all_ctrlq(hw);
+	ice_destroy_all_ctrlq(hw);
 
 	/* Clear VSI contexts if not already cleared */
 	ice_clear_all_vsi_ctx(hw);
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 58f22b0d3..4cd87fc1e 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -25,8 +25,10 @@ void ice_deinit_hw(struct ice_hw *hw);
 enum ice_status ice_check_reset(struct ice_hw *hw);
 enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
 
+enum ice_status ice_create_all_ctrlq(struct ice_hw *hw);
 enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
 void ice_shutdown_all_ctrlq(struct ice_hw *hw);
+void ice_destroy_all_ctrlq(struct ice_hw *hw);
 enum ice_status
 ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		  struct ice_rq_event_info *e, u16 *pending);
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index 90dec0156..6d893e2f2 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -283,7 +283,7 @@ ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
  * @cq: pointer to the specific Control queue
  *
  * This is the main initialization routine for the Control Send Queue
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure:
  *     - cq->num_sq_entries
  *     - cq->sq_buf_size
@@ -342,7 +342,7 @@ static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
  * @cq: pointer to the specific Control queue
  *
  * The main initialization routine for the Admin Receive (Event) Queue.
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure:
  *     - cq->num_rq_entries
  *     - cq->rq_buf_size
@@ -535,14 +535,8 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
 	return ICE_SUCCESS;
 
 init_ctrlq_free_rq:
-	if (cq->rq.count) {
-		ice_shutdown_rq(hw, cq);
-		ice_destroy_lock(&cq->rq_lock);
-	}
-	if (cq->sq.count) {
-		ice_shutdown_sq(hw, cq);
-		ice_destroy_lock(&cq->sq_lock);
-	}
+	ice_shutdown_rq(hw, cq);
+	ice_shutdown_sq(hw, cq);
 	return status;
 }
 
@@ -551,12 +545,14 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
  * @hw: pointer to the hardware structure
  * @q_type: specific Control queue type
  *
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure:
  *     - cq->num_sq_entries
  *     - cq->num_rq_entries
  *     - cq->rq_buf_size
  *     - cq->sq_buf_size
+ *
+ * NOTE: this function does not initialize the controlq locks
  */
 static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 {
@@ -582,8 +578,6 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 	    !cq->rq_buf_size || !cq->sq_buf_size) {
 		return ICE_ERR_CFG;
 	}
-	ice_init_lock(&cq->sq_lock);
-	ice_init_lock(&cq->rq_lock);
 
 	/* setup SQ command write back timeout */
 	cq->sq_cmd_timeout = ICE_CTL_Q_SQ_CMD_TIMEOUT;
@@ -591,7 +585,7 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 	/* allocate the ATQ */
 	ret_code = ice_init_sq(hw, cq);
 	if (ret_code)
-		goto init_ctrlq_destroy_locks;
+		return ret_code;
 
 	/* allocate the ARQ */
 	ret_code = ice_init_rq(hw, cq);
@@ -603,9 +597,6 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 
 init_ctrlq_free_sq:
 	ice_shutdown_sq(hw, cq);
-init_ctrlq_destroy_locks:
-	ice_destroy_lock(&cq->sq_lock);
-	ice_destroy_lock(&cq->rq_lock);
 	return ret_code;
 }
 
@@ -613,12 +604,14 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
  * ice_init_all_ctrlq - main initialization routine for all control queues
  * @hw: pointer to the hardware structure
  *
- * Prior to calling this function, drivers *MUST* set the following fields
 * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure for all control queues:
  *     - cq->num_sq_entries
  *     - cq->num_rq_entries
  *     - cq->rq_buf_size
  *     - cq->sq_buf_size
+ *
+ * NOTE: this function does not initialize the controlq locks.
  */
 enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
 {
@@ -637,10 +630,48 @@ enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
 	return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX);
 }
 
+/**
+ * ice_init_ctrlq_locks - Initialize locks for a control queue
+ * @cq: pointer to the control queue
+ *
+ * Initializes the send and receive queue locks for a given control queue.
+ */
+static void ice_init_ctrlq_locks(struct ice_ctl_q_info *cq)
+{
+	ice_init_lock(&cq->sq_lock);
+	ice_init_lock(&cq->rq_lock);
+}
+
+/**
+ * ice_create_all_ctrlq - main initialization routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, the driver *MUST* set the following fields
+ * in the cq->structure for all control queues:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ *
+ * This function creates all the control queue locks and then calls
+ * ice_init_all_ctrlq. It should be called once during driver load. If the
+ * driver needs to re-initialize control queues at run time it should call
+ * ice_init_all_ctrlq instead.
+ */
+enum ice_status ice_create_all_ctrlq(struct ice_hw *hw)
+{
+	ice_init_ctrlq_locks(&hw->adminq);
+	ice_init_ctrlq_locks(&hw->mailboxq);
+
+	return ice_init_all_ctrlq(hw);
+}
+
 /**
  * ice_shutdown_ctrlq - shutdown routine for any control queue
  * @hw: pointer to the hardware structure
  * @q_type: specific Control queue type
+ *
+ * NOTE: this function does not destroy the control queue locks.
  */
 static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 {
@@ -659,19 +690,17 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 		return;
 	}
 
-	if (cq->sq.count) {
-		ice_shutdown_sq(hw, cq);
-		ice_destroy_lock(&cq->sq_lock);
-	}
-	if (cq->rq.count) {
-		ice_shutdown_rq(hw, cq);
-		ice_destroy_lock(&cq->rq_lock);
-	}
+	ice_shutdown_sq(hw, cq);
+	ice_shutdown_rq(hw, cq);
 }
 
 /**
  * ice_shutdown_all_ctrlq - shutdown routine for all control queues
  * @hw: pointer to the hardware structure
+ *
+ * NOTE: this function does not destroy the control queue locks. The driver
+ * may call this at runtime to shutdown and later restart control queues, such
+ * as in response to a reset event.
  */
 void ice_shutdown_all_ctrlq(struct ice_hw *hw)
 {
@@ -681,6 +710,37 @@ void ice_shutdown_all_ctrlq(struct ice_hw *hw)
 	ice_shutdown_ctrlq(hw, ICE_CTL_Q_MAILBOX);
 }
 
+/**
+ * ice_destroy_ctrlq_locks - Destroy locks for a control queue
+ * @cq: pointer to the control queue
+ *
+ * Destroys the send and receive queue locks for a given control queue.
+ */
+static void
+ice_destroy_ctrlq_locks(struct ice_ctl_q_info *cq)
+{
+	ice_destroy_lock(&cq->sq_lock);
+	ice_destroy_lock(&cq->rq_lock);
+}
+
+/**
+ * ice_destroy_all_ctrlq - exit routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * This function shuts down all the control queues and then destroys the
+ * control queue locks. It should be called once during driver unload. The
+ * driver should call ice_shutdown_all_ctrlq if it needs to shut down and
+ * reinitialize control queues, such as in response to a reset event.
+ */
+void ice_destroy_all_ctrlq(struct ice_hw *hw)
+{
+	/* shut down all the control queues first */
+	ice_shutdown_all_ctrlq(hw);
+
+	ice_destroy_ctrlq_locks(&hw->adminq);
+	ice_destroy_ctrlq_locks(&hw->mailboxq);
+}
+
 /**
  * ice_clean_sq - cleans Admin send queue (ATQ)
  * @hw: pointer to the hardware structure
-- 
2.17.1



* [dpdk-dev] [PATCH 20/49] net/ice/base: add helper functions for PHY caching
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (18 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 19/49] net/ice/base: separate out control queue lock creation Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 21/49] net/ice/base: added sibling head to parse nodes Leyi Rong
                   ` (30 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Tony Nguyen, Paul M Stillwell Jr

Add additional functions to aid in caching PHY configuration.
In order to cache the initial modes, we need to determine the
operating mode based on capabilities. Add helper functions
for flow control and FEC to take a set of capabilities and
return the operating mode matching those capabilities. Also
add a helper function to determine whether a PHY capability
matches a PHY configuration.

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h |  1 +
 drivers/net/ice/base/ice_common.c     | 83 +++++++++++++++++++++++++++
 drivers/net/ice/base/ice_common.h     |  9 ++-
 3 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 739f79e88..77f93b950 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -1594,6 +1594,7 @@ struct ice_aqc_get_link_status_data {
 #define ICE_AQ_LINK_PWR_QSFP_CLASS_3	2
 #define ICE_AQ_LINK_PWR_QSFP_CLASS_4	3
 	__le16 link_speed;
+#define ICE_AQ_LINK_SPEED_M		0x7FF
 #define ICE_AQ_LINK_SPEED_10MB		BIT(0)
 #define ICE_AQ_LINK_SPEED_100MB		BIT(1)
 #define ICE_AQ_LINK_SPEED_1000MB	BIT(2)
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 5b4a13a41..7f7f4dad0 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -2552,6 +2552,53 @@ ice_cache_phy_user_req(struct ice_port_info *pi,
 	}
 }
 
+/**
+ * ice_caps_to_fc_mode
+ * @caps: PHY capabilities
+ *
+ * Convert PHY FC capabilities to ice FC mode
+ */
+enum ice_fc_mode ice_caps_to_fc_mode(u8 caps)
+{
+	if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE &&
+	    caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
+		return ICE_FC_FULL;
+
+	if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE)
+		return ICE_FC_TX_PAUSE;
+
+	if (caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
+		return ICE_FC_RX_PAUSE;
+
+	return ICE_FC_NONE;
+}
+
+/**
+ * ice_caps_to_fec_mode
+ * @caps: PHY capabilities
+ * @fec_options: Link FEC options
+ *
+ * Convert PHY FEC capabilities to ice FEC mode
+ */
+enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options)
+{
+	if (caps & ICE_AQC_PHY_EN_AUTO_FEC)
+		return ICE_FEC_AUTO;
+
+	if (fec_options & (ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
+			   ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
+			   ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN |
+			   ICE_AQC_PHY_FEC_25G_KR_REQ))
+		return ICE_FEC_BASER;
+
+	if (fec_options & (ICE_AQC_PHY_FEC_25G_RS_528_REQ |
+			   ICE_AQC_PHY_FEC_25G_RS_544_REQ |
+			   ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN))
+		return ICE_FEC_RS;
+
+	return ICE_FEC_NONE;
+}
+
 /**
  * ice_set_fc
  * @pi: port information structure
@@ -2658,6 +2705,42 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 	return status;
 }
 
+/**
+ * ice_phy_caps_equals_cfg
+ * @phy_caps: PHY capabilities
+ * @phy_cfg: PHY configuration
+ *
+ * Helper function to determine whether the PHY capabilities match the
+ * PHY configuration
+ */
+bool
+ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *phy_caps,
+			struct ice_aqc_set_phy_cfg_data *phy_cfg)
+{
+	u8 caps_mask, cfg_mask;
+
+	if (!phy_caps || !phy_cfg)
+		return false;
+
+	/* These bits are not common between capabilities and configuration.
+	 * Do not use them to determine equality.
+	 */
+	caps_mask = ICE_AQC_PHY_CAPS_MASK & ~(ICE_AQC_PHY_AN_MODE |
+					      ICE_AQC_PHY_EN_MOD_QUAL);
+	cfg_mask = ICE_AQ_PHY_ENA_VALID_MASK & ~ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+
+	if (phy_caps->phy_type_low != phy_cfg->phy_type_low ||
+	    phy_caps->phy_type_high != phy_cfg->phy_type_high ||
+	    ((phy_caps->caps & caps_mask) != (phy_cfg->caps & cfg_mask)) ||
+	    phy_caps->low_power_ctrl != phy_cfg->low_power_ctrl ||
+	    phy_caps->eee_cap != phy_cfg->eee_cap ||
+	    phy_caps->eeer_value != phy_cfg->eeer_value ||
+	    phy_caps->link_fec_options != phy_cfg->link_fec_opt)
+		return false;
+
+	return true;
+}
+
 /**
  * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
 * @caps: PHY ability structure to copy data from
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 4cd87fc1e..10131b473 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -136,14 +136,19 @@ enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
 enum ice_status
 ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
 		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
+enum ice_fc_mode ice_caps_to_fc_mode(u8 caps);
+enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options);
 enum ice_status
 ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
 	   bool ena_auto_link_update);
-void
-ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
+bool
+ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *caps,
+			struct ice_aqc_set_phy_cfg_data *cfg);
 void
 ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
 			 struct ice_aqc_set_phy_cfg_data *cfg);
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
 enum ice_status
 ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
 			   struct ice_sq_cd *cd);
-- 
2.17.1



* [dpdk-dev] [PATCH 21/49] net/ice/base: added sibling head to parse nodes
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (19 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 20/49] net/ice/base: add helper functions for PHY caching Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 22/49] net/ice/base: add and fix debuglogs Leyi Rong
                   ` (29 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Victor Raj, Paul M Stillwell Jr

There was a bug in the previous code which never traversed all the
children to get the first node of the requested layer.

Added a sibling head pointer to point to the first node of each layer
per TC. This makes the traversal simpler and quicker, and also removes
the recursion and complexity from the code.

Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 61 ++++++++++++--------------------
 drivers/net/ice/base/ice_type.h  |  2 ++
 2 files changed, 25 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 855e3848c..0c1c18ba1 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -260,33 +260,17 @@ ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
 
 /**
  * ice_sched_get_first_node - get the first node of the given layer
- * @hw: pointer to the HW struct
+ * @pi: port information structure
  * @parent: pointer the base node of the subtree
  * @layer: layer number
  *
  * This function retrieves the first node of the given layer from the subtree
  */
 static struct ice_sched_node *
-ice_sched_get_first_node(struct ice_hw *hw, struct ice_sched_node *parent,
-			 u8 layer)
+ice_sched_get_first_node(struct ice_port_info *pi,
+			 struct ice_sched_node *parent, u8 layer)
 {
-	u8 i;
-
-	if (layer < hw->sw_entry_point_layer)
-		return NULL;
-	for (i = 0; i < parent->num_children; i++) {
-		struct ice_sched_node *node = parent->children[i];
-
-		if (node) {
-			if (node->tx_sched_layer == layer)
-				return node;
-			/* this recursion is intentional, and wouldn't
-			 * go more than 9 calls
-			 */
-			return ice_sched_get_first_node(hw, node, layer);
-		}
-	}
-	return NULL;
+	return pi->sib_head[parent->tc_num][layer];
 }
 
 /**
@@ -342,7 +326,7 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 	parent = node->parent;
 	/* root has no parent */
 	if (parent) {
-		struct ice_sched_node *p, *tc_node;
+		struct ice_sched_node *p;
 
 		/* update the parent */
 		for (i = 0; i < parent->num_children; i++)
@@ -354,16 +338,7 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 				break;
 			}
 
-		/* search for previous sibling that points to this node and
-		 * remove the reference
-		 */
-		tc_node = ice_sched_get_tc_node(pi, node->tc_num);
-		if (!tc_node) {
-			ice_debug(hw, ICE_DBG_SCHED,
-				  "Invalid TC number %d\n", node->tc_num);
-			goto err_exit;
-		}
-		p = ice_sched_get_first_node(hw, tc_node, node->tx_sched_layer);
+		p = ice_sched_get_first_node(pi, node, node->tx_sched_layer);
 		while (p) {
 			if (p->sibling == node) {
 				p->sibling = node->sibling;
@@ -371,8 +346,13 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 			}
 			p = p->sibling;
 		}
+
+		/* update the sibling head if head is getting removed */
+		if (pi->sib_head[node->tc_num][node->tx_sched_layer] == node)
+			pi->sib_head[node->tc_num][node->tx_sched_layer] =
+				node->sibling;
 	}
-err_exit:
+
 	/* leaf nodes have no children */
 	if (node->children)
 		ice_free(hw, node->children);
@@ -979,13 +959,17 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 
 		/* add it to previous node sibling pointer */
 		/* Note: siblings are not linked across branches */
-		prev = ice_sched_get_first_node(hw, tc_node, layer);
+		prev = ice_sched_get_first_node(pi, tc_node, layer);
 		if (prev && prev != new_node) {
 			while (prev->sibling)
 				prev = prev->sibling;
 			prev->sibling = new_node;
 		}
 
+		/* initialize the sibling head */
+		if (!pi->sib_head[tc_node->tc_num][layer])
+			pi->sib_head[tc_node->tc_num][layer] = new_node;
+
 		if (i == 0)
 			*first_node_teid = teid;
 	}
@@ -1451,7 +1435,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 		goto lan_q_exit;
 
 	/* get the first queue group node from VSI sub-tree */
-	qgrp_node = ice_sched_get_first_node(pi->hw, vsi_node, qgrp_layer);
+	qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
 	while (qgrp_node) {
 		/* make sure the qgroup node is part of the VSI subtree */
 		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
@@ -1482,7 +1466,7 @@ ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
 	u8 vsi_layer;
 
 	vsi_layer = ice_sched_get_vsi_layer(hw);
-	node = ice_sched_get_first_node(hw, tc_node, vsi_layer);
+	node = ice_sched_get_first_node(hw->port_info, tc_node, vsi_layer);
 
 	/* Check whether it already exists */
 	while (node) {
@@ -1511,7 +1495,7 @@ ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
 	u8 agg_layer;
 
 	agg_layer = ice_sched_get_agg_layer(hw);
-	node = ice_sched_get_first_node(hw, tc_node, agg_layer);
+	node = ice_sched_get_first_node(hw->port_info, tc_node, agg_layer);
 
 	/* Check whether it already exists */
 	while (node) {
@@ -1663,7 +1647,8 @@ ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
 			/* If intermediate nodes are reached max children
 			 * then add a new one.
 			 */
-			node = ice_sched_get_first_node(hw, tc_node, (u8)i);
+			node = ice_sched_get_first_node(hw->port_info, tc_node,
+							(u8)i);
 			/* scan all the siblings */
 			while (node) {
 				if (node->num_children < hw->max_children[i])
@@ -2528,7 +2513,7 @@ ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
 	 * intermediate node on those layers
 	 */
 	for (i = hw->sw_entry_point_layer; i < aggl; i++) {
-		parent = ice_sched_get_first_node(hw, tc_node, i);
+		parent = ice_sched_get_first_node(pi, tc_node, i);
 
 		/* scan all the siblings */
 		while (parent) {
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 5da267f1b..3523b0c35 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -679,6 +679,8 @@ struct ice_port_info {
 	struct ice_mac_info mac;
 	struct ice_phy_info phy;
 	struct ice_lock sched_lock;	/* protect access to TXSched tree */
+	struct ice_sched_node *
+		sib_head[ICE_MAX_TRAFFIC_CLASS][ICE_AQC_TOPO_MAX_LEVEL_NUM];
 	/* List contain profile ID(s) and other params per layer */
 	struct LIST_HEAD_TYPE rl_prof_list[ICE_AQC_TOPO_MAX_LEVEL_NUM];
 	struct ice_dcbx_cfg local_dcbx_cfg;	/* Oper/Local Cfg */
-- 
2.17.1



* [dpdk-dev] [PATCH 22/49] net/ice/base: add and fix debuglogs
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (20 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 21/49] net/ice/base: added sibling head to parse nodes Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 23/49] net/ice/base: add support for reading REPC statistics Leyi Rong
                   ` (28 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Marta Plantykow, Paul M Stillwell Jr

Add missing debug logs and fix existing ones.

Signed-off-by: Marta Plantykow <marta.a.plantykow@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c    | 16 +++----
 drivers/net/ice/base/ice_controlq.c  | 19 ++++++++
 drivers/net/ice/base/ice_flex_pipe.c | 70 ++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_flex_pipe.h |  1 +
 drivers/net/ice/base/ice_nvm.c       | 14 +++---
 5 files changed, 105 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 7f7f4dad0..da72434d3 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -833,7 +833,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	u16 mac_buf_len;
 	void *mac_buf;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 
 	/* Set MAC type based on DeviceID */
@@ -1623,7 +1623,7 @@ ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 	struct ice_aq_desc desc;
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd_resp = &desc.params.res_owner;
 
@@ -1692,7 +1692,7 @@ ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
 	struct ice_aqc_req_res *cmd;
 	struct ice_aq_desc desc;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.res_owner;
 
@@ -1722,7 +1722,7 @@ ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 	u32 time_left = timeout;
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
 
@@ -1780,7 +1780,7 @@ void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
 	enum ice_status status;
 	u32 total_delay = 0;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_aq_release_res(hw, res, 0, NULL);
 
@@ -1814,7 +1814,7 @@ ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
 	struct ice_aqc_alloc_free_res_cmd *cmd;
 	struct ice_aq_desc desc;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.sw_res_ctrl;
 
@@ -3189,7 +3189,7 @@ ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
 	struct ice_aqc_add_txqs *cmd;
 	struct ice_aq_desc desc;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.add_txqs;
 
@@ -3245,7 +3245,7 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
 	enum ice_status status;
 	u16 i, sz = 0;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 	cmd = &desc.params.dis_txqs;
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
 
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index 6d893e2f2..4cb6df113 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -35,6 +35,8 @@ static void ice_adminq_init_regs(struct ice_hw *hw)
 {
 	struct ice_ctl_q_info *cq = &hw->adminq;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	ICE_CQ_INIT_REGS(cq, PF_FW);
 }
 
@@ -295,6 +297,8 @@ static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	if (cq->sq.count > 0) {
 		/* queue already initialized */
 		ret_code = ICE_ERR_NOT_READY;
@@ -354,6 +358,8 @@ static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	if (cq->rq.count > 0) {
 		/* queue already initialized */
 		ret_code = ICE_ERR_NOT_READY;
@@ -422,6 +428,8 @@ ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code = ICE_SUCCESS;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	ice_acquire_lock(&cq->sq_lock);
 
 	if (!cq->sq.count) {
@@ -485,6 +493,8 @@ ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code = ICE_SUCCESS;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	ice_acquire_lock(&cq->rq_lock);
 
 	if (!cq->rq.count) {
@@ -521,6 +531,8 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
 	struct ice_ctl_q_info *cq = &hw->adminq;
 	enum ice_status status;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 
 	status = ice_aq_get_fw_ver(hw, NULL);
 	if (status)
@@ -559,6 +571,8 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 	struct ice_ctl_q_info *cq;
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	switch (q_type) {
 	case ICE_CTL_Q_ADMIN:
 		ice_adminq_init_regs(hw);
@@ -617,6 +631,8 @@ enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
 {
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 
 	/* Init FW admin queue */
 	ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN);
@@ -677,6 +693,8 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 {
 	struct ice_ctl_q_info *cq;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	switch (q_type) {
 	case ICE_CTL_Q_ADMIN:
 		cq = &hw->adminq;
@@ -704,6 +722,7 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
  */
 void ice_shutdown_all_ctrlq(struct ice_hw *hw)
 {
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 	/* Shutdown FW admin queue */
 	ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
 	/* Shutdown PF-VF Mailbox */
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index b569b91a7..e7e349298 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -2417,6 +2417,11 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
 		ice_free(hw, del);
 	}
 
+	/* if VSIG characteristic list was cleared for reset
+	 * re-initialize the list head
+	 */
+	INIT_LIST_HEAD(&hw->blk[blk].xlt2.vsig_tbl[idx].prop_lst);
+
 	return ICE_SUCCESS;
 }
 
@@ -3138,6 +3143,71 @@ static void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx)
 	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
+/**
+ * ice_clear_hw_tbls - clear HW tables and flow profiles
+ * @hw: pointer to the hardware structure
+ */
+void ice_clear_hw_tbls(struct ice_hw *hw)
+{
+	u8 i;
+
+	for (i = 0; i < ICE_BLK_COUNT; i++) {
+		struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
+		struct ice_prof_tcam *prof = &hw->blk[i].prof;
+		struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
+		struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
+		struct ice_es *es = &hw->blk[i].es;
+
+		if (hw->blk[i].is_list_init) {
+			struct ice_prof_map *del, *tmp;
+
+			ice_acquire_lock(&es->prof_map_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &es->prof_map,
+						 ice_prof_map, list) {
+				LIST_DEL(&del->list);
+				ice_free(hw, del);
+			}
+			INIT_LIST_HEAD(&es->prof_map);
+			ice_release_lock(&es->prof_map_lock);
+
+			ice_acquire_lock(&hw->fl_profs_locks[i]);
+			ice_free_flow_profs(hw, i);
+			ice_release_lock(&hw->fl_profs_locks[i]);
+		}
+
+		ice_free_vsig_tbl(hw, (enum ice_block)i);
+
+		ice_memset(xlt1->ptypes, 0, xlt1->count * sizeof(*xlt1->ptypes),
+			   ICE_NONDMA_MEM);
+		ice_memset(xlt1->ptg_tbl, 0,
+			   ICE_MAX_PTGS * sizeof(*xlt1->ptg_tbl),
+			   ICE_NONDMA_MEM);
+		ice_memset(xlt1->t, 0, xlt1->count * sizeof(*xlt1->t),
+			   ICE_NONDMA_MEM);
+
+		ice_memset(xlt2->vsis, 0, xlt2->count * sizeof(*xlt2->vsis),
+			   ICE_NONDMA_MEM);
+		ice_memset(xlt2->vsig_tbl, 0,
+			   xlt2->count * sizeof(*xlt2->vsig_tbl),
+			   ICE_NONDMA_MEM);
+		ice_memset(xlt2->t, 0, xlt2->count * sizeof(*xlt2->t),
+			   ICE_NONDMA_MEM);
+
+		ice_memset(prof->t, 0, prof->count * sizeof(*prof->t),
+			   ICE_NONDMA_MEM);
+		ice_memset(prof_redir->t, 0,
+			   prof_redir->count * sizeof(*prof_redir->t),
+			   ICE_NONDMA_MEM);
+
+		ice_memset(es->t, 0, es->count * sizeof(*es->t),
+			   ICE_NONDMA_MEM);
+		ice_memset(es->ref_count, 0, es->count * sizeof(*es->ref_count),
+			   ICE_NONDMA_MEM);
+		ice_memset(es->written, 0, es->count * sizeof(*es->written),
+			   ICE_NONDMA_MEM);
+	}
+}
+
 /**
  * ice_init_hw_tbls - init hardware table memory
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 375758c8d..df8eac05b 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -107,6 +107,7 @@ ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
 enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
 void ice_free_seg(struct ice_hw *hw);
 void ice_fill_blk_tbls(struct ice_hw *hw);
+void ice_clear_hw_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
 enum ice_status
 ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index b770abfd0..fa9c348ce 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -24,7 +24,7 @@ ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
 	struct ice_aq_desc desc;
 	struct ice_aqc_nvm *cmd;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_read_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.nvm;
 
@@ -95,7 +95,7 @@ ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data,
 {
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_aq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_check_sr_access_params(hw, offset, words);
 
@@ -123,7 +123,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 {
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_word_aq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_read_sr_aq(hw, offset, 1, data, true);
 	if (!status)
@@ -152,7 +152,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 	u16 words_read = 0;
 	u16 i = 0;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_buf_aq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	do {
 		u16 read_size, off_w;
@@ -202,7 +202,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 static enum ice_status
 ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
 {
-	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm\n");
 
 	if (hw->nvm.blank_nvm_mode)
 		return ICE_SUCCESS;
@@ -218,7 +218,7 @@ ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
  */
 static void ice_release_nvm(struct ice_hw *hw)
 {
-	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm\n");
 
 	if (hw->nvm.blank_nvm_mode)
 		return;
@@ -263,7 +263,7 @@ enum ice_status ice_init_nvm(struct ice_hw *hw)
 	u32 fla, gens_stat;
 	u8 sr_size;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_init_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	/* The SR size is stored regardless of the NVM programming mode
 	 * as the blank mode may be used in the factory line.
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 23/49] net/ice/base: add support for reading REPC statistics
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (21 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 22/49] net/ice/base: add and fix debuglogs Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 24/49] net/ice/base: move VSI to VSI group Leyi Rong
                   ` (27 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Add a new ice_stat_update_repc function which reads the GLV_REPC register
and increments the appropriate statistics in the ice_eth_stats structure.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 51 +++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_common.h |  3 ++
 drivers/net/ice/base/ice_type.h   |  2 ++
 3 files changed, 56 insertions(+)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index da72434d3..b4a9172b9 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -4138,6 +4138,57 @@ ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
 		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
 }
 
+/**
+ * ice_stat_update_repc - read GLV_REPC stats from chip and update stat values
+ * @hw: ptr to the hardware info
+ * @vsi_handle: VSI handle
+ * @prev_stat_loaded: bool to specify if the previous stat values are loaded
+ * @cur_stats: ptr to current stats structure
+ *
+ * The GLV_REPC statistic register actually tracks two 16bit statistics, and
+ * thus cannot be read using the normal ice_stat_update32 function.
+ *
+ * Read the GLV_REPC register associated with the given VSI, and update the
+ * rx_no_desc and rx_error values in the ice_eth_stats structure.
+ *
+ * Because the statistics in GLV_REPC stick at 0xFFFF, the register must be
+ * cleared each time it's read.
+ *
+ * Note that the GLV_RDPC register also counts the causes that would trigger
+ * GLV_REPC. However, it does not give the finer grained detail about why the
+ * packets are being dropped. The GLV_REPC values can be used to distinguish
+ * whether Rx packets are dropped due to errors or due to no available
+ * descriptors.
+ */
+void
+ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
+		     struct ice_eth_stats *cur_stats)
+{
+	u16 vsi_num, no_desc, error_cnt;
+	u32 repc;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return;
+
+	vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	/* If we haven't loaded stats yet, just clear the current value */
+	if (!prev_stat_loaded) {
+		wr32(hw, GLV_REPC(vsi_num), 0);
+		return;
+	}
+
+	repc = rd32(hw, GLV_REPC(vsi_num));
+	no_desc = (repc & GLV_REPC_NO_DESC_CNT_M) >> GLV_REPC_NO_DESC_CNT_S;
+	error_cnt = (repc & GLV_REPC_ERROR_CNT_M) >> GLV_REPC_ERROR_CNT_S;
+
+	/* Clear the count by writing to the stats register */
+	wr32(hw, GLV_REPC(vsi_num), 0);
+
+	cur_stats->rx_no_desc += no_desc;
+	cur_stats->rx_errors += error_cnt;
+}
+
 
 /**
  * ice_sched_query_elem - query element information from HW
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 10131b473..2ea4a6e8e 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -205,6 +205,9 @@ ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
 void
 ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
 		  u64 *prev_stat, u64 *cur_stat);
+void
+ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
+		     struct ice_eth_stats *cur_stats);
 enum ice_status
 ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
 		     struct ice_aqc_get_elem *buf);
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 3523b0c35..477f34595 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -853,6 +853,8 @@ struct ice_eth_stats {
 	u64 rx_broadcast;		/* bprc */
 	u64 rx_discards;		/* rdpc */
 	u64 rx_unknown_protocol;	/* rupp */
+	u64 rx_no_desc;			/* repc */
+	u64 rx_errors;			/* repc */
 	u64 tx_bytes;			/* gotc */
 	u64 tx_unicast;			/* uptc */
 	u64 tx_multicast;		/* mptc */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 24/49] net/ice/base: move VSI to VSI group
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (22 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 23/49] net/ice/base: add support for reading REPC statistics Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 25/49] net/ice/base: forbid VSI to remove unassociated ucast filter Leyi Rong
                   ` (26 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

Add a function to add a VSI to a given VSIG and update the package with
this entry. The usual flow in XLT management iterates through all
characteristics of the input VSI, creating a new VSIG and TCAMs until a
matching characteristic is found; when a match is found, the VSI is moved
into the matching VSIG and entries are collapsed, leading to additional
package update calls. This function serves as an optimization when we know
beforehand that the input VSI has the same characteristics as a VSI
previously added to a VSIG. This is particularly useful for VF VSIs,
which are usually all programmed with the same configuration.

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 41 ++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_flex_pipe.h |  2 ++
 drivers/net/ice/base/ice_flow.c      | 28 +++++++++++++++++++
 drivers/net/ice/base/ice_flow.h      |  4 ++-
 4 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index e7e349298..fdbf893a8 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -4897,6 +4897,47 @@ ice_find_prof_vsig(struct ice_hw *hw, enum ice_block blk, u64 hdl, u16 *vsig)
 	return status == ICE_SUCCESS;
 }
 
+/**
+ * ice_add_vsi_flow - add VSI flow
+ * @hw: pointer to the HW struct
+ * @blk: hardware block
+ * @vsi: input VSI
+ * @vsig: target VSIG to include the input VSI
+ *
+ * Calling this function will add the VSI to a given VSIG and
+ * update the HW tables accordingly. This call can be used to
+ * add multiple VSIs to a VSIG if we know beforehand that those
+ * VSIs have the same characteristics of the VSIG. This will
+ * save time in generating a new VSIG and TCAMs till a match is
+ * found and subsequent rollback when a matching VSIG is found.
+ */
+enum ice_status
+ice_add_vsi_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
+{
+	struct ice_chs_chg *tmp, *del;
+	struct LIST_HEAD_TYPE chg;
+	enum ice_status status;
+
+	/* if target VSIG is default the move is invalid */
+	if ((vsig & ICE_VSIG_IDX_M) == ICE_DEFAULT_VSIG)
+		return ICE_ERR_PARAM;
+
+	INIT_LIST_HEAD(&chg);
+
+	/* move VSI to the VSIG that matches */
+	status = ice_move_vsi(hw, blk, vsi, vsig, &chg);
+	/* update hardware if success */
+	if (!status)
+		status = ice_upd_prof_hw(hw, blk, &chg);
+
+	LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &chg, ice_chs_chg, list_entry) {
+		LIST_DEL(&del->list_entry);
+		ice_free(hw, del);
+	}
+
+	return status;
+}
+
 /**
  * ice_add_prof_id_flow - add profile flow
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index df8eac05b..4714fe646 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -93,6 +93,8 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 struct ice_prof_map *
 ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id);
 enum ice_status
+ice_add_vsi_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
+enum ice_status
 ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
 enum ice_status
 ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 9f2a794bc..1ec49fcd9 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1109,6 +1109,34 @@ ice_flow_rem_prof_sync(struct ice_hw *hw, enum ice_block blk,
 	return status;
 }
 
+/**
+ * ice_flow_assoc_vsig_vsi - associate a VSI with VSIG
+ * @hw: pointer to the hardware structure
+ * @blk: classification stage
+ * @vsi_handle: software VSI handle
+ * @vsig: target VSI group
+ *
+ * Assumption: the caller has already verified that the VSI to
+ * be added has the same characteristics as the VSIG and will
+ * thereby have access to all resources added to that VSIG.
+ */
+enum ice_status
+ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle,
+			u16 vsig)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle) || blk >= ICE_BLK_COUNT)
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&hw->fl_profs_locks[blk]);
+	status = ice_add_vsi_flow(hw, blk, ice_get_hw_vsi_num(hw, vsi_handle),
+				  vsig);
+	ice_release_lock(&hw->fl_profs_locks[blk]);
+
+	return status;
+}
+
 /**
  * ice_flow_assoc_prof - associate a VSI with a flow profile
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index 4fa13064e..57514a078 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -319,7 +319,9 @@ ice_flow_add_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir,
 		  struct ice_flow_prof **prof);
 enum ice_status
 ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id);
-
+enum ice_status
+ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle,
+			u16 vsig);
 enum ice_status
 ice_flow_get_hw_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 		     u8 *hw_prof);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 25/49] net/ice/base: forbid VSI to remove unassociated ucast filter
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (23 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 24/49] net/ice/base: move VSI to VSI group Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 26/49] net/ice/base: add some minor features Leyi Rong
                   ` (25 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Akeem G Abodunrin, Paul M Stillwell Jr

If a VSI is not using a particular unicast filter, or did not configure
it, the driver should not allow a rogue VSI to remove that filter.

Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 57 +++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 7cccaf4d3..faaedd4c8 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -3180,6 +3180,39 @@ ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
 	return status;
 }
 
+/**
+ * ice_find_ucast_rule_entry - Search for a unicast MAC filter rule entry
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which the specified rule needs to be searched
+ * @f_info: rule information
+ *
+ * Helper function to search for a unicast rule entry - this is to be used
+ * to remove unicast MAC filter that is not shared with other VSIs on the
+ * PF switch.
+ *
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_fltr_mgmt_list_entry *
+ice_find_ucast_rule_entry(struct ice_hw *hw, u8 recp_id,
+			  struct ice_fltr_info *f_info)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (!memcmp(&f_info->l_data, &list_itr->fltr_info.l_data,
+			    sizeof(f_info->l_data)) &&
+		    f_info->fwd_id.hw_vsi_id ==
+		    list_itr->fltr_info.fwd_id.hw_vsi_id &&
+		    f_info->flag == list_itr->fltr_info.flag)
+			return list_itr;
+	}
+	return NULL;
+}
+
 /**
  * ice_remove_mac - remove a MAC address based filter rule
  * @hw: pointer to the hardware structure
@@ -3197,16 +3230,40 @@ enum ice_status
 ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
 {
 	struct ice_fltr_list_entry *list_itr, *tmp;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
 
 	if (!m_list)
 		return ICE_ERR_PARAM;
 
+	rule_lock = &hw->switch_info->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
 	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, m_list, ice_fltr_list_entry,
 				 list_entry) {
 		enum ice_sw_lkup_type l_type = list_itr->fltr_info.lkup_type;
+		u8 *add = &list_itr->fltr_info.l_data.mac.mac_addr[0];
+		u16 vsi_handle;
 
 		if (l_type != ICE_SW_LKUP_MAC)
 			return ICE_ERR_PARAM;
+
+		vsi_handle = list_itr->fltr_info.vsi_handle;
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			return ICE_ERR_PARAM;
+
+		list_itr->fltr_info.fwd_id.hw_vsi_id =
+					ice_get_hw_vsi_num(hw, vsi_handle);
+		if (IS_UNICAST_ETHER_ADDR(add) && !hw->ucast_shared) {
+			/* Don't remove the unicast address that belongs to
+			 * another VSI on the switch, since it is not being
+			 * shared...
+			 */
+			ice_acquire_lock(rule_lock);
+			if (!ice_find_ucast_rule_entry(hw, ICE_SW_LKUP_MAC,
+						       &list_itr->fltr_info)) {
+				ice_release_lock(rule_lock);
+				return ICE_ERR_DOES_NOT_EXIST;
+			}
+			ice_release_lock(rule_lock);
+		}
 		list_itr->status = ice_remove_rule_internal(hw,
 							    ICE_SW_LKUP_MAC,
 							    list_itr);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 26/49] net/ice/base: add some minor features
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (24 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 25/49] net/ice/base: forbid VSI to remove unassociated ucast filter Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 27/49] net/ice/base: call out dev/func caps when printing Leyi Rong
                   ` (24 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

1. Add loopback reporting to the get link status response.
2. Add infrastructure for NVM Write/Write Activate calls.
3. Add opcodes for the NVM save factory settings and NVM update EMPR commands.
4. Add LAN overflow event to ice_aq_desc.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 47 ++++++++++++++++++---------
 1 file changed, 32 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 77f93b950..4e6bce18c 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -110,6 +110,7 @@ struct ice_aqc_list_caps {
 struct ice_aqc_list_caps_elem {
 	__le16 cap;
 #define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
+#define ICE_AQC_MAX_VALID_FUNCTIONS			0x8
 #define ICE_AQC_CAPS_VSI				0x0017
 #define ICE_AQC_CAPS_DCB				0x0018
 #define ICE_AQC_CAPS_RSS				0x0040
@@ -143,11 +144,9 @@ struct ice_aqc_manage_mac_read {
 #define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
 #define ICE_AQC_MAN_MAC_READ_S			4
 #define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
-	u8 lport_num;
-	u8 lport_num_valid;
-#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID	BIT(0)
+	u8 rsvd[2];
 	u8 num_addr; /* Used in response */
-	u8 reserved[3];
+	u8 rsvd1[3];
 	__le32 addr_high;
 	__le32 addr_low;
 };
@@ -165,7 +164,7 @@ struct ice_aqc_manage_mac_read_resp {
 
 /* Manage MAC address, write command - direct (0x0108) */
 struct ice_aqc_manage_mac_write {
-	u8 port_num;
+	u8 rsvd;
 	u8 flags;
 #define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
 #define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
@@ -481,8 +480,8 @@ struct ice_aqc_vsi_props {
 #define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
 #define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
 #define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
-#define ICE_AQ_VSI_VLAN_EMOD_S	3
-#define ICE_AQ_VSI_VLAN_EMOD_M	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_S		3
+#define ICE_AQ_VSI_VLAN_EMOD_M		(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
 #define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
 #define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
 #define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
@@ -1425,6 +1424,7 @@ struct ice_aqc_get_phy_caps_data {
 #define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
 #define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
 #define ICE_AQC_PHY_FEC_MASK				MAKEMASK(0xdf, 0)
+	u8 rsvd1;	/* Byte 35 reserved */
 	u8 extended_compliance_code;
 #define ICE_MODULE_TYPE_TOTAL_BYTE			3
 	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
@@ -1439,13 +1439,14 @@ struct ice_aqc_get_phy_caps_data {
 #define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
 #define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
 	u8 qualified_module_count;
+	u8 rsvd2[7];	/* Bytes 47:41 reserved */
 #define ICE_AQC_QUAL_MOD_COUNT_MAX			16
 	struct {
 		u8 v_oui[3];
 		u8 rsvd3;
 		u8 v_part[16];
 		__le32 v_rev;
-		__le64 rsvd8;
+		__le64 rsvd4;
 	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
 };
 
@@ -1571,7 +1572,12 @@ struct ice_aqc_get_link_status_data {
 #define ICE_AQ_LINK_TX_ACTIVE		0
 #define ICE_AQ_LINK_TX_DRAINED		1
 #define ICE_AQ_LINK_TX_FLUSHED		3
-	u8 reserved2;
+	u8 lb_status;
+#define ICE_AQ_LINK_LB_PHY_LCL		BIT(0)
+#define ICE_AQ_LINK_LB_PHY_RMT		BIT(1)
+#define ICE_AQ_LINK_LB_MAC_LCL		BIT(2)
+#define ICE_AQ_LINK_LB_PHY_IDX_S	3
+#define ICE_AQ_LINK_LB_PHY_IDX_M	(0x7 << ICE_AQ_LINK_LB_PHY_IDX_S)
 	__le16 max_frame_size;
 	u8 cfg;
 #define ICE_AQ_LINK_25G_KR_FEC_EN	BIT(0)
@@ -1659,20 +1665,26 @@ struct ice_aqc_set_port_id_led {
 
 /* NVM Read command (indirect 0x0701)
  * NVM Erase commands (direct 0x0702)
- * NVM Update commands (indirect 0x0703)
+ * NVM Write commands (indirect 0x0703)
+ * NVM Write Activate commands (direct 0x0707)
+ * NVM Shadow RAM Dump commands (direct 0x0707)
  */
 struct ice_aqc_nvm {
 	__le16 offset_low;
 	u8 offset_high;
 	u8 cmd_flags;
 #define ICE_AQC_NVM_LAST_CMD		BIT(0)
-#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Update reply */
-#define ICE_AQC_NVM_PRESERVATION_S	1
+#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Write reply */
+#define ICE_AQC_NVM_PRESERVATION_S	1 /* Used by NVM Write Activate only */
 #define ICE_AQC_NVM_PRESERVATION_M	(3 << ICE_AQC_NVM_PRESERVATION_S)
 #define ICE_AQC_NVM_NO_PRESERVATION	(0 << ICE_AQC_NVM_PRESERVATION_S)
 #define ICE_AQC_NVM_PRESERVE_ALL	BIT(1)
 #define ICE_AQC_NVM_FACTORY_DEFAULT	(2 << ICE_AQC_NVM_PRESERVATION_S)
 #define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_ACTIV_SEL_NVM	BIT(3) /* Write Activate/SR Dump only */
+#define ICE_AQC_NVM_ACTIV_SEL_OROM	BIT(4)
+#define ICE_AQC_NVM_ACTIV_SEL_EXT_TLV	BIT(5)
+#define ICE_AQC_NVM_ACTIV_SEL_MASK	MAKEMASK(0x7, 3)
 #define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
 	__le16 module_typeid;
 	__le16 length;
@@ -1832,7 +1844,7 @@ struct ice_aqc_get_cee_dcb_cfg_resp {
 };
 
 /* Set Local LLDP MIB (indirect 0x0A08)
- * Used to replace the local MIB of a given LLDP agent. e.g. DCBx
+ * Used to replace the local MIB of a given LLDP agent. e.g. DCBX
  */
 struct ice_aqc_lldp_set_local_mib {
 	u8 type;
@@ -1857,7 +1869,7 @@ struct ice_aqc_lldp_set_local_mib_resp {
 };
 
 /* Stop/Start LLDP Agent (direct 0x0A09)
- * Used for stopping/starting specific LLDP agent. e.g. DCBx.
+ * Used for stopping/starting specific LLDP agent. e.g. DCBX.
  * The same structure is used for the response, with the command field
  * being used as the status field.
  */
@@ -2321,6 +2333,7 @@ struct ice_aq_desc {
 		struct ice_aqc_set_mac_cfg set_mac_cfg;
 		struct ice_aqc_set_event_mask set_event_mask;
 		struct ice_aqc_get_link_status get_link_status;
+		struct ice_aqc_event_lan_overflow lan_overflow;
 	} params;
 };
 
@@ -2492,10 +2505,14 @@ enum ice_adminq_opc {
 	/* NVM commands */
 	ice_aqc_opc_nvm_read				= 0x0701,
 	ice_aqc_opc_nvm_erase				= 0x0702,
-	ice_aqc_opc_nvm_update				= 0x0703,
+	ice_aqc_opc_nvm_write				= 0x0703,
 	ice_aqc_opc_nvm_cfg_read			= 0x0704,
 	ice_aqc_opc_nvm_cfg_write			= 0x0705,
 	ice_aqc_opc_nvm_checksum			= 0x0706,
+	ice_aqc_opc_nvm_write_activate			= 0x0707,
+	ice_aqc_opc_nvm_sr_dump				= 0x0707,
+	ice_aqc_opc_nvm_save_factory_settings		= 0x0708,
+	ice_aqc_opc_nvm_update_empr			= 0x0709,
 
 	/* LLDP commands */
 	ice_aqc_opc_lldp_get_mib			= 0x0A00,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 27/49] net/ice/base: call out dev/func caps when printing
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (25 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 26/49] net/ice/base: add some minor features Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 28/49] net/ice/base: add some minor features Leyi Rong
                   ` (23 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Anirudh Venkataramanan, Paul M Stillwell Jr

Add a "func cap" prefix when printing function capabilities, and a
"dev cap" prefix when printing device capabilities.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 75 ++++++++++++++++++-------------
 drivers/net/ice/base/ice_osdep.h  | 14 ++++++
 2 files changed, 59 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index b4a9172b9..6e5a60a38 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1948,6 +1948,7 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 	struct ice_hw_func_caps *func_p = NULL;
 	struct ice_hw_dev_caps *dev_p = NULL;
 	struct ice_hw_common_caps *caps;
+	char const *prefix;
 	u32 i;
 
 	if (!buf)
@@ -1958,9 +1959,11 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 	if (opc == ice_aqc_opc_list_dev_caps) {
 		dev_p = &hw->dev_caps;
 		caps = &dev_p->common_cap;
+		prefix = "dev cap";
 	} else if (opc == ice_aqc_opc_list_func_caps) {
 		func_p = &hw->func_caps;
 		caps = &func_p->common_cap;
+		prefix = "func cap";
 	} else {
 		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
 		return;
@@ -1976,21 +1979,25 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 		case ICE_AQC_CAPS_VALID_FUNCTIONS:
 			caps->valid_functions = number;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Valid Functions = %d\n",
+				  "%s: valid functions = %d\n", prefix,
 				  caps->valid_functions);
 			break;
 		case ICE_AQC_CAPS_VSI:
 			if (dev_p) {
 				dev_p->num_vsi_allocd_to_host = number;
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Dev.VSI cnt = %d\n",
+					  "%s: num VSI alloc to host = %d\n",
+					  prefix,
 					  dev_p->num_vsi_allocd_to_host);
 			} else if (func_p) {
 				func_p->guar_num_vsi =
 					ice_get_num_per_func(hw, ICE_MAX_VSI);
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Func.VSI cnt = %d\n",
-					  number);
+					  "%s: num guaranteed VSI (fw) = %d\n",
+					  prefix, number);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "%s: num guaranteed VSI = %d\n",
+					  prefix, func_p->guar_num_vsi);
 			}
 			break;
 		case ICE_AQC_CAPS_DCB:
@@ -1998,49 +2005,51 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 			caps->active_tc_bitmap = logical_id;
 			caps->maxtc = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: DCB = %d\n", caps->dcb);
+				  "%s: DCB = %d\n", prefix, caps->dcb);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Active TC bitmap = %d\n",
+				  "%s: active TC bitmap = %d\n", prefix,
 				  caps->active_tc_bitmap);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: TC Max = %d\n", caps->maxtc);
+				  "%s: TC max = %d\n", prefix, caps->maxtc);
 			break;
 		case ICE_AQC_CAPS_RSS:
 			caps->rss_table_size = number;
 			caps->rss_table_entry_width = logical_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: RSS table size = %d\n",
+				  "%s: RSS table size = %d\n", prefix,
 				  caps->rss_table_size);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: RSS table width = %d\n",
+				  "%s: RSS table width = %d\n", prefix,
 				  caps->rss_table_entry_width);
 			break;
 		case ICE_AQC_CAPS_RXQS:
 			caps->num_rxq = number;
 			caps->rxq_first_id = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+				  "%s: num Rx queues = %d\n", prefix,
+				  caps->num_rxq);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Rx first queue ID = %d\n",
+				  "%s: Rx first queue ID = %d\n", prefix,
 				  caps->rxq_first_id);
 			break;
 		case ICE_AQC_CAPS_TXQS:
 			caps->num_txq = number;
 			caps->txq_first_id = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+				  "%s: num Tx queues = %d\n", prefix,
+				  caps->num_txq);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Tx first queue ID = %d\n",
+				  "%s: Tx first queue ID = %d\n", prefix,
 				  caps->txq_first_id);
 			break;
 		case ICE_AQC_CAPS_MSIX:
 			caps->num_msix_vectors = number;
 			caps->msix_vector_first_id = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: MSIX vector count = %d\n",
+				  "%s: MSIX vector count = %d\n", prefix,
 				  caps->num_msix_vectors);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: MSIX first vector index = %d\n",
+				  "%s: MSIX first vector index = %d\n", prefix,
 				  caps->msix_vector_first_id);
 			break;
 		case ICE_AQC_CAPS_FD:
@@ -2050,7 +2059,7 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 			if (dev_p) {
 				dev_p->num_flow_director_fltr = number;
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Dev.fd_fltr =%d\n",
+					  "%s: num FD filters = %d\n", prefix,
 					  dev_p->num_flow_director_fltr);
 			}
 			if (func_p) {
@@ -2063,32 +2072,38 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 				      GLQF_FD_SIZE_FD_BSIZE_S;
 				func_p->fd_fltr_best_effort = val;
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW:func.fd_fltr guar= %d\n",
-					  func_p->fd_fltr_guar);
+					  "%s: num guaranteed FD filters = %d\n",
+					  prefix, func_p->fd_fltr_guar);
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW:func.fd_fltr best effort=%d\n",
-					  func_p->fd_fltr_best_effort);
+					  "%s: num best effort FD filters = %d\n",
+					  prefix, func_p->fd_fltr_best_effort);
 			}
 			break;
 		}
 		case ICE_AQC_CAPS_MAX_MTU:
 			caps->max_mtu = number;
-			if (dev_p)
-				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Dev.MaxMTU = %d\n",
-					  caps->max_mtu);
-			else if (func_p)
-				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: func.MaxMTU = %d\n",
-					  caps->max_mtu);
+			ice_debug(hw, ICE_DBG_INIT, "%s: max MTU = %d\n",
+				  prefix, caps->max_mtu);
 			break;
 		default:
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
-				  cap);
+				  "%s: unknown capability[%d]: 0x%x\n", prefix,
+				  i, cap);
 			break;
 		}
 	}
+
+	/* Re-calculate capabilities that are dependent on the number of
+	 * physical ports; i.e. some features are not supported or function
+	 * differently on devices with more than 4 ports.
+	 */
+	if (caps && (ice_hweight32(caps->valid_functions) > 4)) {
+		/* Max 4 TCs per port */
+		caps->maxtc = 4;
+		ice_debug(hw, ICE_DBG_INIT,
+			  "%s: TC max = %d (based on #ports)\n", prefix,
+			  caps->maxtc);
+	}
 }
 
 /**
diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
index d2d9238c7..ede893fc9 100644
--- a/drivers/net/ice/base/ice_osdep.h
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -267,6 +267,20 @@ ice_hweight8(u32 num)
 	return bits;
 }
 
+static inline u8
+ice_hweight32(u32 num)
+{
+	u8 bits = 0;
+	u32 i;
+
+	for (i = 0; i < 32; i++) {
+		bits += (u8)(num & 0x1);
+		num >>= 1;
+	}
+
+	return bits;
+}
+
 #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
 #define DELAY(x) rte_delay_us(x)
 #define ice_usec_delay(x) rte_delay_us(x)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 28/49] net/ice/base: add some minor features
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (26 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 27/49] net/ice/base: call out dev/func caps when printing Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 29/49] net/ice/base: cleanup update link info Leyi Rong
                   ` (22 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

1. Disable the TX pacing option.
2. Use a different ICE_DBG bit for firmware log messages.
3. Always set prefena when configuring an RX queue.
4. Make FDID available for the Flex Descriptor.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c    | 44 +++++++++++++---------------
 drivers/net/ice/base/ice_fdir.c      |  2 +-
 drivers/net/ice/base/ice_lan_tx_rx.h |  3 +-
 drivers/net/ice/base/ice_type.h      |  2 +-
 4 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 6e5a60a38..89c922bed 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -449,11 +449,7 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd)
 {
 	u16 fc_threshold_val, tx_timer_val;
 	struct ice_aqc_set_mac_cfg *cmd;
-	struct ice_port_info *pi;
 	struct ice_aq_desc desc;
-	enum ice_status status;
-	u8 port_num = 0;
-	bool link_up;
 	u32 reg_val;
 
 	cmd = &desc.params.set_mac_cfg;
@@ -465,21 +461,6 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd)
 
 	cmd->max_frame_size = CPU_TO_LE16(max_frame_size);
 
-	/* Retrieve the current data_pacing value in FW*/
-	pi = &hw->port_info[port_num];
-
-	/* We turn on the get_link_info so that ice_update_link_info(...)
-	 * can be called.
-	 */
-	pi->phy.get_link_info = 1;
-
-	status = ice_get_link_status(pi, &link_up);
-
-	if (status)
-		return status;
-
-	cmd->params = pi->phy.link_info.pacing;
-
 	/* We read back the transmit timer and fc threshold value of
 	 * LFC. Thus, we will use index =
 	 * PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX.
@@ -544,7 +525,15 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
 	}
 	recps = hw->switch_info->recp_list;
 	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		struct ice_recp_grp_entry *rg_entry, *tmprg_entry;
+
 		recps[i].root_rid = i;
+		LIST_FOR_EACH_ENTRY_SAFE(rg_entry, tmprg_entry,
+					 &recps[i].rg_list, ice_recp_grp_entry,
+					 l_entry) {
+			LIST_DEL(&rg_entry->l_entry);
+			ice_free(hw, rg_entry);
+		}
 
 		if (recps[i].adv_rule) {
 			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
@@ -571,6 +560,8 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
 				ice_free(hw, lst_itr);
 			}
 		}
+		if (recps[i].root_buf)
+			ice_free(hw, recps[i].root_buf);
 	}
 	ice_rm_all_sw_replay_rule_info(hw);
 	ice_free(hw, sw->recp_list);
@@ -789,10 +780,10 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
  */
 void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
 {
-	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
-	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+	ice_debug(hw, ICE_DBG_FW_LOG, "[ FW Log Msg Start ]\n");
+	ice_debug_array(hw, ICE_DBG_FW_LOG, 16, 1, (u8 *)buf,
 			LE16_TO_CPU(desc->datalen));
-	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+	ice_debug(hw, ICE_DBG_FW_LOG, "[ FW Log Msg End ]\n");
 }
 
 /**
@@ -1213,6 +1204,7 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
 	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
 	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
 	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
+	ICE_CTX_STORE(ice_rlan_ctx, prefena,		1,	201),
 	{ 0 }
 };
 
@@ -1223,7 +1215,8 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
  * @rxq_index: the index of the Rx queue
  *
  * Converts rxq context from sparse to dense structure and then writes
- * it to HW register space
+ * it to HW register space and enables the hardware to prefetch descriptors
+ * instead of only fetching them on demand
  */
 enum ice_status
 ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
@@ -1231,6 +1224,11 @@ ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
 {
 	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
 
+	if (!rlan_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	rlan_ctx->prefena = 1;
+
 	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
 	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
 }
diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index 4bc8e6dcb..bde676a8f 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -186,7 +186,7 @@ ice_set_dflt_val_fd_desc(struct ice_fd_fltr_desc_ctx *fd_fltr_ctx)
 	fd_fltr_ctx->desc_prof_prio = ICE_FXD_FLTR_QW1_PROF_PRIO_ZERO;
 	fd_fltr_ctx->desc_prof = ICE_FXD_FLTR_QW1_PROF_ZERO;
 	fd_fltr_ctx->swap = ICE_FXD_FLTR_QW1_SWAP_SET;
-	fd_fltr_ctx->fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ZERO;
+	fd_fltr_ctx->fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ONE;
 	fd_fltr_ctx->fdid_mdid = ICE_FXD_FLTR_QW1_FDID_MDID_FD;
 	fd_fltr_ctx->fdid = ICE_FXD_FLTR_QW1_FDID_ZERO;
 }
diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
index 8c9902994..fa2309bf1 100644
--- a/drivers/net/ice/base/ice_lan_tx_rx.h
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -162,7 +162,7 @@ struct ice_fltr_desc {
 
 #define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
 #define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
-#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
+#define ICE_FXD_FLTR_QW1_FDID_PRI_ONE	0x1ULL
 
 #define ICE_FXD_FLTR_QW1_FDID_MDID_S	28
 #define ICE_FXD_FLTR_QW1_FDID_MDID_M	(0xFULL << ICE_FXD_FLTR_QW1_FDID_MDID_S)
@@ -807,6 +807,7 @@ struct ice_rlan_ctx {
 	u8 tphdata_ena;
 	u8 tphhead_ena;
 	u16 lrxqthresh; /* bigger than needed, see above for reason */
+	u8 prefena;	/* NOTE: normally must be set to 1 at init */
 };
 
 struct ice_ctx_ele {
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 477f34595..116cfe647 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -82,7 +82,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 /* debug masks - set these bits in hw->debug_mask to control output */
 #define ICE_DBG_INIT		BIT_ULL(1)
 #define ICE_DBG_RELEASE		BIT_ULL(2)
-
+#define ICE_DBG_FW_LOG		BIT_ULL(3)
 #define ICE_DBG_LINK		BIT_ULL(4)
 #define ICE_DBG_PHY		BIT_ULL(5)
 #define ICE_DBG_QCTX		BIT_ULL(6)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 29/49] net/ice/base: cleanup update link info
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (27 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 28/49] net/ice/base: add some minor features Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 30/49] net/ice/base: add rd64 support Leyi Rong
                   ` (21 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Chinh T Cao, Paul M Stillwell Jr

1. Do not unnecessarily initialize a local variable.
2. Clean up ice_update_link_info.
3. Don't clear the auto_fec bit in ice_cfg_phy_fec.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 52 ++++++++++++++-----------------
 1 file changed, 24 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 89c922bed..db3acc040 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -2414,10 +2414,10 @@ void
 ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
 		    u16 link_speeds_bitmap)
 {
-	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
 	u64 pt_high;
 	u64 pt_low;
 	int index;
+	u16 speed;
 
 	/* We first check with low part of phy_type */
 	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
@@ -2498,38 +2498,38 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
  */
 enum ice_status ice_update_link_info(struct ice_port_info *pi)
 {
-	struct ice_aqc_get_phy_caps_data *pcaps;
-	struct ice_phy_info *phy_info;
+	struct ice_link_status *li;
 	enum ice_status status;
-	struct ice_hw *hw;
 
 	if (!pi)
 		return ICE_ERR_PARAM;
 
-	hw = pi->hw;
-
-	pcaps = (struct ice_aqc_get_phy_caps_data *)
-		ice_malloc(hw, sizeof(*pcaps));
-	if (!pcaps)
-		return ICE_ERR_NO_MEMORY;
+	li = &pi->phy.link_info;
 
-	phy_info = &pi->phy;
 	status = ice_aq_get_link_info(pi, true, NULL, NULL);
 	if (status)
-		goto out;
+		return status;
+
+	if (li->link_info & ICE_AQ_MEDIA_AVAILABLE) {
+		struct ice_aqc_get_phy_caps_data *pcaps;
+		struct ice_hw *hw;
 
-	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
-		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+		hw = pi->hw;
+		pcaps = (struct ice_aqc_get_phy_caps_data *)
+			ice_malloc(hw, sizeof(*pcaps));
+		if (!pcaps)
+			return ICE_ERR_NO_MEMORY;
+
+		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP,
 					     pcaps, NULL);
-		if (status)
-			goto out;
+		if (status == ICE_SUCCESS)
+			ice_memcpy(li->module_type, &pcaps->module_type,
+				   sizeof(li->module_type),
+				   ICE_NONDMA_TO_NONDMA);
 
-		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
-			   sizeof(phy_info->link_info.module_type),
-			   ICE_NONDMA_TO_NONDMA);
+		ice_free(hw, pcaps);
 	}
-out:
-	ice_free(hw, pcaps);
+
 	return status;
 }
 
@@ -2792,27 +2792,24 @@ ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec)
 {
 	switch (fec) {
 	case ICE_FEC_BASER:
-		/* Clear auto FEC and RS bits, and AND BASE-R ability
+		/* Clear RS bits, and AND BASE-R ability
 		 * bits and OR request bits.
 		 */
-		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
 		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
 				     ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
 		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
 				     ICE_AQC_PHY_FEC_25G_KR_REQ;
 		break;
 	case ICE_FEC_RS:
-		/* Clear auto FEC and BASE-R bits, and AND RS ability
+		/* Clear BASE-R bits, and AND RS ability
 		 * bits and OR request bits.
 		 */
-		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
 		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
 		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
 				     ICE_AQC_PHY_FEC_25G_RS_544_REQ;
 		break;
 	case ICE_FEC_NONE:
-		/* Clear auto FEC and all FEC option bits. */
-		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		/* Clear all FEC option bits. */
 		cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
 		break;
 	case ICE_FEC_AUTO:
@@ -3912,7 +3909,6 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
 	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
 		return ICE_ERR_CFG;
 
-
 	if (!num_queues) {
 		/* if queue is disabled already yet the disable queue command
 		 * has to be sent to complete the VF reset, then call
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 30/49] net/ice/base: add rd64 support
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (28 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 29/49] net/ice/base: cleanup update link info Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 31/49] net/ice/base: track HW stat registers past rollover Leyi Rong
                   ` (20 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong

Add API support for rd64.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_osdep.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
index ede893fc9..35a17b941 100644
--- a/drivers/net/ice/base/ice_osdep.h
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -126,11 +126,19 @@ do {									\
 #define ICE_PCI_REG(reg)     rte_read32(reg)
 #define ICE_PCI_REG_ADDR(a, reg) \
 	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+#define ICE_PCI_REG64(reg)     rte_read64(reg)
+#define ICE_PCI_REG_ADDR64(a, reg) \
+	((volatile uint64_t *)((char *)(a)->hw_addr + (reg)))
 static inline uint32_t ice_read_addr(volatile void *addr)
 {
 	return rte_le_to_cpu_32(ICE_PCI_REG(addr));
 }
 
+static inline uint64_t ice_read_addr64(volatile void *addr)
+{
+	return rte_le_to_cpu_64(ICE_PCI_REG64(addr));
+}
+
 #define ICE_PCI_REG_WRITE(reg, value) \
 	rte_write32((rte_cpu_to_le_32(value)), reg)
 
@@ -145,6 +153,7 @@ static inline uint32_t ice_read_addr(volatile void *addr)
 	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((a), (reg)), (value))
 #define flush(a) ice_read_addr(ICE_PCI_REG_ADDR((a), (GLGEN_STAT)))
 #define div64_long(n, d) ((n) / (d))
+#define rd64(a, reg) ice_read_addr64(ICE_PCI_REG_ADDR64((a), (reg)))
 
 #define BITS_PER_BYTE       8
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 31/49] net/ice/base: track HW stat registers past rollover
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (29 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 30/49] net/ice/base: add rd64 support Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 32/49] net/ice/base: implement LLDP persistent settings Leyi Rong
                   ` (19 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Modify ice_stat_update40 to use rd64 instead of two calls to rd32.
Additionally, drop the now unnecessary hireg function parameter.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 57 +++++++++++++++++++------------
 drivers/net/ice/base/ice_common.h | 10 +++---
 2 files changed, 39 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index db3acc040..f9a5d43e6 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -4087,40 +4087,44 @@ void ice_replay_post(struct ice_hw *hw)
 /**
  * ice_stat_update40 - read 40 bit stat from the chip and update stat values
  * @hw: ptr to the hardware info
- * @hireg: high 32 bit HW register to read from
- * @loreg: low 32 bit HW register to read from
+ * @reg: offset of 64 bit HW register to read from
  * @prev_stat_loaded: bool to specify if previous stats are loaded
  * @prev_stat: ptr to previous loaded stat value
  * @cur_stat: ptr to current stat value
  */
 void
-ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
-		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
+ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat)
 {
-	u64 new_data;
-
-	new_data = rd32(hw, loreg);
-	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
+	u64 new_data = rd64(hw, reg) & (BIT_ULL(40) - 1);
 
 	/* device stats are not reset at PFR, they likely will not be zeroed
-	 * when the driver starts. So save the first values read and use them as
-	 * offsets to be subtracted from the raw values in order to report stats
-	 * that count from zero.
+	 * when the driver starts. Thus, save the value from the first read
+	 * without adding to the statistic value so that we report stats which
+	 * count up from zero.
 	 */
-	if (!prev_stat_loaded)
+	if (!prev_stat_loaded) {
 		*prev_stat = new_data;
+		return;
+	}
+
+	/* Calculate the difference between the new and old values, and then
+	 * add it to the software stat value.
+	 */
 	if (new_data >= *prev_stat)
-		*cur_stat = new_data - *prev_stat;
+		*cur_stat += new_data - *prev_stat;
 	else
 		/* to manage the potential roll-over */
-		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
-	*cur_stat &= 0xFFFFFFFFFFULL;
+		*cur_stat += (new_data + BIT_ULL(40)) - *prev_stat;
+
+	/* Update the previously stored value to prepare for next read */
+	*prev_stat = new_data;
 }
 
 /**
  * ice_stat_update32 - read 32 bit stat from the chip and update stat values
  * @hw: ptr to the hardware info
- * @reg: HW register to read from
+ * @reg: offset of HW register to read from
  * @prev_stat_loaded: bool to specify if previous stats are loaded
  * @prev_stat: ptr to previous loaded stat value
  * @cur_stat: ptr to current stat value
@@ -4134,17 +4138,26 @@ ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
 	new_data = rd32(hw, reg);
 
 	/* device stats are not reset at PFR, they likely will not be zeroed
-	 * when the driver starts. So save the first values read and use them as
-	 * offsets to be subtracted from the raw values in order to report stats
-	 * that count from zero.
+	 * when the driver starts. Thus, save the value from the first read
+	 * without adding to the statistic value so that we report stats which
+	 * count up from zero.
 	 */
-	if (!prev_stat_loaded)
+	if (!prev_stat_loaded) {
 		*prev_stat = new_data;
+		return;
+	}
+
+	/* Calculate the difference between the new and old values, and then
+	 * add it to the software stat value.
+	 */
 	if (new_data >= *prev_stat)
-		*cur_stat = new_data - *prev_stat;
+		*cur_stat += new_data - *prev_stat;
 	else
 		/* to manage the potential roll-over */
-		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
+		*cur_stat += (new_data + BIT_ULL(32)) - *prev_stat;
+
+	/* Update the previously stored value to prepare for next read */
+	*prev_stat = new_data;
 }
 
 /**
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 2ea4a6e8e..673943cb9 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -6,7 +6,6 @@
 #define _ICE_COMMON_H_
 
 #include "ice_type.h"
-
 #include "ice_flex_pipe.h"
 #include "ice_switch.h"
 #include "ice_fdir.h"
@@ -34,8 +33,7 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		  struct ice_rq_event_info *e, u16 *pending);
 enum ice_status
 ice_get_link_status(struct ice_port_info *pi, bool *link_up);
-enum ice_status
-ice_update_link_info(struct ice_port_info *pi);
+enum ice_status ice_update_link_info(struct ice_port_info *pi);
 enum ice_status
 ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 		enum ice_aq_res_access_type access, u32 timeout);
@@ -200,8 +198,8 @@ ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
 enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
 void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
 void
-ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
-		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
+ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat);
 void
 ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
 		  u64 *prev_stat, u64 *cur_stat);
@@ -211,5 +209,5 @@ ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
 enum ice_status
 ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
 		     struct ice_aqc_get_elem *buf);
-bool ice_is_fw_in_rec_mode(struct ice_hw *hw);
 #endif /* _ICE_COMMON_H_ */
+bool ice_is_fw_in_rec_mode(struct ice_hw *hw);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 32/49] net/ice/base: implement LLDP persistent settings
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (30 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 31/49] net/ice/base: track HW stat registers past rollover Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 33/49] net/ice/base: check new FD filter duplicate location Leyi Rong
                   ` (18 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jaroslaw Ilgiewicz, Paul M Stillwell Jr

This patch implements persistent (across reboots) start and stop of
the LLDP agent. An additional function parameter is added to
ice_aq_start_lldp and ice_aq_stop_lldp.

Signed-off-by: Jaroslaw Ilgiewicz <jaroslaw.ilgiewicz@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 33 ++++++++++++++++++++++-----------
 drivers/net/ice/base/ice_dcb.h |  9 ++++-----
 drivers/net/ice/ice_ethdev.c   |  2 +-
 3 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index 100c4bb0f..008c7a110 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -83,12 +83,14 @@ ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
  * @hw: pointer to the HW struct
  * @shutdown_lldp_agent: True if LLDP Agent needs to be Shutdown
  *			 False if LLDP Agent needs to be Stopped
+ * @persist: True if Stop/Shutdown of LLDP Agent needs to be persistent across
+ *	     reboots
  * @cd: pointer to command details structure or NULL
  *
  * Stop or Shutdown the embedded LLDP Agent (0x0A05)
  */
 enum ice_status
-ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
+ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
 		 struct ice_sq_cd *cd)
 {
 	struct ice_aqc_lldp_stop *cmd;
@@ -101,17 +103,22 @@ ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
 	if (shutdown_lldp_agent)
 		cmd->command |= ICE_AQ_LLDP_AGENT_SHUTDOWN;
 
+	if (persist)
+		cmd->command |= ICE_AQ_LLDP_AGENT_PERSIST_DIS;
+
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
 /**
  * ice_aq_start_lldp
  * @hw: pointer to the HW struct
+ * @persist: True if Start of LLDP Agent needs to be persistent across reboots
  * @cd: pointer to command details structure or NULL
  *
  * Start the embedded LLDP Agent on all ports. (0x0A06)
  */
-enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd)
+enum ice_status
+ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd)
 {
 	struct ice_aqc_lldp_start *cmd;
 	struct ice_aq_desc desc;
@@ -122,6 +129,9 @@ enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd)
 
 	cmd->command = ICE_AQ_LLDP_AGENT_START;
 
+	if (persist)
+		cmd->command |= ICE_AQ_LLDP_AGENT_PERSIST_ENA;
+
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
@@ -615,7 +625,8 @@ ice_parse_org_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
  *
  * Parse DCB configuration from the LLDPDU
  */
-enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
+enum ice_status
+ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
 {
 	struct ice_lldp_org_tlv *tlv;
 	enum ice_status ret = ICE_SUCCESS;
@@ -659,7 +670,7 @@ enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
 /**
  * ice_aq_get_dcb_cfg
  * @hw: pointer to the HW struct
- * @mib_type: mib type for the query
+ * @mib_type: MIB type for the query
  * @bridgetype: bridge type for the query (remote)
  * @dcbcfg: store for LLDPDU data
  *
@@ -690,13 +701,13 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 }
 
 /**
- * ice_aq_start_stop_dcbx - Start/Stop DCBx service in FW
+ * ice_aq_start_stop_dcbx - Start/Stop DCBX service in FW
  * @hw: pointer to the HW struct
- * @start_dcbx_agent: True if DCBx Agent needs to be started
- *		      False if DCBx Agent needs to be stopped
- * @dcbx_agent_status: FW indicates back the DCBx agent status
- *		       True if DCBx Agent is active
- *		       False if DCBx Agent is stopped
+ * @start_dcbx_agent: True if DCBX Agent needs to be started
+ *		      False if DCBX Agent needs to be stopped
+ * @dcbx_agent_status: FW indicates back the DCBX agent status
+ *		       True if DCBX Agent is active
+ *		       False if DCBX Agent is stopped
  * @cd: pointer to command details structure or NULL
  *
  * Start/Stop the embedded dcbx Agent. In case that this wrapper function
@@ -1236,7 +1247,7 @@ ice_add_dcb_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg,
 /**
  * ice_dcb_cfg_to_lldp - Convert DCB configuration to MIB format
  * @lldpmib: pointer to the HW struct
- * @miblen: length of LLDP mib
+ * @miblen: length of LLDP MIB
  * @dcbcfg: Local store which holds the DCB Config
  *
  * Convert the DCB configuration to MIB format
diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h
index 65d2bafef..47127096b 100644
--- a/drivers/net/ice/base/ice_dcb.h
+++ b/drivers/net/ice/base/ice_dcb.h
@@ -114,7 +114,6 @@ struct ice_lldp_org_tlv {
 	__be32 ouisubtype;
 	u8 tlvinfo[1];
 };
-
 #pragma pack()
 
 struct ice_cee_tlv_hdr {
@@ -147,7 +146,6 @@ struct ice_cee_app_prio {
 	__be16 lower_oui;
 	u8 prio_map;
 };
-
 #pragma pack()
 
 /* TODO: The below structures related LLDP/DCBX variables
@@ -190,8 +188,8 @@ enum ice_status
 ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
 		       struct ice_aqc_get_cee_dcb_cfg_resp *buff,
 		       struct ice_sq_cd *cd);
-u8 ice_get_dcbx_status(struct ice_hw *hw);
 enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg);
+u8 ice_get_dcbx_status(struct ice_hw *hw);
 enum ice_status
 ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 		   struct ice_dcbx_cfg *dcbcfg);
@@ -211,9 +209,10 @@ enum ice_status
 ice_update_port_tc_tree_cfg(struct ice_port_info *pi,
 			    struct ice_aqc_port_ets_elem *buf);
 enum ice_status
-ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
+ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
 		 struct ice_sq_cd *cd);
-enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd);
 enum ice_status
 ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
 		       bool *dcbx_agent_status, struct ice_sq_cd *cd);
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 962d506a1..b14bc7102 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1447,7 +1447,7 @@ ice_dev_init(struct rte_eth_dev *dev)
 	/* Disable double vlan by default */
 	ice_vsi_config_double_vlan(vsi, FALSE);
 
-	ret = ice_aq_stop_lldp(hw, TRUE, NULL);
+	ret = ice_aq_stop_lldp(hw, TRUE, FALSE, NULL);
 	if (ret != ICE_SUCCESS)
 		PMD_INIT_LOG(DEBUG, "lldp has already stopped\n");
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 33/49] net/ice/base: check new FD filter duplicate location
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (31 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 32/49] net/ice/base: implement LLDP persistent settings Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 34/49] net/ice/base: correct UDP/TCP PTYPE assignments Leyi Rong
                   ` (17 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Karol Kolacinski, Paul M Stillwell Jr

The function ice_fdir_is_dup_fltr tests whether a new Flow Director
rule is a duplicate of an existing one.

Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_fdir.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index bde676a8f..9ef91b3b8 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -692,8 +692,13 @@ bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input)
 				ret = ice_fdir_comp_rules(rule, input, false);
 			else
 				ret = ice_fdir_comp_rules(rule, input, true);
-			if (ret)
-				break;
+			if (ret) {
+				if (rule->fltr_id == input->fltr_id &&
+				    rule->q_index != input->q_index)
+					ret = false;
+				else
+					break;
+			}
 		}
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 34/49] net/ice/base: correct UDP/TCP PTYPE assignments
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (32 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 33/49] net/ice/base: check new FD filter duplicate location Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 35/49] net/ice/base: calculate rate limit burst size correctly Leyi Rong
                   ` (16 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

1. Use the UDP-IL PTYPEs when processing packet segments, as that list
contains all PTYPEs with UDP and allows packets to be forwarded to
associated VSIs, since switch rules are based on outer IPs.
2. Add PTYPE 0x088 to the TCP PTYPE bitmap list.

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 1ec49fcd9..36657a1a3 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -195,21 +195,11 @@ static const u32 ice_ptypes_arp_of[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for packets with an Outermost/First UDP header */
-static const u32 ice_ptypes_udp_of[] = {
-	0x81000000, 0x00000000, 0x04000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-};
-
-/* Packet types for packets with an Innermost/Last UDP header */
+/* UDP Packet types for non-tunneled packets or tunneled
+ * packets with inner UDP.
+ */
 static const u32 ice_ptypes_udp_il[] = {
-	0x80000000, 0x20204040, 0x00081010, 0x80810102,
+	0x81000000, 0x20204040, 0x04081010, 0x80810102,
 	0x00204040, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -222,7 +212,7 @@ static const u32 ice_ptypes_udp_il[] = {
 /* Packet types for packets with an Innermost/Last TCP header */
 static const u32 ice_ptypes_tcp_il[] = {
 	0x04000000, 0x80810102, 0x10204040, 0x42040408,
-	0x00810002, 0x00000000, 0x00000000, 0x00000000,
+	0x00810102, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -473,8 +463,7 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 				       ICE_FLOW_PTYPE_MAX);
 			hdrs &= ~ICE_FLOW_SEG_HDR_ICMP;
 		} else if (hdrs & ICE_FLOW_SEG_HDR_UDP) {
-			src = !i ? (const ice_bitmap_t *)ice_ptypes_udp_of :
-				(const ice_bitmap_t *)ice_ptypes_udp_il;
+			src = (const ice_bitmap_t *)ice_ptypes_udp_il;
 			ice_and_bitmap(params->ptypes, params->ptypes, src,
 				       ICE_FLOW_PTYPE_MAX);
 			hdrs &= ~ICE_FLOW_SEG_HDR_UDP;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 35/49] net/ice/base: calculate rate limit burst size correctly
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (33 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 34/49] net/ice/base: correct UDP/TCP PTYPE assignments Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 36/49] net/ice/base: add lock around profile map list Leyi Rong
                   ` (15 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Ben Shelton, Paul M Stillwell Jr

When the MSB is not set, the lower 11 bits do not represent bytes, but
chunks of 64 bytes. Adjust the rate limit burst size calculation
accordingly, and update the comments to indicate the way the hardware
actually works.

Signed-off-by: Ben Shelton <benjamin.h.shelton@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 17 ++++++++---------
 drivers/net/ice/base/ice_sched.h | 14 ++++++++------
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 0c1c18ba1..a72e72982 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -5060,16 +5060,15 @@ enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes)
 	if (bytes < ICE_MIN_BURST_SIZE_ALLOWED ||
 	    bytes > ICE_MAX_BURST_SIZE_ALLOWED)
 		return ICE_ERR_PARAM;
-	if (bytes <= ICE_MAX_BURST_SIZE_BYTE_GRANULARITY) {
-		/* byte granularity case */
+	if (ice_round_to_num(bytes, 64) <=
+	    ICE_MAX_BURST_SIZE_64_BYTE_GRANULARITY) {
+		/* 64 byte granularity case */
 		/* Disable MSB granularity bit */
-		burst_size_to_prog = ICE_BYTE_GRANULARITY;
-		/* round number to nearest 256 granularity */
-		bytes = ice_round_to_num(bytes, 256);
-		/* check rounding doesn't go beyond allowed */
-		if (bytes > ICE_MAX_BURST_SIZE_BYTE_GRANULARITY)
-			bytes = ICE_MAX_BURST_SIZE_BYTE_GRANULARITY;
-		burst_size_to_prog |= (u16)bytes;
+		burst_size_to_prog = ICE_64_BYTE_GRANULARITY;
+		/* round number to nearest 64 byte granularity */
+		bytes = ice_round_to_num(bytes, 64);
+		/* The value is in 64 byte chunks */
+		burst_size_to_prog |= (u16)(bytes / 64);
 	} else {
 		/* k bytes granularity case */
 		/* Enable MSB granularity bit */
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 56f9977ab..e444dc880 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -13,14 +13,16 @@
 #define ICE_SCHED_INVAL_LAYER_NUM	0xFF
 /* Burst size is a 12 bits register that is configured while creating the RL
  * profile(s). MSB is a granularity bit and tells the granularity type
- * 0 - LSB bits are in bytes granularity
+ * 0 - LSB bits are in 64 bytes granularity
  * 1 - LSB bits are in 1K bytes granularity
  */
-#define ICE_BYTE_GRANULARITY			0
-#define ICE_KBYTE_GRANULARITY			0x800
-#define ICE_MIN_BURST_SIZE_ALLOWED		1 /* In Bytes */
-#define ICE_MAX_BURST_SIZE_ALLOWED		(2047 * 1024) /* In Bytes */
-#define ICE_MAX_BURST_SIZE_BYTE_GRANULARITY	2047 /* In Bytes */
+#define ICE_64_BYTE_GRANULARITY			0
+#define ICE_KBYTE_GRANULARITY			BIT(11)
+#define ICE_MIN_BURST_SIZE_ALLOWED		64 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_ALLOWED \
+	((BIT(11) - 1) * 1024) /* In Bytes */
+#define ICE_MAX_BURST_SIZE_64_BYTE_GRANULARITY \
+	((BIT(11) - 1) * 64) /* In Bytes */
 #define ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY	ICE_MAX_BURST_SIZE_ALLOWED
 
 #define ICE_RL_PROF_FREQUENCY 446000000
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 36/49] net/ice/base: add lock around profile map list
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (34 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 35/49] net/ice/base: calculate rate limit burst size correctly Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 37/49] net/ice/base: fix Flow Director VSI count Leyi Rong
                   ` (14 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add a locking mechanism around the profile map list.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 31 +++++++++++++++++-----------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index fdbf893a8..5864cbf3e 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -3973,6 +3973,8 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 	u32 byte = 0;
 	u8 prof_id;
 
+	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
+
 	/* search for existing profile */
 	status = ice_find_prof_id(hw, blk, es, &prof_id);
 	if (status) {
@@ -4044,11 +4046,12 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 		bytes--;
 		byte++;
 	}
-	LIST_ADD(&prof->list, &hw->blk[blk].es.prof_map);
 
-	return ICE_SUCCESS;
+	LIST_ADD(&prof->list, &hw->blk[blk].es.prof_map);
+	status = ICE_SUCCESS;
 
 err_ice_add_prof:
+	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
 	return status;
 }
 
@@ -4350,29 +4353,33 @@ ice_rem_flow_all(struct ice_hw *hw, enum ice_block blk, u64 id)
  */
 enum ice_status ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id)
 {
-	enum ice_status status;
 	struct ice_prof_map *pmap;
+	enum ice_status status;
 
-	pmap = ice_search_prof_id(hw, blk, id);
-	if (!pmap)
-		return ICE_ERR_DOES_NOT_EXIST;
+	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
+
+	pmap = ice_search_prof_id_low(hw, blk, id);
+	if (!pmap) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto err_ice_rem_prof;
+	}
 
 	/* remove all flows with this profile */
 	status = ice_rem_flow_all(hw, blk, pmap->profile_cookie);
 	if (status)
-		return status;
+		goto err_ice_rem_prof;
 
-	/* remove profile */
-	status = ice_free_prof_id(hw, blk, pmap->prof_id);
-	if (status)
-		return status;
 	/* dereference profile, and possibly remove */
 	ice_prof_dec_ref(hw, blk, pmap->prof_id);
 
 	LIST_DEL(&pmap->list);
 	ice_free(hw, pmap);
 
-	return ICE_SUCCESS;
+	status = ICE_SUCCESS;
+
+err_ice_rem_prof:
+	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
+	return status;
 }
 
 /**
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 37/49] net/ice/base: fix Flow Director VSI count
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (35 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 36/49] net/ice/base: add lock around profile map list Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 38/49] net/ice/base: use more efficient structures Leyi Rong
                   ` (13 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Henry Tieman, Paul M Stillwell Jr

Flow Director keeps a list of VSIs for each flow type (TCP4, UDP6, etc.).
This list varies in length depending on the number of traffic classes
(ADQ). This patch uses the define of the maximum number of TCs to
calculate the size of the VSI array.

Fixes: bd984f155f49 ("net/ice/base: support FDIR")

Signed-off-by: Henry Tieman <henry.w.tieman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_type.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 116cfe647..919ca7fa8 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -273,8 +273,13 @@ enum ice_fltr_ptype {
 	ICE_FLTR_PTYPE_MAX,
 };
 
-/* 6 VSI = 1 ICE_VSI_PF + 1 ICE_VSI_CTRL + 4 ICE_VSI_CHNL */
-#define ICE_MAX_FDIR_VSI_PER_FILTER	6
+#ifndef ADQ_SUPPORT
+/* 2 VSI = 1 ICE_VSI_PF + 1 ICE_VSI_CTRL */
+#define ICE_MAX_FDIR_VSI_PER_FILTER	2
+#else
+/* 1 ICE_VSI_PF + 1 ICE_VSI_CTRL + ICE_MAX_TRAFFIC_CLASS */
+#define ICE_MAX_FDIR_VSI_PER_FILTER	(2 + ICE_MAX_TRAFFIC_CLASS)
+#endif /* !ADQ_SUPPORT */
 
 struct ice_fd_hw_prof {
 	struct ice_flow_seg_info *fdir_seg;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 38/49] net/ice/base: use more efficient structures
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (36 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 37/49] net/ice/base: fix Flow Director VSI count Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 39/49] net/ice/base: slightly code update Leyi Rong
                   ` (12 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jesse Brandeburg, Paul M Stillwell Jr

Move a number of structure members around to make more efficient use of
memory, eliminating padding holes where possible. None of these members
are in the hot path, so cache line alignment is not very important here.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_controlq.h  |  4 +--
 drivers/net/ice/base/ice_flex_type.h | 38 +++++++++++++---------------
 drivers/net/ice/base/ice_flow.c      |  4 +--
 drivers/net/ice/base/ice_flow.h      | 18 ++++++-------
 4 files changed, 29 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
index 182db6754..21c8722e5 100644
--- a/drivers/net/ice/base/ice_controlq.h
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -81,6 +81,7 @@ struct ice_rq_event_info {
 /* Control Queue information */
 struct ice_ctl_q_info {
 	enum ice_ctl_q qtype;
+	enum ice_aq_err rq_last_status;	/* last status on receive queue */
 	struct ice_ctl_q_ring rq;	/* receive queue */
 	struct ice_ctl_q_ring sq;	/* send queue */
 	u32 sq_cmd_timeout;		/* send queue cmd write back timeout */
@@ -88,10 +89,9 @@ struct ice_ctl_q_info {
 	u16 num_sq_entries;		/* send queue depth */
 	u16 rq_buf_size;		/* receive queue buffer size */
 	u16 sq_buf_size;		/* send queue buffer size */
+	enum ice_aq_err sq_last_status;	/* last status on send queue */
 	struct ice_lock sq_lock;		/* Send queue lock */
 	struct ice_lock rq_lock;		/* Receive queue lock */
-	enum ice_aq_err sq_last_status;	/* last status on send queue */
-	enum ice_aq_err rq_last_status;	/* last status on receive queue */
 };
 
 #endif /* _ICE_CONTROLQ_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index d23b2ae82..dca5cf285 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -5,7 +5,7 @@
 #ifndef _ICE_FLEX_TYPE_H_
 #define _ICE_FLEX_TYPE_H_
 
-#define ICE_FV_OFFSET_INVAL    0x1FF
+#define ICE_FV_OFFSET_INVAL	0x1FF
 
 #pragma pack(1)
 /* Extraction Sequence (Field Vector) Table */
@@ -14,7 +14,6 @@ struct ice_fv_word {
 	u16 off;		/* Offset within the protocol header */
 	u8 resvrd;
 };
-
 #pragma pack()
 
 #define ICE_MAX_FV_WORDS 48
@@ -367,7 +366,6 @@ struct ice_boost_key_value {
 	__le16 hv_src_port_key;
 	u8 tcam_search_key;
 };
-
 #pragma pack()
 
 struct ice_boost_key {
@@ -406,7 +404,6 @@ struct ice_xlt1_section {
 	__le16 offset;
 	u8 value[1];
 };
-
 #pragma pack()
 
 #define ICE_XLT1_SIZE(n)	(sizeof(struct ice_xlt1_section) + \
@@ -467,19 +464,19 @@ struct ice_tunnel_type_scan {
 
 struct ice_tunnel_entry {
 	enum ice_tunnel_type type;
-	u8 valid;
-	u8 in_use;
-	u8 marked;
 	u16 boost_addr;
 	u16 port;
 	struct ice_boost_tcam_entry *boost_entry;
+	u8 valid;
+	u8 in_use;
+	u8 marked;
 };
 
 #define ICE_TUNNEL_MAX_ENTRIES	16
 
 struct ice_tunnel_table {
-	u16 count;
 	struct ice_tunnel_entry tbl[ICE_TUNNEL_MAX_ENTRIES];
+	u16 count;
 };
 
 struct ice_pkg_es {
@@ -511,13 +508,13 @@ struct ice_es {
 #define ICE_DEFAULT_PTG	0
 
 struct ice_ptg_entry {
-	u8 in_use;
 	struct ice_ptg_ptype *first_ptype;
+	u8 in_use;
 };
 
 struct ice_ptg_ptype {
-	u8 ptg;
 	struct ice_ptg_ptype *next_ptype;
+	u8 ptg;
 };
 
 #define ICE_MAX_TCAM_PER_PROFILE	8
@@ -535,9 +532,9 @@ struct ice_prof_map {
 #define ICE_INVALID_TCAM	0xFFFF
 
 struct ice_tcam_inf {
+	u16 tcam_idx;
 	u8 ptg;
 	u8 prof_id;
-	u16 tcam_idx;
 	u8 in_use;
 };
 
@@ -550,16 +547,16 @@ struct ice_vsig_prof {
 };
 
 struct ice_vsig_entry {
-	u8 in_use;
 	struct LIST_HEAD_TYPE prop_lst;
 	struct ice_vsig_vsi *first_vsi;
+	u8 in_use;
 };
 
 struct ice_vsig_vsi {
+	struct ice_vsig_vsi *next_vsi;
+	u32 prop_mask;
 	u16 changed;
 	u16 vsig;
-	u32 prop_mask;
-	struct ice_vsig_vsi *next_vsi;
 };
 
 #define ICE_XLT1_CNT	1024
@@ -567,11 +564,11 @@ struct ice_vsig_vsi {
 
 /* XLT1 Table */
 struct ice_xlt1 {
-	u32 sid;
-	u16 count;
 	struct ice_ptg_entry *ptg_tbl;
 	struct ice_ptg_ptype *ptypes;
 	u8 *t;
+	u32 sid;
+	u16 count;
 };
 
 #define ICE_XLT2_CNT	768
@@ -591,11 +588,11 @@ struct ice_xlt1 {
 
 /* XLT2 Table */
 struct ice_xlt2 {
-	u32 sid;
-	u16 count;
 	struct ice_vsig_entry *vsig_tbl;
 	struct ice_vsig_vsi *vsis;
 	u16 *t;
+	u32 sid;
+	u16 count;
 };
 
 /* Extraction sequence - list of match fields:
@@ -641,21 +638,20 @@ struct ice_prof_id_section {
 	__le16 count;
 	struct ice_prof_tcam_entry entry[1];
 };
-
 #pragma pack()
 
 struct ice_prof_tcam {
 	u32 sid;
 	u16 count;
 	u16 max_prof_id;
-	u8 cdid_bits; /* # cdid bits to use in key, 0, 2, 4, or 8 */
 	struct ice_prof_tcam_entry *t;
+	u8 cdid_bits; /* # cdid bits to use in key, 0, 2, 4, or 8 */
 };
 
 struct ice_prof_redir {
+	u8 *t;
 	u32 sid;
 	u16 count;
-	u8 *t;
 };
 
 /* Tables per block */
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 36657a1a3..795abe98f 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -284,10 +284,10 @@ static const u32 ice_ptypes_mac_il[] = {
 /* Manage parameters and info. used during the creation of a flow profile */
 struct ice_flow_prof_params {
 	enum ice_block blk;
-	struct ice_flow_prof *prof;
-
 	u16 entry_length; /* # of bytes formatted entry will require */
 	u8 es_cnt;
+	struct ice_flow_prof *prof;
+
 	/* For ACL, the es[0] will have the data of ICE_RX_MDID_PKT_FLAGS_15_0
 	 * This will give us the direction flags.
 	 */
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index 57514a078..715fd7471 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -225,20 +225,18 @@ struct ice_flow_entry {
 	struct LIST_ENTRY_TYPE l_entry;
 
 	u64 id;
-	u16 vsi_handle;
-	enum ice_flow_priority priority;
 	struct ice_flow_prof *prof;
-
+	/* Action list */
+	struct ice_flow_action *acts;
 	/* Flow entry's content */
-	u16 entry_sz;
 	void *entry;
-
-	/* Action list */
+	enum ice_flow_priority priority;
+	u16 vsi_handle;
+	u16 entry_sz;
 	u8 acts_cnt;
-	struct ice_flow_action *acts;
 };
 
-#define ICE_FLOW_ENTRY_HNDL(e)	((unsigned long)e)
+#define ICE_FLOW_ENTRY_HNDL(e)	((u64)e)
 #define ICE_FLOW_ENTRY_PTR(h)	((struct ice_flow_entry *)(h))
 
 struct ice_flow_prof {
@@ -246,12 +244,13 @@ struct ice_flow_prof {
 
 	u64 id;
 	enum ice_flow_dir dir;
+	u8 segs_cnt;
+	u8 acts_cnt;
 
 	/* Keep track of flow entries associated with this flow profile */
 	struct ice_lock entries_lock;
 	struct LIST_HEAD_TYPE entries;
 
-	u8 segs_cnt;
 	struct ice_flow_seg_info segs[ICE_FLOW_SEG_MAX];
 
 	/* software VSI handles referenced by this flow profile */
@@ -264,7 +263,6 @@ struct ice_flow_prof {
 	} cfg;
 
 	/* Default actions */
-	u8 acts_cnt;
 	struct ice_flow_action *acts;
 };
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 39/49] net/ice/base: slightly code update
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (37 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 38/49] net/ice/base: use more efficient structures Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-05 12:04   ` Maxime Coquelin
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 40/49] net/ice/base: code clean up Leyi Rong
                   ` (11 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

Mainly updates the following functions:

ice_flow_proc_seg_hdrs
ice_flow_find_prof_conds
ice_dealloc_flow_entry
ice_add_rule_internal

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c     | 13 +++----
 drivers/net/ice/base/ice_flow.c          | 47 +++++++++++++++++-------
 drivers/net/ice/base/ice_nvm.c           |  4 +-
 drivers/net/ice/base/ice_protocol_type.h |  1 +
 drivers/net/ice/base/ice_switch.c        | 24 +++++++-----
 drivers/net/ice/base/ice_switch.h        | 14 +++----
 drivers/net/ice/base/ice_type.h          | 13 ++++++-
 7 files changed, 73 insertions(+), 43 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 5864cbf3e..2a310b6e1 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -134,7 +134,7 @@ static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg)
 	nvms = (struct ice_nvm_table *)(ice_seg->device_table +
 		LE32_TO_CPU(ice_seg->device_table_count));
 
-	return (struct ice_buf_table *)
+	return (_FORCE_ struct ice_buf_table *)
 		(nvms->vers + LE32_TO_CPU(nvms->table_count));
 }
 
@@ -1005,9 +1005,8 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 
 		bh = (struct ice_buf_hdr *)(bufs + i);
 
-		status = ice_aq_download_pkg(hw, bh, LE16_TO_CPU(bh->data_end),
-					     last, &offset, &info, NULL);
-
+		status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last,
+					     &offset, &info, NULL);
 		if (status) {
 			ice_debug(hw, ICE_DBG_PKG,
 				  "Pkg download failed: err %d off %d inf %d\n",
@@ -2937,7 +2936,7 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
 		case ICE_SID_XLT2_ACL:
 		case ICE_SID_XLT2_PE:
 			xlt2 = (struct ice_xlt2_section *)sect;
-			src = (u8 *)xlt2->value;
+			src = (_FORCE_ u8 *)xlt2->value;
 			sect_len = LE16_TO_CPU(xlt2->count) *
 				sizeof(*hw->blk[block_id].xlt2.t);
 			dst = (u8 *)hw->blk[block_id].xlt2.t;
@@ -3889,7 +3888,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 
 	/* fill in the swap array */
 	si = hw->blk[ICE_BLK_FD].es.fvw - 1;
-	do {
+	while (si >= 0) {
 		u8 indexes_used = 1;
 
 		/* assume flat at this index */
@@ -3921,7 +3920,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 		}
 
 		si -= indexes_used;
-	} while (si >= 0);
+	}
 
 	/* for each set of 4 swap indexes, write the appropriate register */
 	for (j = 0; j < hw->blk[ICE_BLK_FD].es.fvw / 4; j++) {
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 795abe98f..f31557eac 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -415,9 +415,6 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 		const ice_bitmap_t *src;
 		u32 hdrs;
 
-		if (i > 0 && (i + 1) < prof->segs_cnt)
-			continue;
-
 		hdrs = prof->segs[i].hdrs;
 
 		if (hdrs & ICE_FLOW_SEG_HDR_ETH) {
@@ -847,6 +844,7 @@ ice_flow_proc_segs(struct ice_hw *hw, struct ice_flow_prof_params *params)
 
 #define ICE_FLOW_FIND_PROF_CHK_FLDS	0x00000001
 #define ICE_FLOW_FIND_PROF_CHK_VSI	0x00000002
+#define ICE_FLOW_FIND_PROF_NOT_CHK_DIR	0x00000004
 
 /**
  * ice_flow_find_prof_conds - Find a profile matching headers and conditions
@@ -866,7 +864,8 @@ ice_flow_find_prof_conds(struct ice_hw *hw, enum ice_block blk,
 	struct ice_flow_prof *p;
 
 	LIST_FOR_EACH_ENTRY(p, &hw->fl_profs[blk], ice_flow_prof, l_entry) {
-		if (p->dir == dir && segs_cnt && segs_cnt == p->segs_cnt) {
+		if ((p->dir == dir || conds & ICE_FLOW_FIND_PROF_NOT_CHK_DIR) &&
+		    segs_cnt && segs_cnt == p->segs_cnt) {
 			u8 i;
 
 			/* Check for profile-VSI association if specified */
@@ -935,17 +934,15 @@ ice_flow_find_prof_id(struct ice_hw *hw, enum ice_block blk, u64 prof_id)
 }
 
 /**
- * ice_flow_rem_entry_sync - Remove a flow entry
+ * ice_dealloc_flow_entry - Deallocate flow entry memory
  * @hw: pointer to the HW struct
  * @entry: flow entry to be removed
  */
-static enum ice_status
-ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
+static void
+ice_dealloc_flow_entry(struct ice_hw *hw, struct ice_flow_entry *entry)
 {
 	if (!entry)
-		return ICE_ERR_BAD_PTR;
-
-	LIST_DEL(&entry->l_entry);
+		return;
 
 	if (entry->entry)
 		ice_free(hw, entry->entry);
@@ -957,6 +954,22 @@ ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
 	}
 
 	ice_free(hw, entry);
+}
+
+/**
+ * ice_flow_rem_entry_sync - Remove a flow entry
+ * @hw: pointer to the HW struct
+ * @entry: flow entry to be removed
+ */
+static enum ice_status
+ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
+{
+	if (!entry)
+		return ICE_ERR_BAD_PTR;
+
+	LIST_DEL(&entry->l_entry);
+
+	ice_dealloc_flow_entry(hw, entry);
 
 	return ICE_SUCCESS;
 }
@@ -1395,9 +1408,12 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 		goto out;
 	}
 
-	ice_acquire_lock(&prof->entries_lock);
-	LIST_ADD(&e->l_entry, &prof->entries);
-	ice_release_lock(&prof->entries_lock);
+	if (blk != ICE_BLK_ACL) {
+		/* ACL will handle the entry management */
+		ice_acquire_lock(&prof->entries_lock);
+		LIST_ADD(&e->l_entry, &prof->entries);
+		ice_release_lock(&prof->entries_lock);
+	}
 
 	*entry_h = ICE_FLOW_ENTRY_HNDL(e);
 
@@ -1425,7 +1441,7 @@ enum ice_status ice_flow_rem_entry(struct ice_hw *hw, u64 entry_h)
 	if (entry_h == ICE_FLOW_ENTRY_HANDLE_INVAL)
 		return ICE_ERR_PARAM;
 
-	entry = ICE_FLOW_ENTRY_PTR((unsigned long)entry_h);
+	entry = ICE_FLOW_ENTRY_PTR(entry_h);
 
 	/* Retain the pointer to the flow profile as the entry will be freed */
 	prof = entry->prof;
@@ -1676,6 +1692,9 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 	if (!ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
+	if (LIST_EMPTY(&hw->fl_profs[blk]))
+		return ICE_SUCCESS;
+
 	ice_acquire_lock(&hw->fl_profs_locks[blk]);
 	LIST_FOR_EACH_ENTRY_SAFE(p, t, &hw->fl_profs[blk], ice_flow_prof,
 				 l_entry) {
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index fa9c348ce..76cfedb29 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -127,7 +127,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 
 	status = ice_read_sr_aq(hw, offset, 1, data, true);
 	if (!status)
-		*data = LE16_TO_CPU(*(__le16 *)data);
+		*data = LE16_TO_CPU(*(_FORCE_ __le16 *)data);
 
 	return status;
 }
@@ -185,7 +185,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 	} while (words_read < *words);
 
 	for (i = 0; i < *words; i++)
-		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
+		data[i] = LE16_TO_CPU(((_FORCE_ __le16 *)data)[i]);
 
 read_nvm_buf_aq_exit:
 	*words = words_read;
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index e572dd320..82822fb74 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -189,6 +189,7 @@ struct ice_udp_tnl_hdr {
 	u16 field;
 	u16 proto_type;
 	u16 vni;
+	u16 reserved;
 };
 
 struct ice_nvgre {
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index faaedd4c8..373acb7a6 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -279,6 +279,7 @@ enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
 		recps[i].root_rid = i;
 		INIT_LIST_HEAD(&recps[i].filt_rules);
 		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
+		INIT_LIST_HEAD(&recps[i].rg_list);
 		ice_init_lock(&recps[i].filt_rule_lock);
 	}
 
@@ -859,7 +860,7 @@ ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi,
 			return ICE_ERR_PARAM;
 
 		buf_size = count * sizeof(__le16);
-		mr_list = (__le16 *)ice_malloc(hw, buf_size);
+		mr_list = (_FORCE_ __le16 *)ice_malloc(hw, buf_size);
 		if (!mr_list)
 			return ICE_ERR_NO_MEMORY;
 		break;
@@ -1459,7 +1460,6 @@ static int ice_ilog2(u64 n)
 	return -1;
 }
 
-
 /**
  * ice_fill_sw_rule - Helper function to fill switch rule structure
  * @hw: pointer to the hardware structure
@@ -1479,7 +1479,6 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 	__be16 *off;
 	u8 q_rgn;
 
-
 	if (opc == ice_aqc_opc_remove_sw_rules) {
 		s_rule->pdata.lkup_tx_rx.act = 0;
 		s_rule->pdata.lkup_tx_rx.index =
@@ -1555,7 +1554,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 		daddr = f_info->l_data.ethertype_mac.mac_addr;
 		/* fall-through */
 	case ICE_SW_LKUP_ETHERTYPE:
-		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
 		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
 		break;
 	case ICE_SW_LKUP_MAC_VLAN:
@@ -1586,7 +1585,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 			   ICE_NONDMA_TO_NONDMA);
 
 	if (!(vlan_id > ICE_MAX_VLAN_ID)) {
-		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
 		*off = CPU_TO_BE16(vlan_id);
 	}
 
@@ -2289,14 +2288,15 @@ ice_add_rule_internal(struct ice_hw *hw, u8 recp_id,
 
 	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
 	if (!m_entry) {
-		ice_release_lock(rule_lock);
-		return ice_create_pkt_fwd_rule(hw, f_entry);
+		status = ice_create_pkt_fwd_rule(hw, f_entry);
+		goto exit_add_rule_internal;
 	}
 
 	cur_fltr = &m_entry->fltr_info;
 	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
-	ice_release_lock(rule_lock);
 
+exit_add_rule_internal:
+	ice_release_lock(rule_lock);
 	return status;
 }
 
@@ -2975,12 +2975,19 @@ ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
  * ice_add_eth_mac - Add ethertype and MAC based filter rule
  * @hw: pointer to the hardware structure
  * @em_list: list of ether type MAC filter, MAC is optional
+ *
+ * This function requires the caller to populate the entries in
+ * the filter list with the necessary fields (including flags to
+ * indicate Tx or Rx rules).
  */
 enum ice_status
 ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
 {
 	struct ice_fltr_list_entry *em_list_itr;
 
+	if (!em_list || !hw)
+		return ICE_ERR_PARAM;
+
 	LIST_FOR_EACH_ENTRY(em_list_itr, em_list, ice_fltr_list_entry,
 			    list_entry) {
 		enum ice_sw_lkup_type l_type =
@@ -2990,7 +2997,6 @@ ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
 		    l_type != ICE_SW_LKUP_ETHERTYPE)
 			return ICE_ERR_PARAM;
 
-		em_list_itr->fltr_info.flag = ICE_FLTR_TX;
 		em_list_itr->status = ice_add_rule_internal(hw, l_type,
 							    em_list_itr);
 		if (em_list_itr->status)
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 2f140a86d..05b1170c9 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -11,6 +11,9 @@
 #define ICE_SW_CFG_MAX_BUF_LEN 2048
 #define ICE_MAX_SW 256
 #define ICE_DFLT_VSI_INVAL 0xff
+#define ICE_FLTR_RX BIT(0)
+#define ICE_FLTR_TX BIT(1)
+#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
 
 
 /* Worst case buffer length for ice_aqc_opc_get_res_alloc */
@@ -77,9 +80,6 @@ struct ice_fltr_info {
 	/* rule ID returned by firmware once filter rule is created */
 	u16 fltr_rule_id;
 	u16 flag;
-#define ICE_FLTR_RX		BIT(0)
-#define ICE_FLTR_TX		BIT(1)
-#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
 
 	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
 	u16 src;
@@ -145,10 +145,6 @@ struct ice_sw_act_ctrl {
 	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
 	u16 src;
 	u16 flag;
-#define ICE_FLTR_RX             BIT(0)
-#define ICE_FLTR_TX             BIT(1)
-#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
-
 	enum ice_sw_fwd_act_type fltr_act;
 	/* Depending on filter action */
 	union {
@@ -368,6 +364,8 @@ ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries,
 		     struct ice_sq_cd *cd);
 enum ice_status
 ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
 void ice_rem_all_sw_rules_info(struct ice_hw *hw);
 enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
 enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
@@ -375,8 +373,6 @@ enum ice_status
 ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
 enum ice_status
 ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
-enum ice_status
-ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
 #ifndef NO_MACVLAN_SUPPORT
 enum ice_status
 ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 919ca7fa8..f4e151c55 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -14,6 +14,10 @@
 
 #define BITS_PER_BYTE	8
 
+#ifndef _FORCE_
+#define _FORCE_
+#endif
+
 #define ICE_BYTES_PER_WORD	2
 #define ICE_BYTES_PER_DWORD	4
 #define ICE_MAX_TRAFFIC_CLASS	8
@@ -35,7 +39,7 @@
 #endif
 
 #ifndef IS_ASCII
-#define IS_ASCII(_ch)  ((_ch) < 0x80)
+#define IS_ASCII(_ch)	((_ch) < 0x80)
 #endif
 
 #include "ice_status.h"
@@ -80,6 +84,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 #define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
 
 /* debug masks - set these bits in hw->debug_mask to control output */
+#define ICE_DBG_TRACE		BIT_ULL(0) /* for function-trace only */
 #define ICE_DBG_INIT		BIT_ULL(1)
 #define ICE_DBG_RELEASE		BIT_ULL(2)
 #define ICE_DBG_FW_LOG		BIT_ULL(3)
@@ -199,6 +204,7 @@ enum ice_vsi_type {
 #ifdef ADQ_SUPPORT
 	ICE_VSI_CHNL = 4,
 #endif /* ADQ_SUPPORT */
+	ICE_VSI_LB = 6,
 };
 
 struct ice_link_status {
@@ -718,6 +724,8 @@ struct ice_fw_log_cfg {
 #define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
 #define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
 #define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_ALL	(ICE_FW_LOG_EVNT_INFO | ICE_FW_LOG_EVNT_INIT | \
+				 ICE_FW_LOG_EVNT_FLOW | ICE_FW_LOG_EVNT_ERR)
 	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
 };
 
@@ -745,6 +753,7 @@ struct ice_hw {
 	u8 pf_id;		/* device profile info */
 
 	u16 max_burst_size;	/* driver sets this value */
+
 	/* Tx Scheduler values */
 	u16 num_tx_sched_layers;
 	u16 num_tx_sched_phys_layers;
@@ -948,7 +957,6 @@ enum ice_sw_fwd_act_type {
 #define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
 #define ICE_SR_MNG_CFG_PTR			0x0E
 #define ICE_SR_EMP_MODULE_PTR			0x0F
-#define ICE_SR_PBA_FLAGS			0x15
 #define ICE_SR_PBA_BLOCK_PTR			0x16
 #define ICE_SR_BOOT_CFG_PTR			0x17
 #define ICE_SR_NVM_WOL_CFG			0x19
@@ -994,6 +1002,7 @@ enum ice_sw_fwd_act_type {
 #define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
 #define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
 #define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
+#define ICE_SR_POR_REGISTERS_AUTOLOAD_PTR	0x118
 
 /* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
 #define ICE_SR_VPD_SIZE_WORDS		512
-- 
2.17.1
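The ice_type.h hunk above defines ICE_FW_LOG_EVNT_ALL as the OR of the four individual event masks, so the "all" value can never drift from the per-event definitions. A minimal standalone sketch of that composition pattern (the bit values here are illustrative, not the real ICE_AQC shifts):

```c
#include <assert.h>

/* Name each event bit once, then derive the "all" mask from the
 * named bits.  Values are placeholders for illustration only. */
#define EVNT_INFO  (1u << 0)
#define EVNT_INIT  (1u << 1)
#define EVNT_FLOW  (1u << 2)
#define EVNT_ERR   (1u << 3)
#define EVNT_ALL   (EVNT_INFO | EVNT_INIT | EVNT_FLOW | EVNT_ERR)

/* Hypothetical helper: clamp a requested event mask to the bits
 * the composed ALL mask actually defines. */
static unsigned int clamp_events(unsigned int requested)
{
	return requested & EVNT_ALL;
}
```

Because EVNT_ALL is derived rather than hand-written, adding a fifth event bit later only requires extending the OR expression in one place.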



* [dpdk-dev] [PATCH 40/49] net/ice/base: code clean up
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (38 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 39/49] net/ice/base: slightly code update Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-05 12:06   ` Maxime Coquelin
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 41/49] net/ice/base: cleanup ice flex pipe files Leyi Rong
                   ` (10 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

Clean up unused code: remove stale #if 0 blocks and fix minor comment typos.
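One addition in this cleanup is an empty fallback definition of __ALWAYS_UNUSED in ice_type.h, so annotated callback parameters (as in ice_label_enum_handler()) compile even when the build environment supplies no such annotation. A sketch of the pattern follows; mapping the macro to the GCC/Clang attribute is an assumption for illustration — the base code simply defines it empty:

```c
#include <assert.h>

/* Fallback pattern: define __ALWAYS_UNUSED only if the platform
 * headers did not.  The attribute mapping below is an assumed
 * refinement; defining the macro empty (as the patch does) is
 * equally valid, it just forgoes the unused-parameter silencing. */
#ifndef __ALWAYS_UNUSED
#if defined(__GNUC__) || defined(__clang__)
#define __ALWAYS_UNUSED __attribute__((unused))
#else
#define __ALWAYS_UNUSED
#endif
#endif

/* Hypothetical handler: the first parameter is required by the
 * callback signature but never read, so it is annotated to keep
 * -Wunused-parameter quiet without changing the prototype. */
static int demo_handler(unsigned int __ALWAYS_UNUSED sect_type, int index)
{
	return index + 1;
}
```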

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_controlq.c  | 62 +---------------------------
 drivers/net/ice/base/ice_fdir.h      |  1 -
 drivers/net/ice/base/ice_flex_pipe.c |  5 ++-
 drivers/net/ice/base/ice_sched.c     |  4 +-
 drivers/net/ice/base/ice_type.h      |  3 ++
 5 files changed, 10 insertions(+), 65 deletions(-)

diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index 4cb6df113..3ef07e094 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -262,7 +262,7 @@ ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
  * @hw: pointer to the hardware structure
  * @cq: pointer to the specific Control queue
  *
- * Configure base address and length registers for the receive (event q)
+ * Configure base address and length registers for the receive (event queue)
  */
 static enum ice_status
 ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
@@ -772,9 +772,6 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 	struct ice_ctl_q_ring *sq = &cq->sq;
 	u16 ntc = sq->next_to_clean;
 	struct ice_sq_cd *details;
-#if 0
-	struct ice_aq_desc desc_cb;
-#endif
 	struct ice_aq_desc *desc;
 
 	desc = ICE_CTL_Q_DESC(*sq, ntc);
@@ -783,15 +780,6 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 	while (rd32(hw, cq->sq.head) != ntc) {
 		ice_debug(hw, ICE_DBG_AQ_MSG,
 			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
-#if 0
-		if (details->callback) {
-			ICE_CTL_Q_CALLBACK cb_func =
-				(ICE_CTL_Q_CALLBACK)details->callback;
-			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
-				   ICE_DMA_TO_DMA);
-			cb_func(hw, &desc_cb);
-		}
-#endif
 		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
 		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
 		ntc++;
@@ -941,38 +929,8 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
 	if (cd)
 		*details = *cd;
-#if 0
-		/* FIXME: if/when this block gets enabled (when the #if 0
-		 * is removed), add braces to both branches of the surrounding
-		 * conditional expression. The braces have been removed to
-		 * prevent checkpatch complaining.
-		 */
-
-		/* If the command details are defined copy the cookie. The
-		 * CPU_TO_LE32 is not needed here because the data is ignored
-		 * by the FW, only used by the driver
-		 */
-		if (details->cookie) {
-			desc->cookie_high =
-				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
-			desc->cookie_low =
-				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
-		}
-#endif
 	else
 		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
-#if 0
-	/* clear requested flags and then set additional flags if defined */
-	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
-	desc->flags |= CPU_TO_LE16(details->flags_ena);
-
-	if (details->postpone && !details->async) {
-		ice_debug(hw, ICE_DBG_AQ_MSG,
-			  "Async flag not set along with postpone flag\n");
-		status = ICE_ERR_PARAM;
-		goto sq_send_command_error;
-	}
-#endif
 
 	/* Call clean and check queue available function to reclaim the
 	 * descriptors that were processed by FW/MBX; the function returns the
@@ -1019,20 +977,8 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	(cq->sq.next_to_use)++;
 	if (cq->sq.next_to_use == cq->sq.count)
 		cq->sq.next_to_use = 0;
-#if 0
-	/* FIXME - handle this case? */
-	if (!details->postpone)
-#endif
 	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
 
-#if 0
-	/* if command details are not defined or async flag is not set,
-	 * we need to wait for desc write back
-	 */
-	if (!details->async && !details->postpone) {
-		/* FIXME - handle this case? */
-	}
-#endif
 	do {
 		if (ice_sq_done(hw, cq))
 			break;
@@ -1087,9 +1033,6 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 
 	/* update the error if time out occurred */
 	if (!cmd_completed) {
-#if 0
-	    (!details->async && !details->postpone)) {
-#endif
 		ice_debug(hw, ICE_DBG_AQ_MSG,
 			  "Control Send Queue Writeback timeout.\n");
 		status = ICE_ERR_AQ_TIMEOUT;
@@ -1208,9 +1151,6 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	cq->rq.next_to_clean = ntc;
 	cq->rq.next_to_use = ntu;
 
-#if 0
-	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
-#endif
 clean_rq_elem_out:
 	/* Set pending if needed, unlock and return */
 	if (pending) {
diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h
index 2ecb147f1..f8f06658c 100644
--- a/drivers/net/ice/base/ice_fdir.h
+++ b/drivers/net/ice/base/ice_fdir.h
@@ -173,7 +173,6 @@ struct ice_fdir_fltr {
 	u32 fltr_id;
 };
 
-
 /* Dummy packet filter definition structure. */
 struct ice_fdir_base_pkt {
 	enum ice_fltr_ptype flow;
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 2a310b6e1..46234c014 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -398,7 +398,7 @@ ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr,
  * Handles enumeration of individual label entries.
  */
 static void *
-ice_label_enum_handler(u32 __always_unused sect_type, void *section, u32 index,
+ice_label_enum_handler(u32 __ALWAYS_UNUSED sect_type, void *section, u32 index,
 		       u32 *offset)
 {
 	struct ice_label_section *labels;
@@ -640,7 +640,7 @@ static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max)
  * @size: the size of the complete key in bytes (must be even)
  * @val: array of 8-bit values that makes up the value portion of the key
  * @upd: array of 8-bit masks that determine what key portion to update
- * @dc: array of 8-bit masks that make up the dont' care mask
+ * @dc: array of 8-bit masks that make up the don't care mask
  * @nm: array of 8-bit masks that make up the never match mask
  * @off: the offset of the first byte in the key to update
  * @len: the number of bytes in the key update
@@ -4544,6 +4544,7 @@ ice_move_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig,
 	status = ice_vsig_find_vsi(hw, blk, vsi, &orig_vsig);
 	if (!status)
 		status = ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
+
 	if (status) {
 		ice_free(hw, p);
 		return status;
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index a72e72982..fa3158a7b 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -1233,7 +1233,7 @@ enum ice_status ice_sched_init_port(struct ice_port_info *pi)
 		goto err_init_port;
 	}
 
-	/* If the last node is a leaf node then the index of the Q group
+	/* If the last node is a leaf node then the index of the queue group
 	 * layer is two less than the number of elements.
 	 */
 	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
@@ -3529,9 +3529,11 @@ ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
 		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
 				    ice_sched_agg_vsi_info, list_entry)
 			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				/* cppcheck-suppress unreadVariable */
 				vsi_handle_valid = true;
 				break;
 			}
+
 		if (!vsi_handle_valid)
 			goto exit_agg_priority_per_tc;
 
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index f4e151c55..f76be2b58 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -114,6 +114,9 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 #define ICE_DBG_USER		BIT_ULL(31)
 #define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
 
+#ifndef __ALWAYS_UNUSED
+#define __ALWAYS_UNUSED
+#endif
 
 
 
-- 
2.17.1



* [dpdk-dev] [PATCH 41/49] net/ice/base: cleanup ice flex pipe files
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (39 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 40/49] net/ice/base: code clean up Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 42/49] net/ice/base: change how VMDq capability is wrapped Leyi Rong
                   ` (9 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Paul M Stillwell Jr

Make functions static where possible, and remove code that is not
currently called.
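Narrowing file-local helpers such as ice_pkg_buf_alloc() to static linkage keeps them out of the object's exported symbol set, so the compiler can inline or discard them and other translation units cannot link to them by accident. A generic sketch of the idea (names are illustrative, not ice code):

```c
#include <assert.h>

/* File-local helper: static linkage means no external symbol is
 * emitted, other files cannot call it, and the compiler is free
 * to inline it into its lone caller. */
static unsigned short clamp_u16(unsigned int v)
{
	return v > 0xFFFFu ? 0xFFFFu : (unsigned short)v;
}

/* Public entry point: the only name this file contributes to the
 * link, mirroring how the cleaned-up file now exports just the
 * functions other ice files actually use. */
unsigned short scale_len(unsigned int len, unsigned int factor)
{
	return clamp_u16(len * factor);
}
```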

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 579 ++++-----------------------
 drivers/net/ice/base/ice_flex_pipe.h |  59 ---
 2 files changed, 78 insertions(+), 560 deletions(-)
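Among the functions the diff below relocates is ice_vsig_remove_vsi(), which unlinks a VSI from a singly linked list by carrying a pointer to the "next" slot alongside the current node, so the head and interior cases need no special-casing. A standalone sketch of that unlink walk, with hypothetical node names:

```c
#include <assert.h>
#include <stddef.h>

struct node {
	int id;
	struct node *next;
};

/* Unlink the node matching id, mirroring the walk in
 * ice_vsig_remove_vsi(): advance a pointer to the slot holding
 * the current node, and on a match redirect that slot past it.
 * Returns the removed node, or NULL if no node matched. */
static struct node *list_remove(struct node **head, int id)
{
	struct node **slot = head;
	struct node *cur = *head;

	while (cur) {
		if (cur->id == id) {
			*slot = cur->next;
			cur->next = NULL;
			return cur;
		}
		slot = &cur->next;
		cur = cur->next;
	}
	return NULL;
}
```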

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 46234c014..fda5bef43 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -461,7 +461,7 @@ ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum *state,
  * since the first call to ice_enum_labels requires a pointer to an actual
  * ice_seg structure.
  */
-void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
+static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
 {
 	struct ice_pkg_enum state;
 	char *label_name;
@@ -808,27 +808,6 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
 	return status;
 }
 
-/**
- * ice_aq_upload_section
- * @hw: pointer to the hardware structure
- * @pkg_buf: the package buffer which will receive the section
- * @buf_size: the size of the package buffer
- * @cd: pointer to command details structure or NULL
- *
- * Upload Section (0x0C41)
- */
-enum ice_status
-ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
-		      u16 buf_size, struct ice_sq_cd *cd)
-{
-	struct ice_aq_desc desc;
-
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_upload_section");
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section);
-	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
-
-	return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd);
-}
 
 /**
  * ice_aq_update_pkg
@@ -890,7 +869,7 @@ ice_aq_update_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, u16 buf_size,
  * success it returns a pointer to the segment header, otherwise it will
  * return NULL.
  */
-struct ice_generic_seg_hdr *
+static struct ice_generic_seg_hdr *
 ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
 		    struct ice_pkg_hdr *pkg_hdr)
 {
@@ -1052,7 +1031,8 @@ ice_aq_get_pkg_info_list(struct ice_hw *hw,
  *
  * Handles the download of a complete package.
  */
-enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
+static enum ice_status
+ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 {
 	struct ice_buf_table *ice_buf_tbl;
 
@@ -1081,7 +1061,7 @@ enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
  *
  * Saves off the package details into the HW structure.
  */
-enum ice_status
+static enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 {
 	struct ice_global_metadata_seg *meta_seg;
@@ -1133,8 +1113,7 @@ ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
  *
  * Store details of the package currently loaded in HW into the HW structure.
  */
-enum ice_status
-ice_get_pkg_info(struct ice_hw *hw)
+static enum ice_status ice_get_pkg_info(struct ice_hw *hw)
 {
 	struct ice_aqc_get_pkg_info_resp *pkg_info;
 	enum ice_status status;
@@ -1187,40 +1166,6 @@ ice_get_pkg_info(struct ice_hw *hw)
 	return status;
 }
 
-/**
- * ice_find_label_value
- * @ice_seg: pointer to the ice segment (non-NULL)
- * @name: name of the label to search for
- * @type: the section type that will contain the label
- * @value: pointer to a value that will return the label's value if found
- *
- * Finds a label's value given the label name and the section type to search.
- * The ice_seg parameter must not be NULL since the first call to
- * ice_enum_labels requires a pointer to an actual ice_seg structure.
- */
-enum ice_status
-ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type,
-		     u16 *value)
-{
-	struct ice_pkg_enum state;
-	char *label_name;
-	u16 val;
-
-	if (!ice_seg)
-		return ICE_ERR_PARAM;
-
-	do {
-		label_name = ice_enum_labels(ice_seg, type, &state, &val);
-		if (label_name && !strcmp(label_name, name)) {
-			*value = val;
-			return ICE_SUCCESS;
-		}
-
-		ice_seg = NULL;
-	} while (label_name);
-
-	return ICE_ERR_CFG;
-}
 
 /**
  * ice_verify_pkg - verify package
@@ -1499,7 +1444,7 @@ enum ice_status ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len)
  * Allocates a package buffer and returns a pointer to the buffer header.
  * Note: all package contents must be in Little Endian form.
  */
-struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw)
+static struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw)
 {
 	struct ice_buf_build *bld;
 	struct ice_buf_hdr *buf;
@@ -1623,40 +1568,15 @@ ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt,
 }
 
 /**
- * ice_pkg_buf_alloc_single_section
+ * ice_pkg_buf_free
  * @hw: pointer to the HW structure
- * @type: the section type value
- * @size: the size of the section to reserve (in bytes)
- * @section: returns pointer to the section
+ * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
  *
- * Allocates a package buffer with a single section.
- * Note: all package contents must be in Little Endian form.
+ * Frees a package buffer
  */
-static struct ice_buf_build *
-ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size,
-				 void **section)
+static void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
 {
-	struct ice_buf_build *buf;
-
-	if (!section)
-		return NULL;
-
-	buf = ice_pkg_buf_alloc(hw);
-	if (!buf)
-		return NULL;
-
-	if (ice_pkg_buf_reserve_section(buf, 1))
-		goto ice_pkg_buf_alloc_single_section_err;
-
-	*section = ice_pkg_buf_alloc_section(buf, type, size);
-	if (!*section)
-		goto ice_pkg_buf_alloc_single_section_err;
-
-	return buf;
-
-ice_pkg_buf_alloc_single_section_err:
-	ice_pkg_buf_free(hw, buf);
-	return NULL;
+	ice_free(hw, bld);
 }
 
 /**
@@ -1672,7 +1592,7 @@ ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size,
  * result in some wasted space in the buffer.
  * Note: all package contents must be in Little Endian form.
  */
-enum ice_status
+static enum ice_status
 ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count)
 {
 	struct ice_buf_hdr *buf;
@@ -1700,48 +1620,6 @@ ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count)
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_pkg_buf_unreserve_section
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- * @count: the number of sections to unreserve
- *
- * Unreserves one or more section table entries in a package buffer, releasing
- * space that can be used for section data. This routine can be called
- * multiple times as long as they are made before calling
- * ice_pkg_buf_alloc_section(). Once ice_pkg_buf_alloc_section()
- * is called once, the number of sections that can be allocated will not be able
- * to be increased; not using all reserved sections is fine, but this will
- * result in some wasted space in the buffer.
- * Note: all package contents must be in Little Endian form.
- */
-enum ice_status
-ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count)
-{
-	struct ice_buf_hdr *buf;
-	u16 section_count;
-	u16 data_end;
-
-	if (!bld)
-		return ICE_ERR_PARAM;
-
-	buf = (struct ice_buf_hdr *)&bld->buf;
-
-	/* already an active section, can't decrease table size */
-	section_count = LE16_TO_CPU(buf->section_count);
-	if (section_count > 0)
-		return ICE_ERR_CFG;
-
-	if (count > bld->reserved_section_table_entries)
-		return ICE_ERR_CFG;
-	bld->reserved_section_table_entries -= count;
-
-	data_end = LE16_TO_CPU(buf->data_end) -
-		   (count * sizeof(buf->section_entry[0]));
-	buf->data_end = CPU_TO_LE16(data_end);
-
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_pkg_buf_alloc_section
  * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
@@ -1754,7 +1632,7 @@ ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count)
  * section contents.
  * Note: all package contents must be in Little Endian form.
  */
-void *
+static void *
 ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size)
 {
 	struct ice_buf_hdr *buf;
@@ -1795,23 +1673,8 @@ ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size)
 	return NULL;
 }
 
-/**
- * ice_pkg_buf_get_free_space
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- *
- * Returns the number of free bytes remaining in the buffer.
- * Note: all package contents must be in Little Endian form.
- */
-u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld)
-{
-	struct ice_buf_hdr *buf;
 
-	if (!bld)
-		return 0;
 
-	buf = (struct ice_buf_hdr *)&bld->buf;
-	return ICE_MAX_S_DATA_END - LE16_TO_CPU(buf->data_end);
-}
 
 /**
  * ice_pkg_buf_get_active_sections
@@ -1823,7 +1686,7 @@ u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld)
  * not be used.
  * Note: all package contents must be in Little Endian form.
  */
-u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
+static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
 {
 	struct ice_buf_hdr *buf;
 
@@ -1840,7 +1703,7 @@ u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
  *
  * Return a pointer to the buffer's header
  */
-struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
+static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
 {
 	if (!bld)
 		return NULL;
@@ -1848,17 +1711,6 @@ struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
 	return &bld->buf;
 }
 
-/**
- * ice_pkg_buf_free
- * @hw: pointer to the HW structure
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- *
- * Frees a package buffer
- */
-void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
-{
-	ice_free(hw, bld);
-}
 
 /**
  * ice_find_prot_off - find prot ID and offset pair, based on prof and FV index
@@ -1891,38 +1743,6 @@ ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
 
 /* PTG Management */
 
-/**
- * ice_ptg_update_xlt1 - Updates packet type groups in HW via XLT1 table
- * @hw: pointer to the hardware structure
- * @blk: HW block
- *
- * This function will update the XLT1 hardware table to reflect the new
- * packet type group configuration.
- */
-enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk)
-{
-	struct ice_xlt1_section *sect;
-	struct ice_buf_build *bld;
-	enum ice_status status;
-	u16 index;
-
-	bld = ice_pkg_buf_alloc_single_section(hw, ice_sect_id(blk, ICE_XLT1),
-					       ICE_XLT1_SIZE(ICE_XLT1_CNT),
-					       (void **)&sect);
-	if (!bld)
-		return ICE_ERR_NO_MEMORY;
-
-	sect->count = CPU_TO_LE16(ICE_XLT1_CNT);
-	sect->offset = CPU_TO_LE16(0);
-	for (index = 0; index < ICE_XLT1_CNT; index++)
-		sect->value[index] = hw->blk[blk].xlt1.ptypes[index].ptg;
-
-	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
-
-	ice_pkg_buf_free(hw, bld);
-
-	return status;
-}
 
 /**
  * ice_ptg_find_ptype - Search for packet type group using packet type (ptype)
@@ -1935,7 +1755,7 @@ enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk)
  * PTG ID that contains it through the ptg parameter, with the value of
  * ICE_DEFAULT_PTG (0) meaning it is part the default PTG.
  */
-enum ice_status
+static enum ice_status
 ice_ptg_find_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 *ptg)
 {
 	if (ptype >= ICE_XLT1_CNT || !ptg)
@@ -1969,7 +1789,7 @@ void ice_ptg_alloc_val(struct ice_hw *hw, enum ice_block blk, u8 ptg)
  * that 0 is the default packet type group, so successfully created PTGs will
  * have a non-zero ID value; which means a 0 return value indicates an error.
  */
-u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk)
+static u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk)
 {
 	u16 i;
 
@@ -1984,30 +1804,6 @@ u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk)
 	return 0;
 }
 
-/**
- * ice_ptg_free - Frees a packet type group
- * @hw: pointer to the hardware structure
- * @blk: HW block
- * @ptg: the ptg ID to free
- *
- * This function frees a packet type group, and returns all the current ptypes
- * within it to the default PTG.
- */
-void ice_ptg_free(struct ice_hw *hw, enum ice_block blk, u8 ptg)
-{
-	struct ice_ptg_ptype *p, *temp;
-
-	hw->blk[blk].xlt1.ptg_tbl[ptg].in_use = false;
-	p = hw->blk[blk].xlt1.ptg_tbl[ptg].first_ptype;
-	while (p) {
-		p->ptg = ICE_DEFAULT_PTG;
-		temp = p->next_ptype;
-		p->next_ptype = NULL;
-		p = temp;
-	}
-
-	hw->blk[blk].xlt1.ptg_tbl[ptg].first_ptype = NULL;
-}
 
 /**
  * ice_ptg_remove_ptype - Removes ptype from a particular packet type group
@@ -2066,7 +1862,7 @@ ice_ptg_remove_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg)
  * a destination PTG ID of ICE_DEFAULT_PTG (0) will move the ptype to the
  * default PTG.
  */
-enum ice_status
+static enum ice_status
 ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg)
 {
 	enum ice_status status;
@@ -2202,70 +1998,6 @@ ice_match_prop_lst(struct LIST_HEAD_TYPE *list1, struct LIST_HEAD_TYPE *list2)
 
 /* VSIG Management */
 
-/**
- * ice_vsig_update_xlt2_sect - update one section of XLT2 table
- * @hw: pointer to the hardware structure
- * @blk: HW block
- * @vsi: HW VSI number to program
- * @vsig: vsig for the VSI
- *
- * This function will update the XLT2 hardware table with the input VSI
- * group configuration.
- */
-static enum ice_status
-ice_vsig_update_xlt2_sect(struct ice_hw *hw, enum ice_block blk, u16 vsi,
-			  u16 vsig)
-{
-	struct ice_xlt2_section *sect;
-	struct ice_buf_build *bld;
-	enum ice_status status;
-
-	bld = ice_pkg_buf_alloc_single_section(hw, ice_sect_id(blk, ICE_XLT2),
-					       sizeof(struct ice_xlt2_section),
-					       (void **)&sect);
-	if (!bld)
-		return ICE_ERR_NO_MEMORY;
-
-	sect->count = CPU_TO_LE16(1);
-	sect->offset = CPU_TO_LE16(vsi);
-	sect->value[0] = CPU_TO_LE16(vsig);
-
-	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
-
-	ice_pkg_buf_free(hw, bld);
-
-	return status;
-}
-
-/**
- * ice_vsig_update_xlt2 - update XLT2 table with VSIG configuration
- * @hw: pointer to the hardware structure
- * @blk: HW block
- *
- * This function will update the XLT2 hardware table with the input VSI
- * group configuration of used vsis.
- */
-enum ice_status ice_vsig_update_xlt2(struct ice_hw *hw, enum ice_block blk)
-{
-	u16 vsi;
-
-	for (vsi = 0; vsi < ICE_MAX_VSI; vsi++) {
-		/* update only vsis that have been changed */
-		if (hw->blk[blk].xlt2.vsis[vsi].changed) {
-			enum ice_status status;
-			u16 vsig;
-
-			vsig = hw->blk[blk].xlt2.vsis[vsi].vsig;
-			status = ice_vsig_update_xlt2_sect(hw, blk, vsi, vsig);
-			if (status)
-				return status;
-
-			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
-		}
-	}
-
-	return ICE_SUCCESS;
-}
 
 /**
  * ice_vsig_find_vsi - find a VSIG that contains a specified VSI
@@ -2346,7 +2078,7 @@ static u16 ice_vsig_alloc(struct ice_hw *hw, enum ice_block blk)
  * for, the list must match exactly, including the order in which the
  * characteristics are listed.
  */
-enum ice_status
+static enum ice_status
 ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
 			struct LIST_HEAD_TYPE *chs, u16 *vsig)
 {
@@ -2373,7 +2105,7 @@ ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
  * The function will remove all VSIs associated with the input VSIG and move
  * them to the DEFAULT_VSIG and mark the VSIG available.
  */
-enum ice_status
+static enum ice_status
 ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
 {
 	struct ice_vsig_prof *dtmp, *del;
@@ -2424,6 +2156,62 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
 	return ICE_SUCCESS;
 }
 
+/**
+ * ice_vsig_remove_vsi - remove VSI from VSIG
+ * @hw: pointer to the hardware structure
+ * @blk: HW block
+ * @vsi: VSI to remove
+ * @vsig: VSI group to remove from
+ *
+ * The function will remove the input VSI from its VSI group and move it
+ * to the DEFAULT_VSIG.
+ */
+static enum ice_status
+ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
+{
+	struct ice_vsig_vsi **vsi_head, *vsi_cur, *vsi_tgt;
+	u16 idx;
+
+	idx = vsig & ICE_VSIG_IDX_M;
+
+	if (vsi >= ICE_MAX_VSI || idx >= ICE_MAX_VSIGS)
+		return ICE_ERR_PARAM;
+
+	if (!hw->blk[blk].xlt2.vsig_tbl[idx].in_use)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* entry already in default VSIG, don't have to remove */
+	if (idx == ICE_DEFAULT_VSIG)
+		return ICE_SUCCESS;
+
+	vsi_head = &hw->blk[blk].xlt2.vsig_tbl[idx].first_vsi;
+	if (!(*vsi_head))
+		return ICE_ERR_CFG;
+
+	vsi_tgt = &hw->blk[blk].xlt2.vsis[vsi];
+	vsi_cur = (*vsi_head);
+
+	/* iterate the VSI list, skip over the entry to be removed */
+	while (vsi_cur) {
+		if (vsi_tgt == vsi_cur) {
+			(*vsi_head) = vsi_cur->next_vsi;
+			break;
+		}
+		vsi_head = &vsi_cur->next_vsi;
+		vsi_cur = vsi_cur->next_vsi;
+	}
+
+	/* verify if VSI was removed from group list */
+	if (!vsi_cur)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	vsi_cur->vsig = ICE_DEFAULT_VSIG;
+	vsi_cur->changed = 1;
+	vsi_cur->next_vsi = NULL;
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_vsig_add_mv_vsi - add or move a VSI to a VSI group
  * @hw: pointer to the hardware structure
@@ -2436,7 +2224,7 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
  * move the entry to the DEFAULT_VSIG, update the original VSIG and
  * then move entry to the new VSIG.
  */
-enum ice_status
+static enum ice_status
 ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
 {
 	struct ice_vsig_vsi *tmp;
@@ -2487,62 +2275,6 @@ ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_vsig_remove_vsi - remove VSI from VSIG
- * @hw: pointer to the hardware structure
- * @blk: HW block
- * @vsi: VSI to remove
- * @vsig: VSI group to remove from
- *
- * The function will remove the input VSI from its VSI group and move it
- * to the DEFAULT_VSIG.
- */
-enum ice_status
-ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
-{
-	struct ice_vsig_vsi **vsi_head, *vsi_cur, *vsi_tgt;
-	u16 idx;
-
-	idx = vsig & ICE_VSIG_IDX_M;
-
-	if (vsi >= ICE_MAX_VSI || idx >= ICE_MAX_VSIGS)
-		return ICE_ERR_PARAM;
-
-	if (!hw->blk[blk].xlt2.vsig_tbl[idx].in_use)
-		return ICE_ERR_DOES_NOT_EXIST;
-
-	/* entry already in default VSIG, don't have to remove */
-	if (idx == ICE_DEFAULT_VSIG)
-		return ICE_SUCCESS;
-
-	vsi_head = &hw->blk[blk].xlt2.vsig_tbl[idx].first_vsi;
-	if (!(*vsi_head))
-		return ICE_ERR_CFG;
-
-	vsi_tgt = &hw->blk[blk].xlt2.vsis[vsi];
-	vsi_cur = (*vsi_head);
-
-	/* iterate the VSI list, skip over the entry to be removed */
-	while (vsi_cur) {
-		if (vsi_tgt == vsi_cur) {
-			(*vsi_head) = vsi_cur->next_vsi;
-			break;
-		}
-		vsi_head = &vsi_cur->next_vsi;
-		vsi_cur = vsi_cur->next_vsi;
-	}
-
-	/* verify if VSI was removed from group list */
-	if (!vsi_cur)
-		return ICE_ERR_DOES_NOT_EXIST;
-
-	vsi_cur->vsig = ICE_DEFAULT_VSIG;
-	vsi_cur->changed = 1;
-	vsi_cur->next_vsi = NULL;
-
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_find_prof_id - find profile ID for a given field vector
  * @hw: pointer to the hardware structure
@@ -3142,70 +2874,6 @@ static void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx)
 	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
-/**
- * ice_clear_hw_tbls - clear HW tables and flow profiles
- * @hw: pointer to the hardware structure
- */
-void ice_clear_hw_tbls(struct ice_hw *hw)
-{
-	u8 i;
-
-	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
-		struct ice_prof_tcam *prof = &hw->blk[i].prof;
-		struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
-		struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
-		struct ice_es *es = &hw->blk[i].es;
-
-		if (hw->blk[i].is_list_init) {
-			struct ice_prof_map *del, *tmp;
-
-			ice_acquire_lock(&es->prof_map_lock);
-			LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &es->prof_map,
-						 ice_prof_map, list) {
-				LIST_DEL(&del->list);
-				ice_free(hw, del);
-			}
-			INIT_LIST_HEAD(&es->prof_map);
-			ice_release_lock(&es->prof_map_lock);
-
-			ice_acquire_lock(&hw->fl_profs_locks[i]);
-			ice_free_flow_profs(hw, i);
-			ice_release_lock(&hw->fl_profs_locks[i]);
-		}
-
-		ice_free_vsig_tbl(hw, (enum ice_block)i);
-
-		ice_memset(xlt1->ptypes, 0, xlt1->count * sizeof(*xlt1->ptypes),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt1->ptg_tbl, 0,
-			   ICE_MAX_PTGS * sizeof(*xlt1->ptg_tbl),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt1->t, 0, xlt1->count * sizeof(*xlt1->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(xlt2->vsis, 0, xlt2->count * sizeof(*xlt2->vsis),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt2->vsig_tbl, 0,
-			   xlt2->count * sizeof(*xlt2->vsig_tbl),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt2->t, 0, xlt2->count * sizeof(*xlt2->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(prof->t, 0, prof->count * sizeof(*prof->t),
-			   ICE_NONDMA_MEM);
-		ice_memset(prof_redir->t, 0,
-			   prof_redir->count * sizeof(*prof_redir->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(es->t, 0, es->count * sizeof(*es->t),
-			   ICE_NONDMA_MEM);
-		ice_memset(es->ref_count, 0, es->count * sizeof(*es->ref_count),
-			   ICE_NONDMA_MEM);
-		ice_memset(es->written, 0, es->count * sizeof(*es->written),
-			   ICE_NONDMA_MEM);
-	}
-}
 
 /**
  * ice_init_hw_tbls - init hardware table memory
@@ -4100,43 +3768,6 @@ ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
 	return entry;
 }
 
-/**
- * ice_set_prof_context - Set context for a given profile
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @id: profile tracking ID
- * @cntxt: context
- */
-struct ice_prof_map *
-ice_set_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 cntxt)
-{
-	struct ice_prof_map *entry;
-
-	entry = ice_search_prof_id(hw, blk, id);
-	if (entry)
-		entry->context = cntxt;
-
-	return entry;
-}
-
-/**
- * ice_get_prof_context - Get context for a given profile
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @id: profile tracking ID
- * @cntxt: pointer to variable to receive the context
- */
-struct ice_prof_map *
-ice_get_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 *cntxt)
-{
-	struct ice_prof_map *entry;
-
-	entry = ice_search_prof_id(hw, blk, id);
-	if (entry)
-		*cntxt = entry->context;
-
-	return entry;
-}
 
 /**
  * ice_vsig_prof_id_count - count profiles in a VSIG
@@ -5094,33 +4725,6 @@ ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl)
 	return status;
 }
 
-/**
- * ice_add_flow - add flow
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @vsi: array of VSIs to enable with the profile specified by ID
- * @count: number of elements in the VSI array
- * @id: profile tracking ID
- *
- * Calling this function will update the hardware tables to enable the
- * profile indicated by the ID parameter for the VSIs specified in the VSI
- * array. Once successfully called, the flow will be enabled.
- */
-enum ice_status
-ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id)
-{
-	enum ice_status status;
-	u16 i;
-
-	for (i = 0; i < count; i++) {
-		status = ice_add_prof_id_flow(hw, blk, vsi[i], id);
-		if (status)
-			return status;
-	}
-
-	return ICE_SUCCESS;
-}
 
 /**
  * ice_rem_prof_from_list - remove a profile from list
@@ -5276,30 +4880,3 @@ ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl)
 	return status;
 }
 
-/**
- * ice_rem_flow - remove flow
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @vsi: array of VSIs from which to remove the profile specified by ID
- * @count: number of elements in the VSI array
- * @id: profile tracking ID
- *
- * The function will remove flows from the specified VSIs that were enabled
- * using ice_add_flow. The ID value will indicated which profile will be
- * removed. Once successfully called, the flow will be disabled.
- */
-enum ice_status
-ice_rem_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id)
-{
-	enum ice_status status;
-	u16 i;
-
-	for (i = 0; i < count; i++) {
-		status = ice_rem_prof_id_flow(hw, blk, vsi[i], id);
-		if (status)
-			return status;
-	}
-
-	return ICE_SUCCESS;
-}
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 4714fe646..7142ae7fe 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -27,66 +27,18 @@ void ice_release_change_lock(struct ice_hw *hw);
 enum ice_status
 ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
 		  u8 *prot, u16 *off);
-struct ice_generic_seg_hdr *
-ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
-		    struct ice_pkg_hdr *pkg_hdr);
-enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg);
-
-enum ice_status
-ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_header);
-enum ice_status
-ice_get_pkg_info(struct ice_hw *hw);
-
-void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg);
-
 enum ice_status
 ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type,
 		     u16 *value);
 enum ice_status
 ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt,
 		   struct LIST_HEAD_TYPE *fv_list);
-enum ice_status
-ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
-		      u16 buf_size, struct ice_sq_cd *cd);
 
-enum ice_status
-ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count);
-u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld);
-u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld);
-
-/* package buffer building routines */
-
-struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw);
-enum ice_status
-ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count);
-void *ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size);
-struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld);
-void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld);
-
-/* XLT1/PType group functions */
-enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk);
-enum ice_status
-ice_ptg_find_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 *ptg);
-u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk);
-void ice_ptg_free(struct ice_hw *hw, enum ice_block blk, u8 ptg);
-enum ice_status
-ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg);
 
 /* XLT2/VSI group functions */
-enum ice_status ice_vsig_update_xlt2(struct ice_hw *hw, enum ice_block blk);
 enum ice_status
 ice_vsig_find_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 *vsig);
-enum ice_status
-ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
-			struct LIST_HEAD_TYPE *chs, u16 *vsig);
 
-enum ice_status
-ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
-enum ice_status ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig);
-enum ice_status
-ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
-enum ice_status
-ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
 enum ice_status
 ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 	     struct ice_fv_word *es);
@@ -98,10 +50,6 @@ enum ice_status
 ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
 enum ice_status
 ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
-struct ice_prof_map *
-ice_set_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 cntxt);
-struct ice_prof_map *
-ice_get_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 *cntxt);
 enum ice_status
 ice_init_pkg(struct ice_hw *hw, u8 *buff, u32 len);
 enum ice_status
@@ -109,15 +57,8 @@ ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
 enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
 void ice_free_seg(struct ice_hw *hw);
 void ice_fill_blk_tbls(struct ice_hw *hw);
-void ice_clear_hw_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
 enum ice_status
-ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id);
-enum ice_status
-ice_rem_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id);
-enum ice_status
 ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id);
 
 enum ice_status
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 42/49] net/ice/base: change how VMDq capability is wrapped
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (40 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 41/49] net/ice/base: cleanup ice flex pipe files Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
       [not found]   ` <ca03c24866cdb2f45ed04b6b3e9b35bac06c5dcd.camel@intel.com>
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 43/49] net/ice/base: refactor VSI node sched code Leyi Rong
                   ` (8 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Anirudh Venkataramanan, Paul M Stillwell Jr

This patch exposes the VMDq capability when at least one of
VMDQ_SUPPORT, OFFLOAD_MACVLAN_SUPPORT, or ADQ_SUPPORT (ADQ uses
VMDq as well) is defined.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 3 +++
 drivers/net/ice/base/ice_common.c     | 7 +++++++
 drivers/net/ice/base/ice_type.h       | 8 ++++++++
 3 files changed, 18 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 4e6bce18c..7642a923b 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -111,6 +111,9 @@ struct ice_aqc_list_caps_elem {
 	__le16 cap;
 #define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
 #define ICE_AQC_MAX_VALID_FUNCTIONS			0x8
+#if defined(VMDQ_SUPPORT) || defined(OFFLOAD_MACVLAN_SUPPORT) || defined(ADQ_SUPPORT) || defined(FW_SUPPORT)
+#define ICE_AQC_CAPS_VMDQ				0x0014
+#endif /* VMDQ_SUPPORT || OFFLOAD_MACVLAN_SUPPORT || ADQ_SUPPORT || FW_SUPPORT */
 #define ICE_AQC_CAPS_VSI				0x0017
 #define ICE_AQC_CAPS_DCB				0x0018
 #define ICE_AQC_CAPS_RSS				0x0040
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index f9a5d43e6..1d54f3d71 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1980,6 +1980,13 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 				  "%s: valid functions = %d\n", prefix,
 				  caps->valid_functions);
 			break;
+#if defined(VMDQ_SUPPORT) || defined(OFFLOAD_MACVLAN_SUPPORT) || defined(ADQ_SUPPORT)
+		case ICE_AQC_CAPS_VMDQ:
+			caps->vmdq = (number == 1);
+			ice_debug(hw, ICE_DBG_INIT,
+				  "%s: VMDQ = %d\n", prefix, caps->vmdq);
+			break;
+#endif /* VMDQ_SUPPORT || OFFLOAD_MACVLAN_SUPPORT || ADQ_SUPPORT */
 		case ICE_AQC_CAPS_VSI:
 			if (dev_p) {
 				dev_p->num_vsi_allocd_to_host = number;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index f76be2b58..f30b37985 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -207,6 +207,9 @@ enum ice_vsi_type {
 #ifdef ADQ_SUPPORT
 	ICE_VSI_CHNL = 4,
 #endif /* ADQ_SUPPORT */
+#ifdef OFFLOAD_MACVLAN_SUPPORT
+	ICE_VSI_OFFLOAD_MACVLAN = 5,
+#endif /* OFFLOAD_MACVLAN_SUPPORT */
 	ICE_VSI_LB = 6,
 };
 
@@ -353,6 +356,11 @@ struct ice_hw_common_caps {
 #define ICE_MAX_SUPPORTED_GPIO_SDP	8
 	u8 led[ICE_MAX_SUPPORTED_GPIO_LED];
 	u8 sdp[ICE_MAX_SUPPORTED_GPIO_SDP];
+#if defined(VMDQ_SUPPORT) || defined(OFFLOAD_MACVLAN_SUPPORT) || defined(ADQ_SUPPORT)
+
+	/* VMDQ */
+	u8 vmdq;			/* VMDQ supported */
+#endif /* VMDQ_SUPPORT || OFFLOAD_MACVLAN_SUPPORT || ADQ_SUPPORT */
 
 	/* EVB capabilities */
 	u8 evb_802_1_qbg;		/* Edge Virtual Bridging */
-- 
2.17.1



* [dpdk-dev] [PATCH 43/49] net/ice/base: refactor VSI node sched code
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (41 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 42/49] net/ice/base: change how VMDq capability is wrapped Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 44/49] net/ice/base: add some minor new defines Leyi Rong
                   ` (7 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Grzegorz Nitka, Paul M Stillwell Jr

Refactor the VSI node scheduler code to take a port_info pointer as the
call argument.

The declaration of the VSI node getter function has been modified to take
a pointer to the ice_port_info structure instead of a pointer to the hw
structure. This way the port_info structure of the right port is used to
find the VSI node.

Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 47 ++++++++++++++++----------------
 drivers/net/ice/base/ice_sched.h |  2 +-
 2 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index fa3158a7b..0f4153146 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -1451,7 +1451,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 
 /**
  * ice_sched_get_vsi_node - Get a VSI node based on VSI ID
- * @hw: pointer to the HW struct
+ * @pi: pointer to the port information structure
  * @tc_node: pointer to the TC node
  * @vsi_handle: software VSI handle
  *
@@ -1459,14 +1459,14 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
  * TC branch
  */
 struct ice_sched_node *
-ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+ice_sched_get_vsi_node(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		       u16 vsi_handle)
 {
 	struct ice_sched_node *node;
 	u8 vsi_layer;
 
-	vsi_layer = ice_sched_get_vsi_layer(hw);
-	node = ice_sched_get_first_node(hw->port_info, tc_node, vsi_layer);
+	vsi_layer = ice_sched_get_vsi_layer(pi->hw);
+	node = ice_sched_get_first_node(pi, tc_node, vsi_layer);
 
 	/* Check whether it already exists */
 	while (node) {
@@ -1587,7 +1587,7 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 
 	qgl = ice_sched_get_qgrp_layer(hw);
 	vsil = ice_sched_get_vsi_layer(hw);
-	parent = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	parent = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 	for (i = vsil + 1; i <= qgl; i++) {
 		if (!parent)
 			return ICE_ERR_CFG;
@@ -1620,7 +1620,7 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 
 /**
  * ice_sched_calc_vsi_support_nodes - calculate number of VSI support nodes
- * @hw: pointer to the HW struct
+ * @pi: pointer to the port info structure
  * @tc_node: pointer to TC node
  * @num_nodes: pointer to num nodes array
  *
@@ -1629,15 +1629,15 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
  * layers
  */
 static void
-ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
+ice_sched_calc_vsi_support_nodes(struct ice_port_info *pi,
 				 struct ice_sched_node *tc_node, u16 *num_nodes)
 {
 	struct ice_sched_node *node;
 	u8 vsil;
 	int i;
 
-	vsil = ice_sched_get_vsi_layer(hw);
-	for (i = vsil; i >= hw->sw_entry_point_layer; i--)
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	for (i = vsil; i >= pi->hw->sw_entry_point_layer; i--)
 		/* Add intermediate nodes if TC has no children and
 		 * need at least one node for VSI
 		 */
@@ -1647,11 +1647,11 @@ ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
 			/* If intermediate nodes are reached max children
 			 * then add a new one.
 			 */
-			node = ice_sched_get_first_node(hw->port_info, tc_node,
-							(u8)i);
+			node = ice_sched_get_first_node(pi, tc_node, (u8)i);
 			/* scan all the siblings */
 			while (node) {
-				if (node->num_children < hw->max_children[i])
+				if (node->num_children <
+				    pi->hw->max_children[i])
 					break;
 				node = node->sibling;
 			}
@@ -1731,14 +1731,13 @@ ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
 {
 	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
 	struct ice_sched_node *tc_node;
-	struct ice_hw *hw = pi->hw;
 
 	tc_node = ice_sched_get_tc_node(pi, tc);
 	if (!tc_node)
 		return ICE_ERR_PARAM;
 
 	/* calculate number of supported nodes needed for this VSI */
-	ice_sched_calc_vsi_support_nodes(hw, tc_node, num_nodes);
+	ice_sched_calc_vsi_support_nodes(pi, tc_node, num_nodes);
 
 	/* add VSI supported nodes to TC subtree */
 	return ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
@@ -1771,7 +1770,7 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 	if (!tc_node)
 		return ICE_ERR_CFG;
 
-	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 	if (!vsi_node)
 		return ICE_ERR_CFG;
 
@@ -1834,7 +1833,7 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
 	if (!vsi_ctx)
 		return ICE_ERR_PARAM;
-	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 
 	/* suspend the VSI if TC is not enabled */
 	if (!enable) {
@@ -1855,7 +1854,7 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 		if (status)
 			return status;
 
-		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			return ICE_ERR_CFG;
 
@@ -1966,7 +1965,7 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -2256,7 +2255,7 @@ ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
 	if (!agg_node)
 		return ICE_ERR_DOES_NOT_EXIST;
 
-	vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+	vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 	if (!vsi_node)
 		return ICE_ERR_DOES_NOT_EXIST;
 
@@ -3537,7 +3536,7 @@ ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
 		if (!vsi_handle_valid)
 			goto exit_agg_priority_per_tc;
 
-		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			goto exit_agg_priority_per_tc;
 
@@ -3593,7 +3592,7 @@ ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -4805,7 +4804,7 @@ ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle)
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -4864,7 +4863,7 @@ ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -5368,7 +5367,7 @@ ice_sched_replay_vsi_bw(struct ice_hw *hw, u16 vsi_handle,
 		tc_node = ice_sched_get_tc_node(pi, tc);
 		if (!tc_node)
 			continue;
-		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 		bw_t_info = &vsi_ctx->sched.bw_t_info[tc];
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index e444dc880..38f8f93d2 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -107,7 +107,7 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 		  u8 owner, bool enable);
 enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
 struct ice_sched_node *
-ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+ice_sched_get_vsi_node(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		       u16 vsi_handle);
 bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node);
 enum ice_status
-- 
2.17.1



* [dpdk-dev] [PATCH 44/49] net/ice/base: add some minor new defines
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (42 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 43/49] net/ice/base: refactor VSI node sched code Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 45/49] net/ice/base: add 16-byte Flex Rx Descriptor Leyi Rong
                   ` (6 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacek Naczyk, Faerman Lev, Paul M Stillwell Jr

1. Add defines for the Link Topology Netlist section.
2. Add missing Read MAC command response bits.
3. Add AQ error 29.

Signed-off-by: Jacek Naczyk <jacek.naczyk@intel.com>
Signed-off-by: Faerman Lev <lev.faerman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 5 ++++-
 drivers/net/ice/base/ice_type.h       | 2 ++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 7642a923b..486429c89 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -145,6 +145,8 @@ struct ice_aqc_manage_mac_read {
 #define ICE_AQC_MAN_MAC_SAN_ADDR_VALID		BIT(5)
 #define ICE_AQC_MAN_MAC_PORT_ADDR_VALID		BIT(6)
 #define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
+#define ICE_AQC_MAN_MAC_MC_MAG_EN		BIT(8)
+#define ICE_AQC_MAN_MAC_WOL_PRESERVE_ON_PFR	BIT(9)
 #define ICE_AQC_MAN_MAC_READ_S			4
 #define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
 	u8 rsvd[2];
@@ -1686,7 +1688,7 @@ struct ice_aqc_nvm {
 #define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
 #define ICE_AQC_NVM_ACTIV_SEL_NVM	BIT(3) /* Write Activate/SR Dump only */
 #define ICE_AQC_NVM_ACTIV_SEL_OROM	BIT(4)
-#define ICE_AQC_NVM_ACTIV_SEL_EXT_TLV	BIT(5)
+#define ICE_AQC_NVM_ACTIV_SEL_NETLIST	BIT(5)
 #define ICE_AQC_NVM_ACTIV_SEL_MASK	MAKEMASK(0x7, 3)
 #define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
 	__le16 module_typeid;
@@ -2405,6 +2407,7 @@ enum ice_aq_err {
 	ICE_AQ_RC_ESVN		= 26, /* SVN number prohibits this package */
 	ICE_AQ_RC_EBADMAN	= 27, /* Manifest hash mismatch */
 	ICE_AQ_RC_EBADBUF	= 28, /* Buffer hash mismatches manifest */
+	ICE_AQ_RC_EACCES_BMCU	= 29, /* BMC Update in progress */
 };
 
 /* Admin Queue command opcodes */
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index f30b37985..4ec7906ac 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -1010,6 +1010,8 @@ enum ice_sw_fwd_act_type {
 #define ICE_SR_NVM_BANK_SIZE			0x43
 #define ICE_SR_1ND_OROM_BANK_PTR		0x44
 #define ICE_SR_OROM_BANK_SIZE			0x45
+#define ICE_SR_NETLIST_BANK_PTR			0x46
+#define ICE_SR_NETLIST_BANK_SIZE		0x47
 #define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
 #define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
 #define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
-- 
2.17.1



* [dpdk-dev] [PATCH 45/49] net/ice/base: add 16-byte Flex Rx Descriptor
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (43 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 44/49] net/ice/base: add some minor new defines Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 46/49] net/ice/base: add vxlan/generic tunnel management Leyi Rong
                   ` (5 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add 16-byte Flex Rx descriptor structure definition.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_lan_tx_rx.h | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
index fa2309bf1..147185212 100644
--- a/drivers/net/ice/base/ice_lan_tx_rx.h
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -373,10 +373,34 @@ enum ice_rx_prog_status_desc_error_bits {
 	ICE_RX_PROG_STATUS_DESC_NO_FD_ENTRY_S	= 1,
 };
 
-/* Rx Flex Descriptor
- * This descriptor is used instead of the legacy version descriptor when
+/* Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors when
  * ice_rlan_ctx.adv_desc is set
  */
+union ice_16b_rx_flex_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+				 /* bit 0 of hdr_addr is DD bit */
+	} read;
+	struct {
+		/* Qword 0 */
+		u8 rxdid; /* descriptor builder profile ID */
+		u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+		__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+		__le16 pkt_len; /* [15:14] are reserved */
+		__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+						/* sph=[11:11] */
+						/* ff1/ext=[15:12] */
+
+		/* Qword 1 */
+		__le16 status_error0;
+		__le16 l2tag1;
+		__le16 flex_meta0;
+		__le16 flex_meta1;
+	} wb; /* writeback */
+};
+
 union ice_32b_rx_flex_desc {
 	struct {
 		__le64 pkt_addr; /* Packet buffer address */
-- 
2.17.1



* [dpdk-dev] [PATCH 46/49] net/ice/base: add vxlan/generic tunnel management
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (44 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 45/49] net/ice/base: add 16-byte Flex Rx Descriptor Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional switch rules Leyi Rong
                   ` (4 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add routines for tunnel management:
	- ice_tunnel_port_in_use()
	- ice_tunnel_get_type()
	- ice_find_free_tunnel_entry()
	- ice_create_tunnel()
	- ice_destroy_tunnel()

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 228 +++++++++++++++++++++++++++
 drivers/net/ice/base/ice_flex_pipe.h |   6 +
 2 files changed, 234 insertions(+)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index fda5bef43..1c19548c1 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1711,6 +1711,234 @@ static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
 	return &bld->buf;
 }
 
+/**
+ * ice_tunnel_port_in_use
+ * @hw: pointer to the HW structure
+ * @port: port to search for
+ * @index: optionally returns index
+ *
+ * Returns whether a port is already in use as a tunnel, and optionally its
+ * index
+ */
+bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index)
+{
+	u16 i;
+
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].in_use && hw->tnl.tbl[i].port == port) {
+			if (index)
+				*index = i;
+			return true;
+		}
+
+	return false;
+}
+
+/**
+ * ice_tunnel_get_type
+ * @hw: pointer to the HW structure
+ * @port: port to search for
+ * @type: returns tunnel type
+ *
+ * For a given port number, this function will return the type of tunnel.
+ */
+bool
+ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type)
+{
+	u16 i;
+
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].in_use && hw->tnl.tbl[i].port == port) {
+			*type = hw->tnl.tbl[i].type;
+			return true;
+		}
+
+	return false;
+}
+
+/**
+ * ice_find_free_tunnel_entry
+ * @hw: pointer to the HW structure
+ * @type: tunnel type
+ * @index: optionally returns index
+ *
+ * Returns whether there is a free tunnel entry, and optionally its index
+ */
+static bool
+ice_find_free_tunnel_entry(struct ice_hw *hw, enum ice_tunnel_type type,
+			   u16 *index)
+{
+	u16 i;
+
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].valid && !hw->tnl.tbl[i].in_use &&
+		    hw->tnl.tbl[i].type == type) {
+			if (index)
+				*index = i;
+			return true;
+		}
+
+	return false;
+}
+
+/**
+ * ice_create_tunnel
+ * @hw: pointer to the HW structure
+ * @type: type of tunnel
+ * @port: port to use for vxlan tunnel
+ *
+ * Creates a tunnel
+ */
+enum ice_status
+ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port)
+{
+	struct ice_boost_tcam_section *sect_rx, *sect_tx;
+	enum ice_status status = ICE_ERR_MAX_LIMIT;
+	struct ice_buf_build *bld;
+	u16 index;
+
+	if (ice_tunnel_port_in_use(hw, port, NULL))
+		return ICE_ERR_ALREADY_EXISTS;
+
+	if (!ice_find_free_tunnel_entry(hw, type, &index))
+		return ICE_ERR_OUT_OF_RANGE;
+
+	bld = ice_pkg_buf_alloc(hw);
+	if (!bld)
+		return ICE_ERR_NO_MEMORY;
+
+	/* allocate 2 sections, one for RX parser, one for TX parser */
+	if (ice_pkg_buf_reserve_section(bld, 2))
+		goto ice_create_tunnel_err;
+
+	sect_rx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM,
+					  sizeof(*sect_rx));
+	if (!sect_rx)
+		goto ice_create_tunnel_err;
+	sect_rx->count = CPU_TO_LE16(1);
+
+	sect_tx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM,
+					  sizeof(*sect_tx));
+	if (!sect_tx)
+		goto ice_create_tunnel_err;
+	sect_tx->count = CPU_TO_LE16(1);
+
+	/* copy original boost entry to update package buffer */
+	ice_memcpy(sect_rx->tcam, hw->tnl.tbl[index].boost_entry,
+		   sizeof(*sect_rx->tcam), ICE_NONDMA_TO_NONDMA);
+
+	/* over-write the never-match dest port key bits with the encoded port
+	 * bits
+	 */
+	ice_set_key((u8 *)&sect_rx->tcam[0].key, sizeof(sect_rx->tcam[0].key),
+		    (u8 *)&port, NULL, NULL, NULL,
+		    offsetof(struct ice_boost_key_value, hv_dst_port_key),
+		    sizeof(sect_rx->tcam[0].key.key.hv_dst_port_key));
+
+	/* exact copy of entry to TX section entry */
+	ice_memcpy(sect_tx->tcam, sect_rx->tcam, sizeof(*sect_tx->tcam),
+		   ICE_NONDMA_TO_NONDMA);
+
+	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
+	if (!status) {
+		hw->tnl.tbl[index].port = port;
+		hw->tnl.tbl[index].in_use = true;
+	}
+
+ice_create_tunnel_err:
+	ice_pkg_buf_free(hw, bld);
+
+	return status;
+}
+
+/**
+ * ice_destroy_tunnel
+ * @hw: pointer to the HW structure
+ * @port: port of tunnel to destroy (ignored if the all parameter is true)
+ * @all: flag that states to destroy all tunnels
+ *
+ * Destroys a tunnel or all tunnels by creating an update package buffer
+ * targeting the specific updates requested and then performing an update
+ * package.
+ */
+enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all)
+{
+	struct ice_boost_tcam_section *sect_rx, *sect_tx;
+	enum ice_status status = ICE_ERR_MAX_LIMIT;
+	struct ice_buf_build *bld;
+	u16 count = 0;
+	u16 size;
+	u16 i;
+
+	/* determine count */
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].in_use &&
+		    (all || hw->tnl.tbl[i].port == port))
+			count++;
+
+	if (!count)
+		return ICE_ERR_PARAM;
+
+	/* size of section - there is at least one entry */
+	size = (count - 1) * sizeof(*sect_rx->tcam) + sizeof(*sect_rx);
+
+	bld = ice_pkg_buf_alloc(hw);
+	if (!bld)
+		return ICE_ERR_NO_MEMORY;
+
+	/* allocate 2 sections, one for RX parser, one for TX parser */
+	if (ice_pkg_buf_reserve_section(bld, 2))
+		goto ice_destroy_tunnel_err;
+
+	sect_rx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM,
+					  size);
+	if (!sect_rx)
+		goto ice_destroy_tunnel_err;
+	sect_rx->count = CPU_TO_LE16(1);
+
+	sect_tx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM,
+					  size);
+	if (!sect_tx)
+		goto ice_destroy_tunnel_err;
+	sect_tx->count = CPU_TO_LE16(1);
+
+	/* copy original boost entry to update package buffer, one copy to RX
+	 * section, another copy to the TX section
+	 */
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].in_use &&
+		    (all || hw->tnl.tbl[i].port == port)) {
+			ice_memcpy(sect_rx->tcam + i,
+				   hw->tnl.tbl[i].boost_entry,
+				   sizeof(*sect_rx->tcam),
+				   ICE_NONDMA_TO_NONDMA);
+			ice_memcpy(sect_tx->tcam + i,
+				   hw->tnl.tbl[i].boost_entry,
+				   sizeof(*sect_tx->tcam),
+				   ICE_NONDMA_TO_NONDMA);
+			hw->tnl.tbl[i].marked = true;
+		}
+
+	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
+	if (!status)
+		for (i = 0; i < hw->tnl.count &&
+		     i < ICE_TUNNEL_MAX_ENTRIES; i++)
+			if (hw->tnl.tbl[i].marked) {
+				hw->tnl.tbl[i].port = 0;
+				hw->tnl.tbl[i].in_use = false;
+				hw->tnl.tbl[i].marked = false;
+			}
+
+ice_destroy_tunnel_err:
+	ice_pkg_buf_free(hw, bld);
+
+	return status;
+}
+
 
 /**
  * ice_find_prot_off - find prot ID and offset pair, based on prof and FV index
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 7142ae7fe..13066808c 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -33,6 +33,12 @@ ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type,
 enum ice_status
 ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt,
 		   struct LIST_HEAD_TYPE *fv_list);
+enum ice_status
+ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port);
+enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all);
+bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index);
+bool
+ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type);
 
 
 /* XLT2/VSI group functions */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional switch rules
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (45 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 46/49] net/ice/base: add vxlan/generic tunnel management Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-05 12:24   ` Maxime Coquelin
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 48/49] net/ice/base: allow forward to Q groups in switch rule Leyi Rong
                   ` (3 subsequent siblings)
  50 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add capability to create inner IP and inner TCP switch recipes and
rules. Change UDP tunnel dummy packet to accommodate the training of
these new rules.
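
The per-word masked copy that this patch uses to fill the dummy packet can be sketched in isolation. Note this is a hedged illustration: masked_copy_words is a hypothetical stand-in for the copy loop added to ice_fill_adv_dummy_packet, not the driver's actual API.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* For each 16-bit word, write only the bits set in the mask and
 * preserve the rest of the packet data, mirroring the idea of
 * writing header values through a per-word match mask.
 */
static void masked_copy_words(uint16_t *pkt, const uint16_t *hdr,
			      const uint16_t *mask, size_t n_words)
{
	size_t j;

	for (j = 0; j < n_words; j++)
		if (mask[j])
			pkt[j] = (pkt[j] & ~mask[j]) | (hdr[j] & mask[j]);
}
```

Because only bits set in the mask are written, significant dummy-packet bytes outside the match fields stay intact, which is why arbitrary (non-0xffff) masks become possible.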

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_protocol_type.h |   8 +-
 drivers/net/ice/base/ice_switch.c        | 361 ++++++++++++-----------
 drivers/net/ice/base/ice_switch.h        |   1 +
 3 files changed, 203 insertions(+), 167 deletions(-)

diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index 82822fb74..38bed7a79 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -35,6 +35,7 @@ enum ice_protocol_type {
 	ICE_IPV6_IL,
 	ICE_IPV6_OFOS,
 	ICE_TCP_IL,
+	ICE_UDP_OF,
 	ICE_UDP_ILOS,
 	ICE_SCTP_IL,
 	ICE_VXLAN,
@@ -112,6 +113,7 @@ enum ice_prot_id {
 #define ICE_IPV6_OFOS_HW	40
 #define ICE_IPV6_IL_HW		41
 #define ICE_TCP_IL_HW		49
+#define ICE_UDP_OF_HW		52
 #define ICE_UDP_ILOS_HW		53
 #define ICE_SCTP_IL_HW		96
 
@@ -188,8 +190,7 @@ struct ice_l4_hdr {
 struct ice_udp_tnl_hdr {
 	u16 field;
 	u16 proto_type;
-	u16 vni;
-	u16 reserved;
+	u32 vni;	/* only use lower 24-bits */
 };
 
 struct ice_nvgre {
@@ -225,6 +226,7 @@ struct ice_prot_lkup_ext {
 	u8 n_val_words;
 	/* create a buffer to hold max words per recipe */
 	u16 field_off[ICE_MAX_CHAIN_WORDS];
+	u16 field_mask[ICE_MAX_CHAIN_WORDS];
 
 	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
 
@@ -235,6 +237,7 @@ struct ice_prot_lkup_ext {
 struct ice_pref_recipe_group {
 	u8 n_val_pairs;		/* Number of valid pairs */
 	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
+	u16 mask[ICE_NUM_WORDS_RECIPE];
 };
 
 struct ice_recp_grp_entry {
@@ -244,6 +247,7 @@ struct ice_recp_grp_entry {
 	u16 rid;
 	u8 chain_idx;
 	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
+	u16 fv_mask[ICE_NUM_WORDS_RECIPE];
 	struct ice_pref_recipe_group r_group;
 };
 #endif /* _ICE_PROTOCOL_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 373acb7a6..02fb49dba 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -53,60 +53,109 @@ static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
 	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
 	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
 
+static const struct ice_dummy_pkt_offsets {
+	enum ice_protocol_type type;
+	u16 offset; /* ICE_PROTOCOL_LAST indicates end of list */
+} dummy_gre_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_VXLAN,		34 },
+	{ ICE_MAC_IL,		42 },
+	{ ICE_IPV4_IL,		54 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
 static const
-u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* Ether starts */
+u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
-			  0x08, 0,		/* Ether ends */
-			  0x45, 0, 0, 0x3E,	/* IP starts */
+			  0x08, 0,
+			  0x45, 0, 0, 0x3E,	/* ICE_IPV4_OFOS 14 */
 			  0, 0, 0, 0,
 			  0, 0x2F, 0, 0,
 			  0, 0, 0, 0,
-			  0, 0, 0, 0,		/* IP ends */
-			  0x80, 0, 0x65, 0x58,	/* GRE starts */
-			  0, 0, 0, 0,		/* GRE ends */
-			  0, 0, 0, 0,		/* Ether starts */
-			  0, 0, 0, 0,
-			  0, 0, 0, 0,
-			  0x08, 0,		/* Ether ends */
-			  0x45, 0, 0, 0x14,	/* IP starts */
 			  0, 0, 0, 0,
+			  0x80, 0, 0x65, 0x58,	/* ICE_VXLAN_GRE 34 */
 			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* ICE_MAC_IL 42 */
 			  0, 0, 0, 0,
-			  0, 0, 0, 0		/* IP ends */
-			};
-
-static const u8
-dummy_udp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
-			  0, 0, 0, 0,
-			  0, 0, 0, 0,
-			  0x08, 0,		/* Ether ends */
-			  0x45, 0, 0, 0x32,	/* IP starts */
 			  0, 0, 0, 0,
-			  0, 0x11, 0, 0,
+			  0x08, 0,
+			  0x45, 0, 0, 0x14,	/* ICE_IPV4_IL 54 */
 			  0, 0, 0, 0,
-			  0, 0, 0, 0,		/* IP ends */
-			  0, 0, 0x12, 0xB5,	/* UDP start*/
-			  0, 0x1E, 0, 0,	/* UDP end*/
-			  0, 0, 0, 0,		/* VXLAN start */
-			  0, 0, 0, 0,		/* VXLAN end*/
-			  0, 0, 0, 0,		/* Ether starts */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
-			  0, 0			/* Ether ends */
+			  0, 0, 0, 0
 			};
 
+static const
+struct ice_dummy_pkt_offsets dummy_udp_tun_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_UDP_OF,		34 },
+	{ ICE_VXLAN,		42 },
+	{ ICE_MAC_IL,		50 },
+	{ ICE_IPV4_IL,		64 },
+	{ ICE_TCP_IL,		84 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
+static const
+u8 dummy_udp_tun_packet[] = {
+	0x00, 0x00, 0x00, 0x00,  /* ICE_MAC_OFOS 0 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x08, 0x00,
+
+	0x45, 0x00, 0x00, 0x5a, /* ICE_IPV4_OFOS 14 */
+	0x00, 0x01, 0x00, 0x00,
+	0x40, 0x11, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x12, 0xb5, /* ICE_UDP_OF 34 */
+	0x00, 0x46, 0x00, 0x00,
+
+	0x04, 0x00, 0x00, 0x03, /* ICE_VXLAN 42 */
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_IL 50 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x08, 0x00,
+
+	0x45, 0x00, 0x00, 0x28, /* ICE_IPV4_IL 64 */
+	0x00, 0x01, 0x00, 0x00,
+	0x40, 0x06, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 84 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x50, 0x02, 0x20, 0x00,
+	0x00, 0x00, 0x00, 0x00
+};
+
+static const
+struct ice_dummy_pkt_offsets dummy_tcp_tun_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_TCP_IL,		34 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
 static const u8
-dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
+dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
-			  0x08, 0,              /* Ether ends */
-			  0x45, 0, 0, 0x28,     /* IP starts */
+			  0x08, 0,
+			  0x45, 0, 0, 0x28,     /* ICE_IPV4_OFOS 14 */
 			  0, 0x01, 0, 0,
 			  0x40, 0x06, 0xF5, 0x69,
 			  0, 0, 0, 0,
-			  0, 0, 0, 0,   /* IP ends */
 			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* ICE_TCP_IL 34 */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
 			  0x50, 0x02, 0x20,
@@ -184,6 +233,9 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
 			u8 lkup_indx = root_bufs.content.lkup_indx[i + 1];
 
 			rg_entry->fv_idx[i] = lkup_indx;
+			rg_entry->fv_mask[i] =
+				LE16_TO_CPU(root_bufs.content.mask[i + 1]);
+
 			/* If the recipe is a chained recipe then all its
 			 * child recipe's result will have a result index.
 			 * To fill fv_words we should not use those result
@@ -4254,10 +4306,11 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
 	{ ICE_IPV6_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
 				 26, 28, 30, 32, 34, 36, 38 } },
 	{ ICE_TCP_IL,		{ 0, 2 } },
+	{ ICE_UDP_OF,		{ 0, 2 } },
 	{ ICE_UDP_ILOS,		{ 0, 2 } },
 	{ ICE_SCTP_IL,		{ 0, 2 } },
-	{ ICE_VXLAN,		{ 8, 10, 12 } },
-	{ ICE_GENEVE,		{ 8, 10, 12 } },
+	{ ICE_VXLAN,		{ 8, 10, 12, 14 } },
+	{ ICE_GENEVE,		{ 8, 10, 12, 14 } },
 	{ ICE_VXLAN_GPE,	{ 0, 2, 4 } },
 	{ ICE_NVGRE,		{ 0, 2 } },
 	{ ICE_PROTOCOL_LAST,	{ 0 } }
@@ -4270,11 +4323,14 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
  */
 static const struct ice_pref_recipe_group ice_recipe_pack[] = {
 	{3, { { ICE_MAC_OFOS_HW, 0, 0 }, { ICE_MAC_OFOS_HW, 2, 0 },
-	      { ICE_MAC_OFOS_HW, 4, 0 } } },
+	      { ICE_MAC_OFOS_HW, 4, 0 } }, { 0xffff, 0xffff, 0xffff, 0xffff } },
 	{4, { { ICE_MAC_IL_HW, 0, 0 }, { ICE_MAC_IL_HW, 2, 0 },
-	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } } },
-	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } } },
-	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } } },
+	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } },
+		{ 0xffff, 0xffff, 0xffff, 0xffff } },
+	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } },
+		{ 0xffff, 0xffff, 0xffff, 0xffff } },
+	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } },
+		{ 0xffff, 0xffff, 0xffff, 0xffff } },
 };
 
 static const struct ice_protocol_entry ice_prot_id_tbl[] = {
@@ -4285,6 +4341,7 @@ static const struct ice_protocol_entry ice_prot_id_tbl[] = {
 	{ ICE_IPV6_OFOS,	ICE_IPV6_OFOS_HW },
 	{ ICE_IPV6_IL,		ICE_IPV6_IL_HW },
 	{ ICE_TCP_IL,		ICE_TCP_IL_HW },
+	{ ICE_UDP_OF,		ICE_UDP_OF_HW },
 	{ ICE_UDP_ILOS,		ICE_UDP_ILOS_HW },
 	{ ICE_SCTP_IL,		ICE_SCTP_IL_HW },
 	{ ICE_VXLAN,		ICE_UDP_OF_HW },
@@ -4403,7 +4460,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
 	word = lkup_exts->n_val_words;
 
 	for (j = 0; j < sizeof(rule->m_u) / sizeof(u16); j++)
-		if (((u16 *)&rule->m_u)[j] == 0xffff &&
+		if (((u16 *)&rule->m_u)[j] &&
 		    rule->type < ARRAY_SIZE(ice_prot_ext)) {
 			/* No more space to accommodate */
 			if (word >= ICE_MAX_CHAIN_WORDS)
@@ -4412,6 +4469,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
 				ice_prot_ext[rule->type].offs[j];
 			lkup_exts->fv_words[word].prot_id =
 				ice_prot_id_tbl[rule->type].protocol_id;
+			lkup_exts->field_mask[word] = ((u16 *)&rule->m_u)[j];
 			word++;
 		}
 
@@ -4535,6 +4593,7 @@ ice_create_first_fit_recp_def(struct ice_hw *hw,
 				lkup_exts->fv_words[j].prot_id;
 			grp->pairs[grp->n_val_pairs].off =
 				lkup_exts->fv_words[j].off;
+			grp->mask[grp->n_val_pairs] = lkup_exts->field_mask[j];
 			grp->n_val_pairs++;
 		}
 
@@ -4569,14 +4628,22 @@ ice_fill_fv_word_index(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list,
 
 		for (i = 0; i < rg->r_group.n_val_pairs; i++) {
 			struct ice_fv_word *pr;
+			u16 mask;
 			u8 j;
 
 			pr = &rg->r_group.pairs[i];
+			mask = rg->r_group.mask[i];
+
 			for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
 				if (fv_ext[j].prot_id == pr->prot_id &&
 				    fv_ext[j].off == pr->off) {
 					/* Store index of field vector */
 					rg->fv_idx[i] = j;
+					/* Mask is given by caller as big
+					 * endian, but sent to FW as little
+					 * endian
+					 */
+					rg->fv_mask[i] = mask << 8 | mask >> 8;
 					break;
 				}
 		}
@@ -4674,7 +4741,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
 
 		for (i = 0; i < entry->r_group.n_val_pairs; i++) {
 			buf[recps].content.lkup_indx[i + 1] = entry->fv_idx[i];
-			buf[recps].content.mask[i + 1] = CPU_TO_LE16(0xFFFF);
+			buf[recps].content.mask[i + 1] =
+				CPU_TO_LE16(entry->fv_mask[i]);
 		}
 
 		if (rm->n_grp_count > 1) {
@@ -4896,6 +4964,8 @@ ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm,
 		rm->n_ext_words = lkup_exts->n_val_words;
 		ice_memcpy(&rm->ext_words, lkup_exts->fv_words,
 			   sizeof(rm->ext_words), ICE_NONDMA_TO_NONDMA);
+		ice_memcpy(rm->word_masks, lkup_exts->field_mask,
+			   sizeof(rm->word_masks), ICE_NONDMA_TO_NONDMA);
 		goto out;
 	}
 
@@ -5097,16 +5167,8 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	return status;
 }
 
-#define ICE_MAC_HDR_OFFSET	0
-#define ICE_IP_HDR_OFFSET	14
-#define ICE_GRE_HDR_OFFSET	34
-#define ICE_MAC_IL_HDR_OFFSET	42
-#define ICE_IP_IL_HDR_OFFSET	56
-#define ICE_L4_HDR_OFFSET	34
-#define ICE_UDP_TUN_HDR_OFFSET	42
-
 /**
- * ice_find_dummy_packet - find dummy packet with given match criteria
+ * ice_find_dummy_packet - find dummy packet by tunnel type
  *
  * @lkups: lookup elements or match criteria for the advanced recipe, one
  *	   structure per protocol header
@@ -5114,17 +5176,20 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
  * @tun_type: tunnel type from the match criteria
  * @pkt: dummy packet to fill according to filter match criteria
  * @pkt_len: packet length of dummy packet
+ * @offsets: pointer to receive the pointer to the offsets for the packet
  */
 static void
 ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 		      enum ice_sw_tunnel_type tun_type, const u8 **pkt,
-		      u16 *pkt_len)
+		      u16 *pkt_len,
+		      const struct ice_dummy_pkt_offsets **offsets)
 {
 	u16 i;
 
 	if (tun_type == ICE_SW_TUN_NVGRE || tun_type == ICE_ALL_TUNNELS) {
 		*pkt = dummy_gre_packet;
 		*pkt_len = sizeof(dummy_gre_packet);
+		*offsets = dummy_gre_packet_offsets;
 		return;
 	}
 
@@ -5132,6 +5197,7 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 	    tun_type == ICE_SW_TUN_VXLAN_GPE) {
 		*pkt = dummy_udp_tun_packet;
 		*pkt_len = sizeof(dummy_udp_tun_packet);
+		*offsets = dummy_udp_tun_packet_offsets;
 		return;
 	}
 
@@ -5139,12 +5205,14 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 		if (lkups[i].type == ICE_UDP_ILOS) {
 			*pkt = dummy_udp_tun_packet;
 			*pkt_len = sizeof(dummy_udp_tun_packet);
+			*offsets = dummy_udp_tun_packet_offsets;
 			return;
 		}
 	}
 
 	*pkt = dummy_tcp_tun_packet;
 	*pkt_len = sizeof(dummy_tcp_tun_packet);
+	*offsets = dummy_tcp_tun_packet_offsets;
 }
 
 /**
@@ -5153,16 +5221,16 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
  * @lkups: lookup elements or match criteria for the advanced recipe, one
  *	   structure per protocol header
  * @lkups_cnt: number of protocols
- * @tun_type: to know if the dummy packet is supposed to be tunnel packet
  * @s_rule: stores rule information from the match criteria
  * @dummy_pkt: dummy packet to fill according to filter match criteria
  * @pkt_len: packet length of dummy packet
+ * @offsets: offset info for the dummy packet
  */
-static void
+static enum ice_status
 ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
-			  enum ice_sw_tunnel_type tun_type,
 			  struct ice_aqc_sw_rules_elem *s_rule,
-			  const u8 *dummy_pkt, u16 pkt_len)
+			  const u8 *dummy_pkt, u16 pkt_len,
+			  const struct ice_dummy_pkt_offsets *offsets)
 {
 	u8 *pkt;
 	u16 i;
@@ -5175,124 +5243,74 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 	ice_memcpy(pkt, dummy_pkt, pkt_len, ICE_NONDMA_TO_NONDMA);
 
 	for (i = 0; i < lkups_cnt; i++) {
-		u32 len, pkt_off, hdr_size, field_off;
+		enum ice_protocol_type type;
+		u16 offset = 0, len = 0, j;
+		bool found = false;
+
+		/* find the start of this layer; it should be found since this
+		 * was already checked when searching for the dummy packet
+		 */
+		type = lkups[i].type;
+		for (j = 0; offsets[j].type != ICE_PROTOCOL_LAST; j++) {
+			if (type == offsets[j].type) {
+				offset = offsets[j].offset;
+				found = true;
+				break;
+			}
+		}
+		/* this should never happen in a correct calling sequence */
+		if (!found)
+			return ICE_ERR_PARAM;
 
 		switch (lkups[i].type) {
 		case ICE_MAC_OFOS:
 		case ICE_MAC_IL:
-			pkt_off = offsetof(struct ice_ether_hdr, dst_addr) +
-				((lkups[i].type == ICE_MAC_IL) ?
-				 ICE_MAC_IL_HDR_OFFSET : 0);
-			len = sizeof(lkups[i].h_u.eth_hdr.dst_addr);
-			if ((tun_type == ICE_SW_TUN_VXLAN ||
-			     tun_type == ICE_SW_TUN_GENEVE ||
-			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
-			     lkups[i].type == ICE_MAC_IL) {
-				pkt_off += sizeof(struct ice_udp_tnl_hdr);
-			}
-
-			ice_memcpy(&pkt[pkt_off],
-				   &lkups[i].h_u.eth_hdr.dst_addr, len,
-				   ICE_NONDMA_TO_NONDMA);
-			pkt_off = offsetof(struct ice_ether_hdr, src_addr) +
-				((lkups[i].type == ICE_MAC_IL) ?
-				 ICE_MAC_IL_HDR_OFFSET : 0);
-			len = sizeof(lkups[i].h_u.eth_hdr.src_addr);
-			if ((tun_type == ICE_SW_TUN_VXLAN ||
-			     tun_type == ICE_SW_TUN_GENEVE ||
-			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
-			     lkups[i].type == ICE_MAC_IL) {
-				pkt_off += sizeof(struct ice_udp_tnl_hdr);
-			}
-			ice_memcpy(&pkt[pkt_off],
-				   &lkups[i].h_u.eth_hdr.src_addr, len,
-				   ICE_NONDMA_TO_NONDMA);
-			if (lkups[i].h_u.eth_hdr.ethtype_id) {
-				pkt_off = offsetof(struct ice_ether_hdr,
-						   ethtype_id) +
-					((lkups[i].type == ICE_MAC_IL) ?
-					 ICE_MAC_IL_HDR_OFFSET : 0);
-				len = sizeof(lkups[i].h_u.eth_hdr.ethtype_id);
-				if ((tun_type == ICE_SW_TUN_VXLAN ||
-				     tun_type == ICE_SW_TUN_GENEVE ||
-				     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
-				     lkups[i].type == ICE_MAC_IL) {
-					pkt_off +=
-						sizeof(struct ice_udp_tnl_hdr);
-				}
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.eth_hdr.ethtype_id,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
+			len = sizeof(struct ice_ether_hdr);
 			break;
 		case ICE_IPV4_OFOS:
-			hdr_size = sizeof(struct ice_ipv4_hdr);
-			if (lkups[i].h_u.ipv4_hdr.dst_addr) {
-				pkt_off = ICE_IP_HDR_OFFSET +
-					   offsetof(struct ice_ipv4_hdr,
-						    dst_addr);
-				field_off = offsetof(struct ice_ipv4_hdr,
-						     dst_addr);
-				len = hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.ipv4_hdr.dst_addr,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
-			if (lkups[i].h_u.ipv4_hdr.src_addr) {
-				pkt_off = ICE_IP_HDR_OFFSET +
-					   offsetof(struct ice_ipv4_hdr,
-						    src_addr);
-				field_off = offsetof(struct ice_ipv4_hdr,
-						     src_addr);
-				len = hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.ipv4_hdr.src_addr,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
-			break;
 		case ICE_IPV4_IL:
+			len = sizeof(struct ice_ipv4_hdr);
 			break;
 		case ICE_TCP_IL:
+		case ICE_UDP_OF:
 		case ICE_UDP_ILOS:
+			len = sizeof(struct ice_l4_hdr);
+			break;
 		case ICE_SCTP_IL:
-			hdr_size = sizeof(struct ice_udp_tnl_hdr);
-			if (lkups[i].h_u.l4_hdr.dst_port) {
-				pkt_off = ICE_L4_HDR_OFFSET +
-					   offsetof(struct ice_l4_hdr,
-						    dst_port);
-				field_off = offsetof(struct ice_l4_hdr,
-						     dst_port);
-				len =  hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.l4_hdr.dst_port,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
-			if (lkups[i].h_u.l4_hdr.src_port) {
-				pkt_off = ICE_L4_HDR_OFFSET +
-					offsetof(struct ice_l4_hdr, src_port);
-				field_off = offsetof(struct ice_l4_hdr,
-						     src_port);
-				len =  hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.l4_hdr.src_port,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
+			len = sizeof(struct ice_sctp_hdr);
 			break;
 		case ICE_VXLAN:
 		case ICE_GENEVE:
 		case ICE_VXLAN_GPE:
-			pkt_off = ICE_UDP_TUN_HDR_OFFSET +
-				   offsetof(struct ice_udp_tnl_hdr, vni);
-			field_off = offsetof(struct ice_udp_tnl_hdr, vni);
-			len =  sizeof(struct ice_udp_tnl_hdr) - field_off;
-			ice_memcpy(&pkt[pkt_off], &lkups[i].h_u.tnl_hdr.vni,
-				   len, ICE_NONDMA_TO_NONDMA);
+			len = sizeof(struct ice_udp_tnl_hdr);
 			break;
 		default:
-			break;
+			return ICE_ERR_PARAM;
 		}
+
+		/* the length should be a word multiple */
+		if (len % ICE_BYTES_PER_WORD)
+			return ICE_ERR_CFG;
+
+		/* We have the offset to the header start, the length, the
+		 * caller's header values and mask. Use this information to
+		 * copy the data into the dummy packet appropriately based on
+		 * the mask. Note that we need to only write the bits as
+		 * indicated by the mask to make sure we don't improperly write
+		 * over any significant packet data.
+		 */
+		for (j = 0; j < len / sizeof(u16); j++)
+			if (((u16 *)&lkups[i].m_u)[j])
+				((u16 *)(pkt + offset))[j] =
+					(((u16 *)(pkt + offset))[j] &
+					 ~((u16 *)&lkups[i].m_u)[j]) |
+					(((u16 *)&lkups[i].h_u)[j] &
+					 ((u16 *)&lkups[i].m_u)[j]);
 	}
+
 	s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(pkt_len);
+
+	return ICE_SUCCESS;
 }
 
 /**
@@ -5446,7 +5464,7 @@ ice_adv_add_update_vsi_list(struct ice_hw *hw,
 }
 
 /**
- * ice_add_adv_rule - create an advanced switch rule
+ * ice_add_adv_rule - helper function to create an advanced switch rule
  * @hw: pointer to the hardware structure
  * @lkups: information on the words that needs to be looked up. All words
  * together makes one recipe
@@ -5470,11 +5488,13 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 {
 	struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL;
 	u16 rid = 0, i, pkt_len, rule_buf_sz, vsi_handle;
-	struct ice_aqc_sw_rules_elem *s_rule;
+	const struct ice_dummy_pkt_offsets *pkt_offsets;
+	struct ice_aqc_sw_rules_elem *s_rule = NULL;
 	struct LIST_HEAD_TYPE *rule_head;
 	struct ice_switch_info *sw;
 	enum ice_status status;
 	const u8 *pkt = NULL;
+	bool found = false;
 	u32 act = 0;
 
 	if (!lkups_cnt)
@@ -5483,13 +5503,25 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	for (i = 0; i < lkups_cnt; i++) {
 		u16 j, *ptr;
 
-		/* Validate match masks to make sure they match complete 16-bit
-		 * words.
+		/* Validate match masks to make sure that there is something
+		 * to match.
 		 */
-		ptr = (u16 *)&lkups->m_u;
+		ptr = (u16 *)&lkups[i].m_u;
 		for (j = 0; j < sizeof(lkups->m_u) / sizeof(u16); j++)
-			if (ptr[j] != 0 && ptr[j] != 0xffff)
-				return ICE_ERR_PARAM;
+			if (ptr[j] != 0) {
+				found = true;
+				break;
+			}
+	}
+	if (!found)
+		return ICE_ERR_PARAM;
+
+	/* make sure that we can locate a dummy packet */
+	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt, &pkt_len,
+			      &pkt_offsets);
+	if (!pkt) {
+		status = ICE_ERR_PARAM;
+		goto err_ice_add_adv_rule;
 	}
 
 	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
@@ -5530,8 +5562,6 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		}
 		return status;
 	}
-	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
-			      &pkt_len);
 	rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
 	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rule_buf_sz);
 	if (!s_rule)
@@ -5576,8 +5606,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid);
 	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
 
-	ice_fill_adv_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, s_rule,
-				  pkt, pkt_len);
+	ice_fill_adv_dummy_packet(lkups, lkups_cnt, s_rule, pkt, pkt_len,
+				  pkt_offsets);
 
 	status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
 				 rule_buf_sz, 1, ice_aqc_opc_add_sw_rules,
@@ -5753,11 +5783,12 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo)
 {
 	struct ice_adv_fltr_mgmt_list_entry *list_elem;
+	const struct ice_dummy_pkt_offsets *offsets;
 	struct ice_prot_lkup_ext lkup_exts;
 	u16 rule_buf_sz, pkt_len, i, rid;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
 	enum ice_status status = ICE_SUCCESS;
 	bool remove_rule = false;
-	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
 	const u8 *pkt = NULL;
 	u16 vsi_handle;
 
@@ -5805,7 +5836,7 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		struct ice_aqc_sw_rules_elem *s_rule;
 
 		ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
-				      &pkt_len);
+				      &pkt_len, &offsets);
 		rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
 		s_rule =
 			(struct ice_aqc_sw_rules_elem *)ice_malloc(hw,
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 05b1170c9..db79e41eb 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -192,6 +192,7 @@ struct ice_sw_recipe {
 	 * recipe
 	 */
 	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
+	u16 word_masks[ICE_MAX_CHAIN_WORDS];
 
 	/* if this recipe is a collection of other recipe */
 	u8 big_recp;
-- 
2.17.1



* [dpdk-dev] [PATCH 48/49] net/ice/base: allow forward to Q groups in switch rule
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (46 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional switch rules Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 49/49] net/ice/base: changes for reducing ice add adv rule time Leyi Rong
                   ` (2 subsequent siblings)
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Enable forward to Q group action in ice_add_adv_rule.
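
The queue-group action stores the group size as its log2 so hardware can derive a power-of-two queue region. A minimal sketch of that encoding follows; the shift/mask values here are placeholders, not the real ICE_SINGLE_ACT_* definitions, and encode_qgrp_act is an illustrative helper only.

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder field layout for the action word */
#define ACT_Q_INDEX_S	4
#define ACT_Q_INDEX_M	(0x7ff << ACT_Q_INDEX_S)
#define ACT_Q_REGION_S	15
#define ACT_Q_REGION_M	(0x7 << ACT_Q_REGION_S)

/* Integer log2, as ice_ilog2 would compute it for a power-of-two size */
static uint8_t ilog2_u32(uint32_t v)
{
	uint8_t l = 0;

	while (v >>= 1)
		l++;
	return l;
}

/* Pack base queue index and queue-region size (as log2) into one word */
static uint32_t encode_qgrp_act(uint16_t q_id, uint16_t qgrp_size)
{
	uint8_t q_rgn = qgrp_size > 0 ? ilog2_u32(qgrp_size) : 0;
	uint32_t act = 0;

	act |= ((uint32_t)q_id << ACT_Q_INDEX_S) & ACT_Q_INDEX_M;
	act |= ((uint32_t)q_rgn << ACT_Q_REGION_S) & ACT_Q_REGION_M;
	return act;
}
```

A group of 4 queues starting at queue 8 thus encodes the base index plus region exponent 2, which is the same shape as the ICE_FWD_TO_QGRP case added below.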

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 02fb49dba..050526f04 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -5496,6 +5496,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	const u8 *pkt = NULL;
 	bool found = false;
 	u32 act = 0;
+	u8 q_rgn;
 
 	if (!lkups_cnt)
 		return ICE_ERR_PARAM;
@@ -5526,6 +5527,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 
 	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
 	      rinfo->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	      rinfo->sw_act.fltr_act == ICE_FWD_TO_QGRP ||
 	      rinfo->sw_act.fltr_act == ICE_DROP_PACKET))
 		return ICE_ERR_CFG;
 
@@ -5578,6 +5580,15 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
 		       ICE_SINGLE_ACT_Q_INDEX_M;
 		break;
+	case ICE_FWD_TO_QGRP:
+		q_rgn = rinfo->sw_act.qgrp_size > 0 ?
+			(u8)ice_ilog2(rinfo->sw_act.qgrp_size) : 0;
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+		       ICE_SINGLE_ACT_Q_INDEX_M;
+		act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) &
+		       ICE_SINGLE_ACT_Q_REGION_M;
+		break;
 	case ICE_DROP_PACKET:
 		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
 		       ICE_SINGLE_ACT_VALID_BIT;
-- 
2.17.1



* [dpdk-dev] [PATCH 49/49] net/ice/base: changes for reducing ice add adv rule time
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (47 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 48/49] net/ice/base: allow forward to Q groups in switch rule Leyi Rong
@ 2019-06-04  5:42 ` Leyi Rong
  2019-06-04 16:56 ` [dpdk-dev] [PATCH 00/49] shared code update Maxime Coquelin
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
  50 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-04  5:42 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Shivanshu Shukla, Paul M Stillwell Jr

ice_find_recp called ice_get_recp_to_prof_map on every invocation.
ice_get_recp_to_prof_map is a very expensive operation, so the number
of calls to it should be kept to a minimum. Move it into
ice_get_recp_frm_fw, since a fresh recipe-to-profile mapping is only
needed when checking FW to see if the recipe we are trying to add
already exists there.
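
The shape of this optimization, hoisting an expensive refresh out of a repeatedly-called lookup, can be shown with a small counter-based sketch. All names here are illustrative; expensive_refresh merely stands in for ice_get_recp_to_prof_map.

```c
#include <assert.h>

static int refresh_calls;

/* Stand-in for the expensive ice_get_recp_to_prof_map() refresh */
static void expensive_refresh(void)
{
	refresh_calls++;
}

/* Before: the refresh runs on every lookup */
static void find_recp_old(void)
{
	expensive_refresh();
	/* ... search recipes ... */
}

/* After: the caller refreshes once, then performs many lookups */
static void get_recp_frm_fw(int n_lookups)
{
	int i;

	expensive_refresh();
	for (i = 0; i < n_lookups; i++)
		; /* find_recp() without the embedded refresh */
}
```

With N lookups the refresh cost drops from N invocations to one per FW read path, which is the whole point of the patch.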

Signed-off-by: Shivanshu Shukla <shivanshu.shukla@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 050526f04..1f95fb149 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -168,6 +168,8 @@ static ice_declare_bitmap(recipe_to_profile[ICE_MAX_NUM_RECIPES],
 			  ICE_MAX_NUM_PROFILES);
 static ice_declare_bitmap(available_result_ids, ICE_CHAIN_FV_INDEX_START + 1);
 
+static void ice_get_recp_to_prof_map(struct ice_hw *hw);
+
 /**
  * ice_get_recp_frm_fw - update SW bookkeeping from FW recipe entries
  * @hw: pointer to hardware structure
@@ -189,6 +191,10 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
 	struct ice_prot_lkup_ext *lkup_exts;
 	enum ice_status status;
 
+	/* Get recipe to profile map so that we can get the fv from
+	 * lkups that we read for a recipe from FW.
+	 */
+	ice_get_recp_to_prof_map(hw);
 	/* we need a buffer big enough to accommodate all the recipes */
 	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
 		ICE_MAX_NUM_RECIPES, sizeof(*tmp));
@@ -4363,7 +4369,6 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
 	struct ice_sw_recipe *recp;
 	u16 i;
 
-	ice_get_recp_to_prof_map(hw);
 	/* Initialize available_result_ids which tracks available result idx */
 	for (i = 0; i <= ICE_CHAIN_FV_INDEX_START; i++)
 		ice_set_bit(ICE_CHAIN_FV_INDEX_START - i,
-- 
2.17.1



* Re: [dpdk-dev] [PATCH 00/49] shared code update
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (48 preceding siblings ...)
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 49/49] net/ice/base: changes for reducing ice add adv rule time Leyi Rong
@ 2019-06-04 16:56 ` Maxime Coquelin
  2019-06-06  5:44   ` Rong, Leyi
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
  50 siblings, 1 reply; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-04 16:56 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev

Hi Leyi,

On 6/4/19 7:41 AM, Leyi Rong wrote:
> Main changes:
> 1. Advanced switch rule support.
> 2. Add more APIs for tunnel management.
> 3. Add some minor features.
> 4. Code clean and bug fix.

In order to ease the review process, I think it would be much better
to split this series into multiple ones, by feature. Otherwise, it
is more difficult to keep track of whether comments are taken into
account in the next revision.

Also, it is suggested to put the fixes first in the series to ease
the backporting.

Thanks,
Maxime

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 02/49] net/ice/base: update standard extr seq to include DIR flag
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 02/49] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
@ 2019-06-04 17:06   ` Maxime Coquelin
  0 siblings, 0 replies; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-04 17:06 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev, Chinh T Cao, Paul M Stillwell Jr



On 6/4/19 7:42 AM, Leyi Rong wrote:
> Once upon a time, the ice_flow_create_xtrct_seq() function in ice_flow.c
> extracted only protocol fields explicitly specified by the caller of the
> ice_flow_add_prof() function via its struct ice_flow_seg_info instances.
> However, to support different ingress and egress flow profiles with the
> same matching criteria, it would be necessary to also match on the packet
> Direction metadata. The primary reason was because there could not be more
> than one HW profile with the same CDID, PTG, and VSIG. The Direction
> metadata was not a parameter used to select HW profile IDs.
> 
> Thus, for ACL, the direction flag would need to be added to the extraction
> sequence. This information will be used later as one criterion for ACL
> scenario entry matching.
> 
> Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>   drivers/net/ice/base/ice_flow.c | 43 +++++++++++++++++++++++++++++++++
>   1 file changed, 43 insertions(+)
> 
> diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
> index be819e0e9..f1bf5b5e7 100644
> --- a/drivers/net/ice/base/ice_flow.c
> +++ b/drivers/net/ice/base/ice_flow.c
> @@ -495,6 +495,42 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
>   	return ICE_SUCCESS;
>   }
>   
> +/**
> + * ice_flow_xtract_pkt_flags - Create an extr sequence entry for packet flags
> + * @hw: pointer to the HW struct
> + * @params: information about the flow to be processed
> + * @flags: The value of pkt_flags[x:x] in RX/TX MDID metadata.
> + *
> + * This function will allocate an extraction sequence entry for a DWORD size
> + * chunk of the packet flags.
> + */
> +static enum ice_status
> +ice_flow_xtract_pkt_flags(struct ice_hw *hw,
> +			  struct ice_flow_prof_params *params,
> +			  enum ice_flex_mdid_pkt_flags flags)
> +{
> +	u8 fv_words = hw->blk[params->blk].es.fvw;
> +	u8 idx;
> +
> +	/* Make sure the number of extraction sequence entries required does not
> +	 * exceed the block's capacity.
> +	 */
> +	if (params->es_cnt >= fv_words)
> +		return ICE_ERR_MAX_LIMIT;
> +
> +	/* some blocks require a reversed field vector layout */
> +	if (hw->blk[params->blk].es.reverse)
> +		idx = fv_words - params->es_cnt - 1;
> +	else
> +		idx = params->es_cnt;
> +
> +	params->es[idx].prot_id = ICE_PROT_META_ID;
> +	params->es[idx].off = flags;
> +	params->es_cnt++;
> +
> +	return ICE_SUCCESS;
> +}
> +
>   /**
>    * ice_flow_xtract_fld - Create an extraction sequence entry for the given field
>    * @hw: pointer to the HW struct
> @@ -744,6 +780,13 @@ ice_flow_create_xtrct_seq(struct ice_hw *hw,
>   	enum ice_status status = ICE_SUCCESS;
>   	u8 i;
>   
> +	/* For ACL, we also need to extract the direction bit (Rx,Tx) data from
> +	 * packet flags
> +	 */
> +	if (params->blk == ICE_BLK_ACL)
> +		ice_flow_xtract_pkt_flags(hw, params,
> +					  ICE_RX_MDID_PKT_FLAGS_15_0);

Shouldn't you propagate the error (ICE_ERR_MAX_LIMIT)?
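The early-return pattern being suggested here can be modeled in isolation. This is a hedged sketch: the names mirror the quoted patch, but the types and helper are simplified stand-ins, not the driver's real signatures.

```c
/* Hedged sketch of the suggested error propagation; the ice_* names are
 * simplified stand-ins, not the driver's real signatures. */
enum status { SUCCESS = 0, ERR_MAX_LIMIT = 1 };

/* Stand-in for ice_flow_xtract_pkt_flags(): fails once the extraction
 * sequence is full (params->es_cnt >= fvw in the quoted patch). */
static enum status xtract_pkt_flags(int es_cnt, int fv_words)
{
	if (es_cnt >= fv_words)
		return ERR_MAX_LIMIT;
	return SUCCESS;
}

/* Caller mirroring ice_flow_create_xtrct_seq(): the helper's status is
 * checked and propagated instead of being silently dropped. */
enum status create_xtrct_seq(int is_acl, int es_cnt, int fv_words)
{
	if (is_acl) {
		enum status status = xtract_pkt_flags(es_cnt, fv_words);

		if (status)
			return status;
	}
	/* ... the rest of the extraction sequence would be built here ... */
	return SUCCESS;
}
```

With this shape, a full extraction sequence surfaces as an error at the caller rather than being silently ignored.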

> +
>   	for (i = 0; i < params->prof->segs_cnt; i++) {
>   		u64 match = params->prof->segs[i].match;
>   		u16 j;
> 

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 03/49] net/ice/base: add API to configure MIB
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 03/49] net/ice/base: add API to configure MIB Leyi Rong
@ 2019-06-04 17:14   ` Maxime Coquelin
  2019-06-05  0:00     ` Stillwell Jr, Paul M
  0 siblings, 1 reply; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-04 17:14 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev, Chinh T Cao, Dave Ertman, Paul M Stillwell Jr



On 6/4/19 7:42 AM, Leyi Rong wrote:
> Add ice_cfg_lldp_mib_change and treat DCBx state NOT_STARTED as valid.
> 
> Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
> Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>   drivers/net/ice/base/ice_dcb.c | 41 +++++++++++++++++++++++++++++-----
>   drivers/net/ice/base/ice_dcb.h |  3 ++-
>   2 files changed, 38 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
> index a7810578d..100c4bb0f 100644
> --- a/drivers/net/ice/base/ice_dcb.c
> +++ b/drivers/net/ice/base/ice_dcb.c
> @@ -927,10 +927,11 @@ enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
>   /**
>    * ice_init_dcb
>    * @hw: pointer to the HW struct
> + * @enable_mib_change: enable MIB change event
>    *
>    * Update DCB configuration from the Firmware
>    */
> -enum ice_status ice_init_dcb(struct ice_hw *hw)
> +enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
>   {
>   	struct ice_port_info *pi = hw->port_info;
>   	enum ice_status ret = ICE_SUCCESS;
> @@ -944,7 +945,8 @@ enum ice_status ice_init_dcb(struct ice_hw *hw)
>   	pi->dcbx_status = ice_get_dcbx_status(hw);
>   
>   	if (pi->dcbx_status == ICE_DCBX_STATUS_DONE ||
> -	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS) {
> +	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS ||
> +	    pi->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {

Should this really be in this patch?
It does not seem related to the API addition.

>   		/* Get current DCBX configuration */
>   		ret = ice_get_dcb_cfg(pi);
>   		pi->is_sw_lldp = (hw->adminq.sq_last_status == ICE_AQ_RC_EPERM);
> @@ -952,13 +954,42 @@ enum ice_status ice_init_dcb(struct ice_hw *hw)
>   			return ret;
>   	} else if (pi->dcbx_status == ICE_DCBX_STATUS_DIS) {
>   		return ICE_ERR_NOT_READY;
> -	} else if (pi->dcbx_status == ICE_DCBX_STATUS_MULTIPLE_PEERS) {

Ditto.



^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 05/49] net/ice/base: add funcs to create new switch recipe
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 05/49] net/ice/base: add funcs to create new switch recipe Leyi Rong
@ 2019-06-04 17:27   ` Maxime Coquelin
  0 siblings, 0 replies; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-04 17:27 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev, Grishma Kotecha, Paul M Stillwell Jr



On 6/4/19 7:42 AM, Leyi Rong wrote:
> Add functions to support following admin queue commands:
> 1. 0x0208: allocate resource to hold a switch recipe. This is needed
> when a new switch recipe needs to be created.
> 2. 0x0290: create a recipe with protocol header information and
> other details that determine how this recipe filter works.
> 3. 0x0292: get details of an existing recipe.
> 4. 0x0291: associate a switch recipe to a profile.
> 
> Signed-off-by: Grishma Kotecha <grishma.kotecha@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>   drivers/net/ice/base/ice_switch.c | 132 ++++++++++++++++++++++++++++++
>   drivers/net/ice/base/ice_switch.h |  12 +++
>   2 files changed, 144 insertions(+)
> 

FWIW:
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 03/49] net/ice/base: add API to configure MIB
  2019-06-04 17:14   ` Maxime Coquelin
@ 2019-06-05  0:00     ` Stillwell Jr, Paul M
  2019-06-05  8:03       ` Maxime Coquelin
  0 siblings, 1 reply; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-05  0:00 UTC (permalink / raw)
  To: Maxime Coquelin, Rong, Leyi, Zhang, Qi Z
  Cc: dev, Cao, Chinh T, Ertman, David M

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Tuesday, June 4, 2019 10:15 AM
> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Cao, Chinh T <chinh.t.cao@intel.com>; Ertman, David M
> <david.m.ertman@intel.com>; Stillwell Jr, Paul M
> <paul.m.stillwell.jr@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 03/49] net/ice/base: add API to configure
> MIB
> 
> 
> 
> On 6/4/19 7:42 AM, Leyi Rong wrote:
> > Add ice_cfg_lldp_mib_change and treat DCBx state NOT_STARTED as valid.
> >
> > Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
> > Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> > ---
> >   drivers/net/ice/base/ice_dcb.c | 41
> +++++++++++++++++++++++++++++-----
> >   drivers/net/ice/base/ice_dcb.h |  3 ++-
> >   2 files changed, 38 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/ice/base/ice_dcb.c
> > b/drivers/net/ice/base/ice_dcb.c index a7810578d..100c4bb0f 100644
> > --- a/drivers/net/ice/base/ice_dcb.c
> > +++ b/drivers/net/ice/base/ice_dcb.c
> > @@ -927,10 +927,11 @@ enum ice_status ice_get_dcb_cfg(struct
> ice_port_info *pi)
> >   /**
> >    * ice_init_dcb
> >    * @hw: pointer to the HW struct
> > + * @enable_mib_change: enable MIB change event
> >    *
> >    * Update DCB configuration from the Firmware
> >    */
> > -enum ice_status ice_init_dcb(struct ice_hw *hw)
> > +enum ice_status ice_init_dcb(struct ice_hw *hw, bool
> > +enable_mib_change)
> >   {
> >   	struct ice_port_info *pi = hw->port_info;
> >   	enum ice_status ret = ICE_SUCCESS;
> > @@ -944,7 +945,8 @@ enum ice_status ice_init_dcb(struct ice_hw *hw)
> >   	pi->dcbx_status = ice_get_dcbx_status(hw);
> >
> >   	if (pi->dcbx_status == ICE_DCBX_STATUS_DONE ||
> > -	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS) {
> > +	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS ||
> > +	    pi->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {
> 
> Should this really be in this patch?
> It does not seem related to the API addition.
> 

This seems OK since the commit message says that we changed the API and are treating dcbx_status in a different manner. Is the objection that we have 2 things in one commit?

> >   		/* Get current DCBX configuration */
> >   		ret = ice_get_dcb_cfg(pi);
> >   		pi->is_sw_lldp = (hw->adminq.sq_last_status ==
> ICE_AQ_RC_EPERM);
> > @@ -952,13 +954,42 @@ enum ice_status ice_init_dcb(struct ice_hw *hw)
> >   			return ret;
> >   	} else if (pi->dcbx_status == ICE_DCBX_STATUS_DIS) {
> >   		return ICE_ERR_NOT_READY;
> > -	} else if (pi->dcbx_status == ICE_DCBX_STATUS_MULTIPLE_PEERS) {
> 
> Ditto.
> 


^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 42/49] net/ice/base: change how VMDq capability is wrapped
       [not found]   ` <ca03c24866cdb2f45ed04b6b3e9b35bac06c5dcd.camel@intel.com>
@ 2019-06-05  0:02     ` Stillwell Jr, Paul M
  0 siblings, 0 replies; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-05  0:02 UTC (permalink / raw)
  To: Venkataramanan, Anirudh, Zhang, Qi Z, Rong, Leyi; +Cc: dev

There will be a newer version of this patch that removes this code.

Paul

> -----Original Message-----
> From: Venkataramanan, Anirudh
> Sent: Tuesday, June 4, 2019 12:17 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Rong, Leyi <leyi.rong@intel.com>
> Cc: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>; dev@dpdk.org
> Subject: Re: [PATCH 42/49] net/ice/base: change how VMDq capability is
> wrapped
> 
> NACK. Please see below.
> 
> On Tue, 2019-06-04 at 13:42 +0800, Rong, Leyi wrote:
> > This patch exposes the VMDq capability when at least one among
> > VMDQ_SUPPORT, OFFLOAD_MACVLAN_SUPPORT or ADQ_SUPPORT (ADQ
> uses
> > VMDQ as well) is defined.
> >
> > Signed-off-by: Anirudh Venkataramanan <
> > anirudh.venkataramanan@intel.com>
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> > ---
> >  drivers/net/ice/base/ice_adminq_cmd.h | 3 +++
> >  drivers/net/ice/base/ice_common.c     | 7 +++++++
> >  drivers/net/ice/base/ice_type.h       | 8 ++++++++
> >  3 files changed, 18 insertions(+)
> >
> > diff --git a/drivers/net/ice/base/ice_adminq_cmd.h
> > b/drivers/net/ice/base/ice_adminq_cmd.h
> > index 4e6bce18c..7642a923b 100644
> > --- a/drivers/net/ice/base/ice_adminq_cmd.h
> > +++ b/drivers/net/ice/base/ice_adminq_cmd.h
> > @@ -111,6 +111,9 @@ struct ice_aqc_list_caps_elem {
> >  	__le16 cap;
> >  #define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
> >  #define ICE_AQC_MAX_VALID_FUNCTIONS			0x8
> > +#if defined(VMDQ_SUPPORT) ||
> defined(OFFLOAD_MACVLAN_SUPPORT) ||
> > defined(ADQ_SUPPORT) || defined(FW_SUPPORT)
> > +#define ICE_AQC_CAPS_VMDQ				0x0014
> > +#endif /* VMDQ_SUPPORT || OFFLOAD_MACVLAN_SUPPORT ||
> ADQ_SUPPORT ||
> > FW_SUPPORT */
> 
> There doesn't seem to be anything in the makefile that defines any of
> these *_SUPPORT defines. Patch should be updated to not wrap these
> fields. Commit message should be updated as well.
> 
> >  #define ICE_AQC_CAPS_VSI				0x0017
> >  #define ICE_AQC_CAPS_DCB				0x0018
> >  #define ICE_AQC_CAPS_RSS				0x0040
> > diff --git a/drivers/net/ice/base/ice_common.c
> > b/drivers/net/ice/base/ice_common.c
> > index f9a5d43e6..1d54f3d71 100644
> > --- a/drivers/net/ice/base/ice_common.c
> > +++ b/drivers/net/ice/base/ice_common.c
> > @@ -1980,6 +1980,13 @@ ice_parse_caps(struct ice_hw *hw, void *buf,
> > u32 cap_count,
> >  				  "%s: valid functions = %d\n", prefix,
> >  				  caps->valid_functions);
> >  			break;
> > +#if defined(VMDQ_SUPPORT) ||
> defined(OFFLOAD_MACVLAN_SUPPORT) ||
> > defined(ADQ_SUPPORT)
> > +		case ICE_AQC_CAPS_VMDQ:
> > +			caps->vmdq = (number == 1);
> > +			ice_debug(hw, ICE_DBG_INIT,
> > +				  "%s: VMDQ = %d\n", prefix, caps-
> > >vmdq);
> > +			break;
> > +#endif /* VMDQ_SUPPORT || OFFLOAD_MACVLAN_SUPPORT ||
> ADQ_SUPPORT */
> 
> ditto
> 
> >  		case ICE_AQC_CAPS_VSI:
> >  			if (dev_p) {
> >  				dev_p->num_vsi_allocd_to_host = number;
> > diff --git a/drivers/net/ice/base/ice_type.h
> > b/drivers/net/ice/base/ice_type.h
> > index f76be2b58..f30b37985 100644
> > --- a/drivers/net/ice/base/ice_type.h
> > +++ b/drivers/net/ice/base/ice_type.h
> > @@ -207,6 +207,9 @@ enum ice_vsi_type {
> >  #ifdef ADQ_SUPPORT
> >  	ICE_VSI_CHNL = 4,
> >  #endif /* ADQ_SUPPORT */
> > +#ifdef OFFLOAD_MACVLAN_SUPPORT
> > +	ICE_VSI_OFFLOAD_MACVLAN = 5,
> > +#endif /* OFFLOAD_MACVLAN_SUPPORT */
> 
> ditto
> 
> >  	ICE_VSI_LB = 6,
> >  };
> >
> > @@ -353,6 +356,11 @@ struct ice_hw_common_caps {
> >  #define ICE_MAX_SUPPORTED_GPIO_SDP	8
> >  	u8 led[ICE_MAX_SUPPORTED_GPIO_LED];
> >  	u8 sdp[ICE_MAX_SUPPORTED_GPIO_SDP];
> > +#if defined(VMDQ_SUPPORT) ||
> defined(OFFLOAD_MACVLAN_SUPPORT) ||
> > defined(ADQ_SUPPORT)
> > +
> > +	/* VMDQ */
> > +	u8 vmdq;			/* VMDQ supported */
> > +#endif /* VMDQ_SUPPORT || OFFLOAD_MACVLAN_SUPPORT ||
> ADQ_SUPPORT */
> 
> ditto
> 
> >
> >  	/* EVB capabilities */
> >  	u8 evb_802_1_qbg;		/* Edge Virtual Bridging */

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 03/49] net/ice/base: add API to configure MIB
  2019-06-05  0:00     ` Stillwell Jr, Paul M
@ 2019-06-05  8:03       ` Maxime Coquelin
  0 siblings, 0 replies; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-05  8:03 UTC (permalink / raw)
  To: Stillwell Jr, Paul M, Rong, Leyi, Zhang, Qi Z
  Cc: dev, Cao, Chinh T, Ertman, David M




On 6/5/19 2:00 AM, Stillwell Jr, Paul M wrote:
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Tuesday, June 4, 2019 10:15 AM
>> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
>> Cc: dev@dpdk.org; Cao, Chinh T <chinh.t.cao@intel.com>; Ertman, David M
>> <david.m.ertman@intel.com>; Stillwell Jr, Paul M
>> <paul.m.stillwell.jr@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH 03/49] net/ice/base: add API to configure
>> MIB
>>
>>
>>
>> On 6/4/19 7:42 AM, Leyi Rong wrote:
>>> Add ice_cfg_lldp_mib_change and treat DCBx state NOT_STARTED as valid.
>>>
>>> Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
>>> Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
>>> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
>>> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
>>> ---
>>>    drivers/net/ice/base/ice_dcb.c | 41
>> +++++++++++++++++++++++++++++-----
>>>    drivers/net/ice/base/ice_dcb.h |  3 ++-
>>>    2 files changed, 38 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/net/ice/base/ice_dcb.c
>>> b/drivers/net/ice/base/ice_dcb.c index a7810578d..100c4bb0f 100644
>>> --- a/drivers/net/ice/base/ice_dcb.c
>>> +++ b/drivers/net/ice/base/ice_dcb.c
>>> @@ -927,10 +927,11 @@ enum ice_status ice_get_dcb_cfg(struct
>> ice_port_info *pi)
>>>    /**
>>>     * ice_init_dcb
>>>     * @hw: pointer to the HW struct
>>> + * @enable_mib_change: enable MIB change event
>>>     *
>>>     * Update DCB configuration from the Firmware
>>>     */
>>> -enum ice_status ice_init_dcb(struct ice_hw *hw)
>>> +enum ice_status ice_init_dcb(struct ice_hw *hw, bool
>>> +enable_mib_change)
>>>    {
>>>    	struct ice_port_info *pi = hw->port_info;
>>>    	enum ice_status ret = ICE_SUCCESS;
>>> @@ -944,7 +945,8 @@ enum ice_status ice_init_dcb(struct ice_hw *hw)
>>>    	pi->dcbx_status = ice_get_dcbx_status(hw);
>>>
>>>    	if (pi->dcbx_status == ICE_DCBX_STATUS_DONE ||
>>> -	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS) {
>>> +	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS ||
>>> +	    pi->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {
>>
>> Should this really be in this patch?
>> It does not seem related to the API addition.
>>
> 
> This seems ok since the commit message says that we changed the API and are treating dcbx_status in a different manor. Is the objection that we have 2 things in one commit?

Well, it depends on whether DCBx NOT_STARTED becomes valid thanks to
the ice_cfg_lldp_mib_change addition. It is not obvious by looking at
the commit message and the patch itself.

If this is not the case, then 2 commits are preferred, as one could
backport only the DCBx NOT_STARTED part.

> 
>>>    		/* Get current DCBX configuration */
>>>    		ret = ice_get_dcb_cfg(pi);
>>>    		pi->is_sw_lldp = (hw->adminq.sq_last_status ==
>> ICE_AQ_RC_EPERM);
>>> @@ -952,13 +954,42 @@ enum ice_status ice_init_dcb(struct ice_hw *hw)
>>>    			return ret;
>>>    	} else if (pi->dcbx_status == ICE_DCBX_STATUS_DIS) {
>>>    		return ICE_ERR_NOT_READY;
>>> -	} else if (pi->dcbx_status == ICE_DCBX_STATUS_MULTIPLE_PEERS) {
>>
>> Ditto.
>>
> 

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset Leyi Rong
@ 2019-06-05  8:58   ` Maxime Coquelin
  2019-06-05 15:53     ` Stillwell Jr, Paul M
  0 siblings, 1 reply; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-05  8:58 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev, Victor Raj, Paul M Stillwell Jr



On 6/4/19 7:42 AM, Leyi Rong wrote:
> Code added to replay the advanced rule on a per-VSI basis and remove the
> advanced rule information from the shared code recipe list.
> 
> Signed-off-by: Victor Raj <victor.raj@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>   drivers/net/ice/base/ice_switch.c | 81 ++++++++++++++++++++++++++-----
>   1 file changed, 69 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
> index c53021aed..ca0497ca7 100644
> --- a/drivers/net/ice/base/ice_switch.c
> +++ b/drivers/net/ice/base/ice_switch.c
> @@ -3033,6 +3033,27 @@ ice_rem_sw_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
>   	}
>   }
>   
> +/**
> + * ice_rem_adv_rule_info
> + * @hw: pointer to the hardware structure
> + * @rule_head: pointer to the switch list structure that we want to delete
> + */
> +static void
> +ice_rem_adv_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
> +{
> +	struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
> +	struct ice_adv_fltr_mgmt_list_entry *lst_itr;
> +
> +	if (LIST_EMPTY(rule_head))
> +		return;


Is it necessary? If the list is empty, LIST_FOR_EACH_ENTRY will not loop
and status will be returned:

#define LIST_FOR_EACH_ENTRY_SAFE(pos, tmp, head, type, member) \
	LIST_FOR_EACH_ENTRY(pos, head, type, member)

with:

#define LIST_FOR_EACH_ENTRY(pos, head, type, member)			       \
	for ((pos) = (head)->lh_first ?					       \
		     container_of((head)->lh_first, struct type, member) :     \
		     0;							       \
	     (pos);							       \
	     (pos) = (pos)->member.next.le_next ?			       \
		     container_of((pos)->member.next.le_next, struct type,     \
				  member) :				       \
		     0)
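Maxime's point can be checked with a toy model of the same macro shape. This is an illustrative sketch (simplified list types, not the ice osdep definitions): because the loop condition is evaluated before the first iteration, an empty list runs the body zero times, which makes the preceding LIST_EMPTY guard redundant.

```c
#include <stddef.h>

/* Toy model of the osdep list: lh_first == NULL means empty, as in the
 * macros quoted above. These are simplified types, not the ice ones. */
struct node {
	int val;
	struct node *next;
};

struct list_head {
	struct node *lh_first;
};

/* Same shape as LIST_FOR_EACH_ENTRY: the condition is tested before the
 * first iteration, so an empty list executes the body zero times. */
#define FOR_EACH_NODE(pos, head) \
	for ((pos) = (head)->lh_first; (pos); (pos) = (pos)->next)

/* Count how many times the loop body runs for a given list. */
int count_iterations(struct list_head *head)
{
	struct node *pos;
	int n = 0;

	FOR_EACH_NODE(pos, head)
		n++;
	return n;
}
```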

> +	LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry, rule_head,

> +				 ice_adv_fltr_mgmt_list_entry, list_entry) {
> +		LIST_DEL(&lst_itr->list_entry);
> +		ice_free(hw, lst_itr->lkups);
> +		ice_free(hw, lst_itr);
> +	}
> +}
>   
>   /**
>    * ice_rem_all_sw_rules_info
> @@ -3049,6 +3070,8 @@ void ice_rem_all_sw_rules_info(struct ice_hw *hw)
>   		rule_head = &sw->recp_list[i].filt_rules;
>   		if (!sw->recp_list[i].adv_rule)
>   			ice_rem_sw_rule_info(hw, rule_head);
> +		else
> +			ice_rem_adv_rule_info(hw, rule_head);
>   	}
>   }
>   
> @@ -5687,6 +5710,38 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
>   	return status;
>   }
>   
> +/**
> + * ice_replay_vsi_adv_rule - Replay advanced rule for requested VSI
> + * @hw: pointer to the hardware structure
> + * @vsi_handle: driver VSI handle
> + * @list_head: list for which filters need to be replayed
> + *
> + * Replay the advanced rule for the given VSI.
> + */
> +static enum ice_status
> +ice_replay_vsi_adv_rule(struct ice_hw *hw, u16 vsi_handle,
> +			struct LIST_HEAD_TYPE *list_head)
> +{
> +	struct ice_rule_query_data added_entry = { 0 };
> +	struct ice_adv_fltr_mgmt_list_entry *adv_fltr;
> +	enum ice_status status = ICE_SUCCESS;
> +
> +	if (LIST_EMPTY(list_head))
> +		return status;

Ditto, it should be removed.

> +	LIST_FOR_EACH_ENTRY(adv_fltr, list_head, ice_adv_fltr_mgmt_list_entry,
> +			    list_entry) {
> +		struct ice_adv_rule_info *rinfo = &adv_fltr->rule_info;
> +		u16 lk_cnt = adv_fltr->lkups_cnt;
> +
> +		if (vsi_handle != rinfo->sw_act.vsi_handle)
> +			continue;
> +		status = ice_add_adv_rule(hw, adv_fltr->lkups, lk_cnt, rinfo,
> +					  &added_entry);
> +		if (status)
> +			break;
> +	}
> +	return status;
> +}
>   
>   /**
>    * ice_replay_vsi_all_fltr - replay all filters stored in bookkeeping lists
> @@ -5698,23 +5753,23 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
>   enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle)
>   {
>   	struct ice_switch_info *sw = hw->switch_info;
> -	enum ice_status status = ICE_SUCCESS;
> +	enum ice_status status;
>   	u8 i;
>   
> +	/* Update the recipes that were created */
>   	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
> -		/* Update the default recipe lines and ones that were created */
> -		if (i < ICE_MAX_NUM_RECIPES || sw->recp_list[i].recp_created) {
> -			struct LIST_HEAD_TYPE *head;
> +		struct LIST_HEAD_TYPE *head;
>   
> -			head = &sw->recp_list[i].filt_replay_rules;
> -			if (!sw->recp_list[i].adv_rule)
> -				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
> -							     head);
> -			if (status != ICE_SUCCESS)
> -				return status;
> -		}
> +		head = &sw->recp_list[i].filt_replay_rules;
> +		if (!sw->recp_list[i].adv_rule)
> +			status = ice_replay_vsi_fltr(hw, vsi_handle, i, head);
> +		else
> +			status = ice_replay_vsi_adv_rule(hw, vsi_handle, head);
> +		if (status != ICE_SUCCESS)
> +			return status;
>   	}
> -	return status;
> +
> +	return ICE_SUCCESS;
>   }
>   
>   /**
> @@ -5738,6 +5793,8 @@ void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw)
>   			l_head = &sw->recp_list[i].filt_replay_rules;
>   			if (!sw->recp_list[i].adv_rule)
>   				ice_rem_sw_rule_info(hw, l_head);
> +			else
> +				ice_rem_adv_rule_info(hw, l_head);
>   		}
>   	}
>   }
> 

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 08/49] net/ice/base: code for removing advanced rule
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 08/49] net/ice/base: code for removing advanced rule Leyi Rong
@ 2019-06-05  9:07   ` Maxime Coquelin
  0 siblings, 0 replies; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-05  9:07 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev, Shivanshu Shukla, Paul M Stillwell Jr



On 6/4/19 7:42 AM, Leyi Rong wrote:
> This patch also contains the ice_remove_adv_rule function to remove existing
> advanced rules. It also handles the case where multiple VSIs use the
> same rule, using the following helper function:
> 
> ice_adv_rem_update_vsi_list - function to remove a VSI from the VSI list for
> advanced rules.
> 
> Signed-off-by: Shivanshu Shukla <shivanshu.shukla@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>   drivers/net/ice/base/ice_switch.c | 309 +++++++++++++++++++++++++++++-
>   drivers/net/ice/base/ice_switch.h |   9 +
>   2 files changed, 310 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
> index ca0497ca7..3719ac4bb 100644
> --- a/drivers/net/ice/base/ice_switch.c
> +++ b/drivers/net/ice/base/ice_switch.c

...

> +/**
> + * ice_rem_adv_rule_for_vsi - removes existing advanced switch rules for a
> + *                       given VSI handle
> + * @hw: pointer to the hardware structure
> + * @vsi_handle: VSI handle for which we are supposed to remove all the rules.
> + *
> + * This function is used to remove all the rules for a given VSI and as soon
> + * as removing a rule fails, it will return immediately with the error code,
> + * else it will return ICE_SUCCESS
> + */
> +enum ice_status
> +ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle)
> +{
> +	struct ice_adv_fltr_mgmt_list_entry *list_itr;
> +	struct ice_vsi_list_map_info *map_info;
> +	struct LIST_HEAD_TYPE *list_head;
> +	struct ice_adv_rule_info rinfo;
> +	struct ice_switch_info *sw;
> +	enum ice_status status;
> +	u16 vsi_list_id = 0;
> +	u8 rid;
> +
> +	sw = hw->switch_info;
> +	for (rid = 0; rid < ICE_MAX_NUM_RECIPES; rid++) {
> +		if (!sw->recp_list[rid].recp_created)
> +			continue;
> +		if (!sw->recp_list[rid].adv_rule)
> +			continue;
> +		list_head = &sw->recp_list[rid].filt_rules;
> +		map_info = NULL;

Useless assignment

> +		LIST_FOR_EACH_ENTRY(list_itr, list_head,
> +				    ice_adv_fltr_mgmt_list_entry, list_entry) {
> +			map_info = ice_find_vsi_list_entry(hw, rid, vsi_handle,
> +							   &vsi_list_id);
> +			if (!map_info)
> +				continue;
> +			rinfo = list_itr->rule_info;
> +			rinfo.sw_act.vsi_handle = vsi_handle;
> +			status = ice_rem_adv_rule(hw, list_itr->lkups,
> +						  list_itr->lkups_cnt, &rinfo);
> +			if (status)
> +				return status;
> +			map_info = NULL;

Useless assignment

> +		}
> +	}
> +	return ICE_SUCCESS;
> +}
> +
>   /**
>    * ice_replay_fltr - Replay all the filters stored by a specific list head
>    * @hw: pointer to the hardware structure


^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 14/49] net/ice/base: refactor HW table init function
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 14/49] net/ice/base: refactor HW table init function Leyi Rong
@ 2019-06-05 10:35   ` Maxime Coquelin
  2019-06-05 18:10     ` Stillwell Jr, Paul M
  0 siblings, 1 reply; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-05 10:35 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev, Vignesh Sridhar, Paul M Stillwell Jr



On 6/4/19 7:42 AM, Leyi Rong wrote:
> 1. Separated the calls to initialize and allocate the HW XLT tables
> from the call to fill the table. This is to allow the ice_init_hw_tbls call
> to be made prior to package download so that all HW structures are
> correctly initialized. This will avoid any invalid memory references
> if package download fails on unloading the driver.
> 2. Fill HW tables with package content after successful package download.
> 3. Free HW table and flow profile allocations when unloading driver.
> 4. Add flag in block structure to check if lists in block are
> initialized. This is to avoid any NULL reference in releasing flow
> profiles that may have been freed in previous calls to free tables.
> 
> Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>   drivers/net/ice/base/ice_common.c    |   6 +-
>   drivers/net/ice/base/ice_flex_pipe.c | 284 ++++++++++++++-------------
>   drivers/net/ice/base/ice_flex_pipe.h |   1 +
>   drivers/net/ice/base/ice_flex_type.h |   1 +
>   4 files changed, 151 insertions(+), 141 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
> index a0ab25aef..62c7fad0d 100644
> --- a/drivers/net/ice/base/ice_common.c
> +++ b/drivers/net/ice/base/ice_common.c
> @@ -916,12 +916,13 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
>   
>   	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
>   	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
> -
>   	/* Obtain counter base index which would be used by flow director */
>   	status = ice_alloc_fd_res_cntr(hw, &hw->fd_ctr_base);
>   	if (status)
>   		goto err_unroll_fltr_mgmt_struct;
> -
> +	status = ice_init_hw_tbls(hw);
> +	if (status)
> +		goto err_unroll_fltr_mgmt_struct;
>   	return ICE_SUCCESS;
>   
>   err_unroll_fltr_mgmt_struct:
> @@ -952,6 +953,7 @@ void ice_deinit_hw(struct ice_hw *hw)
>   	ice_sched_cleanup_all(hw);
>   	ice_sched_clear_agg(hw);
>   	ice_free_seg(hw);
> +	ice_free_hw_tbls(hw);
>   
>   	if (hw->port_info) {
>   		ice_free(hw, hw->port_info);
> diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
> index 8f0b513f4..93e056853 100644
> --- a/drivers/net/ice/base/ice_flex_pipe.c
> +++ b/drivers/net/ice/base/ice_flex_pipe.c
> @@ -1375,10 +1375,12 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
>   
>   	if (!status) {
>   		hw->seg = seg;
> -		/* on successful package download, update other required
> -		 * registers to support the package
> +		/* on successful package download update other required
> +		 * registers to support the package and fill HW tables
> +		 * with package content.
>   		 */
>   		ice_init_pkg_regs(hw);
> +		ice_fill_blk_tbls(hw);
>   	} else {
>   		ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n",
>   			  status);
> @@ -2755,6 +2757,65 @@ static const u32 ice_blk_sids[ICE_BLK_COUNT][ICE_SID_OFF_COUNT] = {
>   	}
>   };
>   
> +/**
> + * ice_init_sw_xlt1_db - init software XLT1 database from HW tables
> + * @hw: pointer to the hardware structure
> + * @blk: the HW block to initialize
> + */
> +static
> +void ice_init_sw_xlt1_db(struct ice_hw *hw, enum ice_block blk)
> +{
> +	u16 pt;
> +
> +	for (pt = 0; pt < hw->blk[blk].xlt1.count; pt++) {
> +		u8 ptg;
> +
> +		ptg = hw->blk[blk].xlt1.t[pt];
> +		if (ptg != ICE_DEFAULT_PTG) {
> +			ice_ptg_alloc_val(hw, blk, ptg);
> +			ice_ptg_add_mv_ptype(hw, blk, pt, ptg);

ice_ptg_add_mv_ptype() can fail; the error should be propagated.

> +		}
> +	}
> +}
> +
> +/**
> + * ice_init_sw_xlt2_db - init software XLT2 database from HW tables
> + * @hw: pointer to the hardware structure
> + * @blk: the HW block to initialize
> + */
> +static void ice_init_sw_xlt2_db(struct ice_hw *hw, enum ice_block blk)
> +{
> +	u16 vsi;
> +
> +	for (vsi = 0; vsi < hw->blk[blk].xlt2.count; vsi++) {
> +		u16 vsig;
> +
> +		vsig = hw->blk[blk].xlt2.t[vsi];
> +		if (vsig) {
> +			ice_vsig_alloc_val(hw, blk, vsig);
> +			ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);

Ditto

> +			/* no changes at this time, since this has been
> +			 * initialized from the original package
> +			 */
> +			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
> +		}
> +	}
> +}
> +
> +/**
> + * ice_init_sw_db - init software database from HW tables
> + * @hw: pointer to the hardware structure
> + */
> +static void ice_init_sw_db(struct ice_hw *hw)
> +{
> +	u16 i;
> +
> +	for (i = 0; i < ICE_BLK_COUNT; i++) {
> +		ice_init_sw_xlt1_db(hw, (enum ice_block)i);
> +		ice_init_sw_xlt2_db(hw, (enum ice_block)i);
> +	}

And so this function should also propagate the error.

> +}
> +
>   /**
>    * ice_fill_tbl - Reads content of a single table type into database
>    * @hw: pointer to the hardware structure
> @@ -2853,12 +2914,12 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
>   		case ICE_SID_FLD_VEC_PE:
>   			es = (struct ice_sw_fv_section *)sect;
>   			src = (u8 *)es->fv;
> -			sect_len = LE16_TO_CPU(es->count) *
> -				hw->blk[block_id].es.fvw *
> +			sect_len = (u32)(LE16_TO_CPU(es->count) *
> +					 hw->blk[block_id].es.fvw) *
>   				sizeof(*hw->blk[block_id].es.t);
>   			dst = (u8 *)hw->blk[block_id].es.t;
> -			dst_len = hw->blk[block_id].es.count *
> -				hw->blk[block_id].es.fvw *
> +			dst_len = (u32)(hw->blk[block_id].es.count *
> +					hw->blk[block_id].es.fvw) *
>   				sizeof(*hw->blk[block_id].es.t);
>   			break;
>   		default:
> @@ -2886,75 +2947,61 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
>   }
>   
>   /**
> - * ice_fill_blk_tbls - Read package content for tables of a block
> + * ice_fill_blk_tbls - Read package context for tables
>    * @hw: pointer to the hardware structure
> - * @block_id: The block ID which contains the tables to be copied
>    *
>    * Reads the current package contents and populates the driver
> - * database with the data it contains to allow for advanced driver
> - * features.
> - */
> -static void ice_fill_blk_tbls(struct ice_hw *hw, enum ice_block block_id)
> -{
> -	ice_fill_tbl(hw, block_id, hw->blk[block_id].xlt1.sid);
> -	ice_fill_tbl(hw, block_id, hw->blk[block_id].xlt2.sid);
> -	ice_fill_tbl(hw, block_id, hw->blk[block_id].prof.sid);
> -	ice_fill_tbl(hw, block_id, hw->blk[block_id].prof_redir.sid);
> -	ice_fill_tbl(hw, block_id, hw->blk[block_id].es.sid);
> -}
> -
> -/**
> - * ice_free_flow_profs - free flow profile entries
> - * @hw: pointer to the hardware structure
> + * database with the data iteratively for all advanced feature
> + * blocks. Assume that the Hw tables have been allocated.
>    */
> -static void ice_free_flow_profs(struct ice_hw *hw)
> +void ice_fill_blk_tbls(struct ice_hw *hw)
>   {
>   	u8 i;
>   
>   	for (i = 0; i < ICE_BLK_COUNT; i++) {
> -		struct ice_flow_prof *p, *tmp;
> -
> -		if (!&hw->fl_profs[i])
> -			continue;
> -
> -		/* This call is being made as part of resource deallocation
> -		 * during unload. Lock acquire and release will not be
> -		 * necessary here.
> -		 */
> -		LIST_FOR_EACH_ENTRY_SAFE(p, tmp, &hw->fl_profs[i],
> -					 ice_flow_prof, l_entry) {
> -			struct ice_flow_entry *e, *t;
> -
> -			LIST_FOR_EACH_ENTRY_SAFE(e, t, &p->entries,
> -						 ice_flow_entry, l_entry)
> -				ice_flow_rem_entry(hw, ICE_FLOW_ENTRY_HNDL(e));
> -
> -			LIST_DEL(&p->l_entry);
> -			if (p->acts)
> -				ice_free(hw, p->acts);
> -			ice_free(hw, p);
> -		}
> +		enum ice_block blk_id = (enum ice_block)i;
>   
> -		ice_destroy_lock(&hw->fl_profs_locks[i]);
> +		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].xlt1.sid);
> +		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].xlt2.sid);
> +		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].prof.sid);
> +		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].prof_redir.sid);
> +		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].es.sid);

>   	}
> +
> +	ice_init_sw_db(hw);

Propagate the error once the above is fixed.

>   }
>   


^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 17/49] net/ice/base: use macro instead of magic 8
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 17/49] net/ice/base: use macro instead of magic 8 Leyi Rong
@ 2019-06-05 10:39   ` Maxime Coquelin
  0 siblings, 0 replies; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-05 10:39 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev, Bruce Allan, Paul M Stillwell Jr



On 6/4/19 7:42 AM, Leyi Rong wrote:
> Replace the use of the magic number 8 by BITS_PER_BYTE when calculating
> the number of bits from the number of bytes.
> 
> Signed-off-by: Bruce Allan<bruce.w.allan@intel.com>
> Signed-off-by: Paul M Stillwell Jr<paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong<leyi.rong@intel.com>
> ---
>   drivers/net/ice/base/ice_flex_pipe.c |  4 +-
>   drivers/net/ice/base/ice_flow.c      | 74 +++++++++++++++-------------
>   2 files changed, 43 insertions(+), 35 deletions(-)

Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>


* Re: [dpdk-dev] [PATCH 39/49] net/ice/base: slightly code update
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 39/49] net/ice/base: slightly code update Leyi Rong
@ 2019-06-05 12:04   ` Maxime Coquelin
  2019-06-06  6:46     ` Rong, Leyi
  0 siblings, 1 reply; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-05 12:04 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev, Paul M Stillwell Jr



On 6/4/19 7:42 AM, Leyi Rong wrote:
> Mainly update below functions:
> 
> ice_flow_proc_seg_hdrs
> ice_flow_find_prof_conds
> ice_dealloc_flow_entry
> ice_add_rule_internal


It seems that some of the changes are bug fixes.
So IMO, those changes should be in dedicated patches, with a Fixes tag
in the commit message.

Overall, these changes should be split by kind of change. There are
function reworks, minor cleanups, robustness changes, ...
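For reference, the DPDK contribution guidelines expect bug-fix commits to carry tags of the following form in the commit message (placeholders shown; the actual commit id and subject line are those of the commit that introduced the bug):

```
net/ice/base: fix <short description of the bug>

<explanation of the bug and of the fix>

Fixes: <first 12 chars of the offending commit id> ("<its subject line>")
Cc: stable@dpdk.org

Signed-off-by: <name> <email>
```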

> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>   drivers/net/ice/base/ice_flex_pipe.c     | 13 +++----
>   drivers/net/ice/base/ice_flow.c          | 47 +++++++++++++++++-------
>   drivers/net/ice/base/ice_nvm.c           |  4 +-
>   drivers/net/ice/base/ice_protocol_type.h |  1 +
>   drivers/net/ice/base/ice_switch.c        | 24 +++++++-----
>   drivers/net/ice/base/ice_switch.h        | 14 +++----
>   drivers/net/ice/base/ice_type.h          | 13 ++++++-
>   7 files changed, 73 insertions(+), 43 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
> index 5864cbf3e..2a310b6e1 100644
> --- a/drivers/net/ice/base/ice_flex_pipe.c
> +++ b/drivers/net/ice/base/ice_flex_pipe.c
> @@ -134,7 +134,7 @@ static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg)
>   	nvms = (struct ice_nvm_table *)(ice_seg->device_table +
>   		LE32_TO_CPU(ice_seg->device_table_count));
>   
> -	return (struct ice_buf_table *)
> +	return (_FORCE_ struct ice_buf_table *)
>   		(nvms->vers + LE32_TO_CPU(nvms->table_count));
>   }
>   
> @@ -1005,9 +1005,8 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
>   
>   		bh = (struct ice_buf_hdr *)(bufs + i);
>   
> -		status = ice_aq_download_pkg(hw, bh, LE16_TO_CPU(bh->data_end),
> -					     last, &offset, &info, NULL);
> -
> +		status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last,
> +					     &offset, &info, NULL);
>   		if (status) {
>   			ice_debug(hw, ICE_DBG_PKG,
>   				  "Pkg download failed: err %d off %d inf %d\n",
> @@ -2937,7 +2936,7 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
>   		case ICE_SID_XLT2_ACL:
>   		case ICE_SID_XLT2_PE:
>   			xlt2 = (struct ice_xlt2_section *)sect;
> -			src = (u8 *)xlt2->value;
> +			src = (_FORCE_ u8 *)xlt2->value;
>   			sect_len = LE16_TO_CPU(xlt2->count) *
>   				sizeof(*hw->blk[block_id].xlt2.t);
>   			dst = (u8 *)hw->blk[block_id].xlt2.t;
> @@ -3889,7 +3888,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
>   
>   	/* fill in the swap array */
>   	si = hw->blk[ICE_BLK_FD].es.fvw - 1;
> -	do {
> +	while (si >= 0) {
>   		u8 indexes_used = 1;
>   
>   		/* assume flat at this index */
> @@ -3921,7 +3920,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
>   		}
>   
>   		si -= indexes_used;
> -	} while (si >= 0);
> +	}
>   
>   	/* for each set of 4 swap indexes, write the appropriate register */
>   	for (j = 0; j < hw->blk[ICE_BLK_FD].es.fvw / 4; j++) {
> diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
> index 795abe98f..f31557eac 100644
> --- a/drivers/net/ice/base/ice_flow.c
> +++ b/drivers/net/ice/base/ice_flow.c
> @@ -415,9 +415,6 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
>   		const ice_bitmap_t *src;
>   		u32 hdrs;
>   
> -		if (i > 0 && (i + 1) < prof->segs_cnt)
> -			continue;
> -
>   		hdrs = prof->segs[i].hdrs;
>   
>   		if (hdrs & ICE_FLOW_SEG_HDR_ETH) {
> @@ -847,6 +844,7 @@ ice_flow_proc_segs(struct ice_hw *hw, struct ice_flow_prof_params *params)
>   
>   #define ICE_FLOW_FIND_PROF_CHK_FLDS	0x00000001
>   #define ICE_FLOW_FIND_PROF_CHK_VSI	0x00000002
> +#define ICE_FLOW_FIND_PROF_NOT_CHK_DIR	0x00000004
>   
>   /**
>    * ice_flow_find_prof_conds - Find a profile matching headers and conditions
> @@ -866,7 +864,8 @@ ice_flow_find_prof_conds(struct ice_hw *hw, enum ice_block blk,
>   	struct ice_flow_prof *p;
>   
>   	LIST_FOR_EACH_ENTRY(p, &hw->fl_profs[blk], ice_flow_prof, l_entry) {
> -		if (p->dir == dir && segs_cnt && segs_cnt == p->segs_cnt) {
> +		if ((p->dir == dir || conds & ICE_FLOW_FIND_PROF_NOT_CHK_DIR) &&
> +		    segs_cnt && segs_cnt == p->segs_cnt) {
>   			u8 i;
>   
>   			/* Check for profile-VSI association if specified */
> @@ -935,17 +934,15 @@ ice_flow_find_prof_id(struct ice_hw *hw, enum ice_block blk, u64 prof_id)
>   }
>   
>   /**
> - * ice_flow_rem_entry_sync - Remove a flow entry
> + * ice_dealloc_flow_entry - Deallocate flow entry memory
>    * @hw: pointer to the HW struct
>    * @entry: flow entry to be removed
>    */
> -static enum ice_status
> -ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
> +static void
> +ice_dealloc_flow_entry(struct ice_hw *hw, struct ice_flow_entry *entry)
>   {
>   	if (!entry)
> -		return ICE_ERR_BAD_PTR;
> -
> -	LIST_DEL(&entry->l_entry);
> +		return;
>   
>   	if (entry->entry)
>   		ice_free(hw, entry->entry);
> @@ -957,6 +954,22 @@ ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
>   	}
>   
>   	ice_free(hw, entry);
> +}
> +
> +/**
> + * ice_flow_rem_entry_sync - Remove a flow entry
> + * @hw: pointer to the HW struct
> + * @entry: flow entry to be removed
> + */
> +static enum ice_status
> +ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
> +{
> +	if (!entry)
> +		return ICE_ERR_BAD_PTR;
> +
> +	LIST_DEL(&entry->l_entry);
> +
> +	ice_dealloc_flow_entry(hw, entry);
>   
>   	return ICE_SUCCESS;
>   }
> @@ -1395,9 +1408,12 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
>   		goto out;
>   	}
>   
> -	ice_acquire_lock(&prof->entries_lock);
> -	LIST_ADD(&e->l_entry, &prof->entries);
> -	ice_release_lock(&prof->entries_lock);
> +	if (blk != ICE_BLK_ACL) {
> +		/* ACL will handle the entry management */
> +		ice_acquire_lock(&prof->entries_lock);
> +		LIST_ADD(&e->l_entry, &prof->entries);
> +		ice_release_lock(&prof->entries_lock);
> +	}
>   
>   	*entry_h = ICE_FLOW_ENTRY_HNDL(e);
>   
> @@ -1425,7 +1441,7 @@ enum ice_status ice_flow_rem_entry(struct ice_hw *hw, u64 entry_h)
>   	if (entry_h == ICE_FLOW_ENTRY_HANDLE_INVAL)
>   		return ICE_ERR_PARAM;
>   
> -	entry = ICE_FLOW_ENTRY_PTR((unsigned long)entry_h);
> +	entry = ICE_FLOW_ENTRY_PTR(entry_h);
>   
>   	/* Retain the pointer to the flow profile as the entry will be freed */
>   	prof = entry->prof;
> @@ -1676,6 +1692,9 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
>   	if (!ice_is_vsi_valid(hw, vsi_handle))
>   		return ICE_ERR_PARAM;
>   
> +	if (LIST_EMPTY(&hw->fl_profs[blk]))
> +		return ICE_SUCCESS;
> +

This check should be unnecessary, as LIST_FOR_EACH_ENTRY_SAFE handles
the empty-list case properly IIUC.

>   	ice_acquire_lock(&hw->fl_profs_locks[blk]);
>   	LIST_FOR_EACH_ENTRY_SAFE(p, t, &hw->fl_profs[blk], ice_flow_prof,
>   				 l_entry) {
> diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
> index fa9c348ce..76cfedb29 100644
> --- a/drivers/net/ice/base/ice_nvm.c
> +++ b/drivers/net/ice/base/ice_nvm.c
> @@ -127,7 +127,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
>   
>   	status = ice_read_sr_aq(hw, offset, 1, data, true);
>   	if (!status)
> -		*data = LE16_TO_CPU(*(__le16 *)data);
> +		*data = LE16_TO_CPU(*(_FORCE_ __le16 *)data);
>   
>   	return status;
>   }
> @@ -185,7 +185,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
>   	} while (words_read < *words);
>   
>   	for (i = 0; i < *words; i++)
> -		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
> +		data[i] = LE16_TO_CPU(((_FORCE_ __le16 *)data)[i]);
>   
>   read_nvm_buf_aq_exit:
>   	*words = words_read;
> diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
> index e572dd320..82822fb74 100644
> --- a/drivers/net/ice/base/ice_protocol_type.h
> +++ b/drivers/net/ice/base/ice_protocol_type.h
> @@ -189,6 +189,7 @@ struct ice_udp_tnl_hdr {
>   	u16 field;
>   	u16 proto_type;
>   	u16 vni;
> +	u16 reserved;
>   };
>   
>   struct ice_nvgre {
> diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
> index faaedd4c8..373acb7a6 100644
> --- a/drivers/net/ice/base/ice_switch.c
> +++ b/drivers/net/ice/base/ice_switch.c
> @@ -279,6 +279,7 @@ enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
>   		recps[i].root_rid = i;
>   		INIT_LIST_HEAD(&recps[i].filt_rules);
>   		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
> +		INIT_LIST_HEAD(&recps[i].rg_list);

That looks like a bug fix, doesn't it?

>   		ice_init_lock(&recps[i].filt_rule_lock);
>   	}
>   
> @@ -859,7 +860,7 @@ ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi,
>   			return ICE_ERR_PARAM;
>   
>   		buf_size = count * sizeof(__le16);
> -		mr_list = (__le16 *)ice_malloc(hw, buf_size);
> +		mr_list = (_FORCE_ __le16 *)ice_malloc(hw, buf_size);
>   		if (!mr_list)
>   			return ICE_ERR_NO_MEMORY;
>   		break;
> @@ -1459,7 +1460,6 @@ static int ice_ilog2(u64 n)
>   	return -1;
>   }
>   
> -
>   /**
>    * ice_fill_sw_rule - Helper function to fill switch rule structure
>    * @hw: pointer to the hardware structure
> @@ -1479,7 +1479,6 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
>   	__be16 *off;
>   	u8 q_rgn;
>   
> -
>   	if (opc == ice_aqc_opc_remove_sw_rules) {
>   		s_rule->pdata.lkup_tx_rx.act = 0;
>   		s_rule->pdata.lkup_tx_rx.index =
> @@ -1555,7 +1554,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
>   		daddr = f_info->l_data.ethertype_mac.mac_addr;
>   		/* fall-through */
>   	case ICE_SW_LKUP_ETHERTYPE:
> -		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
> +		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
>   		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
>   		break;
>   	case ICE_SW_LKUP_MAC_VLAN:
> @@ -1586,7 +1585,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
>   			   ICE_NONDMA_TO_NONDMA);
>   
>   	if (!(vlan_id > ICE_MAX_VLAN_ID)) {
> -		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
> +		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
>   		*off = CPU_TO_BE16(vlan_id);
>   	}
>   
> @@ -2289,14 +2288,15 @@ ice_add_rule_internal(struct ice_hw *hw, u8 recp_id,
>   
>   	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
>   	if (!m_entry) {
> -		ice_release_lock(rule_lock);
> -		return ice_create_pkt_fwd_rule(hw, f_entry);
> +		status = ice_create_pkt_fwd_rule(hw, f_entry);
> +		goto exit_add_rule_internal;
>   	}
>   
>   	cur_fltr = &m_entry->fltr_info;
>   	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
> -	ice_release_lock(rule_lock);
>   
> +exit_add_rule_internal:
> +	ice_release_lock(rule_lock);
>   	return status;
>   }
>   
> @@ -2975,12 +2975,19 @@ ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
>    * ice_add_eth_mac - Add ethertype and MAC based filter rule
>    * @hw: pointer to the hardware structure
>    * @em_list: list of ether type MAC filter, MAC is optional
> + *
> + * This function requires the caller to populate the entries in
> + * the filter list with the necessary fields (including flags to
> + * indicate Tx or Rx rules).
>    */
>   enum ice_status
>   ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
>   {
>   	struct ice_fltr_list_entry *em_list_itr;
>   
> +	if (!em_list || !hw)
> +		return ICE_ERR_PARAM;
> +
>   	LIST_FOR_EACH_ENTRY(em_list_itr, em_list, ice_fltr_list_entry,
>   			    list_entry) {
>   		enum ice_sw_lkup_type l_type =
> @@ -2990,7 +2997,6 @@ ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
>   		    l_type != ICE_SW_LKUP_ETHERTYPE)
>   			return ICE_ERR_PARAM;
>   
> -		em_list_itr->fltr_info.flag = ICE_FLTR_TX;
>   		em_list_itr->status = ice_add_rule_internal(hw, l_type,
>   							    em_list_itr);
>   		if (em_list_itr->status)
> diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
> index 2f140a86d..05b1170c9 100644
> --- a/drivers/net/ice/base/ice_switch.h
> +++ b/drivers/net/ice/base/ice_switch.h
> @@ -11,6 +11,9 @@
>   #define ICE_SW_CFG_MAX_BUF_LEN 2048
>   #define ICE_MAX_SW 256
>   #define ICE_DFLT_VSI_INVAL 0xff
> +#define ICE_FLTR_RX BIT(0)
> +#define ICE_FLTR_TX BIT(1)
> +#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
>   
>   
>   /* Worst case buffer length for ice_aqc_opc_get_res_alloc */
> @@ -77,9 +80,6 @@ struct ice_fltr_info {
>   	/* rule ID returned by firmware once filter rule is created */
>   	u16 fltr_rule_id;
>   	u16 flag;
> -#define ICE_FLTR_RX		BIT(0)
> -#define ICE_FLTR_TX		BIT(1)
> -#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
>   
>   	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
>   	u16 src;
> @@ -145,10 +145,6 @@ struct ice_sw_act_ctrl {
>   	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
>   	u16 src;
>   	u16 flag;
> -#define ICE_FLTR_RX             BIT(0)
> -#define ICE_FLTR_TX             BIT(1)
> -#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
> -
>   	enum ice_sw_fwd_act_type fltr_act;
>   	/* Depending on filter action */
>   	union {
> @@ -368,6 +364,8 @@ ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries,
>   		     struct ice_sq_cd *cd);
>   enum ice_status
>   ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
> +enum ice_status
> +ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
>   void ice_rem_all_sw_rules_info(struct ice_hw *hw);
>   enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
>   enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
> @@ -375,8 +373,6 @@ enum ice_status
>   ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
>   enum ice_status
>   ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
> -enum ice_status
> -ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
>   #ifndef NO_MACVLAN_SUPPORT
>   enum ice_status
>   ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
> diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
> index 919ca7fa8..f4e151c55 100644
> --- a/drivers/net/ice/base/ice_type.h
> +++ b/drivers/net/ice/base/ice_type.h
> @@ -14,6 +14,10 @@
>   
>   #define BITS_PER_BYTE	8
>   
> +#ifndef _FORCE_
> +#define _FORCE_
> +#endif
> +
>   #define ICE_BYTES_PER_WORD	2
>   #define ICE_BYTES_PER_DWORD	4
>   #define ICE_MAX_TRAFFIC_CLASS	8
> @@ -35,7 +39,7 @@
>   #endif
>   
>   #ifndef IS_ASCII
> -#define IS_ASCII(_ch)  ((_ch) < 0x80)
> +#define IS_ASCII(_ch)	((_ch) < 0x80)
>   #endif
>   
>   #include "ice_status.h"
> @@ -80,6 +84,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
>   #define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
>   
>   /* debug masks - set these bits in hw->debug_mask to control output */
> +#define ICE_DBG_TRACE		BIT_ULL(0) /* for function-trace only */
>   #define ICE_DBG_INIT		BIT_ULL(1)
>   #define ICE_DBG_RELEASE		BIT_ULL(2)
>   #define ICE_DBG_FW_LOG		BIT_ULL(3)
> @@ -199,6 +204,7 @@ enum ice_vsi_type {
>   #ifdef ADQ_SUPPORT
>   	ICE_VSI_CHNL = 4,
>   #endif /* ADQ_SUPPORT */
> +	ICE_VSI_LB = 6,
>   };
>   
>   struct ice_link_status {
> @@ -718,6 +724,8 @@ struct ice_fw_log_cfg {
>   #define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
>   #define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
>   #define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
> +#define ICE_FW_LOG_EVNT_ALL	(ICE_FW_LOG_EVNT_INFO | ICE_FW_LOG_EVNT_INIT | \
> +				 ICE_FW_LOG_EVNT_FLOW | ICE_FW_LOG_EVNT_ERR)
>   	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
>   };
>   
> @@ -745,6 +753,7 @@ struct ice_hw {
>   	u8 pf_id;		/* device profile info */
>   
>   	u16 max_burst_size;	/* driver sets this value */
> +
>   	/* Tx Scheduler values */
>   	u16 num_tx_sched_layers;
>   	u16 num_tx_sched_phys_layers;
> @@ -948,7 +957,6 @@ enum ice_sw_fwd_act_type {
>   #define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
>   #define ICE_SR_MNG_CFG_PTR			0x0E
>   #define ICE_SR_EMP_MODULE_PTR			0x0F
> -#define ICE_SR_PBA_FLAGS			0x15
>   #define ICE_SR_PBA_BLOCK_PTR			0x16
>   #define ICE_SR_BOOT_CFG_PTR			0x17
>   #define ICE_SR_NVM_WOL_CFG			0x19
> @@ -994,6 +1002,7 @@ enum ice_sw_fwd_act_type {
>   #define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
>   #define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
>   #define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
> +#define ICE_SR_POR_REGISTERS_AUTOLOAD_PTR	0x118
>   
>   /* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
>   #define ICE_SR_VPD_SIZE_WORDS		512
> 


* Re: [dpdk-dev] [PATCH 40/49] net/ice/base: code clean up
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 40/49] net/ice/base: code clean up Leyi Rong
@ 2019-06-05 12:06   ` Maxime Coquelin
  2019-06-06  7:32     ` Rong, Leyi
  0 siblings, 1 reply; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-05 12:06 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev, Paul M Stillwell Jr



On 6/4/19 7:42 AM, Leyi Rong wrote:
> Cleanup the useless code.
> 
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>   drivers/net/ice/base/ice_controlq.c  | 62 +---------------------------
>   drivers/net/ice/base/ice_fdir.h      |  1 -
>   drivers/net/ice/base/ice_flex_pipe.c |  5 ++-
>   drivers/net/ice/base/ice_sched.c     |  4 +-
>   drivers/net/ice/base/ice_type.h      |  3 ++
>   5 files changed, 10 insertions(+), 65 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
> index 4cb6df113..3ef07e094 100644
> --- a/drivers/net/ice/base/ice_controlq.c
> +++ b/drivers/net/ice/base/ice_controlq.c
> @@ -262,7 +262,7 @@ ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
>    * @hw: pointer to the hardware structure
>    * @cq: pointer to the specific Control queue
>    *
> - * Configure base address and length registers for the receive (event q)
> + * Configure base address and length registers for the receive (event queue)
>    */
>   static enum ice_status
>   ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
> @@ -772,9 +772,6 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
>   	struct ice_ctl_q_ring *sq = &cq->sq;
>   	u16 ntc = sq->next_to_clean;
>   	struct ice_sq_cd *details;
> -#if 0
> -	struct ice_aq_desc desc_cb;
> -#endif
>   	struct ice_aq_desc *desc;
>   
>   	desc = ICE_CTL_Q_DESC(*sq, ntc);
> @@ -783,15 +780,6 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
>   	while (rd32(hw, cq->sq.head) != ntc) {
>   		ice_debug(hw, ICE_DBG_AQ_MSG,
>   			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
> -#if 0
> -		if (details->callback) {
> -			ICE_CTL_Q_CALLBACK cb_func =
> -				(ICE_CTL_Q_CALLBACK)details->callback;
> -			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
> -				   ICE_DMA_TO_DMA);
> -			cb_func(hw, &desc_cb);
> -		}
> -#endif
>   		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
>   		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
>   		ntc++;
> @@ -941,38 +929,8 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
>   	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
>   	if (cd)
>   		*details = *cd;
> -#if 0
> -		/* FIXME: if/when this block gets enabled (when the #if 0
> -		 * is removed), add braces to both branches of the surrounding
> -		 * conditional expression. The braces have been removed to
> -		 * prevent checkpatch complaining.
> -		 */
> -
> -		/* If the command details are defined copy the cookie. The
> -		 * CPU_TO_LE32 is not needed here because the data is ignored
> -		 * by the FW, only used by the driver
> -		 */
> -		if (details->cookie) {
> -			desc->cookie_high =
> -				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
> -			desc->cookie_low =
> -				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
> -		}
> -#endif
>   	else
>   		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
> -#if 0
> -	/* clear requested flags and then set additional flags if defined */
> -	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
> -	desc->flags |= CPU_TO_LE16(details->flags_ena);
> -
> -	if (details->postpone && !details->async) {
> -		ice_debug(hw, ICE_DBG_AQ_MSG,
> -			  "Async flag not set along with postpone flag\n");
> -		status = ICE_ERR_PARAM;
> -		goto sq_send_command_error;
> -	}
> -#endif
>   
>   	/* Call clean and check queue available function to reclaim the
>   	 * descriptors that were processed by FW/MBX; the function returns the
> @@ -1019,20 +977,8 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
>   	(cq->sq.next_to_use)++;
>   	if (cq->sq.next_to_use == cq->sq.count)
>   		cq->sq.next_to_use = 0;
> -#if 0
> -	/* FIXME - handle this case? */
> -	if (!details->postpone)
> -#endif
>   	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
>   
> -#if 0
> -	/* if command details are not defined or async flag is not set,
> -	 * we need to wait for desc write back
> -	 */
> -	if (!details->async && !details->postpone) {
> -		/* FIXME - handle this case? */
> -	}
> -#endif
>   	do {
>   		if (ice_sq_done(hw, cq))
>   			break;
> @@ -1087,9 +1033,6 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
>   
>   	/* update the error if time out occurred */
>   	if (!cmd_completed) {
> -#if 0
> -	    (!details->async && !details->postpone)) {
> -#endif
>   		ice_debug(hw, ICE_DBG_AQ_MSG,
>   			  "Control Send Queue Writeback timeout.\n");
>   		status = ICE_ERR_AQ_TIMEOUT;
> @@ -1208,9 +1151,6 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
>   	cq->rq.next_to_clean = ntc;
>   	cq->rq.next_to_use = ntu;
>   
> -#if 0
> -	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
> -#endif
>   clean_rq_elem_out:
>   	/* Set pending if needed, unlock and return */
>   	if (pending) {


Starting from here, the rest looks unrelated to the commit subject.

> diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h
> index 2ecb147f1..f8f06658c 100644
> --- a/drivers/net/ice/base/ice_fdir.h
> +++ b/drivers/net/ice/base/ice_fdir.h
> @@ -173,7 +173,6 @@ struct ice_fdir_fltr {
>   	u32 fltr_id;
>   };
>   
> -
>   /* Dummy packet filter definition structure. */
>   struct ice_fdir_base_pkt {
>   	enum ice_fltr_ptype flow;
> diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
> index 2a310b6e1..46234c014 100644
> --- a/drivers/net/ice/base/ice_flex_pipe.c
> +++ b/drivers/net/ice/base/ice_flex_pipe.c
> @@ -398,7 +398,7 @@ ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr,
>    * Handles enumeration of individual label entries.
>    */
>   static void *
> -ice_label_enum_handler(u32 __always_unused sect_type, void *section, u32 index,
> +ice_label_enum_handler(u32 __ALWAYS_UNUSED sect_type, void *section, u32 index,
>   		       u32 *offset)
>   {
>   	struct ice_label_section *labels;
> @@ -640,7 +640,7 @@ static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max)
>    * @size: the size of the complete key in bytes (must be even)
>    * @val: array of 8-bit values that makes up the value portion of the key
>    * @upd: array of 8-bit masks that determine what key portion to update
> - * @dc: array of 8-bit masks that make up the dont' care mask
> + * @dc: array of 8-bit masks that make up the don't care mask
>    * @nm: array of 8-bit masks that make up the never match mask
>    * @off: the offset of the first byte in the key to update
>    * @len: the number of bytes in the key update
> @@ -4544,6 +4544,7 @@ ice_move_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig,
>   	status = ice_vsig_find_vsi(hw, blk, vsi, &orig_vsig);
>   	if (!status)
>   		status = ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
> +
>   	if (status) {
>   		ice_free(hw, p);
>   		return status;
> diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
> index a72e72982..fa3158a7b 100644
> --- a/drivers/net/ice/base/ice_sched.c
> +++ b/drivers/net/ice/base/ice_sched.c
> @@ -1233,7 +1233,7 @@ enum ice_status ice_sched_init_port(struct ice_port_info *pi)
>   		goto err_init_port;
>   	}
>   
> -	/* If the last node is a leaf node then the index of the Q group
> +	/* If the last node is a leaf node then the index of the queue group
>   	 * layer is two less than the number of elements.
>   	 */
>   	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
> @@ -3529,9 +3529,11 @@ ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
>   		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
>   				    ice_sched_agg_vsi_info, list_entry)
>   			if (agg_vsi_info->vsi_handle == vsi_handle) {
> +				/* cppcheck-suppress unreadVariable */
>   				vsi_handle_valid = true;
>   				break;
>   			}
> +
>   		if (!vsi_handle_valid)
>   			goto exit_agg_priority_per_tc;
>   
> diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
> index f4e151c55..f76be2b58 100644
> --- a/drivers/net/ice/base/ice_type.h
> +++ b/drivers/net/ice/base/ice_type.h
> @@ -114,6 +114,9 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
>   #define ICE_DBG_USER		BIT_ULL(31)
>   #define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
>   
> +#ifndef __ALWAYS_UNUSED
> +#define __ALWAYS_UNUSED
> +#endif

That does not look related.

>   
>   
>   
> 


* Re: [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional switch rules
  2019-06-04  5:42 ` [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional switch rules Leyi Rong
@ 2019-06-05 12:24   ` Maxime Coquelin
  2019-06-05 16:34     ` Stillwell Jr, Paul M
  0 siblings, 1 reply; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-05 12:24 UTC (permalink / raw)
  To: Leyi Rong, qi.z.zhang; +Cc: dev, Dan Nowlin, Paul M Stillwell Jr



On 6/4/19 7:42 AM, Leyi Rong wrote:
> Add capability to create inner IP and inner TCP switch recipes and
> rules. Change UDP tunnel dummy packet to accommodate the training of
> these new rules.
> 
> Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>   drivers/net/ice/base/ice_protocol_type.h |   8 +-
>   drivers/net/ice/base/ice_switch.c        | 361 ++++++++++++-----------
>   drivers/net/ice/base/ice_switch.h        |   1 +
>   3 files changed, 203 insertions(+), 167 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
> index 82822fb74..38bed7a79 100644
> --- a/drivers/net/ice/base/ice_protocol_type.h
> +++ b/drivers/net/ice/base/ice_protocol_type.h
> @@ -35,6 +35,7 @@ enum ice_protocol_type {
>   	ICE_IPV6_IL,
>   	ICE_IPV6_OFOS,
>   	ICE_TCP_IL,
> +	ICE_UDP_OF,
>   	ICE_UDP_ILOS,
>   	ICE_SCTP_IL,
>   	ICE_VXLAN,
> @@ -112,6 +113,7 @@ enum ice_prot_id {
>   #define ICE_IPV6_OFOS_HW	40
>   #define ICE_IPV6_IL_HW		41
>   #define ICE_TCP_IL_HW		49
> +#define ICE_UDP_OF_HW		52
>   #define ICE_UDP_ILOS_HW		53
>   #define ICE_SCTP_IL_HW		96
>   
> @@ -188,8 +190,7 @@ struct ice_l4_hdr {
>   struct ice_udp_tnl_hdr {
>   	u16 field;
>   	u16 proto_type;
> -	u16 vni;
> -	u16 reserved;
> +	u32 vni;	/* only use lower 24-bits */
>   };
>   
>   struct ice_nvgre {
> @@ -225,6 +226,7 @@ struct ice_prot_lkup_ext {
>   	u8 n_val_words;
>   	/* create a buffer to hold max words per recipe */
>   	u16 field_off[ICE_MAX_CHAIN_WORDS];
> +	u16 field_mask[ICE_MAX_CHAIN_WORDS];
>   
>   	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
>   
> @@ -235,6 +237,7 @@ struct ice_prot_lkup_ext {
>   struct ice_pref_recipe_group {
>   	u8 n_val_pairs;		/* Number of valid pairs */
>   	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
> +	u16 mask[ICE_NUM_WORDS_RECIPE];
>   };
>   
>   struct ice_recp_grp_entry {
> @@ -244,6 +247,7 @@ struct ice_recp_grp_entry {
>   	u16 rid;
>   	u8 chain_idx;
>   	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
> +	u16 fv_mask[ICE_NUM_WORDS_RECIPE];
>   	struct ice_pref_recipe_group r_group;
>   };
>   #endif /* _ICE_PROTOCOL_TYPE_H_ */
> diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
> index 373acb7a6..02fb49dba 100644
> --- a/drivers/net/ice/base/ice_switch.c
> +++ b/drivers/net/ice/base/ice_switch.c
> @@ -53,60 +53,109 @@ static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
>   	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
>   	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
>   
> +static const struct ice_dummy_pkt_offsets {
> +	enum ice_protocol_type type;
> +	u16 offset; /* ICE_PROTOCOL_LAST indicates end of list */
> +} dummy_gre_packet_offsets[] = {
> +	{ ICE_MAC_OFOS,		0 },
> +	{ ICE_IPV4_OFOS,	14 },
> +	{ ICE_VXLAN,		34 },
> +	{ ICE_MAC_IL,		42 },
> +	{ ICE_IPV4_IL,		54 },
> +	{ ICE_PROTOCOL_LAST,	0 },
> +};
> +
>   static const
> -u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* Ether starts */
> +u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
>   			  0, 0, 0, 0,
>   			  0, 0, 0, 0,
> -			  0x08, 0,		/* Ether ends */
> -			  0x45, 0, 0, 0x3E,	/* IP starts */
> +			  0x08, 0,
> +			  0x45, 0, 0, 0x3E,	/* ICE_IPV4_OFOS 14 */
>   			  0, 0, 0, 0,
>   			  0, 0x2F, 0, 0,
>   			  0, 0, 0, 0,
> -			  0, 0, 0, 0,		/* IP ends */
> -			  0x80, 0, 0x65, 0x58,	/* GRE starts */
> -			  0, 0, 0, 0,		/* GRE ends */
> -			  0, 0, 0, 0,		/* Ether starts */
> -			  0, 0, 0, 0,
> -			  0, 0, 0, 0,
> -			  0x08, 0,		/* Ether ends */
> -			  0x45, 0, 0, 0x14,	/* IP starts */
>   			  0, 0, 0, 0,
> +			  0x80, 0, 0x65, 0x58,	/* ICE_VXLAN_GRE 34 */
>   			  0, 0, 0, 0,
> +			  0, 0, 0, 0,		/* ICE_MAC_IL 42 */
>   			  0, 0, 0, 0,
> -			  0, 0, 0, 0		/* IP ends */
> -			};
> -
> -static const u8
> -dummy_udp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
> -			  0, 0, 0, 0,
> -			  0, 0, 0, 0,
> -			  0x08, 0,		/* Ether ends */
> -			  0x45, 0, 0, 0x32,	/* IP starts */
>   			  0, 0, 0, 0,
> -			  0, 0x11, 0, 0,
> +			  0x08, 0,
> +			  0x45, 0, 0, 0x14,	/* ICE_IPV4_IL 54 */
>   			  0, 0, 0, 0,
> -			  0, 0, 0, 0,		/* IP ends */
> -			  0, 0, 0x12, 0xB5,	/* UDP start*/
> -			  0, 0x1E, 0, 0,	/* UDP end*/
> -			  0, 0, 0, 0,		/* VXLAN start */
> -			  0, 0, 0, 0,		/* VXLAN end*/
> -			  0, 0, 0, 0,		/* Ether starts */
>   			  0, 0, 0, 0,
>   			  0, 0, 0, 0,
> -			  0, 0			/* Ether ends */
> +			  0, 0, 0, 0
>   			};
>   
> +static const
> +struct ice_dummy_pkt_offsets dummy_udp_tun_packet_offsets[] = {
> +	{ ICE_MAC_OFOS,		0 },
> +	{ ICE_IPV4_OFOS,	14 },
> +	{ ICE_UDP_OF,		34 },
> +	{ ICE_VXLAN,		42 },
> +	{ ICE_MAC_IL,		50 },
> +	{ ICE_IPV4_IL,		64 },
> +	{ ICE_TCP_IL,		84 },
> +	{ ICE_PROTOCOL_LAST,	0 },
> +};
> +
> +static const
> +u8 dummy_udp_tun_packet[] = {
> +	0x00, 0x00, 0x00, 0x00,  /* ICE_MAC_OFOS 0 */
> +	0x00, 0x00, 0x00, 0x00,
> +	0x00, 0x00, 0x00, 0x00,
> +	0x08, 0x00,
> +
> +	0x45, 0x00, 0x00, 0x5a, /* ICE_IPV4_OFOS 14 */
> +	0x00, 0x01, 0x00, 0x00,
> +	0x40, 0x11, 0x00, 0x00,
> +	0x00, 0x00, 0x00, 0x00,
> +	0x00, 0x00, 0x00, 0x00,
> +
> +	0x00, 0x00, 0x12, 0xb5, /* ICE_UDP_OF 34 */
> +	0x00, 0x46, 0x00, 0x00,
> +
> +	0x04, 0x00, 0x00, 0x03, /* ICE_VXLAN 42 */
> +	0x00, 0x00, 0x00, 0x00,
> +
> +	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_IL 50 */
> +	0x00, 0x00, 0x00, 0x00,
> +	0x00, 0x00, 0x00, 0x00,
> +	0x08, 0x00,
> +
> +	0x45, 0x00, 0x00, 0x28, /* ICE_IPV4_IL 64 */
> +	0x00, 0x01, 0x00, 0x00,
> +	0x40, 0x06, 0x00, 0x00,
> +	0x00, 0x00, 0x00, 0x00,
> +	0x00, 0x00, 0x00, 0x00,
> +
> +	0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 84 */
> +	0x00, 0x00, 0x00, 0x00,
> +	0x00, 0x00, 0x00, 0x00,
> +	0x50, 0x02, 0x20, 0x00,
> +	0x00, 0x00, 0x00, 0x00
> +};
> +
> +static const
> +struct ice_dummy_pkt_offsets dummy_tcp_tun_packet_offsets[] = {
> +	{ ICE_MAC_OFOS,		0 },
> +	{ ICE_IPV4_OFOS,	14 },
> +	{ ICE_TCP_IL,		34 },
> +	{ ICE_PROTOCOL_LAST,	0 },
> +};
> +
>   static const u8
> -dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
> +dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
>   			  0, 0, 0, 0,
>   			  0, 0, 0, 0,
> -			  0x08, 0,              /* Ether ends */
> -			  0x45, 0, 0, 0x28,     /* IP starts */
> +			  0x08, 0,
> +			  0x45, 0, 0, 0x28,     /* ICE_IPV4_OFOS 14 */
>   			  0, 0x01, 0, 0,
>   			  0x40, 0x06, 0xF5, 0x69,
>   			  0, 0, 0, 0,
> -			  0, 0, 0, 0,   /* IP ends */
>   			  0, 0, 0, 0,
> +			  0, 0, 0, 0,		/* ICE_TCP_IL 34 */
>   			  0, 0, 0, 0,
>   			  0, 0, 0, 0,
>   			  0x50, 0x02, 0x20,
> @@ -184,6 +233,9 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
>   			u8 lkup_indx = root_bufs.content.lkup_indx[i + 1];
>   
>   			rg_entry->fv_idx[i] = lkup_indx;
> +			rg_entry->fv_mask[i] =
> +				LE16_TO_CPU(root_bufs.content.mask[i + 1]);
> +
>   			/* If the recipe is a chained recipe then all its
>   			 * child recipe's result will have a result index.
>   			 * To fill fv_words we should not use those result
> @@ -4254,10 +4306,11 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
>   	{ ICE_IPV6_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
>   				 26, 28, 30, 32, 34, 36, 38 } },
>   	{ ICE_TCP_IL,		{ 0, 2 } },
> +	{ ICE_UDP_OF,		{ 0, 2 } },
>   	{ ICE_UDP_ILOS,		{ 0, 2 } },
>   	{ ICE_SCTP_IL,		{ 0, 2 } },
> -	{ ICE_VXLAN,		{ 8, 10, 12 } },
> -	{ ICE_GENEVE,		{ 8, 10, 12 } },
> +	{ ICE_VXLAN,		{ 8, 10, 12, 14 } },
> +	{ ICE_GENEVE,		{ 8, 10, 12, 14 } },
>   	{ ICE_VXLAN_GPE,	{ 0, 2, 4 } },
>   	{ ICE_NVGRE,		{ 0, 2 } },
>   	{ ICE_PROTOCOL_LAST,	{ 0 } }
> @@ -4270,11 +4323,14 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
>    */
>   static const struct ice_pref_recipe_group ice_recipe_pack[] = {
>   	{3, { { ICE_MAC_OFOS_HW, 0, 0 }, { ICE_MAC_OFOS_HW, 2, 0 },
> -	      { ICE_MAC_OFOS_HW, 4, 0 } } },
> +	      { ICE_MAC_OFOS_HW, 4, 0 } }, { 0xffff, 0xffff, 0xffff, 0xffff } },
>   	{4, { { ICE_MAC_IL_HW, 0, 0 }, { ICE_MAC_IL_HW, 2, 0 },
> -	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } } },
> -	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } } },
> -	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } } },
> +	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } },
> +		{ 0xffff, 0xffff, 0xffff, 0xffff } },
> +	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } },
> +		{ 0xffff, 0xffff, 0xffff, 0xffff } },
> +	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } },
> +		{ 0xffff, 0xffff, 0xffff, 0xffff } },
>   };
>   
>   static const struct ice_protocol_entry ice_prot_id_tbl[] = {
> @@ -4285,6 +4341,7 @@ static const struct ice_protocol_entry ice_prot_id_tbl[] = {
>   	{ ICE_IPV6_OFOS,	ICE_IPV6_OFOS_HW },
>   	{ ICE_IPV6_IL,		ICE_IPV6_IL_HW },
>   	{ ICE_TCP_IL,		ICE_TCP_IL_HW },
> +	{ ICE_UDP_OF,		ICE_UDP_OF_HW },
>   	{ ICE_UDP_ILOS,		ICE_UDP_ILOS_HW },
>   	{ ICE_SCTP_IL,		ICE_SCTP_IL_HW },
>   	{ ICE_VXLAN,		ICE_UDP_OF_HW },
> @@ -4403,7 +4460,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
>   	word = lkup_exts->n_val_words;
>   
>   	for (j = 0; j < sizeof(rule->m_u) / sizeof(u16); j++)
> -		if (((u16 *)&rule->m_u)[j] == 0xffff &&
> +		if (((u16 *)&rule->m_u)[j] &&
>   		    rule->type < ARRAY_SIZE(ice_prot_ext)) {
>   			/* No more space to accommodate */
>   			if (word >= ICE_MAX_CHAIN_WORDS)
> @@ -4412,6 +4469,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
>   				ice_prot_ext[rule->type].offs[j];
>   			lkup_exts->fv_words[word].prot_id =
>   				ice_prot_id_tbl[rule->type].protocol_id;
> +			lkup_exts->field_mask[word] = ((u16 *)&rule->m_u)[j];
>   			word++;
>   		}
>   
> @@ -4535,6 +4593,7 @@ ice_create_first_fit_recp_def(struct ice_hw *hw,
>   				lkup_exts->fv_words[j].prot_id;
>   			grp->pairs[grp->n_val_pairs].off =
>   				lkup_exts->fv_words[j].off;
> +			grp->mask[grp->n_val_pairs] = lkup_exts->field_mask[j];
>   			grp->n_val_pairs++;
>   		}
>   
> @@ -4569,14 +4628,22 @@ ice_fill_fv_word_index(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list,
>   
>   		for (i = 0; i < rg->r_group.n_val_pairs; i++) {
>   			struct ice_fv_word *pr;
> +			u16 mask;
>   			u8 j;
>   
>   			pr = &rg->r_group.pairs[i];
> +			mask = rg->r_group.mask[i];
> +
>   			for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
>   				if (fv_ext[j].prot_id == pr->prot_id &&
>   				    fv_ext[j].off == pr->off) {
>   					/* Store index of field vector */
>   					rg->fv_idx[i] = j;
> +					/* Mask is given by caller as big
> +					 * endian, but sent to FW as little
> +					 * endian
> +					 */
> +					rg->fv_mask[i] = mask << 8 | mask >> 8;
>   					break;
>   				}
>   		}
> @@ -4674,7 +4741,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
>   
>   		for (i = 0; i < entry->r_group.n_val_pairs; i++) {
>   			buf[recps].content.lkup_indx[i + 1] = entry->fv_idx[i];
> -			buf[recps].content.mask[i + 1] = CPU_TO_LE16(0xFFFF);
> +			buf[recps].content.mask[i + 1] =
> +				CPU_TO_LE16(entry->fv_mask[i]);
>   		}
>   
>   		if (rm->n_grp_count > 1) {
> @@ -4896,6 +4964,8 @@ ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm,
>   		rm->n_ext_words = lkup_exts->n_val_words;
>   		ice_memcpy(&rm->ext_words, lkup_exts->fv_words,
>   			   sizeof(rm->ext_words), ICE_NONDMA_TO_NONDMA);
> +		ice_memcpy(rm->word_masks, lkup_exts->field_mask,
> +			   sizeof(rm->word_masks), ICE_NONDMA_TO_NONDMA);
>   		goto out;
>   	}
>   
> @@ -5097,16 +5167,8 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
>   	return status;
>   }
>   
> -#define ICE_MAC_HDR_OFFSET	0
> -#define ICE_IP_HDR_OFFSET	14
> -#define ICE_GRE_HDR_OFFSET	34
> -#define ICE_MAC_IL_HDR_OFFSET	42
> -#define ICE_IP_IL_HDR_OFFSET	56
> -#define ICE_L4_HDR_OFFSET	34
> -#define ICE_UDP_TUN_HDR_OFFSET	42
> -
>   /**
> - * ice_find_dummy_packet - find dummy packet with given match criteria
> + * ice_find_dummy_packet - find dummy packet by tunnel type
>    *
>    * @lkups: lookup elements or match criteria for the advanced recipe, one
>    *	   structure per protocol header
> @@ -5114,17 +5176,20 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
>    * @tun_type: tunnel type from the match criteria
>    * @pkt: dummy packet to fill according to filter match criteria
>    * @pkt_len: packet length of dummy packet
> + * @offsets: pointer to receive the pointer to the offsets for the packet
>    */
>   static void
>   ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
>   		      enum ice_sw_tunnel_type tun_type, const u8 **pkt,
> -		      u16 *pkt_len)
> +		      u16 *pkt_len,
> +		      const struct ice_dummy_pkt_offsets **offsets)
>   {
>   	u16 i;
>   
>   	if (tun_type == ICE_SW_TUN_NVGRE || tun_type == ICE_ALL_TUNNELS) {
>   		*pkt = dummy_gre_packet;
>   		*pkt_len = sizeof(dummy_gre_packet);
> +		*offsets = dummy_gre_packet_offsets;
>   		return;
>   	}
>   
> @@ -5132,6 +5197,7 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
>   	    tun_type == ICE_SW_TUN_VXLAN_GPE) {
>   		*pkt = dummy_udp_tun_packet;
>   		*pkt_len = sizeof(dummy_udp_tun_packet);
> +		*offsets = dummy_udp_tun_packet_offsets;
>   		return;
>   	}
>   
> @@ -5139,12 +5205,14 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
>   		if (lkups[i].type == ICE_UDP_ILOS) {
>   			*pkt = dummy_udp_tun_packet;
>   			*pkt_len = sizeof(dummy_udp_tun_packet);
> +			*offsets = dummy_udp_tun_packet_offsets;
>   			return;
>   		}
>   	}
>   
>   	*pkt = dummy_tcp_tun_packet;
>   	*pkt_len = sizeof(dummy_tcp_tun_packet);
> +	*offsets = dummy_tcp_tun_packet_offsets;
>   }
>   
>   /**
> @@ -5153,16 +5221,16 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
>    * @lkups: lookup elements or match criteria for the advanced recipe, one
>    *	   structure per protocol header
>    * @lkups_cnt: number of protocols
> - * @tun_type: to know if the dummy packet is supposed to be tunnel packet
>    * @s_rule: stores rule information from the match criteria
>    * @dummy_pkt: dummy packet to fill according to filter match criteria
>    * @pkt_len: packet length of dummy packet
> + * @offsets: offset info for the dummy packet
>    */
> -static void
> +static enum ice_status
>   ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
> -			  enum ice_sw_tunnel_type tun_type,
>   			  struct ice_aqc_sw_rules_elem *s_rule,
> -			  const u8 *dummy_pkt, u16 pkt_len)
> +			  const u8 *dummy_pkt, u16 pkt_len,
> +			  const struct ice_dummy_pkt_offsets *offsets)
>   {
>   	u8 *pkt;
>   	u16 i;
> @@ -5175,124 +5243,74 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
>   	ice_memcpy(pkt, dummy_pkt, pkt_len, ICE_NONDMA_TO_NONDMA);
>   
>   	for (i = 0; i < lkups_cnt; i++) {
> -		u32 len, pkt_off, hdr_size, field_off;
> +		enum ice_protocol_type type;
> +		u16 offset = 0, len = 0, j;
> +		bool found = false;
> +
> +		/* find the start of this layer; it should be found since this
> +		 * was already checked when search for the dummy packet
> +		 */
> +		type = lkups[i].type;
> +		for (j = 0; offsets[j].type != ICE_PROTOCOL_LAST; j++) {
> +			if (type == offsets[j].type) {
> +				offset = offsets[j].offset;
> +				found = true;
> +				break;
> +			}
> +		}
> +		/* this should never happen in a correct calling sequence */
> +		if (!found)
> +			return ICE_ERR_PARAM;
>   
>   		switch (lkups[i].type) {
>   		case ICE_MAC_OFOS:
>   		case ICE_MAC_IL:
> -			pkt_off = offsetof(struct ice_ether_hdr, dst_addr) +
> -				((lkups[i].type == ICE_MAC_IL) ?
> -				 ICE_MAC_IL_HDR_OFFSET : 0);
> -			len = sizeof(lkups[i].h_u.eth_hdr.dst_addr);
> -			if ((tun_type == ICE_SW_TUN_VXLAN ||
> -			     tun_type == ICE_SW_TUN_GENEVE ||
> -			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
> -			     lkups[i].type == ICE_MAC_IL) {
> -				pkt_off += sizeof(struct ice_udp_tnl_hdr);
> -			}
> -
> -			ice_memcpy(&pkt[pkt_off],
> -				   &lkups[i].h_u.eth_hdr.dst_addr, len,
> -				   ICE_NONDMA_TO_NONDMA);
> -			pkt_off = offsetof(struct ice_ether_hdr, src_addr) +
> -				((lkups[i].type == ICE_MAC_IL) ?
> -				 ICE_MAC_IL_HDR_OFFSET : 0);
> -			len = sizeof(lkups[i].h_u.eth_hdr.src_addr);
> -			if ((tun_type == ICE_SW_TUN_VXLAN ||
> -			     tun_type == ICE_SW_TUN_GENEVE ||
> -			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
> -			     lkups[i].type == ICE_MAC_IL) {
> -				pkt_off += sizeof(struct ice_udp_tnl_hdr);
> -			}
> -			ice_memcpy(&pkt[pkt_off],
> -				   &lkups[i].h_u.eth_hdr.src_addr, len,
> -				   ICE_NONDMA_TO_NONDMA);
> -			if (lkups[i].h_u.eth_hdr.ethtype_id) {
> -				pkt_off = offsetof(struct ice_ether_hdr,
> -						   ethtype_id) +
> -					((lkups[i].type == ICE_MAC_IL) ?
> -					 ICE_MAC_IL_HDR_OFFSET : 0);
> -				len = sizeof(lkups[i].h_u.eth_hdr.ethtype_id);
> -				if ((tun_type == ICE_SW_TUN_VXLAN ||
> -				     tun_type == ICE_SW_TUN_GENEVE ||
> -				     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
> -				     lkups[i].type == ICE_MAC_IL) {
> -					pkt_off +=
> -						sizeof(struct ice_udp_tnl_hdr);
> -				}
> -				ice_memcpy(&pkt[pkt_off],
> -					   &lkups[i].h_u.eth_hdr.ethtype_id,
> -					   len, ICE_NONDMA_TO_NONDMA);
> -			}
> +			len = sizeof(struct ice_ether_hdr);
>   			break;
>   		case ICE_IPV4_OFOS:
> -			hdr_size = sizeof(struct ice_ipv4_hdr);
> -			if (lkups[i].h_u.ipv4_hdr.dst_addr) {
> -				pkt_off = ICE_IP_HDR_OFFSET +
> -					   offsetof(struct ice_ipv4_hdr,
> -						    dst_addr);
> -				field_off = offsetof(struct ice_ipv4_hdr,
> -						     dst_addr);
> -				len = hdr_size - field_off;
> -				ice_memcpy(&pkt[pkt_off],
> -					   &lkups[i].h_u.ipv4_hdr.dst_addr,
> -					   len, ICE_NONDMA_TO_NONDMA);
> -			}
> -			if (lkups[i].h_u.ipv4_hdr.src_addr) {
> -				pkt_off = ICE_IP_HDR_OFFSET +
> -					   offsetof(struct ice_ipv4_hdr,
> -						    src_addr);
> -				field_off = offsetof(struct ice_ipv4_hdr,
> -						     src_addr);
> -				len = hdr_size - field_off;
> -				ice_memcpy(&pkt[pkt_off],
> -					   &lkups[i].h_u.ipv4_hdr.src_addr,
> -					   len, ICE_NONDMA_TO_NONDMA);
> -			}
> -			break;
>   		case ICE_IPV4_IL:
> +			len = sizeof(struct ice_ipv4_hdr);
>   			break;
>   		case ICE_TCP_IL:
> +		case ICE_UDP_OF:
>   		case ICE_UDP_ILOS:
> +			len = sizeof(struct ice_l4_hdr);
> +			break;
>   		case ICE_SCTP_IL:
> -			hdr_size = sizeof(struct ice_udp_tnl_hdr);
> -			if (lkups[i].h_u.l4_hdr.dst_port) {
> -				pkt_off = ICE_L4_HDR_OFFSET +
> -					   offsetof(struct ice_l4_hdr,
> -						    dst_port);
> -				field_off = offsetof(struct ice_l4_hdr,
> -						     dst_port);
> -				len =  hdr_size - field_off;
> -				ice_memcpy(&pkt[pkt_off],
> -					   &lkups[i].h_u.l4_hdr.dst_port,
> -					   len, ICE_NONDMA_TO_NONDMA);
> -			}
> -			if (lkups[i].h_u.l4_hdr.src_port) {
> -				pkt_off = ICE_L4_HDR_OFFSET +
> -					offsetof(struct ice_l4_hdr, src_port);
> -				field_off = offsetof(struct ice_l4_hdr,
> -						     src_port);
> -				len =  hdr_size - field_off;
> -				ice_memcpy(&pkt[pkt_off],
> -					   &lkups[i].h_u.l4_hdr.src_port,
> -					   len, ICE_NONDMA_TO_NONDMA);
> -			}
> +			len = sizeof(struct ice_sctp_hdr);
>   			break;
>   		case ICE_VXLAN:
>   		case ICE_GENEVE:
>   		case ICE_VXLAN_GPE:
> -			pkt_off = ICE_UDP_TUN_HDR_OFFSET +
> -				   offsetof(struct ice_udp_tnl_hdr, vni);
> -			field_off = offsetof(struct ice_udp_tnl_hdr, vni);
> -			len =  sizeof(struct ice_udp_tnl_hdr) - field_off;
> -			ice_memcpy(&pkt[pkt_off], &lkups[i].h_u.tnl_hdr.vni,
> -				   len, ICE_NONDMA_TO_NONDMA);
> +			len = sizeof(struct ice_udp_tnl_hdr);
>   			break;
>   		default:
> -			break;
> +			return ICE_ERR_PARAM;
>   		}
> +
> +		/* the length should be a word multiple */
> +		if (len % ICE_BYTES_PER_WORD)
> +			return ICE_ERR_CFG;
> +
> +		/* We have the offset to the header start, the length, the
> +		 * caller's header values and mask. Use this information to
> +		 * copy the data into the dummy packet appropriately based on
> +		 * the mask. Note that we need to only write the bits as
> +		 * indicated by the mask to make sure we don't improperly write
> +		 * over any significant packet data.
> +		 */
> +		for (j = 0; j < len / sizeof(u16); j++)
> +			if (((u16 *)&lkups[i].m_u)[j])
> +				((u16 *)(pkt + offset))[j] =
> +					(((u16 *)(pkt + offset))[j] &
> +					 ~((u16 *)&lkups[i].m_u)[j]) |
> +					(((u16 *)&lkups[i].h_u)[j] &
> +					 ((u16 *)&lkups[i].m_u)[j]);
>   	}
> +
>   	s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(pkt_len);
> +
> +	return ICE_SUCCESS;
>   }
>   
>   /**
> @@ -5446,7 +5464,7 @@ ice_adv_add_update_vsi_list(struct ice_hw *hw,
>   }
>   
>   /**
> - * ice_add_adv_rule - create an advanced switch rule
> + * ice_add_adv_rule - helper function to create an advanced switch rule
>    * @hw: pointer to the hardware structure
>    * @lkups: information on the words that needs to be looked up. All words
>    * together makes one recipe
> @@ -5470,11 +5488,13 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
>   {
>   	struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL;
>   	u16 rid = 0, i, pkt_len, rule_buf_sz, vsi_handle;
> -	struct ice_aqc_sw_rules_elem *s_rule;
> +	const struct ice_dummy_pkt_offsets *pkt_offsets;
> +	struct ice_aqc_sw_rules_elem *s_rule = NULL;
>   	struct LIST_HEAD_TYPE *rule_head;
>   	struct ice_switch_info *sw;
>   	enum ice_status status;
>   	const u8 *pkt = NULL;
> +	bool found = false;
>   	u32 act = 0;
>   
>   	if (!lkups_cnt)
> @@ -5483,13 +5503,25 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
>   	for (i = 0; i < lkups_cnt; i++) {
>   		u16 j, *ptr;
>   
> -		/* Validate match masks to make sure they match complete 16-bit
> -		 * words.
> +		/* Validate match masks to make sure that there is something
> +		 * to match.
>   		 */
> -		ptr = (u16 *)&lkups->m_u;
> +		ptr = (u16 *)&lkups[i].m_u;
>   		for (j = 0; j < sizeof(lkups->m_u) / sizeof(u16); j++)
> -			if (ptr[j] != 0 && ptr[j] != 0xffff)
> -				return ICE_ERR_PARAM;
> +			if (ptr[j] != 0) {
> +				found = true;
> +				break;
> +			}
> +	}
> +	if (!found)
> +		return ICE_ERR_PARAM;
> +
> +	/* make sure that we can locate a dummy packet */
> +	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt, &pkt_len,
> +			      &pkt_offsets);
> +	if (!pkt) {
> +		status = ICE_ERR_PARAM;
> +		goto err_ice_add_adv_rule;
>   	}
>   
>   	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
> @@ -5530,8 +5562,6 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
>   		}
>   		return status;
>   	}
> -	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
> -			      &pkt_len);
>   	rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
>   	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rule_buf_sz);
>   	if (!s_rule)
> @@ -5576,8 +5606,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
>   	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid);
>   	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
>   
> -	ice_fill_adv_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, s_rule,
> -				  pkt, pkt_len);
> +	ice_fill_adv_dummy_packet(lkups, lkups_cnt, s_rule, pkt, pkt_len,
> +				  pkt_offsets);

Now that ice_fill_adv_dummy_packet() propagates an error, the caller
should do the same.

>   
>   	status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
>   				 rule_buf_sz, 1, ice_aqc_opc_add_sw_rules,
> @@ -5753,11 +5783,12 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
>   		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo)
>   {
>   	struct ice_adv_fltr_mgmt_list_entry *list_elem;
> +	const struct ice_dummy_pkt_offsets *offsets;
>   	struct ice_prot_lkup_ext lkup_exts;
>   	u16 rule_buf_sz, pkt_len, i, rid;
> +	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
>   	enum ice_status status = ICE_SUCCESS;
>   	bool remove_rule = false;
> -	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
>   	const u8 *pkt = NULL;
>   	u16 vsi_handle;
>   
> @@ -5805,7 +5836,7 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
>   		struct ice_aqc_sw_rules_elem *s_rule;
>   
>   		ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
> -				      &pkt_len);
> +				      &pkt_len, &offsets);
>   		rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
>   		s_rule =
>   			(struct ice_aqc_sw_rules_elem *)ice_malloc(hw,
> diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
> index 05b1170c9..db79e41eb 100644
> --- a/drivers/net/ice/base/ice_switch.h
> +++ b/drivers/net/ice/base/ice_switch.h
> @@ -192,6 +192,7 @@ struct ice_sw_recipe {
>   	 * recipe
>   	 */
>   	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
> +	u16 word_masks[ICE_MAX_CHAIN_WORDS];
>   
>   	/* if this recipe is a collection of other recipe */
>   	u8 big_recp;
> 

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset
  2019-06-05  8:58   ` Maxime Coquelin
@ 2019-06-05 15:53     ` Stillwell Jr, Paul M
  2019-06-05 15:59       ` Maxime Coquelin
  0 siblings, 1 reply; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-05 15:53 UTC (permalink / raw)
  To: Maxime Coquelin, Rong, Leyi, Zhang, Qi Z; +Cc: dev, Raj, Victor

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, June 5, 2019 1:59 AM
> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Raj, Victor <victor.raj@intel.com>; Stillwell Jr, Paul M
> <paul.m.stillwell.jr@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule
> after reset
> 
> 
> 
> On 6/4/19 7:42 AM, Leyi Rong wrote:
> > Code added to replay the advanced rule per VSI basis and remove the
> > advanced rule information from shared code recipe list.
> >
> > Signed-off-by: Victor Raj <victor.raj@intel.com>
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> > ---
> >   drivers/net/ice/base/ice_switch.c | 81 ++++++++++++++++++++++++++-
> ----
> >   1 file changed, 69 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/net/ice/base/ice_switch.c
> > b/drivers/net/ice/base/ice_switch.c
> > index c53021aed..ca0497ca7 100644
> > --- a/drivers/net/ice/base/ice_switch.c
> > +++ b/drivers/net/ice/base/ice_switch.c
> > @@ -3033,6 +3033,27 @@ ice_rem_sw_rule_info(struct ice_hw *hw,
> struct LIST_HEAD_TYPE *rule_head)
> >   	}
> >   }
> >
> > +/**
> > + * ice_rem_adv_rule_info
> > + * @hw: pointer to the hardware structure
> > + * @rule_head: pointer to the switch list structure that we want to
> > +delete  */ static void ice_rem_adv_rule_info(struct ice_hw *hw,
> > +struct LIST_HEAD_TYPE *rule_head) {
> > +	struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
> > +	struct ice_adv_fltr_mgmt_list_entry *lst_itr;
> > +
> > +	if (LIST_EMPTY(rule_head))
> > +		return;
> 
> 
> Is it necessary? If the list is empty, LIST_FOR_EACH_ENTRY will not loop and
> status will be returned:
> 

Yes, the check is necessary. This code gets consumed by multiple different OSs, and not all of them implement LIST_FOR_EACH_ENTRY_SAFE the way DPDK does. For example, if I'm understanding the Linux code correctly, list_for_each_entry_safe in Linux would not work correctly without the LIST_EMPTY check, since the Linux implementation doesn't have a NULL check in its implementation of list_for_each_entry_safe.

So the DPDK code should do something more like the Linux implementation (since it is the most restrictive case): don't check for NULL in LIST_FOR_EACH_ENTRY_SAFE and assume the list is not empty, which would make the explicit check necessary in DPDK as well.

The change to the DPDK implementation of LIST_FOR_EACH_ENTRY_SAFE would need to be a separate patch, though, so I think for now we should leave this code and work on a patch to change the DPDK implementation.

> #define LIST_FOR_EACH_ENTRY_SAFE(pos, tmp, head, type, member) \
> 	LIST_FOR_EACH_ENTRY(pos, head, type, member)
> 
> with:
> 
> #define LIST_FOR_EACH_ENTRY(pos, head, type, member)
> 	       \
> 	for ((pos) = (head)->lh_first ?					       \
> 		     container_of((head)->lh_first, struct type, member) :     \
> 		     0;							       \
> 	     (pos);							       \
> 	     (pos) = (pos)->member.next.le_next ?			       \
> 		     container_of((pos)->member.next.le_next, struct type,
> \
> 				  member) :				       \
> 		     0)
> 
> > +	LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry, rule_head,
> 
> > +				 ice_adv_fltr_mgmt_list_entry, list_entry) {
> > +		LIST_DEL(&lst_itr->list_entry);
> > +		ice_free(hw, lst_itr->lkups);
> > +		ice_free(hw, lst_itr);
> > +	}
> > +}
> >
> >   /**
> >    * ice_rem_all_sw_rules_info
> > @@ -3049,6 +3070,8 @@ void ice_rem_all_sw_rules_info(struct ice_hw
> *hw)
> >   		rule_head = &sw->recp_list[i].filt_rules;
> >   		if (!sw->recp_list[i].adv_rule)
> >   			ice_rem_sw_rule_info(hw, rule_head);
> > +		else
> > +			ice_rem_adv_rule_info(hw, rule_head);
> >   	}
> >   }
> >
> > @@ -5687,6 +5710,38 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16
> vsi_handle, u8 recp_id,
> >   	return status;
> >   }
> >
> > +/**
> > + * ice_replay_vsi_adv_rule - Replay advanced rule for requested VSI
> > + * @hw: pointer to the hardware structure
> > + * @vsi_handle: driver VSI handle
> > + * @list_head: list for which filters need to be replayed
> > + *
> > + * Replay the advanced rule for the given VSI.
> > + */
> > +static enum ice_status
> > +ice_replay_vsi_adv_rule(struct ice_hw *hw, u16 vsi_handle,
> > +			struct LIST_HEAD_TYPE *list_head)
> > +{
> > +	struct ice_rule_query_data added_entry = { 0 };
> > +	struct ice_adv_fltr_mgmt_list_entry *adv_fltr;
> > +	enum ice_status status = ICE_SUCCESS;
> > +
> > +	if (LIST_EMPTY(list_head))
> > +		return status;
> 
> Ditto, it should be removed.
> 
> > +	LIST_FOR_EACH_ENTRY(adv_fltr, list_head,
> ice_adv_fltr_mgmt_list_entry,
> > +			    list_entry) {
> > +		struct ice_adv_rule_info *rinfo = &adv_fltr->rule_info;
> > +		u16 lk_cnt = adv_fltr->lkups_cnt;
> > +
> > +		if (vsi_handle != rinfo->sw_act.vsi_handle)
> > +			continue;
> > +		status = ice_add_adv_rule(hw, adv_fltr->lkups, lk_cnt, rinfo,
> > +					  &added_entry);
> > +		if (status)
> > +			break;
> > +	}
> > +	return status;
> > +}
> >
> >   /**
> >    * ice_replay_vsi_all_fltr - replay all filters stored in
> > bookkeeping lists @@ -5698,23 +5753,23 @@ ice_replay_vsi_fltr(struct
> ice_hw *hw, u16 vsi_handle, u8 recp_id,
> >   enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16
> vsi_handle)
> >   {
> >   	struct ice_switch_info *sw = hw->switch_info;
> > -	enum ice_status status = ICE_SUCCESS;
> > +	enum ice_status status;
> >   	u8 i;
> >
> > +	/* Update the recipes that were created */
> >   	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
> > -		/* Update the default recipe lines and ones that were
> created */
> > -		if (i < ICE_MAX_NUM_RECIPES || sw-
> >recp_list[i].recp_created) {
> > -			struct LIST_HEAD_TYPE *head;
> > +		struct LIST_HEAD_TYPE *head;
> >
> > -			head = &sw->recp_list[i].filt_replay_rules;
> > -			if (!sw->recp_list[i].adv_rule)
> > -				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
> > -							     head);
> > -			if (status != ICE_SUCCESS)
> > -				return status;
> > -		}
> > +		head = &sw->recp_list[i].filt_replay_rules;
> > +		if (!sw->recp_list[i].adv_rule)
> > +			status = ice_replay_vsi_fltr(hw, vsi_handle, i, head);
> > +		else
> > +			status = ice_replay_vsi_adv_rule(hw, vsi_handle,
> head);
> > +		if (status != ICE_SUCCESS)
> > +			return status;
> >   	}
> > -	return status;
> > +
> > +	return ICE_SUCCESS;
> >   }
> >
> >   /**
> > @@ -5738,6 +5793,8 @@ void ice_rm_all_sw_replay_rule_info(struct
> ice_hw *hw)
> >   			l_head = &sw->recp_list[i].filt_replay_rules;
> >   			if (!sw->recp_list[i].adv_rule)
> >   				ice_rem_sw_rule_info(hw, l_head);
> > +			else
> > +				ice_rem_adv_rule_info(hw, l_head);
> >   		}
> >   	}
> >   }
> >

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset
  2019-06-05 15:53     ` Stillwell Jr, Paul M
@ 2019-06-05 15:59       ` Maxime Coquelin
  2019-06-05 16:16         ` Stillwell Jr, Paul M
  0 siblings, 1 reply; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-05 15:59 UTC (permalink / raw)
  To: Stillwell Jr, Paul M, Rong, Leyi, Zhang, Qi Z; +Cc: dev, Raj, Victor



On 6/5/19 5:53 PM, Stillwell Jr, Paul M wrote:
>>> diff --git a/drivers/net/ice/base/ice_switch.c
>>> b/drivers/net/ice/base/ice_switch.c
>>> index c53021aed..ca0497ca7 100644
>>> --- a/drivers/net/ice/base/ice_switch.c
>>> +++ b/drivers/net/ice/base/ice_switch.c
>>> @@ -3033,6 +3033,27 @@ ice_rem_sw_rule_info(struct ice_hw *hw,
>> struct LIST_HEAD_TYPE *rule_head)
>>>    	}
>>>    }
>>>
>>> +/**
>>> + * ice_rem_adv_rule_info
>>> + * @hw: pointer to the hardware structure
>>> + * @rule_head: pointer to the switch list structure that we want to
>>> +delete  */ static void ice_rem_adv_rule_info(struct ice_hw *hw,
>>> +struct LIST_HEAD_TYPE *rule_head) {
>>> +	struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
>>> +	struct ice_adv_fltr_mgmt_list_entry *lst_itr;
>>> +
>>> +	if (LIST_EMPTY(rule_head))
>>> +		return;
>>
>> Is it necessary? If the list is empty, LIST_FOR_EACH_ENTRY will not loop and
>> status will be returned:
>>
> Yes, the check is necessary. This code gets consumed by multiple different OSs, and not all OSs implement LIST_FOR_EACH_ENTRY_SAFE the way that DPDK does. For example, if I'm understanding the Linux code correctly, the list_for_each_entry_safe code in Linux would not work correctly without checking LIST_EMPTY, since the Linux implementation doesn't have a check for NULL in its implementation of list_for_each_entry_safe.

Do you mean the same patch is upstreamed into the Linux kernel without any
adaptations?

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset
  2019-06-05 15:59       ` Maxime Coquelin
@ 2019-06-05 16:16         ` Stillwell Jr, Paul M
  2019-06-05 16:28           ` Maxime Coquelin
  0 siblings, 1 reply; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-05 16:16 UTC (permalink / raw)
  To: Maxime Coquelin, Rong, Leyi, Zhang, Qi Z; +Cc: dev, Raj, Victor

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, June 5, 2019 8:59 AM
> To: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>; Rong, Leyi
> <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Raj, Victor <victor.raj@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule
> after reset
> 
> 
> 
> On 6/5/19 5:53 PM, Stillwell Jr, Paul M wrote:
> >>> diff --git a/drivers/net/ice/base/ice_switch.c
> >>> b/drivers/net/ice/base/ice_switch.c
> >>> index c53021aed..ca0497ca7 100644
> >>> --- a/drivers/net/ice/base/ice_switch.c
> >>> +++ b/drivers/net/ice/base/ice_switch.c
> >>> @@ -3033,6 +3033,27 @@ ice_rem_sw_rule_info(struct ice_hw *hw,
> >> struct LIST_HEAD_TYPE *rule_head)
> >>>    	}
> >>>    }
> >>>
> >>> +/**
> >>> + * ice_rem_adv_rule_info
> >>> + * @hw: pointer to the hardware structure
> >>> + * @rule_head: pointer to the switch list structure that we want to
> >>> +delete  */ static void ice_rem_adv_rule_info(struct ice_hw *hw,
> >>> +struct LIST_HEAD_TYPE *rule_head) {
> >>> +	struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
> >>> +	struct ice_adv_fltr_mgmt_list_entry *lst_itr;
> >>> +
> >>> +	if (LIST_EMPTY(rule_head))
> >>> +		return;
> >>
> >> Is it necessary? If the list is empty, LIST_FOR_EACH_ENTRY will not
> >> loop and status will be returned:
> >>
> > Yes, the check is necessary. This code gets consumed by multiple different
> OSs and not all OSs implement the LIST_FOR_EACH_ENTRY_SAFE in the way
> that DPDK did. For example, if I'm understanding the Linux code correctly,
> the list_for_each_entry_safe code in Linux would not work correctly without
> checking LIST_EMPTY since the Linux implementation doesn't have a check
> for null in it's implementation of list_for_each_entry_safe.
> 
> Do you mean the same patch is upstreamed into Linux Kernel without any
> adaptations?

The same patch is planned to be upstreamed to the Linux kernel without any adaptations. Like I said, for Linux you have to check LIST_EMPTY, since the implementation of list_for_each_entry_safe doesn't check for NULL.
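
The portability concern described here can be illustrated with a minimal, self-contained sketch. The list type and iteration macro below are illustrative stand-ins (not the actual ice/base LIST_* macros or the kernel's list.h): the macro dereferences the head to prefetch the next element before the loop condition is tested, so entering it with a NULL head would dereference a null pointer unless the caller checks for emptiness first.

```c
#include <stddef.h>

/* Illustrative singly linked list whose head pointer is NULL when the
 * list is empty; a stand-in, not the real ice/base list types. */
struct node {
	int val;
	struct node *next;
};

/* A "safe" iteration macro that dereferences the head unconditionally
 * to prefetch the next element. With a NULL head this is undefined
 * behavior, which is why an emptiness guard is needed before use. */
#define FOR_EACH_SAFE(pos, tmp, head)			\
	for ((pos) = (head), (tmp) = (pos)->next;	\
	     (pos);					\
	     (pos) = (tmp), (tmp) = (pos) ? (pos)->next : NULL)

static int sum_list(struct node *head)
{
	struct node *itr, *tmp;
	int sum = 0;

	/* Guard analogous to the LIST_EMPTY check in the patch: never
	 * enter the iteration macro with an empty list. */
	if (!head)
		return 0;

	FOR_EACH_SAFE(itr, tmp, head)
		sum += itr->val;
	return sum;
}
```

With the guard in place, sum_list() on a three-node list {1, 2, 3} returns 6 and sum_list(NULL) safely returns 0; removing the guard would make the empty-list case crash inside the macro.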

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset
  2019-06-05 16:16         ` Stillwell Jr, Paul M
@ 2019-06-05 16:28           ` Maxime Coquelin
  2019-06-05 16:31             ` Stillwell Jr, Paul M
  0 siblings, 1 reply; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-05 16:28 UTC (permalink / raw)
  To: Stillwell Jr, Paul M, Rong, Leyi, Zhang, Qi Z; +Cc: dev, Raj, Victor



On 6/5/19 6:16 PM, Stillwell Jr, Paul M wrote:
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin@redhat.com>
>> Sent: Wednesday, June 5, 2019 8:59 AM
>> To: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>; Rong, Leyi
>> <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
>> Cc: dev@dpdk.org; Raj, Victor <victor.raj@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule
>> after reset
>>
>>
>>
>> On 6/5/19 5:53 PM, Stillwell Jr, Paul M wrote:
>>>>> diff --git a/drivers/net/ice/base/ice_switch.c
>>>>> b/drivers/net/ice/base/ice_switch.c
>>>>> index c53021aed..ca0497ca7 100644
>>>>> --- a/drivers/net/ice/base/ice_switch.c
>>>>> +++ b/drivers/net/ice/base/ice_switch.c
>>>>> @@ -3033,6 +3033,27 @@ ice_rem_sw_rule_info(struct ice_hw *hw,
>>>> struct LIST_HEAD_TYPE *rule_head)
>>>>>     	}
>>>>>     }
>>>>>
>>>>> +/**
>>>>> + * ice_rem_adv_rule_info
>>>>> + * @hw: pointer to the hardware structure
>>>>> + * @rule_head: pointer to the switch list structure that we want to
>>>>> +delete  */ static void ice_rem_adv_rule_info(struct ice_hw *hw,
>>>>> +struct LIST_HEAD_TYPE *rule_head) {
>>>>> +	struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
>>>>> +	struct ice_adv_fltr_mgmt_list_entry *lst_itr;
>>>>> +
>>>>> +	if (LIST_EMPTY(rule_head))
>>>>> +		return;
>>>>
>>>> Is it necessary? If the list is empty, LIST_FOR_EACH_ENTRY will not
>>>> loop and status will be returned:
>>>>
>>> Yes, the check is necessary. This code gets consumed by multiple different
>> OSs and not all OSs implement the LIST_FOR_EACH_ENTRY_SAFE in the way
>> that DPDK did. For example, if I'm understanding the Linux code correctly,
>> the list_for_each_entry_safe code in Linux would not work correctly without
>> checking LIST_EMPTY since the Linux implementation doesn't have a check
>> for null in it's implementation of list_for_each_entry_safe.
>>
>> Do you mean the same patch is upstreamed into Linux Kernel without any
>> adaptations?
> 
> The same patch is planned to be upstreamed in the Linux kernel without any adaptations. Like I said, for Linux you have to check for LIST_EMPTY since the implementation of list_for_each_entry_safe doesn't check for NULL.
> 

OK, thanks for the clarification.
It's a surprise to me that OS abstraction layers are now accepted
in the upstream kernel (like ice_acquire_lock, for instance).

Let's drop my comments about LIST_EMPTY then.

Maxime

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset
  2019-06-05 16:28           ` Maxime Coquelin
@ 2019-06-05 16:31             ` Stillwell Jr, Paul M
  0 siblings, 0 replies; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-05 16:31 UTC (permalink / raw)
  To: Maxime Coquelin, Rong, Leyi, Zhang, Qi Z; +Cc: dev, Raj, Victor

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, June 5, 2019 9:28 AM
> To: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>; Rong, Leyi
> <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Raj, Victor <victor.raj@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule
> after reset
> 
> 
> 
> On 6/5/19 6:16 PM, Stillwell Jr, Paul M wrote:
> >> -----Original Message-----
> >> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> >> Sent: Wednesday, June 5, 2019 8:59 AM
> >> To: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>; Rong, Leyi
> >> <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> >> Cc: dev@dpdk.org; Raj, Victor <victor.raj@intel.com>
> >> Subject: Re: [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced
> >> rule after reset
> >>
> >>
> >>
> >> On 6/5/19 5:53 PM, Stillwell Jr, Paul M wrote:
> >>>>> diff --git a/drivers/net/ice/base/ice_switch.c
> >>>>> b/drivers/net/ice/base/ice_switch.c
> >>>>> index c53021aed..ca0497ca7 100644
> >>>>> --- a/drivers/net/ice/base/ice_switch.c
> >>>>> +++ b/drivers/net/ice/base/ice_switch.c
> >>>>> @@ -3033,6 +3033,27 @@ ice_rem_sw_rule_info(struct ice_hw *hw,
> >>>> struct LIST_HEAD_TYPE *rule_head)
> >>>>>     	}
> >>>>>     }
> >>>>>
> >>>>> +/**
> >>>>> + * ice_rem_adv_rule_info
> >>>>> + * @hw: pointer to the hardware structure
> >>>>> + * @rule_head: pointer to the switch list structure that we want
> >>>>> +to delete  */ static void ice_rem_adv_rule_info(struct ice_hw
> >>>>> +*hw, struct LIST_HEAD_TYPE *rule_head) {
> >>>>> +	struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
> >>>>> +	struct ice_adv_fltr_mgmt_list_entry *lst_itr;
> >>>>> +
> >>>>> +	if (LIST_EMPTY(rule_head))
> >>>>> +		return;
> >>>>
> >>>> Is it necessary? If the list is empty, LIST_FOR_EACH_ENTRY will not
> >>>> loop and status will be returned:
> >>>>
> >>> Yes, the check is necessary. This code gets consumed by multiple
> >>> different
> >> OSs and not all OSs implement the LIST_FOR_EACH_ENTRY_SAFE in the
> way
> >> that DPDK did. For example, if I'm understanding the Linux code
> >> correctly, the list_for_each_entry_safe code in Linux would not work
> >> correctly without checking LIST_EMPTY since the Linux implementation
> >> doesn't have a check for null in it's implementation of
> list_for_each_entry_safe.
> >>
> >> Do you mean the same patch is upstreamed into Linux Kernel without
> >> any adaptations?
> >
> > The same patch is planned to be upstreamed in the Linux kernel without
> any adaptations. Like I said, for Linux you have to check for LIST_EMPTY since
> the implementation of list_for_each_entry_safe doesn't check for NULL.
> >
> 
> OK, thanks for the clarification.
> That's a surprise to me that OS abstraction layers are now accepted in
> upstream kernel (Like ice_acquire_lock for instance).
> 

Just to further clarify, when the patch goes upstream in Linux, LIST_FOR_EACH_ENTRY_SAFE and all the other LIST_* macros in the code will get replaced with the corresponding list_* macros in the Linux kernel. We have a Coccinelle rule to replace these in Linux.

> Let's drop my comments about LIST_EMPTY then.
> 
> Maxime

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional switch rules
  2019-06-05 12:24   ` Maxime Coquelin
@ 2019-06-05 16:34     ` Stillwell Jr, Paul M
  2019-06-07 12:41       ` Maxime Coquelin
  0 siblings, 1 reply; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-05 16:34 UTC (permalink / raw)
  To: Maxime Coquelin, Rong, Leyi, Zhang, Qi Z; +Cc: dev, Nowlin, Dan

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, June 5, 2019 5:24 AM
> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Nowlin, Dan <dan.nowlin@intel.com>; Stillwell Jr, Paul M
> <paul.m.stillwell.jr@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional
> switch rules
> 
> 
> 
> On 6/4/19 7:42 AM, Leyi Rong wrote:
> > Add capability to create inner IP and inner TCP switch recipes and
> > rules. Change UDP tunnel dummy packet to accommodate the training of
> > these new rules.
> >
> > Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> > ---
> >   drivers/net/ice/base/ice_protocol_type.h |   8 +-
> >   drivers/net/ice/base/ice_switch.c        | 361 ++++++++++++-----------
> >   drivers/net/ice/base/ice_switch.h        |   1 +
> >   3 files changed, 203 insertions(+), 167 deletions(-)
> >
> > diff --git a/drivers/net/ice/base/ice_protocol_type.h
> > b/drivers/net/ice/base/ice_protocol_type.h
> > index 82822fb74..38bed7a79 100644
> > --- a/drivers/net/ice/base/ice_protocol_type.h
> > +++ b/drivers/net/ice/base/ice_protocol_type.h
> > @@ -35,6 +35,7 @@ enum ice_protocol_type {
> >   	ICE_IPV6_IL,
> >   	ICE_IPV6_OFOS,
> >   	ICE_TCP_IL,
> > +	ICE_UDP_OF,
> >   	ICE_UDP_ILOS,
> >   	ICE_SCTP_IL,
> >   	ICE_VXLAN,
> > @@ -112,6 +113,7 @@ enum ice_prot_id {
> >   #define ICE_IPV6_OFOS_HW	40
> >   #define ICE_IPV6_IL_HW		41
> >   #define ICE_TCP_IL_HW		49
> > +#define ICE_UDP_OF_HW		52
> >   #define ICE_UDP_ILOS_HW		53
> >   #define ICE_SCTP_IL_HW		96
> >
> > @@ -188,8 +190,7 @@ struct ice_l4_hdr {
> >   struct ice_udp_tnl_hdr {
> >   	u16 field;
> >   	u16 proto_type;
> > -	u16 vni;
> > -	u16 reserved;
> > +	u32 vni;	/* only use lower 24-bits */
> >   };
> >
> >   struct ice_nvgre {
> > @@ -225,6 +226,7 @@ struct ice_prot_lkup_ext {
> >   	u8 n_val_words;
> >   	/* create a buffer to hold max words per recipe */
> >   	u16 field_off[ICE_MAX_CHAIN_WORDS];
> > +	u16 field_mask[ICE_MAX_CHAIN_WORDS];
> >
> >   	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
> >
> > @@ -235,6 +237,7 @@ struct ice_prot_lkup_ext {
> >   struct ice_pref_recipe_group {
> >   	u8 n_val_pairs;		/* Number of valid pairs */
> >   	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
> > +	u16 mask[ICE_NUM_WORDS_RECIPE];
> >   };
> >
> >   struct ice_recp_grp_entry {
> > @@ -244,6 +247,7 @@ struct ice_recp_grp_entry {
> >   	u16 rid;
> >   	u8 chain_idx;
> >   	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
> > +	u16 fv_mask[ICE_NUM_WORDS_RECIPE];
> >   	struct ice_pref_recipe_group r_group;
> >   };
> >   #endif /* _ICE_PROTOCOL_TYPE_H_ */
> > diff --git a/drivers/net/ice/base/ice_switch.c
> > b/drivers/net/ice/base/ice_switch.c
> > index 373acb7a6..02fb49dba 100644
> > --- a/drivers/net/ice/base/ice_switch.c
> > +++ b/drivers/net/ice/base/ice_switch.c
> > @@ -53,60 +53,109 @@ static const u8
> dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
> >   	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
> >   	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
> >
> > +static const struct ice_dummy_pkt_offsets {
> > +	enum ice_protocol_type type;
> > +	u16 offset; /* ICE_PROTOCOL_LAST indicates end of list */ }
> > +dummy_gre_packet_offsets[] = {
> > +	{ ICE_MAC_OFOS,		0 },
> > +	{ ICE_IPV4_OFOS,	14 },
> > +	{ ICE_VXLAN,		34 },
> > +	{ ICE_MAC_IL,		42 },
> > +	{ ICE_IPV4_IL,		54 },
> > +	{ ICE_PROTOCOL_LAST,	0 },
> > +};
> > +
> >   static const
> > -u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* Ether starts */
> > +u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* ICE_MAC_OFOS 0
> */
> >   			  0, 0, 0, 0,
> >   			  0, 0, 0, 0,
> > -			  0x08, 0,		/* Ether ends */
> > -			  0x45, 0, 0, 0x3E,	/* IP starts */
> > +			  0x08, 0,
> > +			  0x45, 0, 0, 0x3E,	/* ICE_IPV4_OFOS 14 */
> >   			  0, 0, 0, 0,
> >   			  0, 0x2F, 0, 0,
> >   			  0, 0, 0, 0,
> > -			  0, 0, 0, 0,		/* IP ends */
> > -			  0x80, 0, 0x65, 0x58,	/* GRE starts */
> > -			  0, 0, 0, 0,		/* GRE ends */
> > -			  0, 0, 0, 0,		/* Ether starts */
> > -			  0, 0, 0, 0,
> > -			  0, 0, 0, 0,
> > -			  0x08, 0,		/* Ether ends */
> > -			  0x45, 0, 0, 0x14,	/* IP starts */
> >   			  0, 0, 0, 0,
> > +			  0x80, 0, 0x65, 0x58,	/* ICE_VXLAN_GRE 34 */
> >   			  0, 0, 0, 0,
> > +			  0, 0, 0, 0,		/* ICE_MAC_IL 42 */
> >   			  0, 0, 0, 0,
> > -			  0, 0, 0, 0		/* IP ends */
> > -			};
> > -
> > -static const u8
> > -dummy_udp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
> > -			  0, 0, 0, 0,
> > -			  0, 0, 0, 0,
> > -			  0x08, 0,		/* Ether ends */
> > -			  0x45, 0, 0, 0x32,	/* IP starts */
> >   			  0, 0, 0, 0,
> > -			  0, 0x11, 0, 0,
> > +			  0x08, 0,
> > +			  0x45, 0, 0, 0x14,	/* ICE_IPV4_IL 54 */
> >   			  0, 0, 0, 0,
> > -			  0, 0, 0, 0,		/* IP ends */
> > -			  0, 0, 0x12, 0xB5,	/* UDP start*/
> > -			  0, 0x1E, 0, 0,	/* UDP end*/
> > -			  0, 0, 0, 0,		/* VXLAN start */
> > -			  0, 0, 0, 0,		/* VXLAN end*/
> > -			  0, 0, 0, 0,		/* Ether starts */
> >   			  0, 0, 0, 0,
> >   			  0, 0, 0, 0,
> > -			  0, 0			/* Ether ends */
> > +			  0, 0, 0, 0
> >   			};
> >
> > +static const
> > +struct ice_dummy_pkt_offsets dummy_udp_tun_packet_offsets[] = {
> > +	{ ICE_MAC_OFOS,		0 },
> > +	{ ICE_IPV4_OFOS,	14 },
> > +	{ ICE_UDP_OF,		34 },
> > +	{ ICE_VXLAN,		42 },
> > +	{ ICE_MAC_IL,		50 },
> > +	{ ICE_IPV4_IL,		64 },
> > +	{ ICE_TCP_IL,		84 },
> > +	{ ICE_PROTOCOL_LAST,	0 },
> > +};
> > +
> > +static const
> > +u8 dummy_udp_tun_packet[] = {
> > +	0x00, 0x00, 0x00, 0x00,  /* ICE_MAC_OFOS 0 */
> > +	0x00, 0x00, 0x00, 0x00,
> > +	0x00, 0x00, 0x00, 0x00,
> > +	0x08, 0x00,
> > +
> > +	0x45, 0x00, 0x00, 0x5a, /* ICE_IPV4_OFOS 14 */
> > +	0x00, 0x01, 0x00, 0x00,
> > +	0x40, 0x11, 0x00, 0x00,
> > +	0x00, 0x00, 0x00, 0x00,
> > +	0x00, 0x00, 0x00, 0x00,
> > +
> > +	0x00, 0x00, 0x12, 0xb5, /* ICE_UDP_OF 34 */
> > +	0x00, 0x46, 0x00, 0x00,
> > +
> > +	0x04, 0x00, 0x00, 0x03, /* ICE_VXLAN 42 */
> > +	0x00, 0x00, 0x00, 0x00,
> > +
> > +	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_IL 50 */
> > +	0x00, 0x00, 0x00, 0x00,
> > +	0x00, 0x00, 0x00, 0x00,
> > +	0x08, 0x00,
> > +
> > +	0x45, 0x00, 0x00, 0x28, /* ICE_IPV4_IL 64 */
> > +	0x00, 0x01, 0x00, 0x00,
> > +	0x40, 0x06, 0x00, 0x00,
> > +	0x00, 0x00, 0x00, 0x00,
> > +	0x00, 0x00, 0x00, 0x00,
> > +
> > +	0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 84 */
> > +	0x00, 0x00, 0x00, 0x00,
> > +	0x00, 0x00, 0x00, 0x00,
> > +	0x50, 0x02, 0x20, 0x00,
> > +	0x00, 0x00, 0x00, 0x00
> > +};
> > +
> > +static const
> > +struct ice_dummy_pkt_offsets dummy_tcp_tun_packet_offsets[] = {
> > +	{ ICE_MAC_OFOS,		0 },
> > +	{ ICE_IPV4_OFOS,	14 },
> > +	{ ICE_TCP_IL,		34 },
> > +	{ ICE_PROTOCOL_LAST,	0 },
> > +};
> > +
> >   static const u8
> > -dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
> > +dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* ICE_MAC_OFOS 0
> */
> >   			  0, 0, 0, 0,
> >   			  0, 0, 0, 0,
> > -			  0x08, 0,              /* Ether ends */
> > -			  0x45, 0, 0, 0x28,     /* IP starts */
> > +			  0x08, 0,
> > +			  0x45, 0, 0, 0x28,     /* ICE_IPV4_OFOS 14 */
> >   			  0, 0x01, 0, 0,
> >   			  0x40, 0x06, 0xF5, 0x69,
> >   			  0, 0, 0, 0,
> > -			  0, 0, 0, 0,   /* IP ends */
> >   			  0, 0, 0, 0,
> > +			  0, 0, 0, 0,		/* ICE_TCP_IL 34 */
> >   			  0, 0, 0, 0,
> >   			  0, 0, 0, 0,
> >   			  0x50, 0x02, 0x20,
> > @@ -184,6 +233,9 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct
> ice_sw_recipe *recps, u8 rid)
> >   			u8 lkup_indx = root_bufs.content.lkup_indx[i + 1];
> >
> >   			rg_entry->fv_idx[i] = lkup_indx;
> > +			rg_entry->fv_mask[i] =
> > +				LE16_TO_CPU(root_bufs.content.mask[i +
> 1]);
> > +
> >   			/* If the recipe is a chained recipe then all its
> >   			 * child recipe's result will have a result index.
> >   			 * To fill fv_words we should not use those result
> @@ -4254,10
> > +4306,11 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
> >   	{ ICE_IPV6_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
> >   				 26, 28, 30, 32, 34, 36, 38 } },
> >   	{ ICE_TCP_IL,		{ 0, 2 } },
> > +	{ ICE_UDP_OF,		{ 0, 2 } },
> >   	{ ICE_UDP_ILOS,		{ 0, 2 } },
> >   	{ ICE_SCTP_IL,		{ 0, 2 } },
> > -	{ ICE_VXLAN,		{ 8, 10, 12 } },
> > -	{ ICE_GENEVE,		{ 8, 10, 12 } },
> > +	{ ICE_VXLAN,		{ 8, 10, 12, 14 } },
> > +	{ ICE_GENEVE,		{ 8, 10, 12, 14 } },
> >   	{ ICE_VXLAN_GPE,	{ 0, 2, 4 } },
> >   	{ ICE_NVGRE,		{ 0, 2 } },
> >   	{ ICE_PROTOCOL_LAST,	{ 0 } }
> > @@ -4270,11 +4323,14 @@ static const struct ice_prot_ext_tbl_entry
> ice_prot_ext[] = {
> >    */
> >   static const struct ice_pref_recipe_group ice_recipe_pack[] = {
> >   	{3, { { ICE_MAC_OFOS_HW, 0, 0 }, { ICE_MAC_OFOS_HW, 2, 0 },
> > -	      { ICE_MAC_OFOS_HW, 4, 0 } } },
> > +	      { ICE_MAC_OFOS_HW, 4, 0 } }, { 0xffff, 0xffff, 0xffff, 0xffff
> > +} },
> >   	{4, { { ICE_MAC_IL_HW, 0, 0 }, { ICE_MAC_IL_HW, 2, 0 },
> > -	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } } },
> > -	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } } },
> > -	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } } },
> > +	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } },
> > +		{ 0xffff, 0xffff, 0xffff, 0xffff } },
> > +	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } },
> > +		{ 0xffff, 0xffff, 0xffff, 0xffff } },
> > +	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } },
> > +		{ 0xffff, 0xffff, 0xffff, 0xffff } },
> >   };
> >
> >   static const struct ice_protocol_entry ice_prot_id_tbl[] = { @@
> > -4285,6 +4341,7 @@ static const struct ice_protocol_entry ice_prot_id_tbl[]
> = {
> >   	{ ICE_IPV6_OFOS,	ICE_IPV6_OFOS_HW },
> >   	{ ICE_IPV6_IL,		ICE_IPV6_IL_HW },
> >   	{ ICE_TCP_IL,		ICE_TCP_IL_HW },
> > +	{ ICE_UDP_OF,		ICE_UDP_OF_HW },
> >   	{ ICE_UDP_ILOS,		ICE_UDP_ILOS_HW },
> >   	{ ICE_SCTP_IL,		ICE_SCTP_IL_HW },
> >   	{ ICE_VXLAN,		ICE_UDP_OF_HW },
> > @@ -4403,7 +4460,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem
> *rule,
> >   	word = lkup_exts->n_val_words;
> >
> >   	for (j = 0; j < sizeof(rule->m_u) / sizeof(u16); j++)
> > -		if (((u16 *)&rule->m_u)[j] == 0xffff &&
> > +		if (((u16 *)&rule->m_u)[j] &&
> >   		    rule->type < ARRAY_SIZE(ice_prot_ext)) {
> >   			/* No more space to accommodate */
> >   			if (word >= ICE_MAX_CHAIN_WORDS)
> > @@ -4412,6 +4469,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem
> *rule,
> >   				ice_prot_ext[rule->type].offs[j];
> >   			lkup_exts->fv_words[word].prot_id =
> >   				ice_prot_id_tbl[rule->type].protocol_id;
> > +			lkup_exts->field_mask[word] = ((u16 *)&rule-
> >m_u)[j];
> >   			word++;
> >   		}
> >
> > @@ -4535,6 +4593,7 @@ ice_create_first_fit_recp_def(struct ice_hw *hw,
> >   				lkup_exts->fv_words[j].prot_id;
> >   			grp->pairs[grp->n_val_pairs].off =
> >   				lkup_exts->fv_words[j].off;
> > +			grp->mask[grp->n_val_pairs] = lkup_exts-
> >field_mask[j];
> >   			grp->n_val_pairs++;
> >   		}
> >
> > @@ -4569,14 +4628,22 @@ ice_fill_fv_word_index(struct ice_hw *hw,
> > struct LIST_HEAD_TYPE *fv_list,
> >
> >   		for (i = 0; i < rg->r_group.n_val_pairs; i++) {
> >   			struct ice_fv_word *pr;
> > +			u16 mask;
> >   			u8 j;
> >
> >   			pr = &rg->r_group.pairs[i];
> > +			mask = rg->r_group.mask[i];
> > +
> >   			for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
> >   				if (fv_ext[j].prot_id == pr->prot_id &&
> >   				    fv_ext[j].off == pr->off) {
> >   					/* Store index of field vector */
> >   					rg->fv_idx[i] = j;
> > +					/* Mask is given by caller as big
> > +					 * endian, but sent to FW as little
> > +					 * endian
> > +					 */
> > +					rg->fv_mask[i] = mask << 8 | mask >>
> 8;
> >   					break;
> >   				}
> >   		}
> > @@ -4674,7 +4741,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct
> > ice_sw_recipe *rm,
> >
> >   		for (i = 0; i < entry->r_group.n_val_pairs; i++) {
> >   			buf[recps].content.lkup_indx[i + 1] = entry-
> >fv_idx[i];
> > -			buf[recps].content.mask[i + 1] =
> CPU_TO_LE16(0xFFFF);
> > +			buf[recps].content.mask[i + 1] =
> > +				CPU_TO_LE16(entry->fv_mask[i]);
> >   		}
> >
> >   		if (rm->n_grp_count > 1) {
> > @@ -4896,6 +4964,8 @@ ice_create_recipe_group(struct ice_hw *hw,
> struct ice_sw_recipe *rm,
> >   		rm->n_ext_words = lkup_exts->n_val_words;
> >   		ice_memcpy(&rm->ext_words, lkup_exts->fv_words,
> >   			   sizeof(rm->ext_words),
> ICE_NONDMA_TO_NONDMA);
> > +		ice_memcpy(rm->word_masks, lkup_exts->field_mask,
> > +			   sizeof(rm->word_masks),
> ICE_NONDMA_TO_NONDMA);
> >   		goto out;
> >   	}
> >
> > @@ -5097,16 +5167,8 @@ ice_add_adv_recipe(struct ice_hw *hw, struct
> ice_adv_lkup_elem *lkups,
> >   	return status;
> >   }
> >
> > -#define ICE_MAC_HDR_OFFSET	0
> > -#define ICE_IP_HDR_OFFSET	14
> > -#define ICE_GRE_HDR_OFFSET	34
> > -#define ICE_MAC_IL_HDR_OFFSET	42
> > -#define ICE_IP_IL_HDR_OFFSET	56
> > -#define ICE_L4_HDR_OFFSET	34
> > -#define ICE_UDP_TUN_HDR_OFFSET	42
> > -
> >   /**
> > - * ice_find_dummy_packet - find dummy packet with given match
> > criteria
> > + * ice_find_dummy_packet - find dummy packet by tunnel type
> >    *
> >    * @lkups: lookup elements or match criteria for the advanced recipe, one
> >    *	   structure per protocol header
> > @@ -5114,17 +5176,20 @@ ice_add_adv_recipe(struct ice_hw *hw, struct
> ice_adv_lkup_elem *lkups,
> >    * @tun_type: tunnel type from the match criteria
> >    * @pkt: dummy packet to fill according to filter match criteria
> >    * @pkt_len: packet length of dummy packet
> > + * @offsets: pointer to receive the pointer to the offsets for the
> > + packet
> >    */
> >   static void
> >   ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16
> lkups_cnt,
> >   		      enum ice_sw_tunnel_type tun_type, const u8 **pkt,
> > -		      u16 *pkt_len)
> > +		      u16 *pkt_len,
> > +		      const struct ice_dummy_pkt_offsets **offsets)
> >   {
> >   	u16 i;
> >
> >   	if (tun_type == ICE_SW_TUN_NVGRE || tun_type ==
> ICE_ALL_TUNNELS) {
> >   		*pkt = dummy_gre_packet;
> >   		*pkt_len = sizeof(dummy_gre_packet);
> > +		*offsets = dummy_gre_packet_offsets;
> >   		return;
> >   	}
> >
> > @@ -5132,6 +5197,7 @@ ice_find_dummy_packet(struct
> ice_adv_lkup_elem *lkups, u16 lkups_cnt,
> >   	    tun_type == ICE_SW_TUN_VXLAN_GPE) {
> >   		*pkt = dummy_udp_tun_packet;
> >   		*pkt_len = sizeof(dummy_udp_tun_packet);
> > +		*offsets = dummy_udp_tun_packet_offsets;
> >   		return;
> >   	}
> >
> > @@ -5139,12 +5205,14 @@ ice_find_dummy_packet(struct
> ice_adv_lkup_elem *lkups, u16 lkups_cnt,
> >   		if (lkups[i].type == ICE_UDP_ILOS) {
> >   			*pkt = dummy_udp_tun_packet;
> >   			*pkt_len = sizeof(dummy_udp_tun_packet);
> > +			*offsets = dummy_udp_tun_packet_offsets;
> >   			return;
> >   		}
> >   	}
> >
> >   	*pkt = dummy_tcp_tun_packet;
> >   	*pkt_len = sizeof(dummy_tcp_tun_packet);
> > +	*offsets = dummy_tcp_tun_packet_offsets;
> >   }
> >
> >   /**
> > @@ -5153,16 +5221,16 @@ ice_find_dummy_packet(struct
> ice_adv_lkup_elem *lkups, u16 lkups_cnt,
> >    * @lkups: lookup elements or match criteria for the advanced recipe, one
> >    *	   structure per protocol header
> >    * @lkups_cnt: number of protocols
> > - * @tun_type: to know if the dummy packet is supposed to be tunnel
> packet
> >    * @s_rule: stores rule information from the match criteria
> >    * @dummy_pkt: dummy packet to fill according to filter match criteria
> >    * @pkt_len: packet length of dummy packet
> > + * @offsets: offset info for the dummy packet
> >    */
> > -static void
> > +static enum ice_status
> >   ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16
> lkups_cnt,
> > -			  enum ice_sw_tunnel_type tun_type,
> >   			  struct ice_aqc_sw_rules_elem *s_rule,
> > -			  const u8 *dummy_pkt, u16 pkt_len)
> > +			  const u8 *dummy_pkt, u16 pkt_len,
> > +			  const struct ice_dummy_pkt_offsets *offsets)
> >   {
> >   	u8 *pkt;
> >   	u16 i;
> > @@ -5175,124 +5243,74 @@ ice_fill_adv_dummy_packet(struct
> ice_adv_lkup_elem *lkups, u16 lkups_cnt,
> >   	ice_memcpy(pkt, dummy_pkt, pkt_len,
> ICE_NONDMA_TO_NONDMA);
> >
> >   	for (i = 0; i < lkups_cnt; i++) {
> > -		u32 len, pkt_off, hdr_size, field_off;
> > +		enum ice_protocol_type type;
> > +		u16 offset = 0, len = 0, j;
> > +		bool found = false;
> > +
> > +		/* find the start of this layer; it should be found since this
> > +		 * was already checked when search for the dummy packet
> > +		 */
> > +		type = lkups[i].type;
> > +		for (j = 0; offsets[j].type != ICE_PROTOCOL_LAST; j++) {
> > +			if (type == offsets[j].type) {
> > +				offset = offsets[j].offset;
> > +				found = true;
> > +				break;
> > +			}
> > +		}
> > +		/* this should never happen in a correct calling sequence */
> > +		if (!found)
> > +			return ICE_ERR_PARAM;
> >
> >   		switch (lkups[i].type) {
> >   		case ICE_MAC_OFOS:
> >   		case ICE_MAC_IL:
> > -			pkt_off = offsetof(struct ice_ether_hdr, dst_addr) +
> > -				((lkups[i].type == ICE_MAC_IL) ?
> > -				 ICE_MAC_IL_HDR_OFFSET : 0);
> > -			len = sizeof(lkups[i].h_u.eth_hdr.dst_addr);
> > -			if ((tun_type == ICE_SW_TUN_VXLAN ||
> > -			     tun_type == ICE_SW_TUN_GENEVE ||
> > -			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
> > -			     lkups[i].type == ICE_MAC_IL) {
> > -				pkt_off += sizeof(struct ice_udp_tnl_hdr);
> > -			}
> > -
> > -			ice_memcpy(&pkt[pkt_off],
> > -				   &lkups[i].h_u.eth_hdr.dst_addr, len,
> > -				   ICE_NONDMA_TO_NONDMA);
> > -			pkt_off = offsetof(struct ice_ether_hdr, src_addr) +
> > -				((lkups[i].type == ICE_MAC_IL) ?
> > -				 ICE_MAC_IL_HDR_OFFSET : 0);
> > -			len = sizeof(lkups[i].h_u.eth_hdr.src_addr);
> > -			if ((tun_type == ICE_SW_TUN_VXLAN ||
> > -			     tun_type == ICE_SW_TUN_GENEVE ||
> > -			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
> > -			     lkups[i].type == ICE_MAC_IL) {
> > -				pkt_off += sizeof(struct ice_udp_tnl_hdr);
> > -			}
> > -			ice_memcpy(&pkt[pkt_off],
> > -				   &lkups[i].h_u.eth_hdr.src_addr, len,
> > -				   ICE_NONDMA_TO_NONDMA);
> > -			if (lkups[i].h_u.eth_hdr.ethtype_id) {
> > -				pkt_off = offsetof(struct ice_ether_hdr,
> > -						   ethtype_id) +
> > -					((lkups[i].type == ICE_MAC_IL) ?
> > -					 ICE_MAC_IL_HDR_OFFSET : 0);
> > -				len =
> sizeof(lkups[i].h_u.eth_hdr.ethtype_id);
> > -				if ((tun_type == ICE_SW_TUN_VXLAN ||
> > -				     tun_type == ICE_SW_TUN_GENEVE ||
> > -				     tun_type == ICE_SW_TUN_VXLAN_GPE)
> &&
> > -				     lkups[i].type == ICE_MAC_IL) {
> > -					pkt_off +=
> > -						sizeof(struct
> ice_udp_tnl_hdr);
> > -				}
> > -				ice_memcpy(&pkt[pkt_off],
> > -					   &lkups[i].h_u.eth_hdr.ethtype_id,
> > -					   len, ICE_NONDMA_TO_NONDMA);
> > -			}
> > +			len = sizeof(struct ice_ether_hdr);
> >   			break;
> >   		case ICE_IPV4_OFOS:
> > -			hdr_size = sizeof(struct ice_ipv4_hdr);
> > -			if (lkups[i].h_u.ipv4_hdr.dst_addr) {
> > -				pkt_off = ICE_IP_HDR_OFFSET +
> > -					   offsetof(struct ice_ipv4_hdr,
> > -						    dst_addr);
> > -				field_off = offsetof(struct ice_ipv4_hdr,
> > -						     dst_addr);
> > -				len = hdr_size - field_off;
> > -				ice_memcpy(&pkt[pkt_off],
> > -					   &lkups[i].h_u.ipv4_hdr.dst_addr,
> > -					   len, ICE_NONDMA_TO_NONDMA);
> > -			}
> > -			if (lkups[i].h_u.ipv4_hdr.src_addr) {
> > -				pkt_off = ICE_IP_HDR_OFFSET +
> > -					   offsetof(struct ice_ipv4_hdr,
> > -						    src_addr);
> > -				field_off = offsetof(struct ice_ipv4_hdr,
> > -						     src_addr);
> > -				len = hdr_size - field_off;
> > -				ice_memcpy(&pkt[pkt_off],
> > -					   &lkups[i].h_u.ipv4_hdr.src_addr,
> > -					   len, ICE_NONDMA_TO_NONDMA);
> > -			}
> > -			break;
> >   		case ICE_IPV4_IL:
> > +			len = sizeof(struct ice_ipv4_hdr);
> >   			break;
> >   		case ICE_TCP_IL:
> > +		case ICE_UDP_OF:
> >   		case ICE_UDP_ILOS:
> > +			len = sizeof(struct ice_l4_hdr);
> > +			break;
> >   		case ICE_SCTP_IL:
> > -			hdr_size = sizeof(struct ice_udp_tnl_hdr);
> > -			if (lkups[i].h_u.l4_hdr.dst_port) {
> > -				pkt_off = ICE_L4_HDR_OFFSET +
> > -					   offsetof(struct ice_l4_hdr,
> > -						    dst_port);
> > -				field_off = offsetof(struct ice_l4_hdr,
> > -						     dst_port);
> > -				len =  hdr_size - field_off;
> > -				ice_memcpy(&pkt[pkt_off],
> > -					   &lkups[i].h_u.l4_hdr.dst_port,
> > -					   len, ICE_NONDMA_TO_NONDMA);
> > -			}
> > -			if (lkups[i].h_u.l4_hdr.src_port) {
> > -				pkt_off = ICE_L4_HDR_OFFSET +
> > -					offsetof(struct ice_l4_hdr, src_port);
> > -				field_off = offsetof(struct ice_l4_hdr,
> > -						     src_port);
> > -				len =  hdr_size - field_off;
> > -				ice_memcpy(&pkt[pkt_off],
> > -					   &lkups[i].h_u.l4_hdr.src_port,
> > -					   len, ICE_NONDMA_TO_NONDMA);
> > -			}
> > +			len = sizeof(struct ice_sctp_hdr);
> >   			break;
> >   		case ICE_VXLAN:
> >   		case ICE_GENEVE:
> >   		case ICE_VXLAN_GPE:
> > -			pkt_off = ICE_UDP_TUN_HDR_OFFSET +
> > -				   offsetof(struct ice_udp_tnl_hdr, vni);
> > -			field_off = offsetof(struct ice_udp_tnl_hdr, vni);
> > -			len =  sizeof(struct ice_udp_tnl_hdr) - field_off;
> > -			ice_memcpy(&pkt[pkt_off],
> &lkups[i].h_u.tnl_hdr.vni,
> > -				   len, ICE_NONDMA_TO_NONDMA);
> > +			len = sizeof(struct ice_udp_tnl_hdr);
> >   			break;
> >   		default:
> > -			break;
> > +			return ICE_ERR_PARAM;
> >   		}
> > +
> > +		/* the length should be a word multiple */
> > +		if (len % ICE_BYTES_PER_WORD)
> > +			return ICE_ERR_CFG;
> > +
> > +		/* We have the offset to the header start, the length, the
> > +		 * caller's header values and mask. Use this information to
> > +		 * copy the data into the dummy packet appropriately based
> on
> > +		 * the mask. Note that we need to only write the bits as
> > +		 * indicated by the mask to make sure we don't improperly
> write
> > +		 * over any significant packet data.
> > +		 */
> > +		for (j = 0; j < len / sizeof(u16); j++)
> > +			if (((u16 *)&lkups[i].m_u)[j])
> > +				((u16 *)(pkt + offset))[j] =
> > +					(((u16 *)(pkt + offset))[j] &
> > +					 ~((u16 *)&lkups[i].m_u)[j]) |
> > +					(((u16 *)&lkups[i].h_u)[j] &
> > +					 ((u16 *)&lkups[i].m_u)[j]);
> >   	}
> > +
> >   	s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(pkt_len);
> > +
> > +	return ICE_SUCCESS;
> >   }
> >
> >   /**
> > @@ -5446,7 +5464,7 @@ ice_adv_add_update_vsi_list(struct ice_hw *hw,
> >   }
> >
> >   /**
> > - * ice_add_adv_rule - create an advanced switch rule
> > + * ice_add_adv_rule - helper function to create an advanced switch
> > + rule
> >    * @hw: pointer to the hardware structure
> >    * @lkups: information on the words that needs to be looked up. All words
> >    * together makes one recipe
> > @@ -5470,11 +5488,13 @@ ice_add_adv_rule(struct ice_hw *hw, struct
> ice_adv_lkup_elem *lkups,
> >   {
> >   	struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL;
> >   	u16 rid = 0, i, pkt_len, rule_buf_sz, vsi_handle;
> > -	struct ice_aqc_sw_rules_elem *s_rule;
> > +	const struct ice_dummy_pkt_offsets *pkt_offsets;
> > +	struct ice_aqc_sw_rules_elem *s_rule = NULL;
> >   	struct LIST_HEAD_TYPE *rule_head;
> >   	struct ice_switch_info *sw;
> >   	enum ice_status status;
> >   	const u8 *pkt = NULL;
> > +	bool found = false;
> >   	u32 act = 0;
> >
> >   	if (!lkups_cnt)
> > @@ -5483,13 +5503,25 @@ ice_add_adv_rule(struct ice_hw *hw, struct
> ice_adv_lkup_elem *lkups,
> >   	for (i = 0; i < lkups_cnt; i++) {
> >   		u16 j, *ptr;
> >
> > -		/* Validate match masks to make sure they match complete
> 16-bit
> > -		 * words.
> > +		/* Validate match masks to make sure that there is
> something
> > +		 * to match.
> >   		 */
> > -		ptr = (u16 *)&lkups->m_u;
> > +		ptr = (u16 *)&lkups[i].m_u;
> >   		for (j = 0; j < sizeof(lkups->m_u) / sizeof(u16); j++)
> > -			if (ptr[j] != 0 && ptr[j] != 0xffff)
> > -				return ICE_ERR_PARAM;
> > +			if (ptr[j] != 0) {
> > +				found = true;
> > +				break;
> > +			}
> > +	}
> > +	if (!found)
> > +		return ICE_ERR_PARAM;
> > +
> > +	/* make sure that we can locate a dummy packet */
> > +	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
> &pkt_len,
> > +			      &pkt_offsets);
> > +	if (!pkt) {
> > +		status = ICE_ERR_PARAM;
> > +		goto err_ice_add_adv_rule;
> >   	}
> >
> >   	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI || @@ -5530,8
> > +5562,6 @@ ice_add_adv_rule(struct ice_hw *hw, struct
> ice_adv_lkup_elem *lkups,
> >   		}
> >   		return status;
> >   	}
> > -	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
> > -			      &pkt_len);
> >   	rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
> >   	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw,
> rule_buf_sz);
> >   	if (!s_rule)
> > @@ -5576,8 +5606,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct
> ice_adv_lkup_elem *lkups,
> >   	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid);
> >   	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
> >
> > -	ice_fill_adv_dummy_packet(lkups, lkups_cnt, rinfo->tun_type,
> s_rule,
> > -				  pkt, pkt_len);
> > +	ice_fill_adv_dummy_packet(lkups, lkups_cnt, s_rule, pkt, pkt_len,
> > +				  pkt_offsets);
> 
> Now that ice_fill_adv_dummy_packet() propagates an error, the caller
> should do the same.
> 

OK, can we accept this patch as-is and propagate the error in a separate follow-up patch? Getting the error propagation done properly will take some time.

> >
> >   	status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
> >   				 rule_buf_sz, 1, ice_aqc_opc_add_sw_rules,
> @@ -5753,11 +5783,12
> > @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem
> *lkups,
> >   		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo)
> >   {
> >   	struct ice_adv_fltr_mgmt_list_entry *list_elem;
> > +	const struct ice_dummy_pkt_offsets *offsets;
> >   	struct ice_prot_lkup_ext lkup_exts;
> >   	u16 rule_buf_sz, pkt_len, i, rid;
> > +	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
> >   	enum ice_status status = ICE_SUCCESS;
> >   	bool remove_rule = false;
> > -	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
> >   	const u8 *pkt = NULL;
> >   	u16 vsi_handle;
> >
> > @@ -5805,7 +5836,7 @@ ice_rem_adv_rule(struct ice_hw *hw, struct
> ice_adv_lkup_elem *lkups,
> >   		struct ice_aqc_sw_rules_elem *s_rule;
> >
> >   		ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type,
> &pkt,
> > -				      &pkt_len);
> > +				      &pkt_len, &offsets);
> >   		rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE +
> pkt_len;
> >   		s_rule =
> >   			(struct ice_aqc_sw_rules_elem *)ice_malloc(hw, diff
> --git
> > a/drivers/net/ice/base/ice_switch.h
> > b/drivers/net/ice/base/ice_switch.h
> > index 05b1170c9..db79e41eb 100644
> > --- a/drivers/net/ice/base/ice_switch.h
> > +++ b/drivers/net/ice/base/ice_switch.h
> > @@ -192,6 +192,7 @@ struct ice_sw_recipe {
> >   	 * recipe
> >   	 */
> >   	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
> > +	u16 word_masks[ICE_MAX_CHAIN_WORDS];
> >
> >   	/* if this recipe is a collection of other recipe */
> >   	u8 big_recp;
> >

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 14/49] net/ice/base: refactor HW table init function
  2019-06-05 10:35   ` Maxime Coquelin
@ 2019-06-05 18:10     ` Stillwell Jr, Paul M
  0 siblings, 0 replies; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-05 18:10 UTC (permalink / raw)
  To: Maxime Coquelin, Rong, Leyi, Zhang, Qi Z; +Cc: dev, Sridhar, Vignesh

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Wednesday, June 5, 2019 3:35 AM
> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Sridhar, Vignesh <vignesh.sridhar@intel.com>; Stillwell Jr,
> Paul M <paul.m.stillwell.jr@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 14/49] net/ice/base: refactor HW table init
> function
> 
> 
> 
> On 6/4/19 7:42 AM, Leyi Rong wrote:
> > 1. Separated the calls to initialize and allocate the HW XLT tables
> > from call to fill table. This is to allow the ice_init_hw_tbls call to
> > be made prior to package download so that all HW structures are
> > correctly initialized. This will avoid any invalid memory references
> > if package download fails on unloading the driver.
> > 2. Fill HW tables with package content after successful package download.
> > 3. Free HW table and flow profile allocations when unloading driver.
> > 4. Add flag in block structure to check if lists in block are
> > initialized. This is to avoid any NULL reference in releasing flow
> > profiles that may have been freed in previous calls to free tables.
> >
> > Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> > ---
> >   drivers/net/ice/base/ice_common.c    |   6 +-
> >   drivers/net/ice/base/ice_flex_pipe.c | 284 ++++++++++++++-------------
> >   drivers/net/ice/base/ice_flex_pipe.h |   1 +
> >   drivers/net/ice/base/ice_flex_type.h |   1 +
> >   4 files changed, 151 insertions(+), 141 deletions(-)
> >
> > diff --git a/drivers/net/ice/base/ice_common.c
> > b/drivers/net/ice/base/ice_common.c
> > index a0ab25aef..62c7fad0d 100644
> > --- a/drivers/net/ice/base/ice_common.c
> > +++ b/drivers/net/ice/base/ice_common.c
> > @@ -916,12 +916,13 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
> >
> >   	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
> >   	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
> > -
> >   	/* Obtain counter base index which would be used by flow director
> */
> >   	status = ice_alloc_fd_res_cntr(hw, &hw->fd_ctr_base);
> >   	if (status)
> >   		goto err_unroll_fltr_mgmt_struct;
> > -
> > +	status = ice_init_hw_tbls(hw);
> > +	if (status)
> > +		goto err_unroll_fltr_mgmt_struct;
> >   	return ICE_SUCCESS;
> >
> >   err_unroll_fltr_mgmt_struct:
> > @@ -952,6 +953,7 @@ void ice_deinit_hw(struct ice_hw *hw)
> >   	ice_sched_cleanup_all(hw);
> >   	ice_sched_clear_agg(hw);
> >   	ice_free_seg(hw);
> > +	ice_free_hw_tbls(hw);
> >
> >   	if (hw->port_info) {
> >   		ice_free(hw, hw->port_info);
> > diff --git a/drivers/net/ice/base/ice_flex_pipe.c
> > b/drivers/net/ice/base/ice_flex_pipe.c
> > index 8f0b513f4..93e056853 100644
> > --- a/drivers/net/ice/base/ice_flex_pipe.c
> > +++ b/drivers/net/ice/base/ice_flex_pipe.c
> > @@ -1375,10 +1375,12 @@ enum ice_status ice_init_pkg(struct ice_hw
> > *hw, u8 *buf, u32 len)
> >
> >   	if (!status) {
> >   		hw->seg = seg;
> > -		/* on successful package download, update other required
> > -		 * registers to support the package
> > +		/* on successful package download update other required
> > +		 * registers to support the package and fill HW tables
> > +		 * with package content.
> >   		 */
> >   		ice_init_pkg_regs(hw);
> > +		ice_fill_blk_tbls(hw);
> >   	} else {
> >   		ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n",
> >   			  status);
> > @@ -2755,6 +2757,65 @@ static const u32
> ice_blk_sids[ICE_BLK_COUNT][ICE_SID_OFF_COUNT] = {
> >   	}
> >   };
> >
> > +/**
> > + * ice_init_sw_xlt1_db - init software XLT1 database from HW tables
> > + * @hw: pointer to the hardware structure
> > + * @blk: the HW block to initialize
> > + */
> > +static
> > +void ice_init_sw_xlt1_db(struct ice_hw *hw, enum ice_block blk) {
> > +	u16 pt;
> > +
> > +	for (pt = 0; pt < hw->blk[blk].xlt1.count; pt++) {
> > +		u8 ptg;
> > +
> > +		ptg = hw->blk[blk].xlt1.t[pt];
> > +		if (ptg != ICE_DEFAULT_PTG) {
> > +			ice_ptg_alloc_val(hw, blk, ptg);
> > +			ice_ptg_add_mv_ptype(hw, blk, pt, ptg);
> 
> ice_ptg_add_mv_ptype() can fail, error should be propagated.

You are correct that ice_ptg_add_mv_ptype() can fail, but it can never fail in this case: the function is called at init time, after the HW has been loaded to a known good state, so there is no chance this call will fail.

> 
> > +		}
> > +	}
> > +}
> > +
> > +/**
> > + * ice_init_sw_xlt2_db - init software XLT2 database from HW tables
> > + * @hw: pointer to the hardware structure
> > + * @blk: the HW block to initialize
> > + */
> > +static void ice_init_sw_xlt2_db(struct ice_hw *hw, enum ice_block
> > +blk) {
> > +	u16 vsi;
> > +
> > +	for (vsi = 0; vsi < hw->blk[blk].xlt2.count; vsi++) {
> > +		u16 vsig;
> > +
> > +		vsig = hw->blk[blk].xlt2.t[vsi];
> > +		if (vsig) {
> > +			ice_vsig_alloc_val(hw, blk, vsig);
> > +			ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
> 
> Ditto

Same issue here as above.

> 
> > +			/* no changes at this time, since this has been
> > +			 * initialized from the original package
> > +			 */
> > +			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
> > +		}
> > +	}
> > +}
> > +
> > +/**
> > + * ice_init_sw_db - init software database from HW tables
> > + * @hw: pointer to the hardware structure  */ static void
> > +ice_init_sw_db(struct ice_hw *hw) {
> > +	u16 i;
> > +
> > +	for (i = 0; i < ICE_BLK_COUNT; i++) {
> > +		ice_init_sw_xlt1_db(hw, (enum ice_block)i);
> > +		ice_init_sw_xlt2_db(hw, (enum ice_block)i);
> > +	}
> 
> And so this function should also propagate the error.

Same comment here.

> 
> > +}
> > +
> >   /**
> >    * ice_fill_tbl - Reads content of a single table type into database
> >    * @hw: pointer to the hardware structure @@ -2853,12 +2914,12 @@
> > static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
> >   		case ICE_SID_FLD_VEC_PE:
> >   			es = (struct ice_sw_fv_section *)sect;
> >   			src = (u8 *)es->fv;
> > -			sect_len = LE16_TO_CPU(es->count) *
> > -				hw->blk[block_id].es.fvw *
> > +			sect_len = (u32)(LE16_TO_CPU(es->count) *
> > +					 hw->blk[block_id].es.fvw) *
> >   				sizeof(*hw->blk[block_id].es.t);
> >   			dst = (u8 *)hw->blk[block_id].es.t;
> > -			dst_len = hw->blk[block_id].es.count *
> > -				hw->blk[block_id].es.fvw *
> > +			dst_len = (u32)(hw->blk[block_id].es.count *
> > +					hw->blk[block_id].es.fvw) *
> >   				sizeof(*hw->blk[block_id].es.t);
> >   			break;
> >   		default:
> > @@ -2886,75 +2947,61 @@ static void ice_fill_tbl(struct ice_hw *hw, enum
> ice_block block_id, u32 sid)
> >   }
> >
> >   /**
> > - * ice_fill_blk_tbls - Read package content for tables of a block
> > + * ice_fill_blk_tbls - Read package context for tables
> >    * @hw: pointer to the hardware structure
> > - * @block_id: The block ID which contains the tables to be copied
> >    *
> >    * Reads the current package contents and populates the driver
> > - * database with the data it contains to allow for advanced driver
> > - * features.
> > - */
> > -static void ice_fill_blk_tbls(struct ice_hw *hw, enum ice_block
> > block_id) -{
> > -	ice_fill_tbl(hw, block_id, hw->blk[block_id].xlt1.sid);
> > -	ice_fill_tbl(hw, block_id, hw->blk[block_id].xlt2.sid);
> > -	ice_fill_tbl(hw, block_id, hw->blk[block_id].prof.sid);
> > -	ice_fill_tbl(hw, block_id, hw->blk[block_id].prof_redir.sid);
> > -	ice_fill_tbl(hw, block_id, hw->blk[block_id].es.sid);
> > -}
> > -
> > -/**
> > - * ice_free_flow_profs - free flow profile entries
> > - * @hw: pointer to the hardware structure
> > + * database with the data iteratively for all advanced feature
> > + * blocks. Assume that the Hw tables have been allocated.
> >    */
> > -static void ice_free_flow_profs(struct ice_hw *hw)
> > +void ice_fill_blk_tbls(struct ice_hw *hw)
> >   {
> >   	u8 i;
> >
> >   	for (i = 0; i < ICE_BLK_COUNT; i++) {
> > -		struct ice_flow_prof *p, *tmp;
> > -
> > -		if (!&hw->fl_profs[i])
> > -			continue;
> > -
> > -		/* This call is being made as part of resource deallocation
> > -		 * during unload. Lock acquire and release will not be
> > -		 * necessary here.
> > -		 */
> > -		LIST_FOR_EACH_ENTRY_SAFE(p, tmp, &hw->fl_profs[i],
> > -					 ice_flow_prof, l_entry) {
> > -			struct ice_flow_entry *e, *t;
> > -
> > -			LIST_FOR_EACH_ENTRY_SAFE(e, t, &p->entries,
> > -						 ice_flow_entry, l_entry)
> > -				ice_flow_rem_entry(hw,
> ICE_FLOW_ENTRY_HNDL(e));
> > -
> > -			LIST_DEL(&p->l_entry);
> > -			if (p->acts)
> > -				ice_free(hw, p->acts);
> > -			ice_free(hw, p);
> > -		}
> > +		enum ice_block blk_id = (enum ice_block)i;
> >
> > -		ice_destroy_lock(&hw->fl_profs_locks[i]);
> > +		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].xlt1.sid);
> > +		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].xlt2.sid);
> > +		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].prof.sid);
> > +		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].prof_redir.sid);
> > +		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].es.sid);
> 
> >   	}
> > +
> > +	ice_init_sw_db(hw);
> 
> Propagate error once above is fixed.

Same comment here

> 
> >   }
> >



* Re: [dpdk-dev] [PATCH 00/49] shared code update
  2019-06-04 16:56 ` [dpdk-dev] [PATCH 00/49] shared code update Maxime Coquelin
@ 2019-06-06  5:44   ` Rong, Leyi
  2019-06-07 12:53     ` Maxime Coquelin
  0 siblings, 1 reply; 225+ messages in thread
From: Rong, Leyi @ 2019-06-06  5:44 UTC (permalink / raw)
  To: Maxime Coquelin, Zhang, Qi Z, Stillwell Jr, Paul M; +Cc: dev


> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Wednesday, June 5, 2019 12:56 AM
> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 00/49] shared code update
> 
> Hi Leyi,
> 
> On 6/4/19 7:41 AM, Leyi Rong wrote:
> > Main changes:
> > 1. Advanced switch rule support.
> > 2. Add more APIs for tunnel management.
> > 3. Add some minor features.
> > 4. Code clean and bug fix.
> 
> In order to ease the review process, I think it would be much better to split this series in multiple ones, by features.
> Otherwise, it is more difficult to keep track if comments are taken into account in the next revision.
> 
> Also, it is suggested to put the fixes first in the series to ease the backporting.
> 
> Thanks,
> Maxime

+Paul,

Hello Maxime,
Thanks for all your constructive comments, but we followed the same process for the CVL shared code update to DPDK upstream in the previous release.
This series of patches was extracted/reorganized/squashed from the ND released packages; the shared code difference between 1905 and 1908 amounts to more than 200 commits in the original shared code repo.

IMHO, there are some reasons for keeping all these patches in one patchset:
	- the patchset tries to keep the same historical order as the commits in the original shared code repo.
	- later patches in the patchset may depend on earlier ones.
	- it is difficult to split this series into multiple ones, since the patches are irregular and squashed.


Best Regards,
Leyi Rong


* Re: [dpdk-dev] [PATCH 39/49] net/ice/base: slightly code update
  2019-06-05 12:04   ` Maxime Coquelin
@ 2019-06-06  6:46     ` Rong, Leyi
  0 siblings, 0 replies; 225+ messages in thread
From: Rong, Leyi @ 2019-06-06  6:46 UTC (permalink / raw)
  To: Maxime Coquelin, Zhang, Qi Z; +Cc: dev, Stillwell Jr, Paul M


> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Wednesday, June 5, 2019 8:05 PM
> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 39/49] net/ice/base: slightly code update
> 
> 
> 
> On 6/4/19 7:42 AM, Leyi Rong wrote:
> > Mainly update below functions:
> >
> > ice_flow_proc_seg_hdrs
> > ice_flow_find_prof_conds
> > ice_dealloc_flow_entry
> > ice_add_rule_internal
> 
> 
> It seems that some of the changes are bug fixes.
> So IMO, these changes should be in dedicated patches, with Fixes tag in the commit message.
> 
> Overall, these changes should be split by kind of changes. There are functions reworks, minor cleanups, robustness
> changes, ...

Will try to split this commit into multiple ones per your suggestion, thanks!

> 
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> > ---
> >   drivers/net/ice/base/ice_flex_pipe.c     | 13 +++----
> >   drivers/net/ice/base/ice_flow.c          | 47 +++++++++++++++++-------
> >   drivers/net/ice/base/ice_nvm.c           |  4 +-
> >   drivers/net/ice/base/ice_protocol_type.h |  1 +
> >   drivers/net/ice/base/ice_switch.c        | 24 +++++++-----
> >   drivers/net/ice/base/ice_switch.h        | 14 +++----
> >   drivers/net/ice/base/ice_type.h          | 13 ++++++-
> >   7 files changed, 73 insertions(+), 43 deletions(-)
> >
> > diff --git a/drivers/net/ice/base/ice_flex_pipe.c
> > b/drivers/net/ice/base/ice_flex_pipe.c
> > index 5864cbf3e..2a310b6e1 100644
> > --- a/drivers/net/ice/base/ice_flex_pipe.c
> > +++ b/drivers/net/ice/base/ice_flex_pipe.c
> > @@ -134,7 +134,7 @@ static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg)
> >   	nvms = (struct ice_nvm_table *)(ice_seg->device_table +
> >   		LE32_TO_CPU(ice_seg->device_table_count));
> >
> > -	return (struct ice_buf_table *)
> > +	return (_FORCE_ struct ice_buf_table *)
> >   		(nvms->vers + LE32_TO_CPU(nvms->table_count));
> >   }
> >
> > @@ -1005,9 +1005,8 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct
> > ice_buf *bufs, u32 count)
> >
> >   		bh = (struct ice_buf_hdr *)(bufs + i);
> >
> > -		status = ice_aq_download_pkg(hw, bh, LE16_TO_CPU(bh->data_end),
> > -					     last, &offset, &info, NULL);
> > -
> > +		status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last,
> > +					     &offset, &info, NULL);
> >   		if (status) {
> >   			ice_debug(hw, ICE_DBG_PKG,
> >   				  "Pkg download failed: err %d off %d inf %d\n", @@ -2937,7
> > +2936,7 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
> >   		case ICE_SID_XLT2_ACL:
> >   		case ICE_SID_XLT2_PE:
> >   			xlt2 = (struct ice_xlt2_section *)sect;
> > -			src = (u8 *)xlt2->value;
> > +			src = (_FORCE_ u8 *)xlt2->value;
> >   			sect_len = LE16_TO_CPU(xlt2->count) *
> >   				sizeof(*hw->blk[block_id].xlt2.t);
> >   			dst = (u8 *)hw->blk[block_id].xlt2.t; @@ -3889,7 +3888,7 @@
> > ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word
> > *es)
> >
> >   	/* fill in the swap array */
> >   	si = hw->blk[ICE_BLK_FD].es.fvw - 1;
> > -	do {
> > +	while (si >= 0) {
> >   		u8 indexes_used = 1;
> >
> >   		/* assume flat at this index */
> > @@ -3921,7 +3920,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
> >   		}
> >
> >   		si -= indexes_used;
> > -	} while (si >= 0);
> > +	}
> >
> >   	/* for each set of 4 swap indexes, write the appropriate register */
> >   	for (j = 0; j < hw->blk[ICE_BLK_FD].es.fvw / 4; j++) { diff --git
> > a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
> > index 795abe98f..f31557eac 100644
> > --- a/drivers/net/ice/base/ice_flow.c
> > +++ b/drivers/net/ice/base/ice_flow.c
> > @@ -415,9 +415,6 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
> >   		const ice_bitmap_t *src;
> >   		u32 hdrs;
> >
> > -		if (i > 0 && (i + 1) < prof->segs_cnt)
> > -			continue;
> > -
> >   		hdrs = prof->segs[i].hdrs;
> >
> >   		if (hdrs & ICE_FLOW_SEG_HDR_ETH) { @@ -847,6 +844,7 @@
> > ice_flow_proc_segs(struct ice_hw *hw, struct ice_flow_prof_params
> > *params)
> >
> >   #define ICE_FLOW_FIND_PROF_CHK_FLDS	0x00000001
> >   #define ICE_FLOW_FIND_PROF_CHK_VSI	0x00000002
> > +#define ICE_FLOW_FIND_PROF_NOT_CHK_DIR	0x00000004
> >
> >   /**
> >    * ice_flow_find_prof_conds - Find a profile matching headers and
> > conditions @@ -866,7 +864,8 @@ ice_flow_find_prof_conds(struct ice_hw *hw, enum ice_block blk,
> >   	struct ice_flow_prof *p;
> >
> >   	LIST_FOR_EACH_ENTRY(p, &hw->fl_profs[blk], ice_flow_prof, l_entry) {
> > -		if (p->dir == dir && segs_cnt && segs_cnt == p->segs_cnt) {
> > +		if ((p->dir == dir || conds & ICE_FLOW_FIND_PROF_NOT_CHK_DIR) &&
> > +		    segs_cnt && segs_cnt == p->segs_cnt) {
> >   			u8 i;
> >
> >   			/* Check for profile-VSI association if specified */ @@ -935,17
> > +934,15 @@ ice_flow_find_prof_id(struct ice_hw *hw, enum ice_block blk, u64 prof_id)
> >   }
> >
> >   /**
> > - * ice_flow_rem_entry_sync - Remove a flow entry
> > + * ice_dealloc_flow_entry - Deallocate flow entry memory
> >    * @hw: pointer to the HW struct
> >    * @entry: flow entry to be removed
> >    */
> > -static enum ice_status
> > -ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry
> > *entry)
> > +static void
> > +ice_dealloc_flow_entry(struct ice_hw *hw, struct ice_flow_entry
> > +*entry)
> >   {
> >   	if (!entry)
> > -		return ICE_ERR_BAD_PTR;
> > -
> > -	LIST_DEL(&entry->l_entry);
> > +		return;
> >
> >   	if (entry->entry)
> >   		ice_free(hw, entry->entry);
> > @@ -957,6 +954,22 @@ ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
> >   	}
> >
> >   	ice_free(hw, entry);
> > +}
> > +
> > +/**
> > + * ice_flow_rem_entry_sync - Remove a flow entry
> > + * @hw: pointer to the HW struct
> > + * @entry: flow entry to be removed
> > + */
> > +static enum ice_status
> > +ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry
> > +*entry) {
> > +	if (!entry)
> > +		return ICE_ERR_BAD_PTR;
> > +
> > +	LIST_DEL(&entry->l_entry);
> > +
> > +	ice_dealloc_flow_entry(hw, entry);
> >
> >   	return ICE_SUCCESS;
> >   }
> > @@ -1395,9 +1408,12 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
> >   		goto out;
> >   	}
> >
> > -	ice_acquire_lock(&prof->entries_lock);
> > -	LIST_ADD(&e->l_entry, &prof->entries);
> > -	ice_release_lock(&prof->entries_lock);
> > +	if (blk != ICE_BLK_ACL) {
> > +		/* ACL will handle the entry management */
> > +		ice_acquire_lock(&prof->entries_lock);
> > +		LIST_ADD(&e->l_entry, &prof->entries);
> > +		ice_release_lock(&prof->entries_lock);
> > +	}
> >
> >   	*entry_h = ICE_FLOW_ENTRY_HNDL(e);
> >
> > @@ -1425,7 +1441,7 @@ enum ice_status ice_flow_rem_entry(struct ice_hw *hw, u64 entry_h)
> >   	if (entry_h == ICE_FLOW_ENTRY_HANDLE_INVAL)
> >   		return ICE_ERR_PARAM;
> >
> > -	entry = ICE_FLOW_ENTRY_PTR((unsigned long)entry_h);
> > +	entry = ICE_FLOW_ENTRY_PTR(entry_h);
> >
> >   	/* Retain the pointer to the flow profile as the entry will be freed */
> >   	prof = entry->prof;
> > @@ -1676,6 +1692,9 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
> >   	if (!ice_is_vsi_valid(hw, vsi_handle))
> >   		return ICE_ERR_PARAM;
> >
> > +	if (LIST_EMPTY(&hw->fl_profs[blk]))
> > +		return ICE_SUCCESS;
> > +
> 
> This check should be unnecessary, as LIST_FOR_EACH_ENTRY_SAFE handles the empty-list case properly, IIUC.
> 
> >   	ice_acquire_lock(&hw->fl_profs_locks[blk]);
> >   	LIST_FOR_EACH_ENTRY_SAFE(p, t, &hw->fl_profs[blk], ice_flow_prof,
> >   				 l_entry) {
> > diff --git a/drivers/net/ice/base/ice_nvm.c
> > b/drivers/net/ice/base/ice_nvm.c index fa9c348ce..76cfedb29 100644
> > --- a/drivers/net/ice/base/ice_nvm.c
> > +++ b/drivers/net/ice/base/ice_nvm.c
> > @@ -127,7 +127,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset,
> > u16 *data)
> >
> >   	status = ice_read_sr_aq(hw, offset, 1, data, true);
> >   	if (!status)
> > -		*data = LE16_TO_CPU(*(__le16 *)data);
> > +		*data = LE16_TO_CPU(*(_FORCE_ __le16 *)data);
> >
> >   	return status;
> >   }
> > @@ -185,7 +185,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
> >   	} while (words_read < *words);
> >
> >   	for (i = 0; i < *words; i++)
> > -		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
> > +		data[i] = LE16_TO_CPU(((_FORCE_ __le16 *)data)[i]);
> >
> >   read_nvm_buf_aq_exit:
> >   	*words = words_read;
> > diff --git a/drivers/net/ice/base/ice_protocol_type.h
> > b/drivers/net/ice/base/ice_protocol_type.h
> > index e572dd320..82822fb74 100644
> > --- a/drivers/net/ice/base/ice_protocol_type.h
> > +++ b/drivers/net/ice/base/ice_protocol_type.h
> > @@ -189,6 +189,7 @@ struct ice_udp_tnl_hdr {
> >   	u16 field;
> >   	u16 proto_type;
> >   	u16 vni;
> > +	u16 reserved;
> >   };
> >
> >   struct ice_nvgre {
> > diff --git a/drivers/net/ice/base/ice_switch.c
> > b/drivers/net/ice/base/ice_switch.c
> > index faaedd4c8..373acb7a6 100644
> > --- a/drivers/net/ice/base/ice_switch.c
> > +++ b/drivers/net/ice/base/ice_switch.c
> > @@ -279,6 +279,7 @@ enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
> >   		recps[i].root_rid = i;
> >   		INIT_LIST_HEAD(&recps[i].filt_rules);
> >   		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
> > +		INIT_LIST_HEAD(&recps[i].rg_list);
> 
> That looks like a bug fix, isn't it?
> 
> >   		ice_init_lock(&recps[i].filt_rule_lock);
> >   	}
> >
> > @@ -859,7 +860,7 @@ ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi,
> >   			return ICE_ERR_PARAM;
> >
> >   		buf_size = count * sizeof(__le16);
> > -		mr_list = (__le16 *)ice_malloc(hw, buf_size);
> > +		mr_list = (_FORCE_ __le16 *)ice_malloc(hw, buf_size);
> >   		if (!mr_list)
> >   			return ICE_ERR_NO_MEMORY;
> >   		break;
> > @@ -1459,7 +1460,6 @@ static int ice_ilog2(u64 n)
> >   	return -1;
> >   }
> >
> > -
> >   /**
> >    * ice_fill_sw_rule - Helper function to fill switch rule structure
> >    * @hw: pointer to the hardware structure @@ -1479,7 +1479,6 @@
> > ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
> >   	__be16 *off;
> >   	u8 q_rgn;
> >
> > -
> >   	if (opc == ice_aqc_opc_remove_sw_rules) {
> >   		s_rule->pdata.lkup_tx_rx.act = 0;
> >   		s_rule->pdata.lkup_tx_rx.index =
> > @@ -1555,7 +1554,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
> >   		daddr = f_info->l_data.ethertype_mac.mac_addr;
> >   		/* fall-through */
> >   	case ICE_SW_LKUP_ETHERTYPE:
> > -		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
> > +		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
> >   		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
> >   		break;
> >   	case ICE_SW_LKUP_MAC_VLAN:
> > @@ -1586,7 +1585,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
> >   			   ICE_NONDMA_TO_NONDMA);
> >
> >   	if (!(vlan_id > ICE_MAX_VLAN_ID)) {
> > -		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
> > +		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
> >   		*off = CPU_TO_BE16(vlan_id);
> >   	}
> >
> > @@ -2289,14 +2288,15 @@ ice_add_rule_internal(struct ice_hw *hw, u8
> > recp_id,
> >
> >   	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
> >   	if (!m_entry) {
> > -		ice_release_lock(rule_lock);
> > -		return ice_create_pkt_fwd_rule(hw, f_entry);
> > +		status = ice_create_pkt_fwd_rule(hw, f_entry);
> > +		goto exit_add_rule_internal;
> >   	}
> >
> >   	cur_fltr = &m_entry->fltr_info;
> >   	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
> > -	ice_release_lock(rule_lock);
> >
> > +exit_add_rule_internal:
> > +	ice_release_lock(rule_lock);
> >   	return status;
> >   }
> >
> > @@ -2975,12 +2975,19 @@ ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
> >    * ice_add_eth_mac - Add ethertype and MAC based filter rule
> >    * @hw: pointer to the hardware structure
> >    * @em_list: list of ether type MAC filter, MAC is optional
> > + *
> > + * This function requires the caller to populate the entries in
> > + * the filter list with the necessary fields (including flags to
> > + * indicate Tx or Rx rules).
> >    */
> >   enum ice_status
> >   ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
> >   {
> >   	struct ice_fltr_list_entry *em_list_itr;
> >
> > +	if (!em_list || !hw)
> > +		return ICE_ERR_PARAM;
> > +
> >   	LIST_FOR_EACH_ENTRY(em_list_itr, em_list, ice_fltr_list_entry,
> >   			    list_entry) {
> >   		enum ice_sw_lkup_type l_type =
> > @@ -2990,7 +2997,6 @@ ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
> >   		    l_type != ICE_SW_LKUP_ETHERTYPE)
> >   			return ICE_ERR_PARAM;
> >
> > -		em_list_itr->fltr_info.flag = ICE_FLTR_TX;
> >   		em_list_itr->status = ice_add_rule_internal(hw, l_type,
> >   							    em_list_itr);
> >   		if (em_list_itr->status)
> > diff --git a/drivers/net/ice/base/ice_switch.h
> > b/drivers/net/ice/base/ice_switch.h
> > index 2f140a86d..05b1170c9 100644
> > --- a/drivers/net/ice/base/ice_switch.h
> > +++ b/drivers/net/ice/base/ice_switch.h
> > @@ -11,6 +11,9 @@
> >   #define ICE_SW_CFG_MAX_BUF_LEN 2048
> >   #define ICE_MAX_SW 256
> >   #define ICE_DFLT_VSI_INVAL 0xff
> > +#define ICE_FLTR_RX BIT(0)
> > +#define ICE_FLTR_TX BIT(1)
> > +#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
> >
> >
> >   /* Worst case buffer length for ice_aqc_opc_get_res_alloc */
> > @@ -77,9 +80,6 @@ struct ice_fltr_info {
> >   	/* rule ID returned by firmware once filter rule is created */
> >   	u16 fltr_rule_id;
> >   	u16 flag;
> > -#define ICE_FLTR_RX		BIT(0)
> > -#define ICE_FLTR_TX		BIT(1)
> > -#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
> >
> >   	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
> >   	u16 src;
> > @@ -145,10 +145,6 @@ struct ice_sw_act_ctrl {
> >   	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
> >   	u16 src;
> >   	u16 flag;
> > -#define ICE_FLTR_RX             BIT(0)
> > -#define ICE_FLTR_TX             BIT(1)
> > -#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
> > -
> >   	enum ice_sw_fwd_act_type fltr_act;
> >   	/* Depending on filter action */
> >   	union {
> > @@ -368,6 +364,8 @@ ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries,
> >   		     struct ice_sq_cd *cd);
> >   enum ice_status
> >   ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
> > +enum ice_status
> > +ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
> >   void ice_rem_all_sw_rules_info(struct ice_hw *hw);
> >   enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
> >   enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
> > @@ -375,8 +373,6 @@ enum ice_status
> >   ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
> >   enum ice_status
> >   ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
> > -enum ice_status
> > -ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
> >   #ifndef NO_MACVLAN_SUPPORT
> >   enum ice_status
> >   ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
> > diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
> > index 919ca7fa8..f4e151c55 100644
> > --- a/drivers/net/ice/base/ice_type.h
> > +++ b/drivers/net/ice/base/ice_type.h
> > @@ -14,6 +14,10 @@
> >
> >   #define BITS_PER_BYTE	8
> >
> > +#ifndef _FORCE_
> > +#define _FORCE_
> > +#endif
> > +
> >   #define ICE_BYTES_PER_WORD	2
> >   #define ICE_BYTES_PER_DWORD	4
> >   #define ICE_MAX_TRAFFIC_CLASS	8
> > @@ -35,7 +39,7 @@
> >   #endif
> >
> >   #ifndef IS_ASCII
> > -#define IS_ASCII(_ch)  ((_ch) < 0x80)
> > +#define IS_ASCII(_ch)	((_ch) < 0x80)
> >   #endif
> >
> >   #include "ice_status.h"
> > @@ -80,6 +84,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
> >   #define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
> >
> >   /* debug masks - set these bits in hw->debug_mask to control output */
> > +#define ICE_DBG_TRACE		BIT_ULL(0) /* for function-trace only */
> >   #define ICE_DBG_INIT		BIT_ULL(1)
> >   #define ICE_DBG_RELEASE		BIT_ULL(2)
> >   #define ICE_DBG_FW_LOG		BIT_ULL(3)
> > @@ -199,6 +204,7 @@ enum ice_vsi_type {
> >   #ifdef ADQ_SUPPORT
> >   	ICE_VSI_CHNL = 4,
> >   #endif /* ADQ_SUPPORT */
> > +	ICE_VSI_LB = 6,
> >   };
> >
> >   struct ice_link_status {
> > @@ -718,6 +724,8 @@ struct ice_fw_log_cfg {
> >   #define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
> >   #define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
> >   #define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
> > +#define ICE_FW_LOG_EVNT_ALL	(ICE_FW_LOG_EVNT_INFO | ICE_FW_LOG_EVNT_INIT | \
> > +				 ICE_FW_LOG_EVNT_FLOW | ICE_FW_LOG_EVNT_ERR)
> >   	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
> >   };
> >
> > @@ -745,6 +753,7 @@ struct ice_hw {
> >   	u8 pf_id;		/* device profile info */
> >
> >   	u16 max_burst_size;	/* driver sets this value */
> > +
> >   	/* Tx Scheduler values */
> >   	u16 num_tx_sched_layers;
> >   	u16 num_tx_sched_phys_layers;
> > @@ -948,7 +957,6 @@ enum ice_sw_fwd_act_type {
> >   #define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
> >   #define ICE_SR_MNG_CFG_PTR			0x0E
> >   #define ICE_SR_EMP_MODULE_PTR			0x0F
> > -#define ICE_SR_PBA_FLAGS			0x15
> >   #define ICE_SR_PBA_BLOCK_PTR			0x16
> >   #define ICE_SR_BOOT_CFG_PTR			0x17
> >   #define ICE_SR_NVM_WOL_CFG			0x19
> > @@ -994,6 +1002,7 @@ enum ice_sw_fwd_act_type {
> >   #define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
> >   #define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
> >   #define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
> > +#define ICE_SR_POR_REGISTERS_AUTOLOAD_PTR	0x118
> >
> >   /* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
> >   #define ICE_SR_VPD_SIZE_WORDS		512
> >

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 40/49] net/ice/base: code clean up
  2019-06-05 12:06   ` Maxime Coquelin
@ 2019-06-06  7:32     ` Rong, Leyi
  0 siblings, 0 replies; 225+ messages in thread
From: Rong, Leyi @ 2019-06-06  7:32 UTC (permalink / raw)
  To: Maxime Coquelin, Zhang, Qi Z; +Cc: dev, Stillwell Jr, Paul M


> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Wednesday, June 5, 2019 8:07 PM
> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 40/49] net/ice/base: code clean up
> 
> 
> 
> On 6/4/19 7:42 AM, Leyi Rong wrote:
> > Cleanup the useless code.
> >
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> > ---
> >   drivers/net/ice/base/ice_controlq.c  | 62 +---------------------------
> >   drivers/net/ice/base/ice_fdir.h      |  1 -
> >   drivers/net/ice/base/ice_flex_pipe.c |  5 ++-
> >   drivers/net/ice/base/ice_sched.c     |  4 +-
> >   drivers/net/ice/base/ice_type.h      |  3 ++
> >   5 files changed, 10 insertions(+), 65 deletions(-)
> >
> > diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
> > index 4cb6df113..3ef07e094 100644
> > --- a/drivers/net/ice/base/ice_controlq.c
> > +++ b/drivers/net/ice/base/ice_controlq.c
> > @@ -262,7 +262,7 @@ ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
> >    * @hw: pointer to the hardware structure
> >    * @cq: pointer to the specific Control queue
> >    *
> > - * Configure base address and length registers for the receive (event q)
> > + * Configure base address and length registers for the receive (event queue)
> >    */
> >   static enum ice_status
> >   ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
> > @@ -772,9 +772,6 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
> >   	struct ice_ctl_q_ring *sq = &cq->sq;
> >   	u16 ntc = sq->next_to_clean;
> >   	struct ice_sq_cd *details;
> > -#if 0
> > -	struct ice_aq_desc desc_cb;
> > -#endif
> >   	struct ice_aq_desc *desc;
> >
> >   	desc = ICE_CTL_Q_DESC(*sq, ntc);
> > @@ -783,15 +780,6 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
> >   	while (rd32(hw, cq->sq.head) != ntc) {
> >   		ice_debug(hw, ICE_DBG_AQ_MSG,
> >   			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
> > -#if 0
> > -		if (details->callback) {
> > -			ICE_CTL_Q_CALLBACK cb_func =
> > -				(ICE_CTL_Q_CALLBACK)details->callback;
> > -			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
> > -				   ICE_DMA_TO_DMA);
> > -			cb_func(hw, &desc_cb);
> > -		}
> > -#endif
> >   		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
> >   		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
> >   		ntc++;
> > @@ -941,38 +929,8 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
> >   	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
> >   	if (cd)
> >   		*details = *cd;
> > -#if 0
> > -		/* FIXME: if/when this block gets enabled (when the #if 0
> > -		 * is removed), add braces to both branches of the surrounding
> > -		 * conditional expression. The braces have been removed to
> > -		 * prevent checkpatch complaining.
> > -		 */
> > -
> > -		/* If the command details are defined copy the cookie. The
> > -		 * CPU_TO_LE32 is not needed here because the data is ignored
> > -		 * by the FW, only used by the driver
> > -		 */
> > -		if (details->cookie) {
> > -			desc->cookie_high =
> > -				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
> > -			desc->cookie_low =
> > -				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
> > -		}
> > -#endif
> >   	else
> >   		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
> > -#if 0
> > -	/* clear requested flags and then set additional flags if defined */
> > -	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
> > -	desc->flags |= CPU_TO_LE16(details->flags_ena);
> > -
> > -	if (details->postpone && !details->async) {
> > -		ice_debug(hw, ICE_DBG_AQ_MSG,
> > -			  "Async flag not set along with postpone flag\n");
> > -		status = ICE_ERR_PARAM;
> > -		goto sq_send_command_error;
> > -	}
> > -#endif
> >
> >   	/* Call clean and check queue available function to reclaim the
> >   	 * descriptors that were processed by FW/MBX; the function returns the
> > @@ -1019,20 +977,8 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
> >   	(cq->sq.next_to_use)++;
> >   	if (cq->sq.next_to_use == cq->sq.count)
> >   		cq->sq.next_to_use = 0;
> > -#if 0
> > -	/* FIXME - handle this case? */
> > -	if (!details->postpone)
> > -#endif
> >   	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
> >
> > -#if 0
> > -	/* if command details are not defined or async flag is not set,
> > -	 * we need to wait for desc write back
> > -	 */
> > -	if (!details->async && !details->postpone) {
> > -		/* FIXME - handle this case? */
> > -	}
> > -#endif
> >   	do {
> >   		if (ice_sq_done(hw, cq))
> >   			break;
> > @@ -1087,9 +1033,6 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
> >
> >   	/* update the error if time out occurred */
> >   	if (!cmd_completed) {
> > -#if 0
> > -	    (!details->async && !details->postpone)) {
> > -#endif
> >   		ice_debug(hw, ICE_DBG_AQ_MSG,
> >   			  "Control Send Queue Writeback timeout.\n");
> >   		status = ICE_ERR_AQ_TIMEOUT;
> > @@ -1208,9 +1151,6 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
> >   	cq->rq.next_to_clean = ntc;
> >   	cq->rq.next_to_use = ntu;
> >
> > -#if 0
> > -	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
> > -#endif
> >   clean_rq_elem_out:
> >   	/* Set pending if needed, unlock and return */
> >   	if (pending) {
> 
> 
> Starting from here, the rest looks unrelated to the commit subject.
> 
> > diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h
> > index 2ecb147f1..f8f06658c 100644
> > --- a/drivers/net/ice/base/ice_fdir.h
> > +++ b/drivers/net/ice/base/ice_fdir.h
> > @@ -173,7 +173,6 @@ struct ice_fdir_fltr {
> >   	u32 fltr_id;
> >   };
> >
> > -
> >   /* Dummy packet filter definition structure. */
> >   struct ice_fdir_base_pkt {
> >   	enum ice_fltr_ptype flow;
> > diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
> > index 2a310b6e1..46234c014 100644
> > --- a/drivers/net/ice/base/ice_flex_pipe.c
> > +++ b/drivers/net/ice/base/ice_flex_pipe.c
> > @@ -398,7 +398,7 @@ ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr,
> >    * Handles enumeration of individual label entries.
> >    */
> >   static void *
> > -ice_label_enum_handler(u32 __always_unused sect_type, void *section, u32 index,
> > +ice_label_enum_handler(u32 __ALWAYS_UNUSED sect_type, void *section, u32 index,
> >   		       u32 *offset)
> >   {
> >   	struct ice_label_section *labels;
> > @@ -640,7 +640,7 @@ static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max)
> >    * @size: the size of the complete key in bytes (must be even)
> >    * @val: array of 8-bit values that makes up the value portion of the key
> >    * @upd: array of 8-bit masks that determine what key portion to update
> > - * @dc: array of 8-bit masks that make up the dont' care mask
> > + * @dc: array of 8-bit masks that make up the don't care mask
> >    * @nm: array of 8-bit masks that make up the never match mask
> >    * @off: the offset of the first byte in the key to update
> >    * @len: the number of bytes in the key update
> > @@ -4544,6 +4544,7 @@ ice_move_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig,
> >   	status = ice_vsig_find_vsi(hw, blk, vsi, &orig_vsig);
> >   	if (!status)
> >   		status = ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
> > +
> >   	if (status) {
> >   		ice_free(hw, p);
> >   		return status;
> > diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
> > index a72e72982..fa3158a7b 100644
> > --- a/drivers/net/ice/base/ice_sched.c
> > +++ b/drivers/net/ice/base/ice_sched.c
> > @@ -1233,7 +1233,7 @@ enum ice_status ice_sched_init_port(struct ice_port_info *pi)
> >   		goto err_init_port;
> >   	}
> >
> > -	/* If the last node is a leaf node then the index of the Q group
> > +	/* If the last node is a leaf node then the index of the queue group
> >   	 * layer is two less than the number of elements.
> >   	 */
> >   	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
> > @@ -3529,9 +3529,11 @@ ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
> >   		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
> >   				    ice_sched_agg_vsi_info, list_entry)
> >   			if (agg_vsi_info->vsi_handle == vsi_handle) {
> > +				/* cppcheck-suppress unreadVariable */
> >   				vsi_handle_valid = true;
> >   				break;
> >   			}
> > +
> >   		if (!vsi_handle_valid)
> >   			goto exit_agg_priority_per_tc;
> >
> > diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
> > index f4e151c55..f76be2b58 100644
> > --- a/drivers/net/ice/base/ice_type.h
> > +++ b/drivers/net/ice/base/ice_type.h
> > @@ -114,6 +114,9 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
> >   #define ICE_DBG_USER		BIT_ULL(31)
> >   #define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
> >
> > +#ifndef __ALWAYS_UNUSED
> > +#define __ALWAYS_UNUSED
> > +#endif
> 
> That does not look related
> 

Hi Maxime,

For this commit, I made modifications that do not impact any functionality, in order to align with
the ND-released shared code. Since the modified lines may be scattered across many different
commits in the original shared code repo, I just put them into one.

Thanks,
Leyi

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional switch rules
  2019-06-05 16:34     ` Stillwell Jr, Paul M
@ 2019-06-07 12:41       ` Maxime Coquelin
  2019-06-07 15:58         ` Stillwell Jr, Paul M
  0 siblings, 1 reply; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-07 12:41 UTC (permalink / raw)
  To: Stillwell Jr, Paul M, Rong, Leyi, Zhang, Qi Z; +Cc: dev, Nowlin, Dan



On 6/5/19 6:34 PM, Stillwell Jr, Paul M wrote:
>>>    	if (!s_rule)
>>> @@ -5576,8 +5606,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct
>> ice_adv_lkup_elem *lkups,
>>>    	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid);
>>>    	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
>>>
>>> -	ice_fill_adv_dummy_packet(lkups, lkups_cnt, rinfo->tun_type,
>> s_rule,
>>> -				  pkt, pkt_len);
>>> +	ice_fill_adv_dummy_packet(lkups, lkups_cnt, s_rule, pkt, pkt_len,
>>> +				  pkt_offsets);
>> Now that ice_fill_adv_dummy_packet() propagates an error, the caller
>> should do the same.
>>
> OK, can we accept this patch and have a separate patch that propagates the error? It will take some time to get a patch that propagates the error done.
> 

I'm OK with that, do you think it can be done for v19.08?

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 00/49] shared code update
  2019-06-06  5:44   ` Rong, Leyi
@ 2019-06-07 12:53     ` Maxime Coquelin
  0 siblings, 0 replies; 225+ messages in thread
From: Maxime Coquelin @ 2019-06-07 12:53 UTC (permalink / raw)
  To: Rong, Leyi, Zhang, Qi Z, Stillwell Jr, Paul M; +Cc: dev

Hi Leyi,

On 6/6/19 7:44 AM, Rong, Leyi wrote:
> 
>> -----Original Message-----
>> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
>> Sent: Wednesday, June 5, 2019 12:56 AM
>> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH 00/49] shared code update
>>
>> Hi Leyi,
>>
>> On 6/4/19 7:41 AM, Leyi Rong wrote:
>>> Main changes:
>>> 1. Advanced switch rule support.
>>> 2. Add more APIs for tunnel management.
>>> 3. Add some minor features.
>>> 4. Code clean and bug fix.
>>
>> In order to ease the review process, I think it would be much better to split this series in multiple ones, by features.
>> Otherwise, it is more difficult to keep track if comments are taken into account in the next revision.
>>
>> Also, it is suggested to put the fixes first in the series to ease the backporting.
>>
>> Thanks,
>> Maxime
> 
> +Paul,
> 
> Hello Maxime,
> Thanks for all your constructive comments, but we followed the same process for the CVL shared code update to DPDK upstream in the previous release.
> This series of patches is extracted/reorganized/squashed from the ND released packages; the shared code difference between 1905 and 1908 can be more than 200 commits in the original shared code repo.
> 
> IMHO, there are some reasons for taking all these patches as one patchset.
> 	- the patchset tries to keep the history order of the commits in the original shared code repo.
> 	- later patches in the patchset may depend on earlier ones.
> 	- it's difficult to split this series into multiple ones, since the patches are irregular and squashed.

On the other hand, it means we should just apply the series without even
reviewing it as it would not be taken into account.

I personally think this is not a sane practice.

This is not the case for this series, which is almost good to me except
for some missing error handling. But imagine there were a security issue
in it: should we just apply it as-is, so as not to diverge from your
internal code base, and wait for the next shared code release to get the
fix?

Note that my comment is not intended at Intel drivers specifically, as
it seems a common practice to have the base driver not reviewed and
applied as-is.

Best regards,
Maxime
> 
> 
> Best Regards,
> Leyi Rong
> 

^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional switch rules
  2019-06-07 12:41       ` Maxime Coquelin
@ 2019-06-07 15:58         ` Stillwell Jr, Paul M
  0 siblings, 0 replies; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-07 15:58 UTC (permalink / raw)
  To: Maxime Coquelin, Rong, Leyi, Zhang, Qi Z; +Cc: dev, Nowlin, Dan

> -----Original Message-----
> From: Maxime Coquelin <maxime.coquelin@redhat.com>
> Sent: Friday, June 7, 2019 5:41 AM
> To: Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>; Rong, Leyi
> <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Nowlin, Dan <dan.nowlin@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional
> switch rules
> 
> 
> 
> On 6/5/19 6:34 PM, Stillwell Jr, Paul M wrote:
> >>>    	if (!s_rule)
> >>> @@ -5576,8 +5606,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct
> >> ice_adv_lkup_elem *lkups,
> >>>    	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid);
> >>>    	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
> >>>
> >>> -	ice_fill_adv_dummy_packet(lkups, lkups_cnt, rinfo->tun_type,
> >> s_rule,
> >>> -				  pkt, pkt_len);
> >>> +	ice_fill_adv_dummy_packet(lkups, lkups_cnt, s_rule, pkt, pkt_len,
> >>> +				  pkt_offsets);
> >> Now that ice_fill_adv_dummy_packet() propagates an error, the caller
> >> should do the same.
> >>
> > OK, can we accept this patch and have a separate patch that propagates
> the error? It will take some time to get a patch that propagates the error
> done.
> >
> 
> I'm OK with that, do you think it can be done for v19.08?

I believe it will be done for 19.08; it just takes some time due to the internal review process. As an FYI, the internal patch that I created to fix this exposed an issue in our internal testing, so thanks for that!

^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 00/66] shared code update
  2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
                   ` (49 preceding siblings ...)
  2019-06-04 16:56 ` [dpdk-dev] [PATCH 00/49] shared code update Maxime Coquelin
@ 2019-06-11 15:51 ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 01/66] net/ice/base: add macro for rounding up Leyi Rong
                     ` (66 more replies)
  50 siblings, 67 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong

Main changes:
1. Advanced switch rule support.
2. Add more APIs for tunnel management.
3. Add some minor features.
4. Code cleanup and bug fixes.

---
v2:
- Split [03/49] into 2 commits.
- Split [27/49] with a standalone commit for code change in ice_osdep.h.
- Split [39/48] by kind of changes.
- Remove [42/49].
- Add some new patches from latest shared code release.

Leyi Rong (66):
  net/ice/base: add macro for rounding up
  net/ice/base: update standard extr seq to include DIR flag
  net/ice/base: add API to configure MIB
  net/ice/base: add another valid DCBx state
  net/ice/base: add more recipe commands
  net/ice/base: add funcs to create new switch recipe
  net/ice/base: programming a new switch recipe
  net/ice/base: replay advanced rule after reset
  net/ice/base: code for removing advanced rule
  net/ice/base: add lock around profile map list
  net/ice/base: save and post reset replay q bandwidth
  net/ice/base: rollback AVF RSS configurations
  net/ice/base: move RSS replay list
  net/ice/base: cache the data of set PHY cfg AQ in SW
  net/ice/base: refactor HW table init function
  net/ice/base: add compatibility check for package version
  net/ice/base: add API to init FW logging
  net/ice/base: use macro instead of magic 8
  net/ice/base: move and redefine ice debug cq API
  net/ice/base: separate out control queue lock creation
  net/ice/base: add helper functions for PHY caching
  net/ice/base: added sibling head to parse nodes
  net/ice/base: add and fix debuglogs
  net/ice/base: add support for reading REPC statistics
  net/ice/base: move VSI to VSI group
  net/ice/base: forbid VSI to remove unassociated ucast filter
  net/ice/base: add some minor features
  net/ice/base: add hweight32 support
  net/ice/base: call out dev/func caps when printing
  net/ice/base: add some minor features
  net/ice/base: cleanup update link info
  net/ice/base: add rd64 support
  net/ice/base: track HW stat registers past rollover
  net/ice/base: implement LLDP persistent settings
  net/ice/base: check new FD filter duplicate location
  net/ice/base: correct UDP/TCP PTYPE assignments
  net/ice/base: calculate rate limit burst size correctly
  net/ice/base: add lock around profile map list
  net/ice/base: fix Flow Director VSI count
  net/ice/base: use more efficient structures
  net/ice/base: silent semantic parser warnings
  net/ice/base: fix for signed package download
  net/ice/base: add new API to dealloc flow entry
  net/ice/base: check RSS flow profile list
  net/ice/base: protect list add with lock
  net/ice/base: fix Rx functionality for ethertype filters
  net/ice/base: introduce some new macros
  net/ice/base: add init for SW recipe member rg list
  net/ice/base: code clean up
  net/ice/base: cleanup ice flex pipe files
  net/ice/base: refactor VSI node sched code
  net/ice/base: add some minor new defines
  net/ice/base: add 16-byte Flex Rx Descriptor
  net/ice/base: add vxlan/generic tunnel management
  net/ice/base: enable additional switch rules
  net/ice/base: allow forward to Q groups in switch rule
  net/ice/base: changes for reducing ice add adv rule time
  net/ice/base: deduce TSA value in the CEE mode
  net/ice/base: rework API for ice zero bitmap
  net/ice/base: rework API for ice cp bitmap
  net/ice/base: use ice zero bitmap instead of ice memset
  net/ice/base: use the specified size for ice zero bitmap
  net/ice/base: fix potential memory leak in destroy tunnel
  net/ice/base: correct NVGRE header structure
  net/ice/base: add link event defines
  net/ice/base: reduce calls to get profile associations

 drivers/net/ice/base/ice_adminq_cmd.h    |  127 +-
 drivers/net/ice/base/ice_bitops.h        |   36 +-
 drivers/net/ice/base/ice_common.c        |  611 ++++--
 drivers/net/ice/base/ice_common.h        |   30 +-
 drivers/net/ice/base/ice_controlq.c      |  247 ++-
 drivers/net/ice/base/ice_controlq.h      |    4 +-
 drivers/net/ice/base/ice_dcb.c           |   82 +-
 drivers/net/ice/base/ice_dcb.h           |   12 +-
 drivers/net/ice/base/ice_fdir.c          |   11 +-
 drivers/net/ice/base/ice_fdir.h          |    4 -
 drivers/net/ice/base/ice_flex_pipe.c     | 1257 +++++------
 drivers/net/ice/base/ice_flex_pipe.h     |   74 +-
 drivers/net/ice/base/ice_flex_type.h     |   54 +-
 drivers/net/ice/base/ice_flow.c          |  447 +++-
 drivers/net/ice/base/ice_flow.h          |   26 +-
 drivers/net/ice/base/ice_lan_tx_rx.h     |   31 +-
 drivers/net/ice/base/ice_nvm.c           |   18 +-
 drivers/net/ice/base/ice_osdep.h         |   23 +
 drivers/net/ice/base/ice_protocol_type.h |   12 +-
 drivers/net/ice/base/ice_sched.c         |  219 +-
 drivers/net/ice/base/ice_sched.h         |   24 +-
 drivers/net/ice/base/ice_switch.c        | 2401 +++++++++++++++++++++-
 drivers/net/ice/base/ice_switch.h        |   66 +-
 drivers/net/ice/base/ice_type.h          |   92 +-
 drivers/net/ice/ice_ethdev.c             |    4 +-
 25 files changed, 4492 insertions(+), 1420 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 01/66] net/ice/base: add macro for rounding up
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 02/66] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
                     ` (65 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Paul M Stillwell Jr

Add macro ROUND_UP for rounding up to an arbitrary multiple.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_type.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index e4979b832..d994ea3d2 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -18,6 +18,18 @@
 #define ICE_BYTES_PER_DWORD	4
 #define ICE_MAX_TRAFFIC_CLASS	8
 
+#ifndef ROUND_UP
+/**
+ * ROUND_UP - round up to next arbitrary multiple (not a power of 2)
+ * @a: value to round up
+ * @b: arbitrary multiple
+ *
+ * Round up to the next multiple of the arbitrary b.
+ * Note, when b is a power of 2 use ICE_ALIGN() instead.
+ */
+#define ROUND_UP(a, b)	((b) * DIVIDE_AND_ROUND_UP((a), (b)))
+#endif
+
 #ifndef MIN_T
 #define MIN_T(_t, _a, _b)	min((_t)(_a), (_t)(_b))
 #endif
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 02/66] net/ice/base: update standard extr seq to include DIR flag
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 01/66] net/ice/base: add macro for rounding up Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 03/66] net/ice/base: add API to configure MIB Leyi Rong
                     ` (64 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

Once upon a time, the ice_flow_create_xtrct_seq() function in ice_flow.c
extracted only protocol fields explicitly specified by the caller of the
ice_flow_add_prof() function via its struct ice_flow_seg_info instances.
However, to support different ingress and egress flow profiles with the
same matching criteria, it would be necessary to also match on the packet
Direction metadata. The primary reason was that there could not be more
than one HW profile with the same CDID, PTG, and VSIG, and the Direction
metadata was not a parameter used to select HW profile IDs.

Thus, for ACL, the direction flag would need to be added to the extraction
sequence. This information will be used later as one criterion for ACL
scenario entry matching.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 43 +++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index be819e0e9..f1bf5b5e7 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -495,6 +495,42 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 	return ICE_SUCCESS;
 }
 
+/**
+ * ice_flow_xtract_pkt_flags - Create an extr sequence entry for packet flags
+ * @hw: pointer to the HW struct
+ * @params: information about the flow to be processed
+ * @flags: The value of pkt_flags[x:x] in RX/TX MDID metadata.
+ *
+ * This function will allocate an extraction sequence entry for a DWORD size
+ * chunk of the packet flags.
+ */
+static enum ice_status
+ice_flow_xtract_pkt_flags(struct ice_hw *hw,
+			  struct ice_flow_prof_params *params,
+			  enum ice_flex_mdid_pkt_flags flags)
+{
+	u8 fv_words = hw->blk[params->blk].es.fvw;
+	u8 idx;
+
+	/* Make sure the number of extraction sequence entries required does not
+	 * exceed the block's capacity.
+	 */
+	if (params->es_cnt >= fv_words)
+		return ICE_ERR_MAX_LIMIT;
+
+	/* some blocks require a reversed field vector layout */
+	if (hw->blk[params->blk].es.reverse)
+		idx = fv_words - params->es_cnt - 1;
+	else
+		idx = params->es_cnt;
+
+	params->es[idx].prot_id = ICE_PROT_META_ID;
+	params->es[idx].off = flags;
+	params->es_cnt++;
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_flow_xtract_fld - Create an extraction sequence entry for the given field
  * @hw: pointer to the HW struct
@@ -744,6 +780,13 @@ ice_flow_create_xtrct_seq(struct ice_hw *hw,
 	enum ice_status status = ICE_SUCCESS;
 	u8 i;
 
+	/* For ACL, we also need to extract the direction bit (Rx,Tx) data from
+	 * packet flags
+	 */
+	if (params->blk == ICE_BLK_ACL)
+		ice_flow_xtract_pkt_flags(hw, params,
+					  ICE_RX_MDID_PKT_FLAGS_15_0);
+
 	for (i = 0; i < params->prof->segs_cnt; i++) {
 		u64 match = params->prof->segs[i].match;
 		u16 j;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 03/66] net/ice/base: add API to configure MIB
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 01/66] net/ice/base: add macro for rounding up Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 02/66] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 04/66] net/ice/base: add another valid DCBx state Leyi Rong
                     ` (63 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

Decouple ice_cfg_lldp_mib_change from the ice_init_dcb function call.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 38 ++++++++++++++++++++++++++++++----
 drivers/net/ice/base/ice_dcb.h |  3 ++-
 2 files changed, 36 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index a7810578d..4e213d4f9 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -927,10 +927,11 @@ enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
 /**
  * ice_init_dcb
  * @hw: pointer to the HW struct
+ * @enable_mib_change: enable MIB change event
  *
  * Update DCB configuration from the Firmware
  */
-enum ice_status ice_init_dcb(struct ice_hw *hw)
+enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
 {
 	struct ice_port_info *pi = hw->port_info;
 	enum ice_status ret = ICE_SUCCESS;
@@ -952,13 +953,42 @@ enum ice_status ice_init_dcb(struct ice_hw *hw)
 			return ret;
 	} else if (pi->dcbx_status == ICE_DCBX_STATUS_DIS) {
 		return ICE_ERR_NOT_READY;
-	} else if (pi->dcbx_status == ICE_DCBX_STATUS_MULTIPLE_PEERS) {
 	}
 
 	/* Configure the LLDP MIB change event */
-	ret = ice_aq_cfg_lldp_mib_change(hw, true, NULL);
+	if (enable_mib_change) {
+		ret = ice_aq_cfg_lldp_mib_change(hw, true, NULL);
+		if (!ret)
+			pi->is_sw_lldp = false;
+	}
+
+	return ret;
+}
+
+/**
+ * ice_cfg_lldp_mib_change
+ * @hw: pointer to the HW struct
+ * @ena_mib: enable/disable MIB change event
+ *
+ * Configure (disable/enable) MIB
+ */
+enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status ret;
+
+	if (!hw->func_caps.common_cap.dcb)
+		return ICE_ERR_NOT_SUPPORTED;
+
+	/* Get DCBX status */
+	pi->dcbx_status = ice_get_dcbx_status(hw);
+
+	if (pi->dcbx_status == ICE_DCBX_STATUS_DIS)
+		return ICE_ERR_NOT_READY;
+
+	ret = ice_aq_cfg_lldp_mib_change(hw, ena_mib, NULL);
 	if (!ret)
-		pi->is_sw_lldp = false;
+		pi->is_sw_lldp = !ena_mib;
 
 	return ret;
 }
diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h
index d922c8a29..65d2bafef 100644
--- a/drivers/net/ice/base/ice_dcb.h
+++ b/drivers/net/ice/base/ice_dcb.h
@@ -197,7 +197,7 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 		   struct ice_dcbx_cfg *dcbcfg);
 enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi);
 enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi);
-enum ice_status ice_init_dcb(struct ice_hw *hw);
+enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change);
 void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg);
 enum ice_status
 ice_query_port_ets(struct ice_port_info *pi,
@@ -217,6 +217,7 @@ enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd);
 enum ice_status
 ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
 		       bool *dcbx_agent_status, struct ice_sq_cd *cd);
+enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib);
 enum ice_status
 ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
 			   struct ice_sq_cd *cd);
-- 
2.17.1



* [dpdk-dev] [PATCH v2 04/66] net/ice/base: add another valid DCBx state
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (2 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 03/66] net/ice/base: add API to configure MIB Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 05/66] net/ice/base: add more recipe commands Leyi Rong
                     ` (62 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dave Ertman, Paul M Stillwell Jr

When a port is not cabled but DCBx is enabled in the
firmware, the DCBx status will be NOT_STARTED. This is a
valid state when FW DCBx is enabled, and should not
automatically be treated as is_fw_lldp being true.

Add code to treat NOT_STARTED as another valid state.

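The state check this patch extends can be sketched in isolation (enum names below are hypothetical mirrors of the ICE_DCBX_STATUS_* values): after the change, NOT_STARTED joins DONE and IN_PROGRESS as states from which the DCB configuration may be fetched.

```c
#include <stdbool.h>

/* Hypothetical mirror of the relevant ICE_DCBX_STATUS_* values. */
enum dcbx_status {
	DCBX_NOT_STARTED,
	DCBX_IN_PROGRESS,
	DCBX_DONE,
	DCBX_MULTIPLE_PEERS,
	DCBX_DIS
};

/* States from which ice_get_dcb_cfg() may be called after this patch. */
static bool dcbx_cfg_fetchable(enum dcbx_status s)
{
	return s == DCBX_DONE || s == DCBX_IN_PROGRESS ||
	       s == DCBX_NOT_STARTED;
}
```
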
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index 4e213d4f9..100c4bb0f 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -945,7 +945,8 @@ enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
 	pi->dcbx_status = ice_get_dcbx_status(hw);
 
 	if (pi->dcbx_status == ICE_DCBX_STATUS_DONE ||
-	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS) {
+	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS ||
+	    pi->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {
 		/* Get current DCBX configuration */
 		ret = ice_get_dcb_cfg(pi);
 		pi->is_sw_lldp = (hw->adminq.sq_last_status == ICE_AQ_RC_EPERM);
-- 
2.17.1



* [dpdk-dev] [PATCH v2 05/66] net/ice/base: add more recipe commands
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (3 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 04/66] net/ice/base: add another valid DCBx state Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 06/66] net/ice/base: add funcs to create new switch recipe Leyi Rong
                     ` (61 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Lev Faerman, Paul M Stillwell Jr

Add the Add Recipe (0x0290), Recipe to Profile (0x0291), Get Recipe
(0x0292) and Get Recipe to Profile (0x0293) commands.

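The recipe ID byte introduced by these structures packs two fields. A small standalone sketch of the decoding (macro names shortened from ICE_AQ_RECIPE_ID_M / ICE_AQ_RECIPE_ID_IS_ROOT; layout taken from the patch): the low six bits carry the recipe ID and bit 7 flags a root recipe.

```c
#include <stdbool.h>
#include <stdint.h>

/* Shortened mirrors of the field macros added in ice_aqc_recipe_content. */
#define RECIPE_ID_S		0
#define RECIPE_ID_M		(0x3F << RECIPE_ID_S)
#define RECIPE_ID_IS_ROOT	(1u << 7)

/* Extract the 6-bit recipe ID from the rid byte. */
static uint8_t recipe_id(uint8_t rid)
{
	return (rid & RECIPE_ID_M) >> RECIPE_ID_S;
}

/* Test the root-recipe flag in bit 7 of the rid byte. */
static bool recipe_is_root(uint8_t rid)
{
	return (rid & RECIPE_ID_IS_ROOT) != 0;
}
```
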
Signed-off-by: Lev Faerman <lev.faerman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 73 +++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index bbdca83fc..7b0aa8aaa 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -696,6 +696,72 @@ struct ice_aqc_storm_cfg {
 
 #define ICE_MAX_NUM_RECIPES 64
 
+/* Add/Get Recipe (indirect 0x0290/0x0292)*/
+struct ice_aqc_add_get_recipe {
+	__le16 num_sub_recipes;	/* Input in Add cmd, Output in Get cmd */
+	__le16 return_index;	/* Input, used for Get cmd only */
+	u8 reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct ice_aqc_recipe_content {
+	u8 rid;
+#define ICE_AQ_RECIPE_ID_S		0
+#define ICE_AQ_RECIPE_ID_M		(0x3F << ICE_AQ_RECIPE_ID_S)
+#define ICE_AQ_RECIPE_ID_IS_ROOT	BIT(7)
+	u8 lkup_indx[5];
+#define ICE_AQ_RECIPE_LKUP_DATA_S	0
+#define ICE_AQ_RECIPE_LKUP_DATA_M	(0x3F << ICE_AQ_RECIPE_LKUP_DATA_S)
+#define ICE_AQ_RECIPE_LKUP_IGNORE	BIT(7)
+#define ICE_AQ_SW_ID_LKUP_MASK		0x00FF
+	__le16 mask[5];
+	u8 result_indx;
+#define ICE_AQ_RECIPE_RESULT_DATA_S	0
+#define ICE_AQ_RECIPE_RESULT_DATA_M	(0x3F << ICE_AQ_RECIPE_RESULT_DATA_S)
+#define ICE_AQ_RECIPE_RESULT_EN		BIT(7)
+	u8 rsvd0[3];
+	u8 act_ctrl_join_priority;
+	u8 act_ctrl_fwd_priority;
+#define ICE_AQ_RECIPE_FWD_PRIORITY_S	0
+#define ICE_AQ_RECIPE_FWD_PRIORITY_M	(0xF << ICE_AQ_RECIPE_FWD_PRIORITY_S)
+	u8 act_ctrl;
+#define ICE_AQ_RECIPE_ACT_NEED_PASS_L2	BIT(0)
+#define ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2	BIT(1)
+#define ICE_AQ_RECIPE_ACT_INV_ACT	BIT(2)
+#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_S	4
+#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_M	(0x3 << ICE_AQ_RECIPE_ACT_PRUNE_INDX_S)
+	u8 rsvd1;
+	__le32 dflt_act;
+#define ICE_AQ_RECIPE_DFLT_ACT_S	0
+#define ICE_AQ_RECIPE_DFLT_ACT_M	(0x7FFFF << ICE_AQ_RECIPE_DFLT_ACT_S)
+#define ICE_AQ_RECIPE_DFLT_ACT_VALID	BIT(31)
+};
+
+struct ice_aqc_recipe_data_elem {
+	u8 recipe_indx;
+	u8 resp_bits;
+#define ICE_AQ_RECIPE_WAS_UPDATED	BIT(0)
+	u8 rsvd0[2];
+	u8 recipe_bitmap[8];
+	u8 rsvd1[4];
+	struct ice_aqc_recipe_content content;
+	u8 rsvd2[20];
+};
+
+/* This struct contains a number of entries as per the
+ * num_sub_recipes in the command
+ */
+struct ice_aqc_add_get_recipe_data {
+	struct ice_aqc_recipe_data_elem recipe[1];
+};
+
+/* Set/Get Recipes to Profile Association (direct 0x0291/0x0293) */
+struct ice_aqc_recipe_to_profile {
+	__le16 profile_id;
+	u8 rsvd[6];
+	ice_declare_bitmap(recipe_assoc, ICE_MAX_NUM_RECIPES);
+};
 
 /* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
  */
@@ -2210,6 +2276,8 @@ struct ice_aq_desc {
 		struct ice_aqc_get_sw_cfg get_sw_conf;
 		struct ice_aqc_sw_rules sw_rules;
 		struct ice_aqc_storm_cfg storm_conf;
+		struct ice_aqc_add_get_recipe add_get_recipe;
+		struct ice_aqc_recipe_to_profile recipe_to_profile;
 		struct ice_aqc_get_topo get_topo;
 		struct ice_aqc_sched_elem_cmd sched_elem_cmd;
 		struct ice_aqc_query_txsched_res query_sched_res;
@@ -2369,6 +2437,11 @@ enum ice_adminq_opc {
 	ice_aqc_opc_set_storm_cfg			= 0x0280,
 	ice_aqc_opc_get_storm_cfg			= 0x0281,
 
+	/* recipe commands */
+	ice_aqc_opc_add_recipe				= 0x0290,
+	ice_aqc_opc_recipe_to_profile			= 0x0291,
+	ice_aqc_opc_get_recipe				= 0x0292,
+	ice_aqc_opc_get_recipe_to_profile		= 0x0293,
 
 	/* switch rules population commands */
 	ice_aqc_opc_add_sw_rules			= 0x02A0,
-- 
2.17.1



* [dpdk-dev] [PATCH v2 06/66] net/ice/base: add funcs to create new switch recipe
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (4 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 05/66] net/ice/base: add more recipe commands Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 07/66] net/ice/base: programming a " Leyi Rong
                     ` (60 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Grishma Kotecha, Paul M Stillwell Jr

Add functions to support the following admin queue commands:
1. 0x0208: allocate a resource to hold a switch recipe. This is needed
when a new switch recipe has to be created.
2. 0x0290: create a recipe with protocol header information and
other details that determine how this recipe's filtering works.
3. 0x0292: get the details of an existing recipe.
4. 0x0291: associate a switch recipe with a profile.

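The Get Recipe (0x0292) call has an in/out count contract worth spelling out. A caller-side sketch with hypothetical stand-in types (the real AQ transport is elided): the caller must pass in the full capacity, ICE_MAX_NUM_RECIPES, and the count comes back as the number of entries the firmware actually filled.

```c
#include <stdint.h>

#define MAX_NUM_RECIPES 64	/* mirrors ICE_MAX_NUM_RECIPES */

/* Simplified stand-in for struct ice_aqc_recipe_data_elem. */
struct recipe_elem {
	uint8_t recipe_indx;
	uint8_t data[3];
};

/* Hypothetical stand-in for ice_aq_get_recipe(): *num must come in as
 * the full buffer capacity and goes out as the number of sub-recipes
 * returned; `found` simulates what the firmware would report. */
static int get_recipe_sketch(struct recipe_elem *list, uint16_t *num,
			     uint16_t root, uint16_t found)
{
	if (*num != MAX_NUM_RECIPES)
		return -1;	/* mirrors ICE_ERR_PARAM */
	list[0].recipe_indx = (uint8_t)root;
	*num = found;		/* entries actually filled in */
	return 0;
}
```
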
Signed-off-by: Grishma Kotecha <grishma.kotecha@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 132 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h |  12 +++
 2 files changed, 144 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index a1c29d606..b84a07459 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -914,6 +914,138 @@ ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz,
 	return ice_aq_send_cmd(hw, &desc, rule_list, rule_list_sz, cd);
 }
 
+/**
+ * ice_aq_add_recipe - add switch recipe
+ * @hw: pointer to the HW struct
+ * @s_recipe_list: pointer to switch rule population list
+ * @num_recipes: number of switch recipes in the list
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add(0x0290)
+ */
+enum ice_status
+ice_aq_add_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 num_recipes, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_get_recipe *cmd;
+	struct ice_aq_desc desc;
+	u16 buf_size;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_recipe");
+	cmd = &desc.params.add_get_recipe;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_recipe);
+
+	cmd->num_sub_recipes = CPU_TO_LE16(num_recipes);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	buf_size = num_recipes * sizeof(*s_recipe_list);
+
+	return ice_aq_send_cmd(hw, &desc, s_recipe_list, buf_size, cd);
+}
+
+/**
+ * ice_aq_get_recipe - get switch recipe
+ * @hw: pointer to the HW struct
+ * @s_recipe_list: pointer to switch rule population list
+ * @num_recipes: pointer to the number of recipes (input and output)
+ * @recipe_root: root recipe number of recipe(s) to retrieve
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get(0x0292)
+ *
+ * On input, *num_recipes should equal the number of entries in s_recipe_list.
+ * On output, *num_recipes will equal the number of entries returned in
+ * s_recipe_list.
+ *
+ * The caller must supply enough space in s_recipe_list to hold all possible
+ * recipes and *num_recipes must equal ICE_MAX_NUM_RECIPES.
+ */
+enum ice_status
+ice_aq_get_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 *num_recipes, u16 recipe_root, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_get_recipe *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 buf_size;
+
+	if (*num_recipes != ICE_MAX_NUM_RECIPES)
+		return ICE_ERR_PARAM;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_get_recipe");
+	cmd = &desc.params.add_get_recipe;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_recipe);
+
+	cmd->return_index = CPU_TO_LE16(recipe_root);
+	cmd->num_sub_recipes = 0;
+
+	buf_size = *num_recipes * sizeof(*s_recipe_list);
+
+	status = ice_aq_send_cmd(hw, &desc, s_recipe_list, buf_size, cd);
+	/* cppcheck-suppress constArgument */
+	*num_recipes = LE16_TO_CPU(cmd->num_sub_recipes);
+
+	return status;
+}
+
+/**
+ * ice_aq_map_recipe_to_profile - Map recipe to packet profile
+ * @hw: pointer to the HW struct
+ * @profile_id: package profile ID to associate the recipe with
+ * @r_bitmap: recipe bitmap filled in and needs to be returned as response
+ * @cd: pointer to command details structure or NULL
+ * Recipe to profile association (0x0291)
+ */
+enum ice_status
+ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd)
+{
+	struct ice_aqc_recipe_to_profile *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_assoc_recipe_to_prof");
+	cmd = &desc.params.recipe_to_profile;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_recipe_to_profile);
+	cmd->profile_id = CPU_TO_LE16(profile_id);
+	/* Set the recipe ID bit in the bitmask to let the device know which
+	 * profile we are associating the recipe to
+	 */
+	ice_memcpy(cmd->recipe_assoc, r_bitmap, sizeof(cmd->recipe_assoc),
+		   ICE_NONDMA_TO_NONDMA);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_alloc_recipe - add recipe resource
+ * @hw: pointer to the hardware structure
+ * @rid: recipe ID returned as response to AQ call
+ */
+enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *rid)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	enum ice_status status;
+	u16 buf_len;
+
+	buf_len = sizeof(*sw_buf);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+
+	sw_buf->num_elems = CPU_TO_LE16(1);
+	sw_buf->res_type = CPU_TO_LE16((ICE_AQC_RES_TYPE_RECIPE <<
+					ICE_AQC_RES_TYPE_S) |
+					ICE_AQC_RES_TYPE_FLAG_SHARED);
+	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len,
+				       ice_aqc_opc_alloc_res, NULL);
+	if (!status)
+		*rid = LE16_TO_CPU(sw_buf->elem[0].e.sw_resp);
+	ice_free(hw, sw_buf);
+
+	return status;
+}
 
 /* ice_init_port_info - Initialize port_info with switch configuration data
  * @pi: pointer to port_info
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 13525d8d0..fd61c0eea 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -408,8 +408,20 @@ enum ice_status
 ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
 			 u16 *vid);
 
+enum ice_status
+ice_aq_add_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 num_recipes, struct ice_sq_cd *cd);
 
+enum ice_status
+ice_aq_get_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 *num_recipes, u16 recipe_root, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd);
 
+enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *recipe_id);
 enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
 
 enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
-- 
2.17.1



* [dpdk-dev] [PATCH v2 07/66] net/ice/base: programming a new switch recipe
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (5 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 06/66] net/ice/base: add funcs to create new switch recipe Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 08/66] net/ice/base: replay advanced rule after reset Leyi Rong
                     ` (59 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Grishma Kotecha, Paul M Stillwell Jr

1. Add an interface to support adding advanced switch rules.
2. Advanced rules are provided in the form of protocol headers and
values to match, in addition to actions (a limited set of actions is
currently supported).
3. Retrieve field vectors from the ICE configuration package to
determine the extracted fields and their locations for recipe creation.
4. Chain multiple recipes together to match multiple protocol headers.
5. Add a structure to manage the dynamic recipes.

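One core piece of recipe reuse in this patch is deciding whether a requested extraction sequence already matches an existing recipe. A minimal standalone sketch of that comparison, with simplified stand-ins for struct ice_fv_word: two sequences match when they contain the same (protocol ID, offset) pairs, independent of ordering.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-in for struct ice_fv_word. */
struct fv_word {
	uint8_t prot_id;
	uint16_t off;
};

/* Order-independent comparison of two extraction sequences, mirroring
 * the nested-loop match in ice_find_recp(): every word of `a` must be
 * found somewhere in `b`, and the counts must be equal. */
static bool fv_words_match(const struct fv_word *a, int na,
			   const struct fv_word *b, int nb)
{
	int p, q;

	if (na != nb)
		return false;
	for (p = 0; p < na; p++) {
		for (q = 0; q < nb; q++)
			if (a[p].off == b[q].off &&
			    a[p].prot_id == b[q].prot_id)
				break;
		if (q == nb)
			return false;	/* a[p] not found in b */
	}
	return true;
}
```
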
Signed-off-by: Grishma Kotecha <grishma.kotecha@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c |   33 +-
 drivers/net/ice/base/ice_flex_pipe.h |    7 +-
 drivers/net/ice/base/ice_switch.c    | 1640 ++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h    |   21 +
 4 files changed, 1698 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 14e632fab..babad94f8 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -734,7 +734,7 @@ static void ice_release_global_cfg_lock(struct ice_hw *hw)
  *
  * This function will request ownership of the change lock.
  */
-static enum ice_status
+enum ice_status
 ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access)
 {
 	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_change_lock");
@@ -749,7 +749,7 @@ ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access)
  *
  * This function will release the change lock using the proper Admin Command.
  */
-static void ice_release_change_lock(struct ice_hw *hw)
+void ice_release_change_lock(struct ice_hw *hw)
 {
 	ice_debug(hw, ICE_DBG_TRACE, "ice_release_change_lock");
 
@@ -1801,6 +1801,35 @@ void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
 	ice_free(hw, bld);
 }
 
+/**
+ * ice_find_prot_off - find prot ID and offset pair, based on prof and FV index
+ * @hw: pointer to the hardware structure
+ * @blk: hardware block
+ * @prof: profile ID
+ * @fv_idx: field vector word index
+ * @prot: variable to receive the protocol ID
+ * @off: variable to receive the protocol offset
+ */
+enum ice_status
+ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
+		  u8 *prot, u16 *off)
+{
+	struct ice_fv_word *fv_ext;
+
+	if (prof >= hw->blk[blk].es.count)
+		return ICE_ERR_PARAM;
+
+	if (fv_idx >= hw->blk[blk].es.fvw)
+		return ICE_ERR_PARAM;
+
+	fv_ext = hw->blk[blk].es.t + (prof * hw->blk[blk].es.fvw);
+
+	*prot = fv_ext[fv_idx].prot_id;
+	*off = fv_ext[fv_idx].off;
+
+	return ICE_SUCCESS;
+}
+
 /* PTG Management */
 
 /**
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 00c2b6682..2710dded6 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -15,7 +15,12 @@
 
 enum ice_status
 ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count);
-
+enum ice_status
+ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access);
+void ice_release_change_lock(struct ice_hw *hw);
+enum ice_status
+ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
+		  u8 *prot, u16 *off);
 struct ice_generic_seg_hdr *
 ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
 		    struct ice_pkg_hdr *pkg_hdr);
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index b84a07459..c53021aed 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -53,6 +53,210 @@ static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
 	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
 	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
 
+static const
+u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,		/* Ether ends */
+			  0x45, 0, 0, 0x3E,	/* IP starts */
+			  0, 0, 0, 0,
+			  0, 0x2F, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* IP ends */
+			  0x80, 0, 0x65, 0x58,	/* GRE starts */
+			  0, 0, 0, 0,		/* GRE ends */
+			  0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,		/* Ether ends */
+			  0x45, 0, 0, 0x14,	/* IP starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0		/* IP ends */
+			};
+
+static const u8
+dummy_udp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,		/* Ether ends */
+			  0x45, 0, 0, 0x32,	/* IP starts */
+			  0, 0, 0, 0,
+			  0, 0x11, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* IP ends */
+			  0, 0, 0x12, 0xB5,	/* UDP start*/
+			  0, 0x1E, 0, 0,	/* UDP end*/
+			  0, 0, 0, 0,		/* VXLAN start */
+			  0, 0, 0, 0,		/* VXLAN end*/
+			  0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0			/* Ether ends */
+			};
+
+static const u8
+dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,              /* Ether ends */
+			  0x45, 0, 0, 0x28,     /* IP starts */
+			  0, 0x01, 0, 0,
+			  0x40, 0x06, 0xF5, 0x69,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,   /* IP ends */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x50, 0x02, 0x20,
+			  0, 0x9, 0x79, 0, 0,
+			  0, 0 /* 2 bytes padding for 4 byte alignment*/
+			};
+
+/* this is a recipe to profile bitmap association */
+static ice_declare_bitmap(recipe_to_profile[ICE_MAX_NUM_RECIPES],
+			  ICE_MAX_NUM_PROFILES);
+static ice_declare_bitmap(available_result_ids, ICE_CHAIN_FV_INDEX_START + 1);
+
+/**
+ * ice_get_recp_frm_fw - update SW bookkeeping from FW recipe entries
+ * @hw: pointer to hardware structure
+ * @recps: struct that we need to populate
+ * @rid: recipe ID that we are populating
+ *
+ * This function is used to populate all the necessary entries into our
+ * bookkeeping so that we have a current list of all the recipes that are
+ * programmed in the firmware.
+ */
+static enum ice_status
+ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
+{
+	u16 i, sub_recps, fv_word_idx = 0, result_idx = 0;
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_PROFILES);
+	u16 result_idxs[ICE_MAX_CHAIN_RECIPE] = { 0 };
+	struct ice_aqc_recipe_data_elem *tmp;
+	u16 num_recps = ICE_MAX_NUM_RECIPES;
+	struct ice_prot_lkup_ext *lkup_exts;
+	enum ice_status status;
+
+	/* we need a buffer big enough to accommodate all the recipes */
+	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
+		ICE_MAX_NUM_RECIPES, sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	tmp[0].recipe_indx = rid;
+	status = ice_aq_get_recipe(hw, tmp, &num_recps, rid, NULL);
+	/* non-zero status meaning recipe doesn't exist */
+	if (status)
+		goto err_unroll;
+	lkup_exts = &recps[rid].lkup_exts;
+	/* start populating all the entries for recps[rid] based on lkups from
+	 * firmware
+	 */
+	for (sub_recps = 0; sub_recps < num_recps; sub_recps++) {
+		struct ice_aqc_recipe_data_elem root_bufs = tmp[sub_recps];
+		struct ice_recp_grp_entry *rg_entry;
+		u8 prof_id, prot = 0;
+		u16 off = 0;
+
+		rg_entry = (struct ice_recp_grp_entry *)
+			ice_malloc(hw, sizeof(*rg_entry));
+		if (!rg_entry) {
+			status = ICE_ERR_NO_MEMORY;
+			goto err_unroll;
+		}
+		/* Avoid 8th bit since its result enable bit */
+		result_idxs[result_idx] = root_bufs.content.result_indx &
+			~ICE_AQ_RECIPE_RESULT_EN;
+		/* Check if result enable bit is set */
+		if (root_bufs.content.result_indx & ICE_AQ_RECIPE_RESULT_EN)
+			ice_clear_bit(ICE_CHAIN_FV_INDEX_START -
+				      result_idxs[result_idx++],
+				      available_result_ids);
+		ice_memcpy(r_bitmap,
+			   recipe_to_profile[tmp[sub_recps].recipe_indx],
+			   sizeof(r_bitmap), ICE_NONDMA_TO_NONDMA);
+		/* get the first profile that is associated with rid */
+		prof_id = ice_find_first_bit(r_bitmap, ICE_MAX_NUM_PROFILES);
+		for (i = 0; i < ICE_NUM_WORDS_RECIPE; i++) {
+			u8 lkup_indx = root_bufs.content.lkup_indx[i + 1];
+
+			rg_entry->fv_idx[i] = lkup_indx;
+			/* If the recipe is a chained recipe then all its
+			 * child recipe's result will have a result index.
+			 * To fill fv_words we should not use those result
+			 * index, we only need the protocol ids and offsets.
+			 * We will skip all the fv_idx which stores result
+			 * index in them. We also need to skip any fv_idx which
+			 * has ICE_AQ_RECIPE_LKUP_IGNORE or 0 since it isn't a
+			 * valid offset value.
+			 */
+			if (result_idxs[0] == rg_entry->fv_idx[i] ||
+			    result_idxs[1] == rg_entry->fv_idx[i] ||
+			    result_idxs[2] == rg_entry->fv_idx[i] ||
+			    result_idxs[3] == rg_entry->fv_idx[i] ||
+			    result_idxs[4] == rg_entry->fv_idx[i] ||
+			    rg_entry->fv_idx[i] == ICE_AQ_RECIPE_LKUP_IGNORE ||
+			    rg_entry->fv_idx[i] == 0)
+				continue;
+
+			ice_find_prot_off(hw, ICE_BLK_SW, prof_id,
+					  rg_entry->fv_idx[i], &prot, &off);
+			lkup_exts->fv_words[fv_word_idx].prot_id = prot;
+			lkup_exts->fv_words[fv_word_idx].off = off;
+			fv_word_idx++;
+		}
+		/* populate rg_list with the data from the child entry of this
+		 * recipe
+		 */
+		LIST_ADD(&rg_entry->l_entry, &recps[rid].rg_list);
+	}
+	lkup_exts->n_val_words = fv_word_idx;
+	recps[rid].n_grp_count = num_recps;
+	recps[rid].root_buf = (struct ice_aqc_recipe_data_elem *)
+		ice_calloc(hw, recps[rid].n_grp_count,
+			   sizeof(struct ice_aqc_recipe_data_elem));
+	if (!recps[rid].root_buf)
+		goto err_unroll;
+
+	ice_memcpy(recps[rid].root_buf, tmp, recps[rid].n_grp_count *
+		   sizeof(*recps[rid].root_buf), ICE_NONDMA_TO_NONDMA);
+	recps[rid].recp_created = true;
+	if (tmp[sub_recps].content.rid & ICE_AQ_RECIPE_ID_IS_ROOT)
+		recps[rid].root_rid = rid;
+err_unroll:
+	ice_free(hw, tmp);
+	return status;
+}
+
+/**
+ * ice_get_recp_to_prof_map - updates recipe to profile mapping
+ * @hw: pointer to hardware structure
+ *
+ * This function is used to populate recipe_to_profile matrix where index to
+ * this array is the recipe ID and the element is the mapping of which profiles
+ * is this recipe mapped to.
+ */
+static void
+ice_get_recp_to_prof_map(struct ice_hw *hw)
+{
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+	u16 i;
+
+	for (i = 0; i < ICE_MAX_NUM_PROFILES; i++) {
+		u16 j;
+
+		ice_zero_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+		if (ice_aq_get_recipe_to_profile(hw, i, (u8 *)r_bitmap, NULL))
+			continue;
+
+		for (j = 0; j < ICE_MAX_NUM_RECIPES; j++)
+			if (ice_is_bit_set(r_bitmap, j))
+				ice_set_bit(i, recipe_to_profile[j]);
+	}
+}
 
 /**
  * ice_init_def_sw_recp - initialize the recipe book keeping tables
@@ -1018,6 +1222,35 @@ ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
+/**
+ * ice_aq_get_recipe_to_profile - Map recipe to packet profile
+ * @hw: pointer to the HW struct
+ * @profile_id: package profile ID to associate the recipe with
+ * @r_bitmap: recipe bitmap filled in and needs to be returned as response
+ * @cd: pointer to command details structure or NULL
+ * Get recipe to profile association (0x0293)
+ */
+enum ice_status
+ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd)
+{
+	struct ice_aqc_recipe_to_profile *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_get_recipe_to_prof");
+	cmd = &desc.params.recipe_to_profile;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_recipe_to_profile);
+	cmd->profile_id = CPU_TO_LE16(profile_id);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	if (!status)
+		ice_memcpy(r_bitmap, cmd->recipe_assoc,
+			   sizeof(cmd->recipe_assoc), ICE_NONDMA_TO_NONDMA);
+
+	return status;
+}
+
 /**
  * ice_alloc_recipe - add recipe resource
  * @hw: pointer to the hardware structure
@@ -3899,6 +4132,1413 @@ ice_add_mac_with_counter(struct ice_hw *hw, struct ice_fltr_info *f_info)
 	return ret;
 }
 
+/* This is mapping table entry that maps every word within a given protocol
+ * structure to the real byte offset as per the specification of that
+ * protocol header.
+ * for example dst address is 3 words in ethertype header and corresponding
+ * bytes are 0, 2, 3 in the actual packet header and src address is at 4, 6, 8
+ * IMPORTANT: Every structure part of "ice_prot_hdr" union should have a
+ * matching entry describing its field. This needs to be updated if new
+ * structure is added to that union.
+ */
+static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
+	{ ICE_MAC_OFOS,		{ 0, 2, 4, 6, 8, 10, 12 } },
+	{ ICE_MAC_IL,		{ 0, 2, 4, 6, 8, 10, 12 } },
+	{ ICE_IPV4_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 } },
+	{ ICE_IPV4_IL,		{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 } },
+	{ ICE_IPV6_IL,		{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
+				 26, 28, 30, 32, 34, 36, 38 } },
+	{ ICE_IPV6_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
+				 26, 28, 30, 32, 34, 36, 38 } },
+	{ ICE_TCP_IL,		{ 0, 2 } },
+	{ ICE_UDP_ILOS,		{ 0, 2 } },
+	{ ICE_SCTP_IL,		{ 0, 2 } },
+	{ ICE_VXLAN,		{ 8, 10, 12 } },
+	{ ICE_GENEVE,		{ 8, 10, 12 } },
+	{ ICE_VXLAN_GPE,	{ 0, 2, 4 } },
+	{ ICE_NVGRE,		{ 0, 2 } },
+	{ ICE_PROTOCOL_LAST,	{ 0 } }
+};
+
+/* The following table describes preferred grouping of recipes.
+ * If a recipe that needs to be programmed is a superset or matches one of the
+ * following combinations, then the recipe needs to be chained as per the
+ * following policy.
+ */
+static const struct ice_pref_recipe_group ice_recipe_pack[] = {
+	{3, { { ICE_MAC_OFOS_HW, 0, 0 }, { ICE_MAC_OFOS_HW, 2, 0 },
+	      { ICE_MAC_OFOS_HW, 4, 0 } } },
+	{4, { { ICE_MAC_IL_HW, 0, 0 }, { ICE_MAC_IL_HW, 2, 0 },
+	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } } },
+	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } } },
+	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } } },
+};
+
+static const struct ice_protocol_entry ice_prot_id_tbl[] = {
+	{ ICE_MAC_OFOS,		ICE_MAC_OFOS_HW },
+	{ ICE_MAC_IL,		ICE_MAC_IL_HW },
+	{ ICE_IPV4_OFOS,	ICE_IPV4_OFOS_HW },
+	{ ICE_IPV4_IL,		ICE_IPV4_IL_HW },
+	{ ICE_IPV6_OFOS,	ICE_IPV6_OFOS_HW },
+	{ ICE_IPV6_IL,		ICE_IPV6_IL_HW },
+	{ ICE_TCP_IL,		ICE_TCP_IL_HW },
+	{ ICE_UDP_ILOS,		ICE_UDP_ILOS_HW },
+	{ ICE_SCTP_IL,		ICE_SCTP_IL_HW },
+	{ ICE_VXLAN,		ICE_UDP_OF_HW },
+	{ ICE_GENEVE,		ICE_UDP_OF_HW },
+	{ ICE_VXLAN_GPE,	ICE_UDP_OF_HW },
+	{ ICE_NVGRE,		ICE_GRE_OF_HW },
+	{ ICE_PROTOCOL_LAST,	0 }
+};
+
+/**
+ * ice_find_recp - find a recipe
+ * @hw: pointer to the hardware structure
+ * @lkup_exts: extension sequence to match
+ *
+ * Returns index of matching recipe, or ICE_MAX_NUM_RECIPES if not found.
+ */
+static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
+{
+	struct ice_sw_recipe *recp;
+	u16 i;
+
+	ice_get_recp_to_prof_map(hw);
+	/* Initialize available_result_ids which tracks available result idx */
+	for (i = 0; i <= ICE_CHAIN_FV_INDEX_START; i++)
+		ice_set_bit(ICE_CHAIN_FV_INDEX_START - i,
+			    available_result_ids);
+
+	/* Walk through existing recipes to find a match */
+	recp = hw->switch_info->recp_list;
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		/* If the recipe was not created for this ID in SW
+		 * bookkeeping, check if FW has an entry for this recipe.
+		 * If FW has an entry, update our SW bookkeeping and
+		 * continue with the matching.
+		 */
+		if (!recp[i].recp_created)
+			if (ice_get_recp_frm_fw(hw,
+						hw->switch_info->recp_list, i))
+				continue;
+
+		/* Check if the number of valid words matches */
+		if (lkup_exts->n_val_words == recp[i].lkup_exts.n_val_words) {
+			struct ice_fv_word *a = lkup_exts->fv_words;
+			struct ice_fv_word *b = recp[i].lkup_exts.fv_words;
+			bool found = true;
+			u8 p, q;
+
+			for (p = 0; p < lkup_exts->n_val_words; p++) {
+				for (q = 0; q < recp[i].lkup_exts.n_val_words;
+				     q++) {
+					if (a[p].off == b[q].off &&
+					    a[p].prot_id == b[q].prot_id)
+						/* Found the "p"th word in the
+						 * given recipe
+						 */
+						break;
+				}
+				/* If the "p"th word was not found after
+				 * walking through all the words in the "i"th
+				 * recipe, this recipe is not the one we are
+				 * looking for. Break out of this loop and
+				 * try the next recipe.
+				 */
+				if (q >= recp[i].lkup_exts.n_val_words) {
+					found = false;
+					break;
+				}
+			}
+			/* If "found" was never set to false for the "i"th
+			 * recipe, we found our match
+			 */
+			if (found)
+				return i; /* Return the recipe ID */
+		}
+	}
+	return ICE_MAX_NUM_RECIPES;
+}
+
+/**
+ * ice_prot_type_to_id - get protocol ID from protocol type
+ * @type: protocol type
+ * @id: pointer to variable that will receive the ID
+ *
+ * Returns true if found, false otherwise
+ */
+static bool ice_prot_type_to_id(enum ice_protocol_type type, u16 *id)
+{
+	u16 i;
+
+	for (i = 0; ice_prot_id_tbl[i].type != ICE_PROTOCOL_LAST; i++)
+		if (ice_prot_id_tbl[i].type == type) {
+			*id = ice_prot_id_tbl[i].protocol_id;
+			return true;
+		}
+	return false;
+}
+
+/**
+ * ice_fill_valid_words - fill in the valid words
+ * @rule: advanced rule with lookup information
+ * @lkup_exts: byte offset extractions of the words that are valid
+ *
+ * Calculate the valid words in a lookup rule using its mask value, and record
+ * their protocol offsets in @lkup_exts. Returns the number of words added.
+ */
+static u16
+ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
+		     struct ice_prot_lkup_ext *lkup_exts)
+{
+	u16 j, word = 0;
+	u16 prot_id;
+	u16 ret_val;
+
+	if (!ice_prot_type_to_id(rule->type, &prot_id))
+		return 0;
+
+	word = lkup_exts->n_val_words;
+
+	for (j = 0; j < sizeof(rule->m_u) / sizeof(u16); j++)
+		if (((u16 *)&rule->m_u)[j] == 0xffff &&
+		    rule->type < ARRAY_SIZE(ice_prot_ext)) {
+			/* No more space to accommodate */
+			if (word >= ICE_MAX_CHAIN_WORDS)
+				return 0;
+			lkup_exts->fv_words[word].off =
+				ice_prot_ext[rule->type].offs[j];
+			lkup_exts->fv_words[word].prot_id =
+				ice_prot_id_tbl[rule->type].protocol_id;
+			word++;
+		}
+
+	ret_val = word - lkup_exts->n_val_words;
+	lkup_exts->n_val_words = word;
+
+	return ret_val;
+}
+
+/**
+ * ice_find_prot_off_ind - check for specific ID and offset in rule
+ * @lkup_exts: an array of protocol header extractions
+ * @prot_type: protocol type to check
+ * @off: expected offset of the extraction
+ *
+ * Check if the extraction sequence contains the given protocol ID and offset
+ */
+static u8
+ice_find_prot_off_ind(struct ice_prot_lkup_ext *lkup_exts, u8 prot_type,
+		      u16 off)
+{
+	u8 j;
+
+	for (j = 0; j < lkup_exts->n_val_words; j++)
+		if (lkup_exts->fv_words[j].off == off &&
+		    lkup_exts->fv_words[j].prot_id == prot_type)
+			return j;
+
+	return ICE_MAX_CHAIN_WORDS;
+}
+
+/**
+ * ice_is_recipe_subset - check if recipe group policy is a subset of lookup
+ * @lkup_exts: an array of protocol header extractions
+ * @r_policy: preferred recipe grouping policy
+ *
+ * Helper function to check if a given recipe group is a subset of the lookup:
+ * all the words described by the recipe group must exist in the advanced
+ * rule's lookup information
+ */
+static bool
+ice_is_recipe_subset(struct ice_prot_lkup_ext *lkup_exts,
+		     const struct ice_pref_recipe_group *r_policy)
+{
+	u8 ind[ICE_NUM_WORDS_RECIPE];
+	u8 count = 0;
+	u8 i;
+
+	/* check if everything in the r_policy is part of the entire rule */
+	for (i = 0; i < r_policy->n_val_pairs; i++) {
+		u8 j;
+
+		j = ice_find_prot_off_ind(lkup_exts, r_policy->pairs[i].prot_id,
+					  r_policy->pairs[i].off);
+		if (j >= ICE_MAX_CHAIN_WORDS)
+			return false;
+
+		/* store the indexes temporarily found by the find function
+		 * this will be used to mark the words as 'done'
+		 */
+		ind[count++] = j;
+	}
+
+	/* If the entire policy recipe was a true match, then mark the fields
+	 * that are covered by the recipe as 'done', meaning that these words
+	 * will be clumped together in one recipe.
+	 * "Done" here means that if a certain recipe group matches or is a
+	 * subset of the given rule, we mark all the corresponding offsets as
+	 * found, so the remaining recipes are created with whatever words
+	 * are left.
+	 */
+	for (i = 0; i < count; i++) {
+		u8 in = ind[i];
+
+		ice_set_bit(in, lkup_exts->done);
+	}
+	return true;
+}
+
+/**
+ * ice_create_first_fit_recp_def - Create a recipe grouping
+ * @hw: pointer to the hardware structure
+ * @lkup_exts: an array of protocol header extractions
+ * @rg_list: pointer to a list that stores new recipe groups
+ * @recp_cnt: pointer to a variable that stores returned number of recipe groups
+ *
+ * Using first fit algorithm, take all the words that are still not done
+ * and start grouping them in 4-word groups. Each group makes up one
+ * recipe.
+ */
+static enum ice_status
+ice_create_first_fit_recp_def(struct ice_hw *hw,
+			      struct ice_prot_lkup_ext *lkup_exts,
+			      struct LIST_HEAD_TYPE *rg_list,
+			      u8 *recp_cnt)
+{
+	struct ice_pref_recipe_group *grp = NULL;
+	u8 j;
+
+	*recp_cnt = 0;
+
+	/* Walk through every word in the rule; any word that is not yet done
+	 * needs to be part of a new recipe.
+	 */
+	for (j = 0; j < lkup_exts->n_val_words; j++)
+		if (!ice_is_bit_set(lkup_exts->done, j)) {
+			if (!grp ||
+			    grp->n_val_pairs == ICE_NUM_WORDS_RECIPE) {
+				struct ice_recp_grp_entry *entry;
+
+				entry = (struct ice_recp_grp_entry *)
+					ice_malloc(hw, sizeof(*entry));
+				if (!entry)
+					return ICE_ERR_NO_MEMORY;
+				LIST_ADD(&entry->l_entry, rg_list);
+				grp = &entry->r_group;
+				(*recp_cnt)++;
+			}
+
+			grp->pairs[grp->n_val_pairs].prot_id =
+				lkup_exts->fv_words[j].prot_id;
+			grp->pairs[grp->n_val_pairs].off =
+				lkup_exts->fv_words[j].off;
+			grp->n_val_pairs++;
+		}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_fill_fv_word_index - fill in the field vector indices for a recipe group
+ * @hw: pointer to the hardware structure
+ * @fv_list: field vector with the extraction sequence information
+ * @rg_list: recipe groupings with protocol-offset pairs
+ *
+ * Helper function to fill in the field vector indices for protocol-offset
+ * pairs. These indexes are then ultimately programmed into a recipe.
+ */
+static void
+ice_fill_fv_word_index(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list,
+		       struct LIST_HEAD_TYPE *rg_list)
+{
+	struct ice_sw_fv_list_entry *fv;
+	struct ice_recp_grp_entry *rg;
+	struct ice_fv_word *fv_ext;
+
+	if (LIST_EMPTY(fv_list))
+		return;
+
+	fv = LIST_FIRST_ENTRY(fv_list, struct ice_sw_fv_list_entry, list_entry);
+	fv_ext = fv->fv_ptr->ew;
+
+	LIST_FOR_EACH_ENTRY(rg, rg_list, ice_recp_grp_entry, l_entry) {
+		u8 i;
+
+		for (i = 0; i < rg->r_group.n_val_pairs; i++) {
+			struct ice_fv_word *pr;
+			u8 j;
+
+			pr = &rg->r_group.pairs[i];
+			for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
+				if (fv_ext[j].prot_id == pr->prot_id &&
+				    fv_ext[j].off == pr->off) {
+					/* Store index of field vector */
+					rg->fv_idx[i] = j;
+					break;
+				}
+		}
+	}
+}
+
+/**
+ * ice_add_sw_recipe - function to call AQ calls to create switch recipe
+ * @hw: pointer to hardware structure
+ * @rm: recipe management list entry
+ * @match_tun: if field vector index for tunnel needs to be programmed
+ */
+static enum ice_status
+ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
+		  bool match_tun)
+{
+	struct ice_aqc_recipe_data_elem *tmp;
+	struct ice_aqc_recipe_data_elem *buf;
+	struct ice_recp_grp_entry *entry;
+	enum ice_status status;
+	u16 recipe_count;
+	u8 chain_idx;
+	u8 recps = 0;
+
+	/* When more than one recipe is required, another recipe is needed to
+	 * chain them together. Matching a tunnel metadata ID takes up one of
+	 * the match fields in the chaining recipe, reducing the number of
+	 * chained recipes by one.
+	 */
+	if (rm->n_grp_count > 1)
+		rm->n_grp_count++;
+	if (rm->n_grp_count > ICE_MAX_CHAIN_RECIPE ||
+	    (match_tun && rm->n_grp_count > (ICE_MAX_CHAIN_RECIPE - 1)))
+		return ICE_ERR_MAX_LIMIT;
+
+	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
+							    ICE_MAX_NUM_RECIPES,
+							    sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	buf = (struct ice_aqc_recipe_data_elem *)
+		ice_calloc(hw, rm->n_grp_count, sizeof(*buf));
+	if (!buf) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_mem;
+	}
+
+	ice_zero_bitmap(rm->r_bitmap, ICE_MAX_NUM_RECIPES);
+	recipe_count = ICE_MAX_NUM_RECIPES;
+	status = ice_aq_get_recipe(hw, tmp, &recipe_count, ICE_SW_LKUP_MAC,
+				   NULL);
+	if (status || recipe_count == 0)
+		goto err_unroll;
+
+	/* Allocate the recipe resources, and configure them according to the
+	 * match fields from protocol headers and extracted field vectors.
+	 */
+	chain_idx = ICE_CHAIN_FV_INDEX_START -
+		ice_find_first_bit(available_result_ids,
+				   ICE_CHAIN_FV_INDEX_START + 1);
+	LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry, l_entry) {
+		u8 i;
+
+		status = ice_alloc_recipe(hw, &entry->rid);
+		if (status)
+			goto err_unroll;
+
+		/* Clear the result index of the located recipe, as this will be
+		 * updated, if needed, later in the recipe creation process.
+		 */
+		tmp[0].content.result_indx = 0;
+
+		buf[recps] = tmp[0];
+		buf[recps].recipe_indx = (u8)entry->rid;
+		/* if the recipe is a non-root recipe RID should be programmed
+		 * as 0 for the rules to be applied correctly.
+		 */
+		buf[recps].content.rid = 0;
+		ice_memset(&buf[recps].content.lkup_indx, 0,
+			   sizeof(buf[recps].content.lkup_indx),
+			   ICE_NONDMA_MEM);
+
+		/* All recipes use look-up field index 0 to match switch ID. */
+		buf[recps].content.lkup_indx[0] = 0;
+		buf[recps].content.mask[0] =
+			CPU_TO_LE16(ICE_AQ_SW_ID_LKUP_MASK);
+		/* Setup lkup_indx 1..4 to INVALID/ignore and set the mask
+		 * to be 0
+		 */
+		for (i = 1; i <= ICE_NUM_WORDS_RECIPE; i++) {
+			buf[recps].content.lkup_indx[i] = 0x80;
+			buf[recps].content.mask[i] = 0;
+		}
+
+		for (i = 0; i < entry->r_group.n_val_pairs; i++) {
+			buf[recps].content.lkup_indx[i + 1] = entry->fv_idx[i];
+			buf[recps].content.mask[i + 1] = CPU_TO_LE16(0xFFFF);
+		}
+
+		if (rm->n_grp_count > 1) {
+			entry->chain_idx = chain_idx;
+			buf[recps].content.result_indx =
+				ICE_AQ_RECIPE_RESULT_EN |
+				((chain_idx << ICE_AQ_RECIPE_RESULT_DATA_S) &
+				 ICE_AQ_RECIPE_RESULT_DATA_M);
+			ice_clear_bit(ICE_CHAIN_FV_INDEX_START - chain_idx,
+				      available_result_ids);
+			chain_idx = ICE_CHAIN_FV_INDEX_START -
+				ice_find_first_bit(available_result_ids,
+						   ICE_CHAIN_FV_INDEX_START +
+						   1);
+		}
+
+		/* fill recipe dependencies */
+		ice_zero_bitmap((ice_bitmap_t *)buf[recps].recipe_bitmap,
+				ICE_MAX_NUM_RECIPES);
+		ice_set_bit(buf[recps].recipe_indx,
+			    (ice_bitmap_t *)buf[recps].recipe_bitmap);
+		buf[recps].content.act_ctrl_fwd_priority = rm->priority;
+		recps++;
+	}
+
+	if (rm->n_grp_count == 1) {
+		rm->root_rid = buf[0].recipe_indx;
+		ice_set_bit(buf[0].recipe_indx, rm->r_bitmap);
+		buf[0].content.rid = rm->root_rid | ICE_AQ_RECIPE_ID_IS_ROOT;
+		if (sizeof(buf[0].recipe_bitmap) >= sizeof(rm->r_bitmap)) {
+			ice_memcpy(buf[0].recipe_bitmap, rm->r_bitmap,
+				   sizeof(buf[0].recipe_bitmap),
+				   ICE_NONDMA_TO_NONDMA);
+		} else {
+			status = ICE_ERR_BAD_PTR;
+			goto err_unroll;
+		}
+		/* Applicable only for ROOT_RECIPE: set the fwd_priority of
+		 * the recipe being created if specified by the user. Any
+		 * advanced switch filter that results in a new extraction
+		 * sequence usually ends up creating a new recipe of type
+		 * ROOT, and recipes are usually associated with profiles.
+		 * A switch rule referring to the newly created recipe needs
+		 * either a 'fwd' or 'join' priority, otherwise switch rule
+		 * evaluation will not happen correctly. In other words, for
+		 * a switch rule to be evaluated on a priority basis, the
+		 * recipe needs a priority, otherwise it will be evaluated
+		 * last.
+		 */
+		buf[0].content.act_ctrl_fwd_priority = rm->priority;
+	} else {
+		struct ice_recp_grp_entry *last_chain_entry;
+		u16 rid, i = 0;
+
+		/* Allocate the last recipe that will chain the outcomes of the
+		 * other recipes together
+		 */
+		status = ice_alloc_recipe(hw, &rid);
+		if (status)
+			goto err_unroll;
+
+		buf[recps].recipe_indx = (u8)rid;
+		buf[recps].content.rid = (u8)rid;
+		buf[recps].content.rid |= ICE_AQ_RECIPE_ID_IS_ROOT;
+		/* The new entry created should also be part of rg_list to
+		 * make sure we have a complete recipe
+		 */
+		last_chain_entry = (struct ice_recp_grp_entry *)ice_malloc(hw,
+			sizeof(*last_chain_entry));
+		if (!last_chain_entry) {
+			status = ICE_ERR_NO_MEMORY;
+			goto err_unroll;
+		}
+		last_chain_entry->rid = rid;
+		ice_memset(&buf[recps].content.lkup_indx, 0,
+			   sizeof(buf[recps].content.lkup_indx),
+			   ICE_NONDMA_MEM);
+		buf[recps].content.lkup_indx[i] = hw->port_info->sw_id;
+		buf[recps].content.mask[i] =
+			CPU_TO_LE16(ICE_AQ_SW_ID_LKUP_MASK);
+		for (i = 1; i <= ICE_NUM_WORDS_RECIPE; i++) {
+			buf[recps].content.lkup_indx[i] =
+				ICE_AQ_RECIPE_LKUP_IGNORE;
+			buf[recps].content.mask[i] = 0;
+		}
+
+		i = 1;
+		/* update r_bitmap with the recp that is used for chaining */
+		ice_set_bit(rid, rm->r_bitmap);
+		/* This is the recipe that chains all the other recipes, so
+		 * it should not itself have a chaining ID; mark it invalid
+		 * to indicate this
+		 */
+		last_chain_entry->chain_idx = ICE_INVAL_CHAIN_IND;
+		LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry,
+				    l_entry) {
+			last_chain_entry->fv_idx[i] = entry->chain_idx;
+			buf[recps].content.lkup_indx[i] = entry->chain_idx;
+			buf[recps].content.mask[i++] = CPU_TO_LE16(0xFFFF);
+			ice_set_bit(entry->rid, rm->r_bitmap);
+		}
+		LIST_ADD(&last_chain_entry->l_entry, &rm->rg_list);
+		if (sizeof(buf[recps].recipe_bitmap) >=
+		    sizeof(rm->r_bitmap)) {
+			ice_memcpy(buf[recps].recipe_bitmap, rm->r_bitmap,
+				   sizeof(buf[recps].recipe_bitmap),
+				   ICE_NONDMA_TO_NONDMA);
+		} else {
+			status = ICE_ERR_BAD_PTR;
+			goto err_unroll;
+		}
+		buf[recps].content.act_ctrl_fwd_priority = rm->priority;
+
+		/* To differentiate among different UDP tunnels, a metadata ID
+		 * flag is used.
+		 */
+		if (match_tun) {
+			buf[recps].content.lkup_indx[i] = ICE_TUN_FLAG_FV_IND;
+			buf[recps].content.mask[i] =
+				CPU_TO_LE16(ICE_TUN_FLAG_MASK);
+		}
+
+		recps++;
+		rm->root_rid = (u8)rid;
+	}
+	status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
+	if (status)
+		goto err_unroll;
+
+	status = ice_aq_add_recipe(hw, buf, rm->n_grp_count, NULL);
+	ice_release_change_lock(hw);
+	if (status)
+		goto err_unroll;
+
+	/* Add every recipe that was just created to the recipe bookkeeping
+	 * list
+	 */
+	LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry, l_entry) {
+		struct ice_switch_info *sw = hw->switch_info;
+		struct ice_sw_recipe *recp;
+
+		recp = &sw->recp_list[entry->rid];
+		recp->root_rid = entry->rid;
+		ice_memcpy(&recp->ext_words, entry->r_group.pairs,
+			   entry->r_group.n_val_pairs *
+			   sizeof(struct ice_fv_word),
+			   ICE_NONDMA_TO_NONDMA);
+
+		recp->n_ext_words = entry->r_group.n_val_pairs;
+		recp->chain_idx = entry->chain_idx;
+		recp->recp_created = true;
+		recp->big_recp = false;
+	}
+	rm->root_buf = buf;
+	ice_free(hw, tmp);
+	return status;
+
+err_unroll:
+err_mem:
+	ice_free(hw, tmp);
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_create_recipe_group - creates recipe group
+ * @hw: pointer to hardware structure
+ * @rm: recipe management list entry
+ * @lkup_exts: lookup elements
+ */
+static enum ice_status
+ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm,
+			struct ice_prot_lkup_ext *lkup_exts)
+{
+	struct ice_recp_grp_entry *entry;
+	struct ice_recp_grp_entry *tmp;
+	enum ice_status status;
+	u8 recp_count = 0;
+	u16 groups, i;
+
+	rm->n_grp_count = 0;
+
+	/* Each switch recipe can match up to 5 words or metadata. One word in
+	 * each recipe is used to match the switch ID. Four words are left for
+	 * matching other values. If the new advanced recipe requires more than
+	 * 4 words, it needs to be split into multiple recipes which are chained
+	 * together using the intermediate result that each produces as input to
+	 * the other recipes in the sequence.
+	 */
+	groups = ARRAY_SIZE(ice_recipe_pack);
+
+	/* Check if any of the preferred recipes from the grouping policy
+	 * matches.
+	 */
+	for (i = 0; i < groups; i++)
+		/* Check if the recipe from the preferred grouping matches
+		 * or is a subset of the fields that needs to be looked up.
+		 */
+		if (ice_is_recipe_subset(lkup_exts, &ice_recipe_pack[i])) {
+			/* This recipe can be used by itself or grouped with
+			 * other recipes.
+			 */
+			entry = (struct ice_recp_grp_entry *)
+				ice_malloc(hw, sizeof(*entry));
+			if (!entry) {
+				status = ICE_ERR_NO_MEMORY;
+				goto err_unroll;
+			}
+			entry->r_group = ice_recipe_pack[i];
+			LIST_ADD(&entry->l_entry, &rm->rg_list);
+			rm->n_grp_count++;
+		}
+
+	/* Create recipes for words that are marked not done by packing them
+	 * as best fit.
+	 */
+	status = ice_create_first_fit_recp_def(hw, lkup_exts,
+					       &rm->rg_list, &recp_count);
+	if (!status) {
+		rm->n_grp_count += recp_count;
+		rm->n_ext_words = lkup_exts->n_val_words;
+		ice_memcpy(&rm->ext_words, lkup_exts->fv_words,
+			   sizeof(rm->ext_words), ICE_NONDMA_TO_NONDMA);
+		goto out;
+	}
+
+err_unroll:
+	LIST_FOR_EACH_ENTRY_SAFE(entry, tmp, &rm->rg_list, ice_recp_grp_entry,
+				 l_entry) {
+		LIST_DEL(&entry->l_entry);
+		ice_free(hw, entry);
+	}
+
+out:
+	return status;
+}
+
+/**
+ * ice_get_fv - get field vectors/extraction sequences for specified lookups
+ * @hw: pointer to hardware structure
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @fv_list: pointer to a list that holds the returned field vectors
+ */
+static enum ice_status
+ice_get_fv(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
+	   struct LIST_HEAD_TYPE *fv_list)
+{
+	enum ice_status status;
+	u16 *prot_ids;
+	u16 i;
+
+	prot_ids = (u16 *)ice_calloc(hw, lkups_cnt, sizeof(*prot_ids));
+	if (!prot_ids)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < lkups_cnt; i++)
+		if (!ice_prot_type_to_id(lkups[i].type, &prot_ids[i])) {
+			status = ICE_ERR_CFG;
+			goto free_mem;
+		}
+
+	/* Find field vectors that include all specified protocol types */
+	status = ice_get_sw_fv_list(hw, prot_ids, lkups_cnt, fv_list);
+
+free_mem:
+	ice_free(hw, prot_ids);
+	return status;
+}
+
+/**
+ * ice_add_adv_recipe - Add an advanced recipe that is not part of the defaults
+ * @hw: pointer to hardware structure
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *  structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @rinfo: other information regarding the rule e.g. priority and action info
+ * @rid: return the recipe ID of the recipe created
+ */
+static enum ice_status
+ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		   u16 lkups_cnt, struct ice_adv_rule_info *rinfo, u16 *rid)
+{
+	struct ice_prot_lkup_ext *lkup_exts;
+	struct ice_recp_grp_entry *r_entry;
+	struct ice_sw_fv_list_entry *fvit;
+	struct ice_recp_grp_entry *r_tmp;
+	struct ice_sw_fv_list_entry *tmp;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_sw_recipe *rm;
+	bool match_tun = false;
+	u8 i;
+
+	if (!lkups_cnt)
+		return ICE_ERR_PARAM;
+
+	lkup_exts = (struct ice_prot_lkup_ext *)
+		ice_malloc(hw, sizeof(*lkup_exts));
+	if (!lkup_exts)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Determine the number of words to be matched and if it exceeds a
+	 * recipe's restrictions
+	 */
+	for (i = 0; i < lkups_cnt; i++) {
+		u16 count;
+
+		if (lkups[i].type >= ICE_PROTOCOL_LAST) {
+			status = ICE_ERR_CFG;
+			goto err_free_lkup_exts;
+		}
+
+		count = ice_fill_valid_words(&lkups[i], lkup_exts);
+		if (!count) {
+			status = ICE_ERR_CFG;
+			goto err_free_lkup_exts;
+		}
+	}
+
+	*rid = ice_find_recp(hw, lkup_exts);
+	if (*rid < ICE_MAX_NUM_RECIPES)
+		/* Success if we found a recipe that matches the criteria */
+		goto err_free_lkup_exts;
+
+	/* Recipe we need does not exist, add a recipe */
+
+	rm = (struct ice_sw_recipe *)ice_malloc(hw, sizeof(*rm));
+	if (!rm) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_free_lkup_exts;
+	}
+
+	/* Get field vectors that contain fields extracted from all the protocol
+	 * headers being programmed.
+	 */
+	INIT_LIST_HEAD(&rm->fv_list);
+	INIT_LIST_HEAD(&rm->rg_list);
+
+	status = ice_get_fv(hw, lkups, lkups_cnt, &rm->fv_list);
+	if (status)
+		goto err_unroll;
+
+	/* Group match words into recipes using preferred recipe grouping
+	 * criteria.
+	 */
+	status = ice_create_recipe_group(hw, rm, lkup_exts);
+	if (status)
+		goto err_unroll;
+
+	/* There is only one profile for UDP tunnels, so a metadata ID flag is
+	 * necessary to differentiate different tunnel types. A separate
+	 * recipe needs to be used for the metadata.
+	 */
+	if ((rinfo->tun_type == ICE_SW_TUN_VXLAN_GPE ||
+	     rinfo->tun_type == ICE_SW_TUN_GENEVE ||
+	     rinfo->tun_type == ICE_SW_TUN_VXLAN) && rm->n_grp_count > 1)
+		match_tun = true;
+
+	/* set the recipe priority if specified */
+	rm->priority = rinfo->priority ? rinfo->priority : 0;
+
+	/* Find offsets from the field vector. Pick the first one for all the
+	 * recipes.
+	 */
+	ice_fill_fv_word_index(hw, &rm->fv_list, &rm->rg_list);
+	status = ice_add_sw_recipe(hw, rm, match_tun);
+	if (status)
+		goto err_unroll;
+
+	/* Associate all the recipes created with all the profiles in the
+	 * common field vector.
+	 */
+	LIST_FOR_EACH_ENTRY(fvit, &rm->fv_list, ice_sw_fv_list_entry,
+			    list_entry) {
+		ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+
+		status = ice_aq_get_recipe_to_profile(hw, fvit->profile_id,
+						      (u8 *)r_bitmap, NULL);
+		if (status)
+			goto err_unroll;
+
+		ice_or_bitmap(rm->r_bitmap, r_bitmap, rm->r_bitmap,
+			      ICE_MAX_NUM_RECIPES);
+		status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
+		if (status)
+			goto err_unroll;
+
+		status = ice_aq_map_recipe_to_profile(hw, fvit->profile_id,
+						      (u8 *)rm->r_bitmap,
+						      NULL);
+		ice_release_change_lock(hw);
+
+		if (status)
+			goto err_unroll;
+	}
+
+	*rid = rm->root_rid;
+	ice_memcpy(&hw->switch_info->recp_list[*rid].lkup_exts,
+		   lkup_exts, sizeof(*lkup_exts), ICE_NONDMA_TO_NONDMA);
+err_unroll:
+	LIST_FOR_EACH_ENTRY_SAFE(r_entry, r_tmp, &rm->rg_list,
+				 ice_recp_grp_entry, l_entry) {
+		LIST_DEL(&r_entry->l_entry);
+		ice_free(hw, r_entry);
+	}
+
+	LIST_FOR_EACH_ENTRY_SAFE(fvit, tmp, &rm->fv_list, ice_sw_fv_list_entry,
+				 list_entry) {
+		LIST_DEL(&fvit->list_entry);
+		ice_free(hw, fvit);
+	}
+
+	if (rm->root_buf)
+		ice_free(hw, rm->root_buf);
+
+	ice_free(hw, rm);
+
+err_free_lkup_exts:
+	ice_free(hw, lkup_exts);
+
+	return status;
+}
+
+#define ICE_MAC_HDR_OFFSET	0
+#define ICE_IP_HDR_OFFSET	14
+#define ICE_GRE_HDR_OFFSET	34
+#define ICE_MAC_IL_HDR_OFFSET	42
+#define ICE_IP_IL_HDR_OFFSET	56
+#define ICE_L4_HDR_OFFSET	34
+#define ICE_UDP_TUN_HDR_OFFSET	42
+
+/**
+ * ice_find_dummy_packet - find dummy packet with given match criteria
+ *
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @tun_type: tunnel type from the match criteria
+ * @pkt: dummy packet to fill according to filter match criteria
+ * @pkt_len: packet length of dummy packet
+ */
+static void
+ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
+		      enum ice_sw_tunnel_type tun_type, const u8 **pkt,
+		      u16 *pkt_len)
+{
+	u16 i;
+
+	if (tun_type == ICE_SW_TUN_NVGRE || tun_type == ICE_ALL_TUNNELS) {
+		*pkt = dummy_gre_packet;
+		*pkt_len = sizeof(dummy_gre_packet);
+		return;
+	}
+
+	if (tun_type == ICE_SW_TUN_VXLAN || tun_type == ICE_SW_TUN_GENEVE ||
+	    tun_type == ICE_SW_TUN_VXLAN_GPE) {
+		*pkt = dummy_udp_tun_packet;
+		*pkt_len = sizeof(dummy_udp_tun_packet);
+		return;
+	}
+
+	for (i = 0; i < lkups_cnt; i++) {
+		if (lkups[i].type == ICE_UDP_ILOS) {
+			*pkt = dummy_udp_tun_packet;
+			*pkt_len = sizeof(dummy_udp_tun_packet);
+			return;
+		}
+	}
+
+	*pkt = dummy_tcp_tun_packet;
+	*pkt_len = sizeof(dummy_tcp_tun_packet);
+}
+
+/**
+ * ice_fill_adv_dummy_packet - fill a dummy packet with given match criteria
+ *
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @tun_type: to know if the dummy packet is supposed to be a tunnel packet
+ * @s_rule: stores rule information from the match criteria
+ * @dummy_pkt: dummy packet to fill according to filter match criteria
+ * @pkt_len: packet length of dummy packet
+ */
+static void
+ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
+			  enum ice_sw_tunnel_type tun_type,
+			  struct ice_aqc_sw_rules_elem *s_rule,
+			  const u8 *dummy_pkt, u16 pkt_len)
+{
+	u8 *pkt;
+	u16 i;
+
+	/* Start with a packet with a pre-defined/dummy content. Then, fill
+	 * in the header values to be looked up or matched.
+	 */
+	pkt = s_rule->pdata.lkup_tx_rx.hdr;
+
+	ice_memcpy(pkt, dummy_pkt, pkt_len, ICE_NONDMA_TO_NONDMA);
+
+	for (i = 0; i < lkups_cnt; i++) {
+		u32 len, pkt_off, hdr_size, field_off;
+
+		switch (lkups[i].type) {
+		case ICE_MAC_OFOS:
+		case ICE_MAC_IL:
+			pkt_off = offsetof(struct ice_ether_hdr, dst_addr) +
+				((lkups[i].type == ICE_MAC_IL) ?
+				 ICE_MAC_IL_HDR_OFFSET : 0);
+			len = sizeof(lkups[i].h_u.eth_hdr.dst_addr);
+			if ((tun_type == ICE_SW_TUN_VXLAN ||
+			     tun_type == ICE_SW_TUN_GENEVE ||
+			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
+			     lkups[i].type == ICE_MAC_IL) {
+				pkt_off += sizeof(struct ice_udp_tnl_hdr);
+			}
+
+			ice_memcpy(&pkt[pkt_off],
+				   &lkups[i].h_u.eth_hdr.dst_addr, len,
+				   ICE_NONDMA_TO_NONDMA);
+			pkt_off = offsetof(struct ice_ether_hdr, src_addr) +
+				((lkups[i].type == ICE_MAC_IL) ?
+				 ICE_MAC_IL_HDR_OFFSET : 0);
+			len = sizeof(lkups[i].h_u.eth_hdr.src_addr);
+			if ((tun_type == ICE_SW_TUN_VXLAN ||
+			     tun_type == ICE_SW_TUN_GENEVE ||
+			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
+			     lkups[i].type == ICE_MAC_IL) {
+				pkt_off += sizeof(struct ice_udp_tnl_hdr);
+			}
+			ice_memcpy(&pkt[pkt_off],
+				   &lkups[i].h_u.eth_hdr.src_addr, len,
+				   ICE_NONDMA_TO_NONDMA);
+			if (lkups[i].h_u.eth_hdr.ethtype_id) {
+				pkt_off = offsetof(struct ice_ether_hdr,
+						   ethtype_id) +
+					((lkups[i].type == ICE_MAC_IL) ?
+					 ICE_MAC_IL_HDR_OFFSET : 0);
+				len = sizeof(lkups[i].h_u.eth_hdr.ethtype_id);
+				if ((tun_type == ICE_SW_TUN_VXLAN ||
+				     tun_type == ICE_SW_TUN_GENEVE ||
+				     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
+				     lkups[i].type == ICE_MAC_IL) {
+					pkt_off +=
+						sizeof(struct ice_udp_tnl_hdr);
+				}
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.eth_hdr.ethtype_id,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			break;
+		case ICE_IPV4_OFOS:
+			hdr_size = sizeof(struct ice_ipv4_hdr);
+			if (lkups[i].h_u.ipv4_hdr.dst_addr) {
+				pkt_off = ICE_IP_HDR_OFFSET +
+					   offsetof(struct ice_ipv4_hdr,
+						    dst_addr);
+				field_off = offsetof(struct ice_ipv4_hdr,
+						     dst_addr);
+				len = hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.ipv4_hdr.dst_addr,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			if (lkups[i].h_u.ipv4_hdr.src_addr) {
+				pkt_off = ICE_IP_HDR_OFFSET +
+					   offsetof(struct ice_ipv4_hdr,
+						    src_addr);
+				field_off = offsetof(struct ice_ipv4_hdr,
+						     src_addr);
+				len = hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.ipv4_hdr.src_addr,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			break;
+		case ICE_IPV4_IL:
+			break;
+		case ICE_TCP_IL:
+		case ICE_UDP_ILOS:
+		case ICE_SCTP_IL:
+			hdr_size = sizeof(struct ice_udp_tnl_hdr);
+			if (lkups[i].h_u.l4_hdr.dst_port) {
+				pkt_off = ICE_L4_HDR_OFFSET +
+					   offsetof(struct ice_l4_hdr,
+						    dst_port);
+				field_off = offsetof(struct ice_l4_hdr,
+						     dst_port);
+				len = hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.l4_hdr.dst_port,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			if (lkups[i].h_u.l4_hdr.src_port) {
+				pkt_off = ICE_L4_HDR_OFFSET +
+					offsetof(struct ice_l4_hdr, src_port);
+				field_off = offsetof(struct ice_l4_hdr,
+						     src_port);
+				len = hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.l4_hdr.src_port,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			break;
+		case ICE_VXLAN:
+		case ICE_GENEVE:
+		case ICE_VXLAN_GPE:
+			pkt_off = ICE_UDP_TUN_HDR_OFFSET +
+				   offsetof(struct ice_udp_tnl_hdr, vni);
+			field_off = offsetof(struct ice_udp_tnl_hdr, vni);
+			len = sizeof(struct ice_udp_tnl_hdr) - field_off;
+			ice_memcpy(&pkt[pkt_off], &lkups[i].h_u.tnl_hdr.vni,
+				   len, ICE_NONDMA_TO_NONDMA);
+			break;
+		default:
+			break;
+		}
+	}
+	s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(pkt_len);
+}
+
+/**
+ * ice_find_adv_rule_entry - Search a rule entry
+ * @hw: pointer to the hardware structure
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @recp_id: recipe ID for which we are finding the rule
+ * @rinfo: other information regarding the rule e.g. priority and action info
+ *
+ * Helper function to search for a given advanced rule entry
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_adv_fltr_mgmt_list_entry *
+ice_find_adv_rule_entry(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+			u16 lkups_cnt, u8 recp_id,
+			struct ice_adv_rule_info *rinfo)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_itr;
+	struct ice_switch_info *sw = hw->switch_info;
+	int i;
+
+	LIST_FOR_EACH_ENTRY(list_itr, &sw->recp_list[recp_id].filt_rules,
+			    ice_adv_fltr_mgmt_list_entry, list_entry) {
+		bool lkups_matched = true;
+
+		if (lkups_cnt != list_itr->lkups_cnt)
+			continue;
+		for (i = 0; i < list_itr->lkups_cnt; i++)
+			if (memcmp(&list_itr->lkups[i], &lkups[i],
+				   sizeof(*lkups))) {
+				lkups_matched = false;
+				break;
+			}
+		if (rinfo->sw_act.flag == list_itr->rule_info.sw_act.flag &&
+		    rinfo->tun_type == list_itr->rule_info.tun_type &&
+		    lkups_matched)
+			return list_itr;
+	}
+	return NULL;
+}
+
+/**
+ * ice_adv_add_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @m_entry: pointer to current adv filter management list entry
+ * @cur_fltr: filter information from the book keeping entry
+ * @new_fltr: filter information with the new VSI to be added
+ *
+ * Call AQ command to add or update previously created VSI list with new VSI.
+ *
+ * Helper function to do the bookkeeping associated with adding filter
+ * information. The bookkeeping algorithm is as follows:
+ * When a VSI needs to subscribe to a given advanced filter
+ *	if only one VSI has been added till now
+ *		Allocate a new VSI list and add two VSIs
+ *		to this list using switch rule command
+ *		Update the previously created switch rule with the
+ *		newly created VSI list ID
+ *	if a VSI list was previously created
+ *		Add the new VSI to the previously created VSI list set
+ *		using the update switch rule command
+ */
+static enum ice_status
+ice_adv_add_update_vsi_list(struct ice_hw *hw,
+			    struct ice_adv_fltr_mgmt_list_entry *m_entry,
+			    struct ice_adv_rule_info *cur_fltr,
+			    struct ice_adv_rule_info *new_fltr)
+{
+	enum ice_status status;
+	u16 vsi_list_id = 0;
+
+	if (cur_fltr->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	    cur_fltr->sw_act.fltr_act == ICE_FWD_TO_QGRP)
+		return ICE_ERR_NOT_IMPL;
+
+	if (cur_fltr->sw_act.fltr_act == ICE_DROP_PACKET &&
+	    new_fltr->sw_act.fltr_act == ICE_DROP_PACKET)
+		return ICE_ERR_ALREADY_EXISTS;
+
+	if ((new_fltr->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	     new_fltr->sw_act.fltr_act == ICE_FWD_TO_QGRP) &&
+	    (cur_fltr->sw_act.fltr_act == ICE_FWD_TO_VSI ||
+	     cur_fltr->sw_act.fltr_act == ICE_FWD_TO_VSI_LIST))
+		return ICE_ERR_NOT_IMPL;
+
+	if (m_entry->vsi_count < 2 && !m_entry->vsi_list_info) {
+		 /* Only one entry existed in the mapping and it was not already
+		  * a part of a VSI list. So, create a VSI list with the old and
+		  * new VSIs.
+		  */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_fltr->sw_act.fwd_id.hw_vsi_id ==
+		    new_fltr->sw_act.fwd_id.hw_vsi_id)
+			return ICE_ERR_ALREADY_EXISTS;
+
+		vsi_handle_arr[0] = cur_fltr->sw_act.vsi_handle;
+		vsi_handle_arr[1] = new_fltr->sw_act.vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id,
+						  ICE_SW_LKUP_LAST);
+		if (status)
+			return status;
+
+		tmp_fltr.fltr_rule_id = cur_fltr->fltr_rule_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		/* Update the previous switch rule of "forward to VSI" to
+		 * "fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			return status;
+
+		cur_fltr->sw_act.fwd_id.vsi_list_id = vsi_list_id;
+		cur_fltr->sw_act.fltr_act = ICE_FWD_TO_VSI_LIST;
+		m_entry->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+	} else {
+		u16 vsi_handle = new_fltr->sw_act.vsi_handle;
+
+		if (!m_entry->vsi_list_info)
+			return ICE_ERR_CFG;
+
+		/* A rule already exists with the new VSI being added */
+		if (ice_is_bit_set(m_entry->vsi_list_info->vsi_map, vsi_handle))
+			return ICE_SUCCESS;
+
+		/* Update the previously created VSI list set with
+		 * the new VSI ID passed in
+		 */
+		vsi_list_id = cur_fltr->sw_act.fwd_id.vsi_list_id;
+
+		status = ice_update_vsi_list_rule(hw, &vsi_handle, 1,
+						  vsi_list_id, false,
+						  ice_aqc_opc_update_sw_rules,
+						  ICE_SW_LKUP_LAST);
+		/* update VSI list mapping info with new VSI ID */
+		if (!status)
+			ice_set_bit(vsi_handle,
+				    m_entry->vsi_list_info->vsi_map);
+	}
+	if (!status)
+		m_entry->vsi_count++;
+	return status;
+}
+
+/**
+ * ice_add_adv_rule - create an advanced switch rule
+ * @hw: pointer to the hardware structure
+ * @lkups: information on the words that need to be looked up. All words
+ * together make one recipe
+ * @lkups_cnt: num of entries in the lkups array
+ * @rinfo: other information related to the rule that needs to be programmed
+ * @added_entry: this will return recipe_id, rule_id and vsi_handle. Should be
+ *               ignored in case of error.
+ *
+ * This function can program only 1 rule at a time. The lkups array is used to
+ * describe all the words that form the "lookup" portion of the recipe.
+ * These words can span multiple protocols. Callers to this function need to
+ * pass in a list of protocol headers with lookup information, along with a
+ * mask that determines which words are valid from the given protocol header.
+ * rinfo describes other information related to this rule such as forwarding
+ * IDs, priority of this rule, etc.
+ */
+enum ice_status
+ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
+		 struct ice_rule_query_data *added_entry)
+{
+	struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL;
+	u16 rid = 0, i, pkt_len, rule_buf_sz, vsi_handle;
+	struct ice_aqc_sw_rules_elem *s_rule;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_switch_info *sw;
+	enum ice_status status;
+	const u8 *pkt = NULL;
+	u32 act = 0;
+
+	if (!lkups_cnt)
+		return ICE_ERR_PARAM;
+
+	for (i = 0; i < lkups_cnt; i++) {
+		u16 j, *ptr;
+
+		/* Validate match masks to make sure they match complete 16-bit
+		 * words.
+		 */
+		ptr = (u16 *)&lkups->m_u;
+		for (j = 0; j < sizeof(lkups->m_u) / sizeof(u16); j++)
+			if (ptr[j] != 0 && ptr[j] != 0xffff)
+				return ICE_ERR_PARAM;
+	}
+
+	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
+	      rinfo->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	      rinfo->sw_act.fltr_act == ICE_DROP_PACKET))
+		return ICE_ERR_CFG;
+
+	vsi_handle = rinfo->sw_act.vsi_handle;
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI)
+		rinfo->sw_act.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, vsi_handle);
+	if (rinfo->sw_act.flag & ICE_FLTR_TX)
+		rinfo->sw_act.src = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	status = ice_add_adv_recipe(hw, lkups, lkups_cnt, rinfo, &rid);
+	if (status)
+		return status;
+	m_entry = ice_find_adv_rule_entry(hw, lkups, lkups_cnt, rid, rinfo);
+	if (m_entry) {
+		/* The rule already exists. Check whether it is already
+		 * subscribed to this VSI; if not, add the VSI to the rule's
+		 * VSI list. If the rule is not yet using a VSI list, create
+		 * one containing both the existing VSI ID and the new VSI ID,
+		 * then update the rule to forward to that list.
+		 */
+		status = ice_adv_add_update_vsi_list(hw, m_entry,
+						     &m_entry->rule_info,
+						     rinfo);
+		if (added_entry) {
+			added_entry->rid = rid;
+			added_entry->rule_id = m_entry->rule_info.fltr_rule_id;
+			added_entry->vsi_handle = rinfo->sw_act.vsi_handle;
+		}
+		return status;
+	}
+	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
+			      &pkt_len);
+	rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rule_buf_sz);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	act |= ICE_SINGLE_ACT_LB_ENABLE | ICE_SINGLE_ACT_LAN_ENABLE;
+	switch (rinfo->sw_act.fltr_act) {
+	case ICE_FWD_TO_VSI:
+		act |= (rinfo->sw_act.fwd_id.hw_vsi_id <<
+			ICE_SINGLE_ACT_VSI_ID_S) & ICE_SINGLE_ACT_VSI_ID_M;
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_Q:
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+		       ICE_SINGLE_ACT_Q_INDEX_M;
+		break;
+	case ICE_DROP_PACKET:
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
+		       ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	default:
+		status = ICE_ERR_CFG;
+		goto err_ice_add_adv_rule;
+	}
+
+	/* set the rule LOOKUP type based on caller specified 'RX'
+	 * instead of hardcoding it to be either LOOKUP_TX/RX
+	 *
+	 * for 'RX' set the source to be the port number
+	 * for 'TX' set the source to be the source HW VSI number (determined
+	 * by caller)
+	 */
+	if (rinfo->rx) {
+		s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX);
+		s_rule->pdata.lkup_tx_rx.src =
+			CPU_TO_LE16(hw->port_info->lport);
+	} else {
+		s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX);
+		s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(rinfo->sw_act.src);
+	}
+
+	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid);
+	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
+
+	ice_fill_adv_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, s_rule,
+				  pkt, pkt_len);
+
+	status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
+				 rule_buf_sz, 1, ice_aqc_opc_add_sw_rules,
+				 NULL);
+	if (status)
+		goto err_ice_add_adv_rule;
+	adv_fltr = (struct ice_adv_fltr_mgmt_list_entry *)
+		ice_malloc(hw, sizeof(struct ice_adv_fltr_mgmt_list_entry));
+	if (!adv_fltr) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_ice_add_adv_rule;
+	}
+
+	adv_fltr->lkups = (struct ice_adv_lkup_elem *)
+		ice_memdup(hw, lkups, lkups_cnt * sizeof(*lkups),
+			   ICE_NONDMA_TO_NONDMA);
+	if (!adv_fltr->lkups) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_ice_add_adv_rule;
+	}
+
+	adv_fltr->lkups_cnt = lkups_cnt;
+	adv_fltr->rule_info = *rinfo;
+	adv_fltr->rule_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+	sw = hw->switch_info;
+	sw->recp_list[rid].adv_rule = true;
+	rule_head = &sw->recp_list[rid].filt_rules;
+
+	if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI) {
+		struct ice_fltr_info tmp_fltr;
+
+		tmp_fltr.fltr_rule_id =
+			LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, vsi_handle);
+		tmp_fltr.vsi_handle = vsi_handle;
+		/* Update the previous switch rule of "forward to VSI" to
+		 * "fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			goto err_ice_add_adv_rule;
+		adv_fltr->vsi_count = 1;
+	}
+
+	/* Add rule entry to book keeping list */
+	LIST_ADD(&adv_fltr->list_entry, rule_head);
+	if (added_entry) {
+		added_entry->rid = rid;
+		added_entry->rule_id = adv_fltr->rule_info.fltr_rule_id;
+		added_entry->vsi_handle = rinfo->sw_act.vsi_handle;
+	}
+err_ice_add_adv_rule:
+	if (status && adv_fltr) {
+		ice_free(hw, adv_fltr->lkups);
+		ice_free(hw, adv_fltr);
+	}
+
+	ice_free(hw, s_rule);
+
+	return status;
+}
 /**
  * ice_replay_fltr - Replay all the filters stored by a specific list head
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index fd61c0eea..890df13dd 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -172,11 +172,21 @@ struct ice_sw_act_ctrl {
 	u8 qgrp_size;
 };
 
+struct ice_rule_query_data {
+	/* Recipe ID for which the requested rule was added */
+	u16 rid;
+	/* Rule ID that was added or is supposed to be removed */
+	u16 rule_id;
+	/* vsi_handle for which Rule was added or is supposed to be removed */
+	u16 vsi_handle;
+};
+
 struct ice_adv_rule_info {
 	enum ice_sw_tunnel_type tun_type;
 	struct ice_sw_act_ctrl sw_act;
 	u32 priority;
 	u8 rx; /* true means LOOKUP_RX otherwise LOOKUP_TX */
+	u16 fltr_rule_id;
 };
 
 /* A collection of one or more four word recipe */
@@ -222,6 +232,7 @@ struct ice_sw_recipe {
 	/* Profiles this recipe should be associated with */
 	struct LIST_HEAD_TYPE fv_list;
 
+#define ICE_MAX_NUM_PROFILES 256
 	/* Profiles this recipe is associated with */
 	u8 num_profs, *prof_ids;
 
@@ -281,6 +292,8 @@ struct ice_adv_fltr_mgmt_list_entry {
 	struct ice_adv_lkup_elem *lkups;
 	struct ice_adv_rule_info rule_info;
 	u16 lkups_cnt;
+	struct ice_vsi_list_map_info *vsi_list_info;
+	u16 vsi_count;
 };
 
 enum ice_promisc_flags {
@@ -421,7 +434,15 @@ enum ice_status
 ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
 			     struct ice_sq_cd *cd);
 
+enum ice_status
+ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd);
+
 enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *recipe_id);
+enum ice_status
+ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
+		 struct ice_rule_query_data *added_entry);
 enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
 
 enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 08/66] net/ice/base: replay advanced rule after reset
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (6 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 07/66] net/ice/base: programming a " Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 09/66] net/ice/base: code for removing advanced rule Leyi Rong
                     ` (58 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Victor Raj, Paul M Stillwell Jr

Add code to replay advanced rules on a per-VSI basis and to remove
advanced rule information from the shared code recipe list.

Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 81 ++++++++++++++++++++++++++-----
 1 file changed, 69 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index c53021aed..ca0497ca7 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -3033,6 +3033,27 @@ ice_rem_sw_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
 	}
 }
 
+/**
+ * ice_rem_adv_rule_info
+ * @hw: pointer to the hardware structure
+ * @rule_head: pointer to the switch list structure that we want to delete
+ */
+static void
+ice_rem_adv_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
+{
+	struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
+	struct ice_adv_fltr_mgmt_list_entry *lst_itr;
+
+	if (LIST_EMPTY(rule_head))
+		return;
+
+	LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry, rule_head,
+				 ice_adv_fltr_mgmt_list_entry, list_entry) {
+		LIST_DEL(&lst_itr->list_entry);
+		ice_free(hw, lst_itr->lkups);
+		ice_free(hw, lst_itr);
+	}
+}
 
 /**
  * ice_rem_all_sw_rules_info
@@ -3049,6 +3070,8 @@ void ice_rem_all_sw_rules_info(struct ice_hw *hw)
 		rule_head = &sw->recp_list[i].filt_rules;
 		if (!sw->recp_list[i].adv_rule)
 			ice_rem_sw_rule_info(hw, rule_head);
+		else
+			ice_rem_adv_rule_info(hw, rule_head);
 	}
 }
 
@@ -5687,6 +5710,38 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
 	return status;
 }
 
+/**
+ * ice_replay_vsi_adv_rule - Replay advanced rule for requested VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver VSI handle
+ * @list_head: list for which filters need to be replayed
+ *
+ * Replay the advanced rule for the given VSI.
+ */
+static enum ice_status
+ice_replay_vsi_adv_rule(struct ice_hw *hw, u16 vsi_handle,
+			struct LIST_HEAD_TYPE *list_head)
+{
+	struct ice_rule_query_data added_entry = { 0 };
+	struct ice_adv_fltr_mgmt_list_entry *adv_fltr;
+	enum ice_status status = ICE_SUCCESS;
+
+	if (LIST_EMPTY(list_head))
+		return status;
+	LIST_FOR_EACH_ENTRY(adv_fltr, list_head, ice_adv_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_adv_rule_info *rinfo = &adv_fltr->rule_info;
+		u16 lk_cnt = adv_fltr->lkups_cnt;
+
+		if (vsi_handle != rinfo->sw_act.vsi_handle)
+			continue;
+		status = ice_add_adv_rule(hw, adv_fltr->lkups, lk_cnt, rinfo,
+					  &added_entry);
+		if (status)
+			break;
+	}
+	return status;
+}
 
 /**
  * ice_replay_vsi_all_fltr - replay all filters stored in bookkeeping lists
@@ -5698,23 +5753,23 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
 enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle)
 {
 	struct ice_switch_info *sw = hw->switch_info;
-	enum ice_status status = ICE_SUCCESS;
+	enum ice_status status;
 	u8 i;
 
+	/* Update the recipes that were created */
 	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
-		/* Update the default recipe lines and ones that were created */
-		if (i < ICE_MAX_NUM_RECIPES || sw->recp_list[i].recp_created) {
-			struct LIST_HEAD_TYPE *head;
+		struct LIST_HEAD_TYPE *head;
 
-			head = &sw->recp_list[i].filt_replay_rules;
-			if (!sw->recp_list[i].adv_rule)
-				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
-							     head);
-			if (status != ICE_SUCCESS)
-				return status;
-		}
+		head = &sw->recp_list[i].filt_replay_rules;
+		if (!sw->recp_list[i].adv_rule)
+			status = ice_replay_vsi_fltr(hw, vsi_handle, i, head);
+		else
+			status = ice_replay_vsi_adv_rule(hw, vsi_handle, head);
+		if (status != ICE_SUCCESS)
+			return status;
 	}
-	return status;
+
+	return ICE_SUCCESS;
 }
 
 /**
@@ -5738,6 +5793,8 @@ void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw)
 			l_head = &sw->recp_list[i].filt_replay_rules;
 			if (!sw->recp_list[i].adv_rule)
 				ice_rem_sw_rule_info(hw, l_head);
+			else
+				ice_rem_adv_rule_info(hw, l_head);
 		}
 	}
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 09/66] net/ice/base: code for removing advanced rule
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (7 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 08/66] net/ice/base: replay advanced rule after reset Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 10/66] net/ice/base: add lock around profile map list Leyi Rong
                     ` (57 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Shivanshu Shukla, Paul M Stillwell Jr

This patch adds the ice_rem_adv_rule function to remove existing
advanced rules. It also handles the case where multiple VSIs use the
same rule, via the following helper function:

ice_adv_rem_update_vsi_list - removes a VSI from the VSI list of an
advanced rule.

Signed-off-by: Shivanshu Shukla <shivanshu.shukla@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 309 +++++++++++++++++++++++++++++-
 drivers/net/ice/base/ice_switch.h |   9 +
 2 files changed, 310 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index ca0497ca7..3719ac4bb 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2217,17 +2217,38 @@ ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
 {
 	struct ice_vsi_list_map_info *map_info = NULL;
 	struct ice_switch_info *sw = hw->switch_info;
-	struct ice_fltr_mgmt_list_entry *list_itr;
 	struct LIST_HEAD_TYPE *list_head;
 
 	list_head = &sw->recp_list[recp_id].filt_rules;
-	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
-			    list_entry) {
-		if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) {
-			map_info = list_itr->vsi_list_info;
-			if (ice_is_bit_set(map_info->vsi_map, vsi_handle)) {
-				*vsi_list_id = map_info->vsi_list_id;
-				return map_info;
+	if (sw->recp_list[recp_id].adv_rule) {
+		struct ice_adv_fltr_mgmt_list_entry *list_itr;
+
+		LIST_FOR_EACH_ENTRY(list_itr, list_head,
+				    ice_adv_fltr_mgmt_list_entry,
+				    list_entry) {
+			if (list_itr->vsi_list_info) {
+				map_info = list_itr->vsi_list_info;
+				if (ice_is_bit_set(map_info->vsi_map,
+						   vsi_handle)) {
+					*vsi_list_id = map_info->vsi_list_id;
+					return map_info;
+				}
+			}
+		}
+	} else {
+		struct ice_fltr_mgmt_list_entry *list_itr;
+
+		LIST_FOR_EACH_ENTRY(list_itr, list_head,
+				    ice_fltr_mgmt_list_entry,
+				    list_entry) {
+			if (list_itr->vsi_count == 1 &&
+			    list_itr->vsi_list_info) {
+				map_info = list_itr->vsi_list_info;
+				if (ice_is_bit_set(map_info->vsi_map,
+						   vsi_handle)) {
+					*vsi_list_id = map_info->vsi_list_id;
+					return map_info;
+				}
 			}
 		}
 	}
@@ -5562,6 +5583,278 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 
 	return status;
 }
+
+/**
+ * ice_adv_rem_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle of the VSI to remove
+ * @fm_list: filter management entry for which the VSI list management needs to
+ *	     be done
+ */
+static enum ice_status
+ice_adv_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
+			    struct ice_adv_fltr_mgmt_list_entry *fm_list)
+{
+	struct ice_vsi_list_map_info *vsi_list_info;
+	enum ice_sw_lkup_type lkup_type;
+	enum ice_status status;
+	u16 vsi_list_id;
+
+	if (fm_list->rule_info.sw_act.fltr_act != ICE_FWD_TO_VSI_LIST ||
+	    fm_list->vsi_count == 0)
+		return ICE_ERR_PARAM;
+
+	/* A rule with the VSI being removed does not exist */
+	if (!ice_is_bit_set(fm_list->vsi_list_info->vsi_map, vsi_handle))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	lkup_type = ICE_SW_LKUP_LAST;
+	vsi_list_id = fm_list->rule_info.sw_act.fwd_id.vsi_list_id;
+	status = ice_update_vsi_list_rule(hw, &vsi_handle, 1, vsi_list_id, true,
+					  ice_aqc_opc_update_sw_rules,
+					  lkup_type);
+	if (status)
+		return status;
+
+	fm_list->vsi_count--;
+	ice_clear_bit(vsi_handle, fm_list->vsi_list_info->vsi_map);
+	vsi_list_info = fm_list->vsi_list_info;
+	if (fm_list->vsi_count == 1) {
+		struct ice_fltr_info tmp_fltr;
+		u16 rem_vsi_handle;
+
+		rem_vsi_handle = ice_find_first_bit(vsi_list_info->vsi_map,
+						    ICE_MAX_VSI);
+		if (!ice_is_vsi_valid(hw, rem_vsi_handle))
+			return ICE_ERR_OUT_OF_RANGE;
+
+		/* Make sure VSI list is empty before removing it below */
+		status = ice_update_vsi_list_rule(hw, &rem_vsi_handle, 1,
+						  vsi_list_id, true,
+						  ice_aqc_opc_update_sw_rules,
+						  lkup_type);
+		if (status)
+			return status;
+		tmp_fltr.fltr_rule_id = fm_list->rule_info.fltr_rule_id;
+		fm_list->rule_info.sw_act.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+		fm_list->rule_info.sw_act.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+
+		/* Update the switch rule from "fwd to VSI list" back to
+		 * "forward to VSI" for the one remaining VSI
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
+				  tmp_fltr.fwd_id.hw_vsi_id, status);
+			return status;
+		}
+	}
+
+	if (fm_list->vsi_count == 1) {
+		/* Remove the VSI list since it is no longer used */
+		status = ice_remove_vsi_list_rule(hw, vsi_list_id, lkup_type);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to remove VSI list %d, error %d\n",
+				  vsi_list_id, status);
+			return status;
+		}
+
+		LIST_DEL(&vsi_list_info->list_entry);
+		ice_free(hw, vsi_list_info);
+		fm_list->vsi_list_info = NULL;
+	}
+
+	return status;
+}
+
+/**
+ * ice_rem_adv_rule - removes existing advanced switch rule
+ * @hw: pointer to the hardware structure
+ * @lkups: information on the words that need to be looked up. All words
+ *         together make one recipe
+ * @lkups_cnt: num of entries in the lkups array
+ * @rinfo: pointer to the rule information for the rule
+ *
+ * This function can be used to remove 1 rule at a time. The lkups array is
+ * used to describe all the words that form the "lookup" portion of the
+ * rule. These words can span multiple protocols. Callers to this function
+ * need to pass in a list of protocol headers with lookup information, along
+ * with a mask that determines which words are valid from the given protocol
+ * header. rinfo describes other information related to this rule such as
+ * forwarding IDs, priority of this rule, etc.
+ */
+enum ice_status
+ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_elem;
+	struct ice_prot_lkup_ext lkup_exts;
+	u16 rule_buf_sz, pkt_len, i, rid;
+	enum ice_status status = ICE_SUCCESS;
+	bool remove_rule = false;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	const u8 *pkt = NULL;
+	u16 vsi_handle;
+
+	ice_memset(&lkup_exts, 0, sizeof(lkup_exts), ICE_NONDMA_MEM);
+	for (i = 0; i < lkups_cnt; i++) {
+		u16 count;
+
+		if (lkups[i].type >= ICE_PROTOCOL_LAST)
+			return ICE_ERR_CFG;
+
+		count = ice_fill_valid_words(&lkups[i], &lkup_exts);
+		if (!count)
+			return ICE_ERR_CFG;
+	}
+	rid = ice_find_recp(hw, &lkup_exts);
+	/* If we did not find a recipe that matches the existing criteria */
+	if (rid == ICE_MAX_NUM_RECIPES)
+		return ICE_ERR_PARAM;
+
+	rule_lock = &hw->switch_info->recp_list[rid].filt_rule_lock;
+	list_elem = ice_find_adv_rule_entry(hw, lkups, lkups_cnt, rid, rinfo);
+	/* the rule is already removed */
+	if (!list_elem)
+		return ICE_SUCCESS;
+	ice_acquire_lock(rule_lock);
+	if (list_elem->rule_info.sw_act.fltr_act != ICE_FWD_TO_VSI_LIST) {
+		remove_rule = true;
+	} else if (list_elem->vsi_count > 1) {
+		list_elem->vsi_list_info->ref_cnt--;
+		remove_rule = false;
+		vsi_handle = rinfo->sw_act.vsi_handle;
+		status = ice_adv_rem_update_vsi_list(hw, vsi_handle, list_elem);
+	} else {
+		vsi_handle = rinfo->sw_act.vsi_handle;
+		status = ice_adv_rem_update_vsi_list(hw, vsi_handle, list_elem);
+		if (status) {
+			ice_release_lock(rule_lock);
+			return status;
+		}
+		if (list_elem->vsi_count == 0)
+			remove_rule = true;
+	}
+	ice_release_lock(rule_lock);
+	if (remove_rule) {
+		struct ice_aqc_sw_rules_elem *s_rule;
+
+		ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
+				      &pkt_len);
+		rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
+		s_rule =
+			(struct ice_aqc_sw_rules_elem *)ice_malloc(hw,
+								   rule_buf_sz);
+		if (!s_rule)
+			return ICE_ERR_NO_MEMORY;
+		s_rule->pdata.lkup_tx_rx.act = 0;
+		s_rule->pdata.lkup_tx_rx.index =
+			CPU_TO_LE16(list_elem->rule_info.fltr_rule_id);
+		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
+					 rule_buf_sz, 1,
+					 ice_aqc_opc_remove_sw_rules, NULL);
+		if (status == ICE_SUCCESS) {
+			ice_acquire_lock(rule_lock);
+			LIST_DEL(&list_elem->list_entry);
+			ice_free(hw, list_elem->lkups);
+			ice_free(hw, list_elem);
+			ice_release_lock(rule_lock);
+		}
+		ice_free(hw, s_rule);
+	}
+	return status;
+}
+
+/**
+ * ice_rem_adv_rule_by_id - removes existing advanced switch rule by ID
+ * @hw: pointer to the hardware structure
+ * @remove_entry: data struct which holds rule_id, VSI handle and recipe ID
+ *
+ * This function is used to remove 1 rule at a time. The removal is based on
+ * the remove_entry parameter. This function will remove rule for a given
+ * vsi_handle with a given rule_id which is passed as parameter in remove_entry
+ */
+enum ice_status
+ice_rem_adv_rule_by_id(struct ice_hw *hw,
+		       struct ice_rule_query_data *remove_entry)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+	struct ice_adv_rule_info rinfo;
+	struct ice_switch_info *sw;
+
+	sw = hw->switch_info;
+	if (!sw->recp_list[remove_entry->rid].recp_created)
+		return ICE_ERR_PARAM;
+	list_head = &sw->recp_list[remove_entry->rid].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_adv_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (list_itr->rule_info.fltr_rule_id ==
+		    remove_entry->rule_id) {
+			rinfo = list_itr->rule_info;
+			rinfo.sw_act.vsi_handle = remove_entry->vsi_handle;
+			return ice_rem_adv_rule(hw, list_itr->lkups,
+						list_itr->lkups_cnt, &rinfo);
+		}
+	}
+	return ICE_ERR_PARAM;
+}
+
+/**
+ * ice_rem_adv_rule_for_vsi - removes existing advanced switch rules for a
+ *                            given VSI handle
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle for which we are supposed to remove all the rules.
+ *
+ * This function removes all the rules for a given VSI. If removing a rule
+ * fails, it returns immediately with the error code; otherwise it returns
+ * ICE_SUCCESS.
+ */
+enum ice_status
+ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_itr;
+	struct ice_vsi_list_map_info *map_info;
+	struct LIST_HEAD_TYPE *list_head;
+	struct ice_adv_rule_info rinfo;
+	struct ice_switch_info *sw;
+	enum ice_status status;
+	u16 vsi_list_id = 0;
+	u8 rid;
+
+	sw = hw->switch_info;
+	for (rid = 0; rid < ICE_MAX_NUM_RECIPES; rid++) {
+		if (!sw->recp_list[rid].recp_created)
+			continue;
+		if (!sw->recp_list[rid].adv_rule)
+			continue;
+		list_head = &sw->recp_list[rid].filt_rules;
+		map_info = NULL;
+		LIST_FOR_EACH_ENTRY(list_itr, list_head,
+				    ice_adv_fltr_mgmt_list_entry, list_entry) {
+			map_info = ice_find_vsi_list_entry(hw, rid, vsi_handle,
+							   &vsi_list_id);
+			if (!map_info)
+				continue;
+			rinfo = list_itr->rule_info;
+			rinfo.sw_act.vsi_handle = vsi_handle;
+			status = ice_rem_adv_rule(hw, list_itr->lkups,
+						  list_itr->lkups_cnt, &rinfo);
+			if (status)
+				return status;
+			map_info = NULL;
+		}
+	}
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_replay_fltr - Replay all the filters stored by a specific list head
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 890df13dd..a6e17e861 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -443,6 +443,15 @@ enum ice_status
 ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
 		 struct ice_rule_query_data *added_entry);
+enum ice_status
+ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle);
+enum ice_status
+ice_rem_adv_rule_by_id(struct ice_hw *hw,
+		       struct ice_rule_query_data *remove_entry);
+enum ice_status
+ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo);
+
 enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
 
 enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 10/66] net/ice/base: add lock around profile map list
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (8 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 09/66] net/ice/base: code for removing advanced rule Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 11/66] net/ice/base: save and post reset replay q bandwidth Leyi Rong
                     ` (56 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add locking mechanism around profile map list.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 29 ++++++++++++++++++++++++----
 drivers/net/ice/base/ice_flex_type.h |  5 +++--
 2 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index babad94f8..8f0b513f4 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -3919,15 +3919,16 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 }
 
 /**
- * ice_search_prof_id - Search for a profile tracking ID
+ * ice_search_prof_id_low - Search for a profile tracking ID low level
  * @hw: pointer to the HW struct
  * @blk: hardware block
  * @id: profile tracking ID
  *
- * This will search for a profile tracking ID which was previously added.
+ * This will search for a profile tracking ID which was previously added. This
+ * version assumes that the caller has already acquired the prof map lock.
  */
-struct ice_prof_map *
-ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
+static struct ice_prof_map *
+ice_search_prof_id_low(struct ice_hw *hw, enum ice_block blk, u64 id)
 {
 	struct ice_prof_map *entry = NULL;
 	struct ice_prof_map *map;
@@ -3943,6 +3944,26 @@ ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
 	return entry;
 }
 
+/**
+ * ice_search_prof_id - Search for a profile tracking ID
+ * @hw: pointer to the HW struct
+ * @blk: hardware block
+ * @id: profile tracking ID
+ *
+ * This will search for a profile tracking ID which was previously added.
+ */
+struct ice_prof_map *
+ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
+{
+	struct ice_prof_map *entry;
+
+	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
+	entry = ice_search_prof_id_low(hw, blk, id);
+	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
+
+	return entry;
+}
+
 /**
  * ice_set_prof_context - Set context for a given profile
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index f2a5f27e7..892c94b1f 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -503,10 +503,11 @@ struct ice_es {
 	u16 count;
 	u16 fvw;
 	u16 *ref_count;
-	u8 *written;
-	u8 reverse; /* set to true to reverse FV order */
 	struct LIST_HEAD_TYPE prof_map;
 	struct ice_fv_word *t;
+	struct ice_lock prof_map_lock;	/* protect access to profiles list */
+	u8 *written;
+	u8 reverse; /* set to true to reverse FV order */
 };
 
 /* PTYPE Group management */
-- 
2.17.1



* [dpdk-dev] [PATCH v2 11/66] net/ice/base: save and post reset replay q bandwidth
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (9 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 10/66] net/ice/base: add lock around profile map list Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 12/66] net/ice/base: rollback AVF RSS configurations Leyi Rong
                     ` (55 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Tarun Singh, Paul M Stillwell Jr

Save the queue bandwidth information when it is applied, and replay it
when the queue is re-enabled; the previously saved value is used for
the replay.
Add vsi_handle, tc, and q_handle arguments to ice_cfg_q_bw_lmt and
ice_cfg_q_bw_dflt_lmt.
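
The save side of this scheme can be sketched in isolation as follows. The structure and field names are simplified stand-ins for the driver's `ice_q_ctx`/`ice_bw_type_info` and `ice_rl_type`, not the real definitions:

```c
#include <assert.h>
#include <stdint.h>

enum rl_type { MIN_BW, MAX_BW, SHARED_BW };

/* Mirrors the idea of the BW info cached per queue context. */
struct q_bw_ctx {
	uint32_t cir_bw;	/* min (committed) rate */
	uint32_t eir_bw;	/* max (excess) rate */
	uint32_t shared_bw;
};

/* Save the requested limit so it can be replayed after a reset,
 * when the queue node is re-added to the scheduler tree. */
static int save_q_bw(struct q_bw_ctx *q, enum rl_type t, uint32_t bw)
{
	switch (t) {
	case MIN_BW:
		q->cir_bw = bw;
		break;
	case MAX_BW:
		q->eir_bw = bw;
		break;
	case SHARED_BW:
		q->shared_bw = bw;
		break;
	default:
		return -1;	/* unknown rate-limit type */
	}
	return 0;
}
```

On replay, the driver walks the saved values and re-applies any non-default limit to the freshly added queue node, which is what `ice_sched_replay_q_bw` does with the cached `bw_t_info`.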

Signed-off-by: Tarun Singh <tarun.k.singh@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c |  7 ++-
 drivers/net/ice/base/ice_common.h |  4 ++
 drivers/net/ice/base/ice_sched.c  | 91 ++++++++++++++++++++++++++-----
 drivers/net/ice/base/ice_sched.h  |  8 +--
 drivers/net/ice/base/ice_switch.h |  5 --
 drivers/net/ice/base/ice_type.h   |  8 +++
 6 files changed, 98 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index c74e4e1d4..09296ead2 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -3606,7 +3606,7 @@ ice_get_ctx(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
  * @tc: TC number
  * @q_handle: software queue handle
  */
-static struct ice_q_ctx *
+struct ice_q_ctx *
 ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle)
 {
 	struct ice_vsi_ctx *vsi;
@@ -3703,9 +3703,12 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle,
 	node.node_teid = buf->txqs[0].q_teid;
 	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
 	q_ctx->q_handle = q_handle;
+	q_ctx->q_teid = LE32_TO_CPU(node.node_teid);
 
-	/* add a leaf node into schduler tree queue layer */
+	/* add a leaf node into scheduler tree queue layer */
 	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+	if (!status)
+		status = ice_sched_replay_q_bw(pi, q_ctx);
 
 ena_txq_exit:
 	ice_release_lock(&pi->sched_lock);
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 58c66fdc0..aee754b85 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -186,6 +186,10 @@ void ice_sched_replay_agg(struct ice_hw *hw);
 enum ice_status ice_sched_replay_tc_node_bw(struct ice_hw *hw);
 enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status
+ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
+struct ice_q_ctx *
+ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle);
+enum ice_status
 ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
 			 enum ice_rl_type rl_type, u8 bw_alloc);
 enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 8773e62a9..855e3848c 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4326,27 +4326,61 @@ ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer)
 	return ICE_ERR_CFG;
 }
 
+/**
+ * ice_sched_save_q_bw - save queue node's BW information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, u32 bw)
+{
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&q_ctx->bw_t_info, bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&q_ctx->bw_t_info, bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&q_ctx->bw_t_info, bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_sched_set_q_bw_lmt - sets queue BW limit
  * @pi: port information structure
- * @q_id: queue ID (leaf node TEID)
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
  * @rl_type: min, max, or shared
  * @bw: bandwidth in Kbps
  *
  * This function sets BW limit of queue scheduling node.
  */
 static enum ice_status
-ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
-		       enum ice_rl_type rl_type, u32 bw)
+ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		       u16 q_handle, enum ice_rl_type rl_type, u32 bw)
 {
 	enum ice_status status = ICE_ERR_PARAM;
 	struct ice_sched_node *node;
+	struct ice_q_ctx *q_ctx;
 
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
 	ice_acquire_lock(&pi->sched_lock);
-
-	node = ice_sched_find_node_by_teid(pi->root, q_id);
+	q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+	if (!q_ctx)
+		goto exit_q_bw_lmt;
+	node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
 	if (!node) {
-		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_id\n");
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
 		goto exit_q_bw_lmt;
 	}
 
@@ -4374,6 +4408,9 @@ ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
 	else
 		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
 
+	if (!status)
+		status = ice_sched_save_q_bw(q_ctx, rl_type, bw);
+
 exit_q_bw_lmt:
 	ice_release_lock(&pi->sched_lock);
 	return status;
@@ -4382,32 +4419,38 @@ ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
 /**
  * ice_cfg_q_bw_lmt - configure queue BW limit
  * @pi: port information structure
- * @q_id: queue ID (leaf node TEID)
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
  * @rl_type: min, max, or shared
  * @bw: bandwidth in Kbps
  *
  * This function configures BW limit of queue scheduling node.
  */
 enum ice_status
-ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
-		 u32 bw)
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		 u16 q_handle, enum ice_rl_type rl_type, u32 bw)
 {
-	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, bw);
+	return ice_sched_set_q_bw_lmt(pi, vsi_handle, tc, q_handle, rl_type,
+				      bw);
 }
 
 /**
  * ice_cfg_q_bw_dflt_lmt - configure queue BW default limit
  * @pi: port information structure
- * @q_id: queue ID (leaf node TEID)
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
  * @rl_type: min, max, or shared
  *
  * This function configures BW default limit of queue scheduling node.
  */
 enum ice_status
-ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
-		      enum ice_rl_type rl_type)
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      u16 q_handle, enum ice_rl_type rl_type)
 {
-	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, ICE_SCHED_DFLT_BW);
+	return ice_sched_set_q_bw_lmt(pi, vsi_handle, tc, q_handle, rl_type,
+				      ICE_SCHED_DFLT_BW);
 }
 
 /**
@@ -5421,3 +5464,23 @@ ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
 	ice_release_lock(&pi->sched_lock);
 	return status;
 }
+
+/**
+ * ice_sched_replay_q_bw - replay queue type node BW
+ * @pi: port information structure
+ * @q_ctx: queue context structure
+ *
+ * This function replays queue type node bandwidth. This function needs to be
+ * called with scheduler lock held.
+ */
+enum ice_status
+ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx)
+{
+	struct ice_sched_node *q_node;
+
+	/* Following also checks the presence of node in tree */
+	q_node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+	if (!q_node)
+		return ICE_ERR_PARAM;
+	return ice_sched_replay_node_bw(pi->hw, q_node, &q_ctx->bw_t_info);
+}
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 92377a82e..56f9977ab 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -122,11 +122,11 @@ ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
 		    u8 tc_bitmap);
 enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id);
 enum ice_status
-ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
-		 u32 bw);
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		 u16 q_handle, enum ice_rl_type rl_type, u32 bw);
 enum ice_status
-ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
-		      enum ice_rl_type rl_type);
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      u16 q_handle, enum ice_rl_type rl_type);
 enum ice_status
 ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
 		       enum ice_rl_type rl_type, u32 bw);
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index a6e17e861..e3fb0434d 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -21,11 +21,6 @@
 #define ICE_VSI_INVAL_ID 0xFFFF
 #define ICE_INVAL_Q_HANDLE 0xFFFF
 
-/* VSI queue context structure */
-struct ice_q_ctx {
-	u16  q_handle;
-};
-
 /* VSI context structure for add/get/update/free operations */
 struct ice_vsi_ctx {
 	u16 vsi_num;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index d994ea3d2..b229be158 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -569,6 +569,14 @@ struct ice_bw_type_info {
 	u32 shared_bw;
 };
 
+/* VSI queue context structure for given TC */
+struct ice_q_ctx {
+	u16  q_handle;
+	u32  q_teid;
+	/* bw_t_info saves queue BW information */
+	struct ice_bw_type_info bw_t_info;
+};
+
 /* VSI type list entry to locate corresponding VSI/aggregator nodes */
 struct ice_sched_vsi_info {
 	struct ice_sched_node *vsi_node[ICE_MAX_TRAFFIC_CLASS];
-- 
2.17.1



* [dpdk-dev] [PATCH v2 12/66] net/ice/base: rollback AVF RSS configurations
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (10 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 11/66] net/ice/base: save and post reset replay q bandwidth Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 13/66] net/ice/base: move RSS replay list Leyi Rong
                     ` (54 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

Add support for removing the RSS configurations that were added
prior to a failure in AVF.
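
The core of the new `ice_add_avf_rss_cfg` below is a loop that repeatedly picks one recognized group of bits out of the 64-bit AVF hash request, configures it, and clears it, until nothing remains — and bails out if an unknown bit is left over. A minimal sketch of that consume-until-empty pattern (the mask values here are hypothetical, not the real `ICE_FLOW_AVF_RSS_*` definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical grouped hash bits, standing in for the
 * ICE_FLOW_AVF_RSS_*_MASKS groupings. */
#define V4_MASK   0x3ULL	/* e.g. plain IPv4 bits */
#define TCP4_MASK 0xcULL	/* e.g. IPv4+TCP bits */

/* Consume grouped bits from a 64-bit request until none remain.
 * Returns the number of configurations that would be applied,
 * or -1 when an unrecognized bit is set. */
static int count_cfgs(uint64_t flds)
{
	int n = 0;

	while (flds) {
		if (flds & V4_MASK)
			flds &= ~V4_MASK;	/* one L3 config */
		else if (flds & TCP4_MASK)
			flds &= ~TCP4_MASK;	/* one L3+L4 config */
		else
			return -1;		/* unknown bit: reject */
		n++;
	}
	return n;
}
```

In the real code each consumed group maps to one `ice_add_rss_cfg` call, and a failure mid-loop is what the rollback support in this series cleans up after.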

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 128 ++++++++++++++++++++++++++++++++
 1 file changed, 128 insertions(+)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index f1bf5b5e7..d97fe1fc7 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1915,6 +1915,134 @@ ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	return status;
 }
 
+/* Mapping of AVF hash bit fields to an L3-L4 hash combination.
+ * As the ice_flow_avf_hdr_field represent individual bit shifts in a hash,
+ * convert its values to their appropriate flow L3, L4 values.
+ */
+#define ICE_FLOW_AVF_RSS_IPV4_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_OTHER) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_FRAG_IPV4))
+#define ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_TCP_SYN_NO_ACK) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_TCP))
+#define ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_UNICAST_IPV4_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_MULTICAST_IPV4_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_UDP))
+#define ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS \
+	(ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS | ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS | \
+	 ICE_FLOW_AVF_RSS_IPV4_MASKS | BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_SCTP))
+
+#define ICE_FLOW_AVF_RSS_IPV6_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_OTHER) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_FRAG_IPV6))
+#define ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_UNICAST_IPV6_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_MULTICAST_IPV6_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_UDP))
+#define ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_TCP_SYN_NO_ACK) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_TCP))
+#define ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS \
+	(ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS | ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS | \
+	 ICE_FLOW_AVF_RSS_IPV6_MASKS | BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_SCTP))
+
+#define ICE_FLOW_MAX_CFG	10
+
+/**
+ * ice_add_avf_rss_cfg - add an RSS configuration for AVF driver
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @avf_hash: hash bit fields (ICE_AVF_FLOW_FIELD_*) to configure
+ *
+ * This function will take the hash bitmap provided by the AVF driver via a
+ * message, convert it to ICE-compatible values, and configure RSS flow
+ * profiles.
+ */
+enum ice_status
+ice_add_avf_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 avf_hash)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u64 hash_flds;
+
+	if (avf_hash == ICE_AVF_FLOW_FIELD_INVALID ||
+	    !ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Make sure no unsupported bits are specified */
+	if (avf_hash & ~(ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS |
+			 ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS))
+		return ICE_ERR_CFG;
+
+	hash_flds = avf_hash;
+
+	/* Always create an L3 RSS configuration for any L4 RSS configuration */
+	if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS)
+		hash_flds |= ICE_FLOW_AVF_RSS_IPV4_MASKS;
+
+	if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS)
+		hash_flds |= ICE_FLOW_AVF_RSS_IPV6_MASKS;
+
+	/* Create the corresponding RSS configuration for each valid hash bit */
+	while (hash_flds) {
+		u64 rss_hash = ICE_HASH_INVALID;
+
+		if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS) {
+			if (hash_flds & ICE_FLOW_AVF_RSS_IPV4_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV4;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_IPV4_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV4 |
+					ICE_FLOW_HASH_TCP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV4 |
+					ICE_FLOW_HASH_UDP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS;
+			} else if (hash_flds &
+				   BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_SCTP)) {
+				rss_hash = ICE_FLOW_HASH_IPV4 |
+					ICE_FLOW_HASH_SCTP_PORT;
+				hash_flds &=
+					~BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_SCTP);
+			}
+		} else if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS) {
+			if (hash_flds & ICE_FLOW_AVF_RSS_IPV6_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV6;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_IPV6_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV6 |
+					ICE_FLOW_HASH_TCP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV6 |
+					ICE_FLOW_HASH_UDP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS;
+			} else if (hash_flds &
+				   BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_SCTP)) {
+				rss_hash = ICE_FLOW_HASH_IPV6 |
+					ICE_FLOW_HASH_SCTP_PORT;
+				hash_flds &=
+					~BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_SCTP);
+			}
+		}
+
+		if (rss_hash == ICE_HASH_INVALID)
+			return ICE_ERR_OUT_OF_RANGE;
+
+		status = ice_add_rss_cfg(hw, vsi_handle, rss_hash,
+					 ICE_FLOW_SEG_HDR_NONE);
+		if (status)
+			break;
+	}
+
+	return status;
+}
+
 /**
  * ice_rem_rss_cfg - remove an existing RSS config with matching hashed fields
  * @hw: pointer to the hardware structure
-- 
2.17.1



* [dpdk-dev] [PATCH v2 13/66] net/ice/base: move RSS replay list
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (11 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 12/66] net/ice/base: rollback AVF RSS configurations Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 14/66] net/ice/base: cache the data of set PHY cfg AQ in SW Leyi Rong
                     ` (53 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang
  Cc: dev, Leyi Rong, Vignesh Sridhar, Henry Tieman, Paul M Stillwell Jr

1. Move the RSS list pointer and lock from the VSI context to the ice_hw
structure. This ensures that RSS configurations added to the list prior
to reset are maintained until the PF is unloaded, so the configuration
list is unaffected by VFRs that would destroy the VSI context. It also
allows replaying RSS entries for a VF VSI, instead of the current method
of re-adding default configurations, and eliminates the need to
re-allocate the RSS list and lock after a VFR.
2. Align the RSS flow functions to the new location of the RSS list and lock.
3. Add a bitmap for flow type status.
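
The per-entry VSI bitmap from point 1 effectively reference-counts each shared RSS configuration: removing a VSI clears its bit, and the entry is freed only when no VSI references it any longer. A standalone sketch, using a plain singly linked list and a 64-bit bitmap in place of the driver's `LIST_HEAD_TYPE` and `ice_declare_bitmap(vsis, ICE_MAX_VSI)`:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct rss_cfg {
	uint64_t vsis;		/* bitmap of VSIs sharing this entry */
	struct rss_cfg *next;
};

/* Detach vsi from every entry; free entries no VSI references. */
static void rem_vsi_rss_list(struct rss_cfg **head, unsigned int vsi)
{
	struct rss_cfg **pp = head;

	while (*pp) {
		struct rss_cfg *r = *pp;

		r->vsis &= ~(1ULL << vsi);
		if (!r->vsis) {
			/* last referencing VSI gone: unlink and free */
			*pp = r->next;
			free(r);
		} else {
			pp = &r->next;
		}
	}
}
```

Because the list now lives in `ice_hw` rather than the VSI context, an entry survives a VF reset that destroys one VSI context as long as any other bit in its bitmap is still set.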

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Henry Tieman <henry.w.tieman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c   | 100 +++++++++++++++++-------------
 drivers/net/ice/base/ice_flow.h   |   4 +-
 drivers/net/ice/base/ice_switch.c |   6 +-
 drivers/net/ice/base/ice_switch.h |   2 -
 drivers/net/ice/base/ice_type.h   |   3 +
 5 files changed, 63 insertions(+), 52 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index d97fe1fc7..dccd7d3c7 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1605,27 +1605,32 @@ ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u64 hash_fields,
 }
 
 /**
- * ice_rem_all_rss_vsi_ctx - remove all RSS configurations from VSI context
+ * ice_rem_vsi_rss_list - remove VSI from RSS list
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  *
+ * Remove the VSI from all RSS configurations in the list.
  */
-void ice_rem_all_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle)
 {
 	struct ice_rss_cfg *r, *tmp;
 
-	if (!ice_is_vsi_valid(hw, vsi_handle) ||
-	    LIST_EMPTY(&hw->vsi_ctx[vsi_handle]->rss_list_head))
+	if (LIST_EMPTY(&hw->rss_list_head))
 		return;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
-	LIST_FOR_EACH_ENTRY_SAFE(r, tmp,
-				 &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	ice_acquire_lock(&hw->rss_locks);
+	LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head,
 				 ice_rss_cfg, l_entry) {
-		LIST_DEL(&r->l_entry);
-		ice_free(hw, r);
+		if (ice_is_bit_set(r->vsis, vsi_handle)) {
+			ice_clear_bit(vsi_handle, r->vsis);
+
+			if (!ice_is_any_bit_set(r->vsis, ICE_MAX_VSI)) {
+				LIST_DEL(&r->l_entry);
+				ice_free(hw, r);
+			}
+		}
 	}
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 }
 
 /**
@@ -1667,7 +1672,7 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 }
 
 /**
- * ice_rem_rss_cfg_vsi_ctx - remove RSS configuration from VSI context
+ * ice_rem_rss_list - remove RSS configuration from list
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  * @prof: pointer to flow profile
@@ -1675,8 +1680,7 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
  * Assumption: lock has already been acquired for RSS list
  */
 static void
-ice_rem_rss_cfg_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
-			struct ice_flow_prof *prof)
+ice_rem_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 {
 	struct ice_rss_cfg *r, *tmp;
 
@@ -1684,20 +1688,22 @@ ice_rem_rss_cfg_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
 	 * hash configurations associated to the flow profile. If found
 	 * remove from the RSS entry list of the VSI context and delete entry.
 	 */
-	LIST_FOR_EACH_ENTRY_SAFE(r, tmp,
-				 &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head,
 				 ice_rss_cfg, l_entry) {
 		if (r->hashed_flds == prof->segs[prof->segs_cnt - 1].match &&
 		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs) {
-			LIST_DEL(&r->l_entry);
-			ice_free(hw, r);
+			ice_clear_bit(vsi_handle, r->vsis);
+			if (!ice_is_any_bit_set(r->vsis, ICE_MAX_VSI)) {
+				LIST_DEL(&r->l_entry);
+				ice_free(hw, r);
+			}
 			return;
 		}
 	}
 }
 
 /**
- * ice_add_rss_vsi_ctx - add RSS configuration to VSI context
+ * ice_add_rss_list - add RSS configuration to list
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  * @prof: pointer to flow profile
@@ -1705,16 +1711,17 @@ ice_rem_rss_cfg_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
  * Assumption: lock has already been acquired for RSS list
  */
 static enum ice_status
-ice_add_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
-		    struct ice_flow_prof *prof)
+ice_add_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 {
 	struct ice_rss_cfg *r, *rss_cfg;
 
-	LIST_FOR_EACH_ENTRY(r, &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry)
 		if (r->hashed_flds == prof->segs[prof->segs_cnt - 1].match &&
-		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs)
+		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs) {
+			ice_set_bit(vsi_handle, r->vsis);
 			return ICE_SUCCESS;
+		}
 
 	rss_cfg = (struct ice_rss_cfg *)ice_malloc(hw, sizeof(*rss_cfg));
 	if (!rss_cfg)
@@ -1722,8 +1729,9 @@ ice_add_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
 
 	rss_cfg->hashed_flds = prof->segs[prof->segs_cnt - 1].match;
 	rss_cfg->packet_hdr = prof->segs[prof->segs_cnt - 1].hdrs;
-	LIST_ADD_TAIL(&rss_cfg->l_entry,
-		      &hw->vsi_ctx[vsi_handle]->rss_list_head);
+	ice_set_bit(vsi_handle, rss_cfg->vsis);
+
+	LIST_ADD_TAIL(&rss_cfg->l_entry, &hw->rss_list_head);
 
 	return ICE_SUCCESS;
 }
@@ -1785,7 +1793,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	if (prof) {
 		status = ice_flow_disassoc_prof(hw, blk, prof, vsi_handle);
 		if (!status)
-			ice_rem_rss_cfg_vsi_ctx(hw, vsi_handle, prof);
+			ice_rem_rss_list(hw, vsi_handle, prof);
 		else
 			goto exit;
 
@@ -1806,7 +1814,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	if (prof) {
 		status = ice_flow_assoc_prof(hw, blk, prof, vsi_handle);
 		if (!status)
-			status = ice_add_rss_vsi_ctx(hw, vsi_handle, prof);
+			status = ice_add_rss_list(hw, vsi_handle, prof);
 		goto exit;
 	}
 
@@ -1828,7 +1836,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 		goto exit;
 	}
 
-	status = ice_add_rss_vsi_ctx(hw, vsi_handle, prof);
+	status = ice_add_rss_list(hw, vsi_handle, prof);
 
 exit:
 	ice_free(hw, segs);
@@ -1856,9 +1864,9 @@ ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	    !ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_acquire_lock(&hw->rss_locks);
 	status = ice_add_rss_cfg_sync(hw, vsi_handle, hashed_flds, addl_hdrs);
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return status;
 }
@@ -1905,7 +1913,7 @@ ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	/* Remove RSS configuration from VSI context before deleting
 	 * the flow profile.
 	 */
-	ice_rem_rss_cfg_vsi_ctx(hw, vsi_handle, prof);
+	ice_rem_rss_list(hw, vsi_handle, prof);
 
 	if (!ice_is_any_bit_set(prof->vsis, ICE_MAX_VSI))
 		status = ice_flow_rem_prof_sync(hw, blk, prof);
@@ -2066,15 +2074,15 @@ ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	    !ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_acquire_lock(&hw->rss_locks);
 	status = ice_rem_rss_cfg_sync(hw, vsi_handle, hashed_flds, addl_hdrs);
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return status;
 }
 
 /**
- * ice_replay_rss_cfg - remove RSS configurations associated with VSI
+ * ice_replay_rss_cfg - replay RSS configurations associated with VSI
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  */
@@ -2086,15 +2094,18 @@ enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 	if (!ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
-	LIST_FOR_EACH_ENTRY(r, &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	ice_acquire_lock(&hw->rss_locks);
+	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry) {
-		status = ice_add_rss_cfg_sync(hw, vsi_handle, r->hashed_flds,
-					      r->packet_hdr);
-		if (status)
-			break;
+		if (ice_is_bit_set(r->vsis, vsi_handle)) {
+			status = ice_add_rss_cfg_sync(hw, vsi_handle,
+						      r->hashed_flds,
+						      r->packet_hdr);
+			if (status)
+				break;
+		}
 	}
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return status;
 }
@@ -2116,14 +2127,15 @@ u64 ice_get_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u32 hdrs)
 	if (hdrs == ICE_FLOW_SEG_HDR_NONE || !ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_HASH_INVALID;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
-	LIST_FOR_EACH_ENTRY(r, &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	ice_acquire_lock(&hw->rss_locks);
+	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry)
-		if (r->packet_hdr == hdrs) {
+		if (ice_is_bit_set(r->vsis, vsi_handle) &&
+		    r->packet_hdr == hdrs) {
 			rss_cfg = r;
 			break;
 		}
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return rss_cfg ? rss_cfg->hashed_flds : ICE_HASH_INVALID;
 }
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index f0c74a348..4fa13064e 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -270,6 +270,8 @@ struct ice_flow_prof {
 
 struct ice_rss_cfg {
 	struct LIST_ENTRY_TYPE l_entry;
+	/* bitmap of VSIs added to the RSS entry */
+	ice_declare_bitmap(vsis, ICE_MAX_VSI);
 	u64 hashed_flds;
 	u32 packet_hdr;
 };
@@ -338,7 +340,7 @@ ice_flow_set_fld_prefix(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
 void
 ice_flow_add_fld_raw(struct ice_flow_seg_info *seg, u16 off, u8 len,
 		     u16 val_loc, u16 mask_loc);
-void ice_rem_all_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
+void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status
 ice_add_avf_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds);
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 3719ac4bb..7cccaf4d3 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -686,10 +686,7 @@ static void ice_clear_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
 
 	vsi = ice_get_vsi_ctx(hw, vsi_handle);
 	if (vsi) {
-		if (!LIST_EMPTY(&vsi->rss_list_head))
-			ice_rem_all_rss_vsi_ctx(hw, vsi_handle);
 		ice_clear_vsi_q_ctx(hw, vsi_handle);
-		ice_destroy_lock(&vsi->rss_locks);
 		ice_free(hw, vsi);
 		hw->vsi_ctx[vsi_handle] = NULL;
 	}
@@ -740,8 +737,7 @@ ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
 			return ICE_ERR_NO_MEMORY;
 		}
 		*tmp_vsi_ctx = *vsi_ctx;
-		ice_init_lock(&tmp_vsi_ctx->rss_locks);
-		INIT_LIST_HEAD(&tmp_vsi_ctx->rss_list_head);
+
 		ice_save_vsi_ctx(hw, vsi_handle, tmp_vsi_ctx);
 	} else {
 		/* update with new HW VSI num */
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index e3fb0434d..2f140a86d 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -32,8 +32,6 @@ struct ice_vsi_ctx {
 	u8 alloc_from_pool;
 	u16 num_lan_q_entries[ICE_MAX_TRAFFIC_CLASS];
 	struct ice_q_ctx *lan_q_ctx[ICE_MAX_TRAFFIC_CLASS];
-	struct ice_lock rss_locks;	/* protect rss config in VSI ctx */
-	struct LIST_HEAD_TYPE rss_list_head;
 };
 
 /* This is to be used by add/update mirror rule Admin Queue command */
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index b229be158..45b0b3c05 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -817,6 +817,9 @@ struct ice_hw {
 	u16 fdir_fltr_cnt[ICE_FLTR_PTYPE_MAX];
 
 	struct ice_fd_hw_prof **fdir_prof;
+	ice_declare_bitmap(fdir_perfect_fltr, ICE_FLTR_PTYPE_MAX);
+	struct ice_lock rss_locks;	/* protect RSS configuration */
+	struct LIST_HEAD_TYPE rss_list_head;
 };
 
 /* Statistics collected by each port, VSI, VEB, and S-channel */
-- 
2.17.1



* [dpdk-dev] [PATCH v2 14/66] net/ice/base: cache the data of set PHY cfg AQ in SW
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (12 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 13/66] net/ice/base: move RSS replay list Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 15/66] net/ice/base: refactor HW table init function Leyi Rong
                     ` (52 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

After the transition from cable-unplug to cable-plug events, FW will
clear the set-phy-cfg data sent by the user. Thus, we need to cache
this info:
1. The data submitted when set-phy-cfg is called. This info will be
used later to check whether FW has cleared the PHY info requested by
the user.
2. The FC, FEC and link speed requested by the user. This info will be
used later by the device driver to construct the new input data for
the set-phy-cfg AQ command.
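The cache-on-success pattern described above can be sketched as follows. This is a minimal illustration, not the driver's actual API: `struct phy_cfg`, `struct port_info` and the `aq_send_set_phy_cfg` stub are simplified stand-ins for the real `ice_aq_set_phy_cfg` flow.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Simplified stand-ins for the driver's PHY configuration structures */
struct phy_cfg {
	unsigned long long phy_type_low;
	unsigned char caps;
};

struct port_info {
	struct phy_cfg curr_user_phy_cfg;	/* cached copy of last accepted cfg */
};

/* Stub for the AQ command; returns 0 on success */
static int aq_send_set_phy_cfg(const struct phy_cfg *cfg, bool fw_ok)
{
	(void)cfg;
	return fw_ok ? 0 : -1;
}

/* Cache the requested config only when FW accepts it, mirroring the
 * "if (!status) pi->phy.curr_user_phy_cfg = *cfg;" logic in the patch.
 * After a cable-replug clears FW state, the cached copy lets the driver
 * detect the loss and rebuild the set-phy-cfg input data.
 */
static int set_phy_cfg(struct port_info *pi, const struct phy_cfg *cfg,
		       bool fw_ok)
{
	int status = aq_send_set_phy_cfg(cfg, fw_ok);

	if (!status)
		pi->curr_user_phy_cfg = *cfg;
	return status;
}
```

A failed AQ command leaves the cache untouched, so the cache always reflects a configuration FW actually accepted.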

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 119 +++++++++++++++++++++++-------
 drivers/net/ice/base/ice_common.h |   2 +-
 drivers/net/ice/base/ice_type.h   |  31 ++++++--
 drivers/net/ice/ice_ethdev.c      |   2 +-
 4 files changed, 122 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 09296ead2..a0ab25aef 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -270,21 +270,23 @@ enum ice_status
 ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
 		     struct ice_link_status *link, struct ice_sq_cd *cd)
 {
-	struct ice_link_status *hw_link_info_old, *hw_link_info;
 	struct ice_aqc_get_link_status_data link_data = { 0 };
 	struct ice_aqc_get_link_status *resp;
+	struct ice_link_status *li_old, *li;
 	enum ice_media_type *hw_media_type;
 	struct ice_fc_info *hw_fc_info;
 	bool tx_pause, rx_pause;
 	struct ice_aq_desc desc;
 	enum ice_status status;
+	struct ice_hw *hw;
 	u16 cmd_flags;
 
 	if (!pi)
 		return ICE_ERR_PARAM;
-	hw_link_info_old = &pi->phy.link_info_old;
+	hw = pi->hw;
+	li_old = &pi->phy.link_info_old;
 	hw_media_type = &pi->phy.media_type;
-	hw_link_info = &pi->phy.link_info;
+	li = &pi->phy.link_info;
 	hw_fc_info = &pi->fc;
 
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
@@ -293,27 +295,27 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
 	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
 	resp->lport_num = pi->lport;
 
-	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
-				 cd);
+	status = ice_aq_send_cmd(hw, &desc, &link_data, sizeof(link_data), cd);
 
 	if (status != ICE_SUCCESS)
 		return status;
 
 	/* save off old link status information */
-	*hw_link_info_old = *hw_link_info;
+	*li_old = *li;
 
 	/* update current link status information */
-	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
-	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
-	hw_link_info->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
+	li->link_speed = LE16_TO_CPU(link_data.link_speed);
+	li->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
+	li->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
 	*hw_media_type = ice_get_media_type(pi);
-	hw_link_info->link_info = link_data.link_info;
-	hw_link_info->an_info = link_data.an_info;
-	hw_link_info->ext_info = link_data.ext_info;
-	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
-	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
-	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
-	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
+	li->link_info = link_data.link_info;
+	li->an_info = link_data.an_info;
+	li->ext_info = link_data.ext_info;
+	li->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
+	li->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+	li->topo_media_conflict = link_data.topo_media_conflict;
+	li->pacing = link_data.cfg & (ICE_AQ_CFG_PACING_M |
+				      ICE_AQ_CFG_PACING_TYPE_M);
 
 	/* update fc info */
 	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
@@ -327,13 +329,24 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
 	else
 		hw_fc_info->current_mode = ICE_FC_NONE;
 
-	hw_link_info->lse_ena =
-		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
-
+	li->lse_ena = !!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
+
+	ice_debug(hw, ICE_DBG_LINK, "link_speed = 0x%x\n", li->link_speed);
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_low = 0x%llx\n",
+		  (unsigned long long)li->phy_type_low);
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_high = 0x%llx\n",
+		  (unsigned long long)li->phy_type_high);
+	ice_debug(hw, ICE_DBG_LINK, "media_type = 0x%x\n", *hw_media_type);
+	ice_debug(hw, ICE_DBG_LINK, "link_info = 0x%x\n", li->link_info);
+	ice_debug(hw, ICE_DBG_LINK, "an_info = 0x%x\n", li->an_info);
+	ice_debug(hw, ICE_DBG_LINK, "ext_info = 0x%x\n", li->ext_info);
+	ice_debug(hw, ICE_DBG_LINK, "lse_ena = 0x%x\n", li->lse_ena);
+	ice_debug(hw, ICE_DBG_LINK, "max_frame = 0x%x\n", li->max_frame_size);
+	ice_debug(hw, ICE_DBG_LINK, "pacing = 0x%x\n", li->pacing);
 
 	/* save link status information */
 	if (link)
-		*link = *hw_link_info;
+		*link = *li;
 
 	/* flag cleared so calling functions don't call AQ again */
 	pi->phy.get_link_info = false;
@@ -2412,7 +2425,7 @@ ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
 /**
  * ice_aq_set_phy_cfg
  * @hw: pointer to the HW struct
- * @lport: logical port number
+ * @pi: port info structure of the interested logical port
  * @cfg: structure with PHY configuration data to be set
  * @cd: pointer to command details structure or NULL
  *
@@ -2422,10 +2435,11 @@ ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
  * parameters. This status will be indicated by the command response (0x0601).
  */
 enum ice_status
-ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
 		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
 {
 	struct ice_aq_desc desc;
+	enum ice_status status;
 
 	if (!cfg)
 		return ICE_ERR_PARAM;
@@ -2440,10 +2454,26 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
 	}
 
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
-	desc.params.set_phy.lport_num = lport;
+	desc.params.set_phy.lport_num = pi->lport;
 	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
 
-	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_low = 0x%llx\n",
+		  (unsigned long long)LE64_TO_CPU(cfg->phy_type_low));
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_high = 0x%llx\n",
+		  (unsigned long long)LE64_TO_CPU(cfg->phy_type_high));
+	ice_debug(hw, ICE_DBG_LINK, "caps = 0x%x\n", cfg->caps);
+	ice_debug(hw, ICE_DBG_LINK, "low_power_ctrl = 0x%x\n",
+		  cfg->low_power_ctrl);
+	ice_debug(hw, ICE_DBG_LINK, "eee_cap = 0x%x\n", cfg->eee_cap);
+	ice_debug(hw, ICE_DBG_LINK, "eeer_value = 0x%x\n", cfg->eeer_value);
+	ice_debug(hw, ICE_DBG_LINK, "link_fec_opt = 0x%x\n", cfg->link_fec_opt);
+
+	status = ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+
+	if (!status)
+		pi->phy.curr_user_phy_cfg = *cfg;
+
+	return status;
 }
 
 /**
@@ -2487,6 +2517,38 @@ enum ice_status ice_update_link_info(struct ice_port_info *pi)
 	return status;
 }
 
+/**
+ * ice_cache_phy_user_req
+ * @pi: port information structure
+ * @cache_data: PHY logging data
+ * @cache_mode: PHY logging mode
+ *
+ * Log the user request on (FC, FEC, SPEED) for later use.
+ */
+static void
+ice_cache_phy_user_req(struct ice_port_info *pi,
+		       struct ice_phy_cache_mode_data cache_data,
+		       enum ice_phy_cache_mode cache_mode)
+{
+	if (!pi)
+		return;
+
+	switch (cache_mode) {
+	case ICE_FC_MODE:
+		pi->phy.curr_user_fc_req = cache_data.data.curr_user_fc_req;
+		break;
+	case ICE_SPEED_MODE:
+		pi->phy.curr_user_speed_req =
+			cache_data.data.curr_user_speed_req;
+		break;
+	case ICE_FEC_MODE:
+		pi->phy.curr_user_fec_req = cache_data.data.curr_user_fec_req;
+		break;
+	default:
+		break;
+	}
+}
+
 /**
  * ice_set_fc
  * @pi: port information structure
@@ -2499,6 +2561,7 @@ enum ice_status
 ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 {
 	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+	struct ice_phy_cache_mode_data cache_data;
 	struct ice_aqc_get_phy_caps_data *pcaps;
 	enum ice_status status;
 	u8 pause_mask = 0x0;
@@ -2509,6 +2572,10 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 	hw = pi->hw;
 	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
 
+	/* Cache user FC request */
+	cache_data.data.curr_user_fc_req = pi->fc.req_mode;
+	ice_cache_phy_user_req(pi, cache_data, ICE_FC_MODE);
+
 	switch (pi->fc.req_mode) {
 	case ICE_FC_FULL:
 		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
@@ -2540,8 +2607,10 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 	/* clear the old pause settings */
 	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
 				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
+
 	/* set the new capabilities */
 	cfg.caps |= pause_mask;
+
 	/* If the capabilities have changed, then set the new config */
 	if (cfg.caps != pcaps->caps) {
 		int retry_count, retry_max = 10;
@@ -2557,7 +2626,7 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 		cfg.eeer_value = pcaps->eeer_value;
 		cfg.link_fec_opt = pcaps->link_fec_options;
 
-		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+		status = ice_aq_set_phy_cfg(hw, pi, &cfg, NULL);
 		if (status) {
 			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
 			goto out;
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index aee754b85..cccb5f009 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -134,7 +134,7 @@ ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
 
 enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
 enum ice_status
-ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
 		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
 enum ice_status
 ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 45b0b3c05..5da267f1b 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -148,6 +148,12 @@ enum ice_fc_mode {
 	ICE_FC_DFLT
 };
 
+enum ice_phy_cache_mode {
+	ICE_FC_MODE = 0,
+	ICE_SPEED_MODE,
+	ICE_FEC_MODE
+};
+
 enum ice_fec_mode {
 	ICE_FEC_NONE = 0,
 	ICE_FEC_RS,
@@ -155,6 +161,14 @@ enum ice_fec_mode {
 	ICE_FEC_AUTO
 };
 
+struct ice_phy_cache_mode_data {
+	union {
+		enum ice_fec_mode curr_user_fec_req;
+		enum ice_fc_mode curr_user_fc_req;
+		u16 curr_user_speed_req;
+	} data;
+};
+
 enum ice_set_fc_aq_failures {
 	ICE_SET_FC_AQ_FAIL_NONE = 0,
 	ICE_SET_FC_AQ_FAIL_GET,
@@ -232,6 +246,13 @@ struct ice_phy_info {
 	u64 phy_type_high;
 	enum ice_media_type media_type;
 	u8 get_link_info;
+	/* Please refer to struct ice_aqc_get_link_status_data to get
+	 * detail of enable bit in curr_user_speed_req
+	 */
+	u16 curr_user_speed_req;
+	enum ice_fec_mode curr_user_fec_req;
+	enum ice_fc_mode curr_user_fc_req;
+	struct ice_aqc_set_phy_cfg_data curr_user_phy_cfg;
 };
 
 #define ICE_MAX_NUM_MIRROR_RULES	64
@@ -648,6 +669,8 @@ struct ice_port_info {
 	u8 port_state;
 #define ICE_SCHED_PORT_STATE_INIT	0x0
 #define ICE_SCHED_PORT_STATE_READY	0x1
+	u8 lport;
+#define ICE_LPORT_MASK			0xff
 	u16 dflt_tx_vsi_rule_id;
 	u16 dflt_tx_vsi_num;
 	u16 dflt_rx_vsi_rule_id;
@@ -663,11 +686,9 @@ struct ice_port_info {
 	struct ice_dcbx_cfg remote_dcbx_cfg;	/* Peer Cfg */
 	struct ice_dcbx_cfg desired_dcbx_cfg;	/* CEE Desired Cfg */
 	/* LLDP/DCBX Status */
-	u8 dcbx_status;
-	u8 is_sw_lldp;
-	u8 lport;
-#define ICE_LPORT_MASK		0xff
-	u8 is_vf;
+	u8 dcbx_status:3;		/* see ICE_DCBX_STATUS_DIS */
+	u8 is_sw_lldp:1;
+	u8 is_vf:1;
 };
 
 struct ice_switch_info {
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index bdbceb411..962d506a1 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2295,7 +2295,7 @@ ice_force_phys_link_state(struct ice_hw *hw, bool link_up)
 	else
 		cfg.caps &= ~ICE_AQ_PHY_ENA_LINK;
 
-	status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+	status = ice_aq_set_phy_cfg(hw, pi, &cfg, NULL);
 
 out:
 	ice_free(hw, pcaps);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 15/66] net/ice/base: refactor HW table init function
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (13 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 14/66] net/ice/base: cache the data of set PHY cfg AQ in SW Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 16/66] net/ice/base: add compatibility check for package version Leyi Rong
                     ` (51 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

1. Separate the calls that initialize and allocate the HW XLT tables
from the call that fills them. This allows ice_init_hw_tbls to be
called prior to package download so that all HW structures are
correctly initialized, avoiding invalid memory references if package
download fails or when unloading the driver.
2. Fill HW tables with package content after a successful package
download.
3. Free HW table and flow profile allocations when unloading the
driver.
4. Add a flag to the block structure that records whether the lists in
the block are initialized. This avoids NULL references when releasing
flow profiles that may have been freed by previous calls to free the
tables.
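The allocate-early / fill-late split with a guard flag can be sketched as below. This is a reduced model under assumed names (`struct blk`, `init_hw_tbls`, `fill_blk_tbls`, `free_hw_tbls`), not the driver's real table layout; it only shows the ordering and the role of the `is_list_init` flag.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical block table; the flag records whether allocation/init has
 * happened, so init is idempotent and free is safe on every error path.
 */
struct blk {
	int *tbl;
	bool is_list_init;
	bool filled;
};

/* Step 1: allocate and initialize structures *before* package download */
static int init_hw_tbls(struct blk *b, size_t count)
{
	if (b->is_list_init)	/* already initialized: nothing to do */
		return 0;
	b->tbl = calloc(count, sizeof(*b->tbl));
	if (!b->tbl)
		return -1;
	b->is_list_init = true;
	return 0;
}

/* Step 2: populate tables only after a *successful* package download */
static void fill_blk_tbls(struct blk *b)
{
	b->filled = true;
}

/* Teardown: guarded by the flag, so a failed download (tables allocated
 * but never filled) still unwinds cleanly, and double-free is a no-op.
 */
static void free_hw_tbls(struct blk *b)
{
	if (!b->is_list_init)
		return;
	free(b->tbl);
	b->tbl = NULL;
	b->is_list_init = false;
	b->filled = false;
}
```

Because `free_hw_tbls` checks the flag first, the driver can call it unconditionally on unload regardless of how far initialization got.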

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c    |   6 +-
 drivers/net/ice/base/ice_flex_pipe.c | 284 ++++++++++++++-------------
 drivers/net/ice/base/ice_flex_pipe.h |   1 +
 drivers/net/ice/base/ice_flex_type.h |   1 +
 4 files changed, 151 insertions(+), 141 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index a0ab25aef..62c7fad0d 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -916,12 +916,13 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 
 	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
 	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
-
 	/* Obtain counter base index which would be used by flow director */
 	status = ice_alloc_fd_res_cntr(hw, &hw->fd_ctr_base);
 	if (status)
 		goto err_unroll_fltr_mgmt_struct;
-
+	status = ice_init_hw_tbls(hw);
+	if (status)
+		goto err_unroll_fltr_mgmt_struct;
 	return ICE_SUCCESS;
 
 err_unroll_fltr_mgmt_struct:
@@ -952,6 +953,7 @@ void ice_deinit_hw(struct ice_hw *hw)
 	ice_sched_cleanup_all(hw);
 	ice_sched_clear_agg(hw);
 	ice_free_seg(hw);
+	ice_free_hw_tbls(hw);
 
 	if (hw->port_info) {
 		ice_free(hw, hw->port_info);
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 8f0b513f4..93e056853 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1375,10 +1375,12 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 
 	if (!status) {
 		hw->seg = seg;
-		/* on successful package download, update other required
-		 * registers to support the package
+		/* on successful package download update other required
+		 * registers to support the package and fill HW tables
+		 * with package content.
 		 */
 		ice_init_pkg_regs(hw);
+		ice_fill_blk_tbls(hw);
 	} else {
 		ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n",
 			  status);
@@ -2755,6 +2757,65 @@ static const u32 ice_blk_sids[ICE_BLK_COUNT][ICE_SID_OFF_COUNT] = {
 	}
 };
 
+/**
+ * ice_init_sw_xlt1_db - init software XLT1 database from HW tables
+ * @hw: pointer to the hardware structure
+ * @blk: the HW block to initialize
+ */
+static
+void ice_init_sw_xlt1_db(struct ice_hw *hw, enum ice_block blk)
+{
+	u16 pt;
+
+	for (pt = 0; pt < hw->blk[blk].xlt1.count; pt++) {
+		u8 ptg;
+
+		ptg = hw->blk[blk].xlt1.t[pt];
+		if (ptg != ICE_DEFAULT_PTG) {
+			ice_ptg_alloc_val(hw, blk, ptg);
+			ice_ptg_add_mv_ptype(hw, blk, pt, ptg);
+		}
+	}
+}
+
+/**
+ * ice_init_sw_xlt2_db - init software XLT2 database from HW tables
+ * @hw: pointer to the hardware structure
+ * @blk: the HW block to initialize
+ */
+static void ice_init_sw_xlt2_db(struct ice_hw *hw, enum ice_block blk)
+{
+	u16 vsi;
+
+	for (vsi = 0; vsi < hw->blk[blk].xlt2.count; vsi++) {
+		u16 vsig;
+
+		vsig = hw->blk[blk].xlt2.t[vsi];
+		if (vsig) {
+			ice_vsig_alloc_val(hw, blk, vsig);
+			ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
+			/* no changes at this time, since this has been
+			 * initialized from the original package
+			 */
+			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
+		}
+	}
+}
+
+/**
+ * ice_init_sw_db - init software database from HW tables
+ * @hw: pointer to the hardware structure
+ */
+static void ice_init_sw_db(struct ice_hw *hw)
+{
+	u16 i;
+
+	for (i = 0; i < ICE_BLK_COUNT; i++) {
+		ice_init_sw_xlt1_db(hw, (enum ice_block)i);
+		ice_init_sw_xlt2_db(hw, (enum ice_block)i);
+	}
+}
+
 /**
  * ice_fill_tbl - Reads content of a single table type into database
  * @hw: pointer to the hardware structure
@@ -2853,12 +2914,12 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
 		case ICE_SID_FLD_VEC_PE:
 			es = (struct ice_sw_fv_section *)sect;
 			src = (u8 *)es->fv;
-			sect_len = LE16_TO_CPU(es->count) *
-				hw->blk[block_id].es.fvw *
+			sect_len = (u32)(LE16_TO_CPU(es->count) *
+					 hw->blk[block_id].es.fvw) *
 				sizeof(*hw->blk[block_id].es.t);
 			dst = (u8 *)hw->blk[block_id].es.t;
-			dst_len = hw->blk[block_id].es.count *
-				hw->blk[block_id].es.fvw *
+			dst_len = (u32)(hw->blk[block_id].es.count *
+					hw->blk[block_id].es.fvw) *
 				sizeof(*hw->blk[block_id].es.t);
 			break;
 		default:
@@ -2886,75 +2947,61 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
 }
 
 /**
- * ice_fill_blk_tbls - Read package content for tables of a block
+ * ice_fill_blk_tbls - Read package context for tables
  * @hw: pointer to the hardware structure
- * @block_id: The block ID which contains the tables to be copied
  *
  * Reads the current package contents and populates the driver
- * database with the data it contains to allow for advanced driver
- * features.
- */
-static void ice_fill_blk_tbls(struct ice_hw *hw, enum ice_block block_id)
-{
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].xlt1.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].xlt2.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].prof.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].prof_redir.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].es.sid);
-}
-
-/**
- * ice_free_flow_profs - free flow profile entries
- * @hw: pointer to the hardware structure
+ * database with the data iteratively for all advanced feature
+ * blocks. Assume that the HW tables have been allocated.
  */
-static void ice_free_flow_profs(struct ice_hw *hw)
+void ice_fill_blk_tbls(struct ice_hw *hw)
 {
 	u8 i;
 
 	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		struct ice_flow_prof *p, *tmp;
-
-		if (!&hw->fl_profs[i])
-			continue;
-
-		/* This call is being made as part of resource deallocation
-		 * during unload. Lock acquire and release will not be
-		 * necessary here.
-		 */
-		LIST_FOR_EACH_ENTRY_SAFE(p, tmp, &hw->fl_profs[i],
-					 ice_flow_prof, l_entry) {
-			struct ice_flow_entry *e, *t;
-
-			LIST_FOR_EACH_ENTRY_SAFE(e, t, &p->entries,
-						 ice_flow_entry, l_entry)
-				ice_flow_rem_entry(hw, ICE_FLOW_ENTRY_HNDL(e));
-
-			LIST_DEL(&p->l_entry);
-			if (p->acts)
-				ice_free(hw, p->acts);
-			ice_free(hw, p);
-		}
+		enum ice_block blk_id = (enum ice_block)i;
 
-		ice_destroy_lock(&hw->fl_profs_locks[i]);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].xlt1.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].xlt2.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].prof.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].prof_redir.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].es.sid);
 	}
+
+	ice_init_sw_db(hw);
 }
 
 /**
- * ice_free_prof_map - frees the profile map
+ * ice_free_flow_profs - free flow profile entries
  * @hw: pointer to the hardware structure
- * @blk: the HW block which contains the profile map to be freed
+ * @blk_idx: HW block index
  */
-static void ice_free_prof_map(struct ice_hw *hw, enum ice_block blk)
+static void ice_free_flow_profs(struct ice_hw *hw, u8 blk_idx)
 {
-	struct ice_prof_map *del, *tmp;
+	struct ice_flow_prof *p, *tmp;
 
-	if (LIST_EMPTY(&hw->blk[blk].es.prof_map))
-		return;
+	/* This call is being made as part of resource deallocation
+	 * during unload. Lock acquire and release will not be
+	 * necessary here.
+	 */
+	LIST_FOR_EACH_ENTRY_SAFE(p, tmp, &hw->fl_profs[blk_idx],
+				 ice_flow_prof, l_entry) {
+		struct ice_flow_entry *e, *t;
+
+		LIST_FOR_EACH_ENTRY_SAFE(e, t, &p->entries,
+					 ice_flow_entry, l_entry)
+			ice_flow_rem_entry(hw, ICE_FLOW_ENTRY_HNDL(e));
 
-	LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &hw->blk[blk].es.prof_map,
-				 ice_prof_map, list) {
-		ice_rem_prof(hw, blk, del->profile_cookie);
+		LIST_DEL(&p->l_entry);
+		if (p->acts)
+			ice_free(hw, p->acts);
+		ice_free(hw, p);
 	}
+
+	/* if driver is in reset and tables are being cleared
+	 * re-initialize the flow profile list heads
+	 */
+	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
 /**
@@ -2980,10 +3027,25 @@ static void ice_free_vsig_tbl(struct ice_hw *hw, enum ice_block blk)
  */
 void ice_free_hw_tbls(struct ice_hw *hw)
 {
+	struct ice_rss_cfg *r, *rt;
 	u8 i;
 
 	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		ice_free_prof_map(hw, (enum ice_block)i);
+		if (hw->blk[i].is_list_init) {
+			struct ice_es *es = &hw->blk[i].es;
+			struct ice_prof_map *del, *tmp;
+
+			LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &es->prof_map,
+						 ice_prof_map, list) {
+				LIST_DEL(&del->list);
+				ice_free(hw, del);
+			}
+			ice_destroy_lock(&es->prof_map_lock);
+
+			ice_free_flow_profs(hw, i);
+			ice_destroy_lock(&hw->fl_profs_locks[i]);
+			hw->blk[i].is_list_init = false;
+		}
 		ice_free_vsig_tbl(hw, (enum ice_block)i);
 		ice_free(hw, hw->blk[i].xlt1.ptypes);
 		ice_free(hw, hw->blk[i].xlt1.ptg_tbl);
@@ -2998,84 +3060,24 @@ void ice_free_hw_tbls(struct ice_hw *hw)
 		ice_free(hw, hw->blk[i].es.written);
 	}
 
+	LIST_FOR_EACH_ENTRY_SAFE(r, rt, &hw->rss_list_head,
+				 ice_rss_cfg, l_entry) {
+		LIST_DEL(&r->l_entry);
+		ice_free(hw, r);
+	}
+	ice_destroy_lock(&hw->rss_locks);
 	ice_memset(hw->blk, 0, sizeof(hw->blk), ICE_NONDMA_MEM);
-
-	ice_free_flow_profs(hw);
 }
 
 /**
  * ice_init_flow_profs - init flow profile locks and list heads
  * @hw: pointer to the hardware structure
+ * @blk_idx: HW block index
  */
-static void ice_init_flow_profs(struct ice_hw *hw)
+static void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx)
 {
-	u8 i;
-
-	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		ice_init_lock(&hw->fl_profs_locks[i]);
-		INIT_LIST_HEAD(&hw->fl_profs[i]);
-	}
-}
-
-/**
- * ice_init_sw_xlt1_db - init software XLT1 database from HW tables
- * @hw: pointer to the hardware structure
- * @blk: the HW block to initialize
- */
-static
-void ice_init_sw_xlt1_db(struct ice_hw *hw, enum ice_block blk)
-{
-	u16 pt;
-
-	for (pt = 0; pt < hw->blk[blk].xlt1.count; pt++) {
-		u8 ptg;
-
-		ptg = hw->blk[blk].xlt1.t[pt];
-		if (ptg != ICE_DEFAULT_PTG) {
-			ice_ptg_alloc_val(hw, blk, ptg);
-			ice_ptg_add_mv_ptype(hw, blk, pt, ptg);
-		}
-	}
-}
-
-/**
- * ice_init_sw_xlt2_db - init software XLT2 database from HW tables
- * @hw: pointer to the hardware structure
- * @blk: the HW block to initialize
- */
-static
-void ice_init_sw_xlt2_db(struct ice_hw *hw, enum ice_block blk)
-{
-	u16 vsi;
-
-	for (vsi = 0; vsi < hw->blk[blk].xlt2.count; vsi++) {
-		u16 vsig;
-
-		vsig = hw->blk[blk].xlt2.t[vsi];
-		if (vsig) {
-			ice_vsig_alloc_val(hw, blk, vsig);
-			ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
-			/* no changes at this time, since this has been
-			 * initialized from the original package
-			 */
-			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
-		}
-	}
-}
-
-/**
- * ice_init_sw_db - init software database from HW tables
- * @hw: pointer to the hardware structure
- */
-static
-void ice_init_sw_db(struct ice_hw *hw)
-{
-	u16 i;
-
-	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		ice_init_sw_xlt1_db(hw, (enum ice_block)i);
-		ice_init_sw_xlt2_db(hw, (enum ice_block)i);
-	}
+	ice_init_lock(&hw->fl_profs_locks[blk_idx]);
+	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
 /**
@@ -3086,14 +3088,23 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 {
 	u8 i;
 
-	ice_init_flow_profs(hw);
-
+	ice_init_lock(&hw->rss_locks);
+	INIT_LIST_HEAD(&hw->rss_list_head);
 	for (i = 0; i < ICE_BLK_COUNT; i++) {
 		struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
 		struct ice_prof_tcam *prof = &hw->blk[i].prof;
 		struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
 		struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
 		struct ice_es *es = &hw->blk[i].es;
+		u16 j;
+
+		if (hw->blk[i].is_list_init)
+			continue;
+
+		ice_init_flow_profs(hw, i);
+		ice_init_lock(&es->prof_map_lock);
+		INIT_LIST_HEAD(&es->prof_map);
+		hw->blk[i].is_list_init = true;
 
 		hw->blk[i].overwrite = blk_sizes[i].overwrite;
 		es->reverse = blk_sizes[i].reverse;
@@ -3131,6 +3142,9 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 		if (!xlt2->vsig_tbl)
 			goto err;
 
+		for (j = 0; j < xlt2->count; j++)
+			INIT_LIST_HEAD(&xlt2->vsig_tbl[j].prop_lst);
+
 		xlt2->t = (u16 *)ice_calloc(hw, xlt2->count, sizeof(*xlt2->t));
 		if (!xlt2->t)
 			goto err;
@@ -3157,8 +3171,8 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 		es->count = blk_sizes[i].es;
 		es->fvw = blk_sizes[i].fvw;
 		es->t = (struct ice_fv_word *)
-			ice_calloc(hw, es->count * es->fvw, sizeof(*es->t));
-
+			ice_calloc(hw, (u32)(es->count * es->fvw),
+				   sizeof(*es->t));
 		if (!es->t)
 			goto err;
 
@@ -3170,15 +3184,7 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 
 		if (!es->ref_count)
 			goto err;
-
-		INIT_LIST_HEAD(&es->prof_map);
-
-		/* Now that tables are allocated, read in package data */
-		ice_fill_blk_tbls(hw, (enum ice_block)i);
 	}
-
-	ice_init_sw_db(hw);
-
 	return ICE_SUCCESS;
 
 err:
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 2710dded6..e8cc9cef3 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -98,6 +98,7 @@ enum ice_status
 ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
 enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
 void ice_free_seg(struct ice_hw *hw);
+void ice_fill_blk_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
 enum ice_status
 ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index 892c94b1f..7133983ff 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -676,6 +676,7 @@ struct ice_blk_info {
 	struct ice_prof_redir prof_redir;
 	struct ice_es es;
 	u8 overwrite; /* set to true to allow overwrite of table entries */
+	u8 is_list_init;
 };
 
 enum ice_chg_type {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 16/66] net/ice/base: add compatibility check for package version
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (14 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 15/66] net/ice/base: refactor HW table init function Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 17/66] net/ice/base: add API to init FW logging Leyi Rong
                     ` (50 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

1. Perform a check against the package version to make sure that
it will be compatible with the shared code implementation. There
will be points in time when the shared code and package will need
to be changed in lock step; the mechanism added here is meant to
deal with those situations.
2. Support package tunnel labels owned by PF. VXLAN and GENEVE
tunnel label names in the package are changing to incorporate
the PF that owns them.
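The two checks this patch introduces can be sketched together as below. The supported version numbers and function names here are placeholders for illustration (the real values come from ICE_PKG_SUPP_VER_MAJ/MNR in the driver); the label matcher mirrors the trick of reading the PF digit at the position of the prefix's null terminator.

```c
#include <assert.h>
#include <string.h>

#define PKG_SUPP_VER_MAJ 1	/* assumed values, for illustration only */
#define PKG_SUPP_VER_MNR 2

struct pkg_ver {
	int major, minor, update, draft;
};

/* Compatible when major and minor match the supported values; the update
 * and draft components may differ, as in ice_chk_pkg_version(). */
static int chk_pkg_version(const struct pkg_ver *v)
{
	return (v->major == PKG_SUPP_VER_MAJ &&
		v->minor == PKG_SUPP_VER_MNR) ? 0 : -1;
}

/* Match a label such as "TNL_VXLAN_PF0" against a prefix and a PF id.
 * The PF character ('0'-'7') sits exactly where the prefix string's
 * null terminator is located, so label[len] - '0' yields the owner PF. */
static int label_matches_pf(const char *label, const char *prefix, int pf_id)
{
	size_t len = strlen(prefix);

	if (strncmp(label, prefix, len))
		return 0;		/* prefix does not match */
	return (label[len] - '0') == pf_id;
}
```

With per-PF suffixes, each PF claims only its own VXLAN/GENEVE tunnel labels instead of all label entries with a matching prefix.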

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 96 ++++++++++++++++++++++------
 drivers/net/ice/base/ice_flex_pipe.h |  8 +++
 drivers/net/ice/base/ice_flex_type.h | 10 ---
 3 files changed, 85 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 93e056853..5faee6d52 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -7,19 +7,12 @@
 #include "ice_protocol_type.h"
 #include "ice_flow.h"
 
+/* To support tunneling entries by PF, the package will append the PF number to
+ * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc.
+ */
 static const struct ice_tunnel_type_scan tnls[] = {
-	{ TNL_VXLAN,		"TNL_VXLAN" },
-	{ TNL_GTPC,		"TNL_GTPC" },
-	{ TNL_GTPC_TEID,	"TNL_GTPC_TEID" },
-	{ TNL_GTPU,		"TNL_GTPC" },
-	{ TNL_GTPU_TEID,	"TNL_GTPU_TEID" },
-	{ TNL_VXLAN_GPE,	"TNL_VXLAN_GPE" },
-	{ TNL_GENEVE,		"TNL_GENEVE" },
-	{ TNL_NAT,		"TNL_NAT" },
-	{ TNL_ROCE_V2,		"TNL_ROCE_V2" },
-	{ TNL_MPLSO_UDP,	"TNL_MPLSO_UDP" },
-	{ TNL_UDP2_END,		"TNL_UDP2_END" },
-	{ TNL_UPD_END,		"TNL_UPD_END" },
+	{ TNL_VXLAN,		"TNL_VXLAN_PF" },
+	{ TNL_GENEVE,		"TNL_GENEVE_PF" },
 	{ TNL_LAST,		"" }
 };
 
@@ -485,8 +478,17 @@ void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
 
 	while (label_name && hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) {
 		for (i = 0; tnls[i].type != TNL_LAST; i++) {
-			if (!strncmp(label_name, tnls[i].label_prefix,
-				     strlen(tnls[i].label_prefix))) {
+			size_t len = strlen(tnls[i].label_prefix);
+
+			/* Look for matching label start, before continuing */
+			if (strncmp(label_name, tnls[i].label_prefix, len))
+				continue;
+
+			/* Make sure this label matches our PF. Note that the PF
+			 * character ('0' - '7') will be located where our
+			 * prefix string's null terminator is located.
+			 */
+			if ((label_name[len] - '0') == hw->pf_id) {
 				hw->tnl.tbl[hw->tnl.count].type = tnls[i].type;
 				hw->tnl.tbl[hw->tnl.count].valid = false;
 				hw->tnl.tbl[hw->tnl.count].in_use = false;
@@ -1083,12 +1085,8 @@ enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 {
-	struct ice_aqc_get_pkg_info_resp *pkg_info;
 	struct ice_global_metadata_seg *meta_seg;
 	struct ice_generic_seg_hdr *seg_hdr;
-	enum ice_status status;
-	u16 size;
-	u32 i;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 	if (!pkg_hdr)
@@ -1127,7 +1125,25 @@ ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 		return ICE_ERR_CFG;
 	}
 
-#define ICE_PKG_CNT	4
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_get_pkg_info
+ * @hw: pointer to the hardware structure
+ *
+ * Store details of the package currently loaded in HW into the HW structure.
+ */
+enum ice_status
+ice_get_pkg_info(struct ice_hw *hw)
+{
+	struct ice_aqc_get_pkg_info_resp *pkg_info;
+	enum ice_status status;
+	u16 size;
+	u32 i;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_pkg_info\n");
+
 	size = sizeof(*pkg_info) + (sizeof(pkg_info->pkg_info[0]) *
 				    (ICE_PKG_CNT - 1));
 	pkg_info = (struct ice_aqc_get_pkg_info_resp *)ice_malloc(hw, size);
@@ -1310,6 +1326,32 @@ static void ice_init_pkg_regs(struct ice_hw *hw)
 	ice_init_fd_mask_regs(hw);
 }
 
+/**
+ * ice_chk_pkg_version - check package version for compatibility with driver
+ * @hw: pointer to the hardware structure
+ * @pkg_ver: pointer to a version structure to check
+ *
+ * Check to make sure that the package about to be downloaded is compatible with
+ * the driver. To be compatible, the major and minor components of the package
+ * version must match our ICE_PKG_SUPP_VER_MAJ and ICE_PKG_SUPP_VER_MNR
+ * definitions.
+ */
+static enum ice_status
+ice_chk_pkg_version(struct ice_hw *hw, struct ice_pkg_ver *pkg_ver)
+{
+	if (pkg_ver->major != ICE_PKG_SUPP_VER_MAJ ||
+	    pkg_ver->minor != ICE_PKG_SUPP_VER_MNR) {
+		ice_info(hw, "ERROR: Incompatible package: %d.%d.%d.%d - requires package version: %d.%d.*.*\n",
+			 pkg_ver->major, pkg_ver->minor, pkg_ver->update,
+			 pkg_ver->draft, ICE_PKG_SUPP_VER_MAJ,
+			 ICE_PKG_SUPP_VER_MNR);
+
+		return ICE_ERR_NOT_SUPPORTED;
+	}
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_init_pkg - initialize/download package
  * @hw: pointer to the hardware structure
@@ -1357,6 +1399,13 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 	if (status)
 		return status;
 
+	/* before downloading the package, check package version for
+	 * compatibility with driver
+	 */
+	status = ice_chk_pkg_version(hw, &hw->pkg_ver);
+	if (status)
+		return status;
+
 	/* find segment in given package */
 	seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg);
 	if (!seg) {
@@ -1373,6 +1422,15 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 		status = ICE_SUCCESS;
 	}
 
+	/* Get information on the package currently loaded in HW, then make sure
+	 * the driver is compatible with this version.
+	 */
+	if (!status) {
+		status = ice_get_pkg_info(hw);
+		if (!status)
+			status = ice_chk_pkg_version(hw, &hw->active_pkg_ver);
+	}
+
 	if (!status) {
 		hw->seg = seg;
 		/* on successful package download update other required
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index e8cc9cef3..375758c8d 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -7,12 +7,18 @@
 
 #include "ice_type.h"
 
+/* Minimum package version supported */
+#define ICE_PKG_SUPP_VER_MAJ	1
+#define ICE_PKG_SUPP_VER_MNR	2
+
 /* Package format version */
 #define ICE_PKG_FMT_VER_MAJ	1
 #define ICE_PKG_FMT_VER_MNR	0
 #define ICE_PKG_FMT_VER_UPD	0
 #define ICE_PKG_FMT_VER_DFT	0
 
+#define ICE_PKG_CNT 4
+
 enum ice_status
 ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count);
 enum ice_status
@@ -28,6 +34,8 @@ enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg);
 
 enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_header);
+enum ice_status
+ice_get_pkg_info(struct ice_hw *hw);
 
 void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg);
 
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index 7133983ff..d23b2ae82 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -455,17 +455,7 @@ struct ice_pkg_enum {
 
 enum ice_tunnel_type {
 	TNL_VXLAN = 0,
-	TNL_GTPC,
-	TNL_GTPC_TEID,
-	TNL_GTPU,
-	TNL_GTPU_TEID,
-	TNL_VXLAN_GPE,
 	TNL_GENEVE,
-	TNL_NAT,
-	TNL_ROCE_V2,
-	TNL_MPLSO_UDP,
-	TNL_UDP2_END,
-	TNL_UPD_END,
 	TNL_LAST = 0xFF,
 	TNL_ALL = 0xFF,
 };
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 17/66] net/ice/base: add API to init FW logging
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (15 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 16/66] net/ice/base: add compatibility check for package version Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 16:23     ` Stillwell Jr, Paul M
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 18/66] net/ice/base: use macro instead of magic 8 Leyi Rong
                     ` (49 subsequent siblings)
  66 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add the ice_get_fw_log_cfg API to initialize the current status of
FW logging. The function retrieves the current FW logging
configuration from the HW and updates the ice_hw structure
accordingly.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h |  1 +
 drivers/net/ice/base/ice_common.c     | 48 +++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 7b0aa8aaa..739f79e88 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -2196,6 +2196,7 @@ enum ice_aqc_fw_logging_mod {
 	ICE_AQC_FW_LOG_ID_WATCHDOG,
 	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
 	ICE_AQC_FW_LOG_ID_MNG,
+	ICE_AQC_FW_LOG_ID_SYNCE,
 	ICE_AQC_FW_LOG_ID_MAX,
 };
 
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 62c7fad0d..7093ee4f4 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -582,6 +582,49 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
 #define ICE_FW_LOG_DESC_SIZE_MAX	\
 	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
 
+/**
+ * ice_get_fw_log_cfg - get FW logging configuration
+ * @hw: pointer to the HW struct
+ */
+static enum ice_status ice_get_fw_log_cfg(struct ice_hw *hw)
+{
+	struct ice_aqc_fw_logging_data *config;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 size;
+
+	size = ICE_FW_LOG_DESC_SIZE_MAX;
+	config = (struct ice_aqc_fw_logging_data *)ice_malloc(hw, size);
+	if (!config)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging_info);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, config, size, NULL);
+	if (!status) {
+		u16 i;
+
+		/* Save fw logging information into the HW structure */
+		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+			u16 v, m, flgs;
+
+			v = LE16_TO_CPU(config->entry[i]);
+			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+			flgs = (v & ICE_AQC_FW_LOG_EN_M) >> ICE_AQC_FW_LOG_EN_S;
+
+			if (m < ICE_AQC_FW_LOG_ID_MAX)
+				hw->fw_log.evnts[m].cur = flgs;
+		}
+	}
+
+	ice_free(hw, config);
+
+	return status;
+}
+
 /**
  * ice_cfg_fw_log - configure FW logging
  * @hw: pointer to the HW struct
@@ -636,6 +679,11 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
 	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
 		return ICE_SUCCESS;
 
+	/* Get current FW log settings */
+	status = ice_get_fw_log_cfg(hw);
+	if (status)
+		return status;
+
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
 	cmd = &desc.params.fw_logging;
 
-- 
2.17.1



* [dpdk-dev] [PATCH v2 18/66] net/ice/base: use macro instead of magic 8
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (16 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 17/66] net/ice/base: add API to init FW logging Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 19/66] net/ice/base: move and redefine ice debug cq API Leyi Rong
                     ` (48 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Paul M Stillwell Jr

Replace the magic number 8 with the BITS_PER_BYTE macro when
converting a number of bytes to a number of bits.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c |  4 +-
 drivers/net/ice/base/ice_flow.c      | 74 +++++++++++++++-------------
 2 files changed, 43 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 5faee6d52..b569b91a7 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -3862,7 +3862,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 
 			idx = (j * 4) + k;
 			if (used[idx])
-				raw_entry |= used[idx] << (k * 8);
+				raw_entry |= used[idx] << (k * BITS_PER_BYTE);
 		}
 
 		/* write the appropriate register set, based on HW block */
@@ -3955,7 +3955,7 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 				u16 ptype;
 				u8 m;
 
-				ptype = byte * 8 + bit;
+				ptype = byte * BITS_PER_BYTE + bit;
 				if (ptype < ICE_FLOW_PTYPE_MAX) {
 					prof->ptype[prof->ptype_count] = ptype;
 
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index dccd7d3c7..9f2a794bc 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -26,8 +26,8 @@
  * protocol headers. Displacement values are expressed in number of bits.
  */
 #define ICE_FLOW_FLD_IPV6_TTL_DSCP_DISP	(-4)
-#define ICE_FLOW_FLD_IPV6_TTL_PROT_DISP	((-2) * 8)
-#define ICE_FLOW_FLD_IPV6_TTL_TTL_DISP	((-1) * 8)
+#define ICE_FLOW_FLD_IPV6_TTL_PROT_DISP	((-2) * BITS_PER_BYTE)
+#define ICE_FLOW_FLD_IPV6_TTL_TTL_DISP	((-1) * BITS_PER_BYTE)
 
 /* Describe properties of a protocol header field */
 struct ice_flow_field_info {
@@ -36,70 +36,76 @@ struct ice_flow_field_info {
 	u16 size;	/* Size of fields in bits */
 };
 
+#define ICE_FLOW_FLD_INFO(_hdr, _offset_bytes, _size_bytes) { \
+	.hdr = _hdr, \
+	.off = _offset_bytes * BITS_PER_BYTE, \
+	.size = _size_bytes * BITS_PER_BYTE, \
+}
+
 /* Table containing properties of supported protocol header fields */
 static const
 struct ice_flow_field_info ice_flds_info[ICE_FLOW_FIELD_IDX_MAX] = {
 	/* Ether */
 	/* ICE_FLOW_FIELD_IDX_ETH_DA */
-	{ ICE_FLOW_SEG_HDR_ETH, 0, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ETH, 0, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_ETH_SA */
-	{ ICE_FLOW_SEG_HDR_ETH, ETH_ALEN * 8, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ETH, ETH_ALEN, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_S_VLAN */
-	{ ICE_FLOW_SEG_HDR_VLAN, 12 * 8, ICE_FLOW_FLD_SZ_VLAN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_VLAN, 12, ICE_FLOW_FLD_SZ_VLAN),
 	/* ICE_FLOW_FIELD_IDX_C_VLAN */
-	{ ICE_FLOW_SEG_HDR_VLAN, 14 * 8, ICE_FLOW_FLD_SZ_VLAN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_VLAN, 14, ICE_FLOW_FLD_SZ_VLAN),
 	/* ICE_FLOW_FIELD_IDX_ETH_TYPE */
-	{ ICE_FLOW_SEG_HDR_ETH, 12 * 8, ICE_FLOW_FLD_SZ_ETH_TYPE * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ETH, 12, ICE_FLOW_FLD_SZ_ETH_TYPE),
 	/* IPv4 */
 	/* ICE_FLOW_FIELD_IDX_IP_DSCP */
-	{ ICE_FLOW_SEG_HDR_IPV4, 1 * 8, 1 * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV4, 1, 1),
 	/* ICE_FLOW_FIELD_IDX_IP_TTL */
-	{ ICE_FLOW_SEG_HDR_NONE, 8 * 8, 1 * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_NONE, 8, 1),
 	/* ICE_FLOW_FIELD_IDX_IP_PROT */
-	{ ICE_FLOW_SEG_HDR_NONE, 9 * 8, ICE_FLOW_FLD_SZ_IP_PROT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_NONE, 9, ICE_FLOW_FLD_SZ_IP_PROT),
 	/* ICE_FLOW_FIELD_IDX_IPV4_SA */
-	{ ICE_FLOW_SEG_HDR_IPV4, 12 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV4, 12, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* ICE_FLOW_FIELD_IDX_IPV4_DA */
-	{ ICE_FLOW_SEG_HDR_IPV4, 16 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV4, 16, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* IPv6 */
 	/* ICE_FLOW_FIELD_IDX_IPV6_SA */
-	{ ICE_FLOW_SEG_HDR_IPV6, 8 * 8, ICE_FLOW_FLD_SZ_IPV6_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV6, 8, ICE_FLOW_FLD_SZ_IPV6_ADDR),
 	/* ICE_FLOW_FIELD_IDX_IPV6_DA */
-	{ ICE_FLOW_SEG_HDR_IPV6, 24 * 8, ICE_FLOW_FLD_SZ_IPV6_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV6, 24, ICE_FLOW_FLD_SZ_IPV6_ADDR),
 	/* Transport */
 	/* ICE_FLOW_FIELD_IDX_TCP_SRC_PORT */
-	{ ICE_FLOW_SEG_HDR_TCP, 0 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_TCP, 0, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_TCP_DST_PORT */
-	{ ICE_FLOW_SEG_HDR_TCP, 2 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_TCP, 2, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_UDP_SRC_PORT */
-	{ ICE_FLOW_SEG_HDR_UDP, 0 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_UDP, 0, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_UDP_DST_PORT */
-	{ ICE_FLOW_SEG_HDR_UDP, 2 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_UDP, 2, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT */
-	{ ICE_FLOW_SEG_HDR_SCTP, 0 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_SCTP, 0, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_SCTP_DST_PORT */
-	{ ICE_FLOW_SEG_HDR_SCTP, 2 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_SCTP, 2, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_TCP_FLAGS */
-	{ ICE_FLOW_SEG_HDR_TCP, 13 * 8, ICE_FLOW_FLD_SZ_TCP_FLAGS * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_TCP, 13, ICE_FLOW_FLD_SZ_TCP_FLAGS),
 	/* ARP */
 	/* ICE_FLOW_FIELD_IDX_ARP_SIP */
-	{ ICE_FLOW_SEG_HDR_ARP, 14 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 14, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* ICE_FLOW_FIELD_IDX_ARP_DIP */
-	{ ICE_FLOW_SEG_HDR_ARP, 24 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 24, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* ICE_FLOW_FIELD_IDX_ARP_SHA */
-	{ ICE_FLOW_SEG_HDR_ARP, 8 * 8, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 8, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_ARP_DHA */
-	{ ICE_FLOW_SEG_HDR_ARP, 18 * 8, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 18, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_ARP_OP */
-	{ ICE_FLOW_SEG_HDR_ARP, 6 * 8, ICE_FLOW_FLD_SZ_ARP_OPER * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 6, ICE_FLOW_FLD_SZ_ARP_OPER),
 	/* ICMP */
 	/* ICE_FLOW_FIELD_IDX_ICMP_TYPE */
-	{ ICE_FLOW_SEG_HDR_ICMP, 0 * 8, ICE_FLOW_FLD_SZ_ICMP_TYPE * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ICMP, 0, ICE_FLOW_FLD_SZ_ICMP_TYPE),
 	/* ICE_FLOW_FIELD_IDX_ICMP_CODE */
-	{ ICE_FLOW_SEG_HDR_ICMP, 1 * 8, ICE_FLOW_FLD_SZ_ICMP_CODE * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ICMP, 1, ICE_FLOW_FLD_SZ_ICMP_CODE),
 	/* GRE */
 	/* ICE_FLOW_FIELD_IDX_GRE_KEYID */
-	{ ICE_FLOW_SEG_HDR_GRE, 12 * 8, ICE_FLOW_FLD_SZ_GRE_KEYID * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_GRE, 12, ICE_FLOW_FLD_SZ_GRE_KEYID),
 };
 
 /* Bitmaps indicating relevant packet types for a particular protocol header
@@ -644,7 +650,7 @@ ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
 	/* Each extraction sequence entry is a word in size, and extracts a
 	 * word-aligned offset from a protocol header.
 	 */
-	ese_bits = ICE_FLOW_FV_EXTRACT_SZ * 8;
+	ese_bits = ICE_FLOW_FV_EXTRACT_SZ * BITS_PER_BYTE;
 
 	flds[fld].xtrct.prot_id = prot_id;
 	flds[fld].xtrct.off = (ice_flds_info[fld].off / ese_bits) *
@@ -737,15 +743,17 @@ ice_flow_xtract_raws(struct ice_hw *hw, struct ice_flow_prof_params *params,
 		raw->info.xtrct.prot_id = ICE_PROT_PAY;
 		raw->info.xtrct.off = (off / ICE_FLOW_FV_EXTRACT_SZ) *
 			ICE_FLOW_FV_EXTRACT_SZ;
-		raw->info.xtrct.disp = (off % ICE_FLOW_FV_EXTRACT_SZ) * 8;
+		raw->info.xtrct.disp = (off % ICE_FLOW_FV_EXTRACT_SZ) *
+			BITS_PER_BYTE;
 		raw->info.xtrct.idx = params->es_cnt;
 
 		/* Determine the number of field vector entries this raw field
 		 * consumes.
 		 */
 		cnt = DIVIDE_AND_ROUND_UP(raw->info.xtrct.disp +
-					  (raw->info.src.last * 8),
-					  ICE_FLOW_FV_EXTRACT_SZ * 8);
+					  (raw->info.src.last * BITS_PER_BYTE),
+					  (ICE_FLOW_FV_EXTRACT_SZ *
+					   BITS_PER_BYTE));
 		off = raw->info.xtrct.off;
 		for (j = 0; j < cnt; j++) {
 			/* Make sure the number of extraction sequence required
-- 
2.17.1



* [dpdk-dev] [PATCH v2 19/66] net/ice/base: move and redefine ice debug cq API
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (17 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 18/66] net/ice/base: use macro instead of magic 8 Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 20/66] net/ice/base: separate out control queue lock creation Leyi Rong
                     ` (47 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

The ice_debug_cq function is called only from ice_controlq.c. Move it
into that file and mark it static to avoid namespace pollution.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c   | 47 -------------------------
 drivers/net/ice/base/ice_common.h   |  2 --
 drivers/net/ice/base/ice_controlq.c | 54 +++++++++++++++++++++++++++--
 3 files changed, 51 insertions(+), 52 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 7093ee4f4..c1af24322 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1474,53 +1474,6 @@ ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
 }
 #endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
 
-/**
- * ice_debug_cq
- * @hw: pointer to the hardware structure
- * @mask: debug mask
- * @desc: pointer to control queue descriptor
- * @buf: pointer to command buffer
- * @buf_len: max length of buf
- *
- * Dumps debug log about control command with descriptor contents.
- */
-void
-ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len)
-{
-	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
-	u16 len;
-
-	if (!(mask & hw->debug_mask))
-		return;
-
-	if (!desc)
-		return;
-
-	len = LE16_TO_CPU(cq_desc->datalen);
-
-	ice_debug(hw, mask,
-		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
-		  LE16_TO_CPU(cq_desc->opcode),
-		  LE16_TO_CPU(cq_desc->flags),
-		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
-	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
-		  LE32_TO_CPU(cq_desc->cookie_high),
-		  LE32_TO_CPU(cq_desc->cookie_low));
-	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
-		  LE32_TO_CPU(cq_desc->params.generic.param0),
-		  LE32_TO_CPU(cq_desc->params.generic.param1));
-	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
-		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
-		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
-	if (buf && cq_desc->datalen != 0) {
-		ice_debug(hw, mask, "Buffer:\n");
-		if (buf_len < len)
-			len = buf_len;
-
-		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
-	}
-}
-
 
 /* FW Admin Queue command wrappers */
 
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index cccb5f009..58f22b0d3 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -20,8 +20,6 @@ enum ice_fw_modes {
 
 enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
 
-void
-ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
 enum ice_status ice_init_hw(struct ice_hw *hw);
 void ice_deinit_hw(struct ice_hw *hw);
 enum ice_status ice_check_reset(struct ice_hw *hw);
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index f3404023a..90dec0156 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -727,6 +727,54 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 	return ICE_CTL_Q_DESC_UNUSED(sq);
 }
 
+/**
+ * ice_debug_cq
+ * @hw: pointer to the hardware structure
+ * @desc: pointer to control queue descriptor
+ * @buf: pointer to command buffer
+ * @buf_len: max length of buf
+ *
+ * Dumps debug log about control command with descriptor contents.
+ */
+static void ice_debug_cq(struct ice_hw *hw, void *desc, void *buf, u16 buf_len)
+{
+	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
+	u16 datalen, flags;
+
+	if (!((ICE_DBG_AQ_DESC | ICE_DBG_AQ_DESC_BUF) & hw->debug_mask))
+		return;
+
+	if (!desc)
+		return;
+
+	datalen = LE16_TO_CPU(cq_desc->datalen);
+	flags = LE16_TO_CPU(cq_desc->flags);
+
+	ice_debug(hw, ICE_DBG_AQ_DESC,
+		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		  LE16_TO_CPU(cq_desc->opcode), flags, datalen,
+		  LE16_TO_CPU(cq_desc->retval));
+	ice_debug(hw, ICE_DBG_AQ_DESC, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->cookie_high),
+		  LE32_TO_CPU(cq_desc->cookie_low));
+	ice_debug(hw, ICE_DBG_AQ_DESC, "\tparam (0,1)  0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.param0),
+		  LE32_TO_CPU(cq_desc->params.generic.param1));
+	ice_debug(hw, ICE_DBG_AQ_DESC, "\taddr (h,l)   0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
+		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
+	/* Dump buffer iff 1) one exists and 2) is either a response indicated
+	 * by the DD and/or CMP flag set or a command with the RD flag set.
+	 */
+	if (buf && cq_desc->datalen != 0 &&
+	    (flags & (ICE_AQ_FLAG_DD | ICE_AQ_FLAG_CMP) ||
+	     flags & ICE_AQ_FLAG_RD)) {
+		ice_debug(hw, ICE_DBG_AQ_DESC_BUF, "Buffer:\n");
+		ice_debug_array(hw, ICE_DBG_AQ_DESC_BUF, 16, 1, (u8 *)buf,
+				min(buf_len, datalen));
+	}
+}
+
 /**
  * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ)
  * @hw: pointer to the HW struct
@@ -886,7 +934,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	ice_debug(hw, ICE_DBG_AQ_MSG,
 		  "ATQ: Control Send queue desc and buffer:\n");
 
-	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc_on_ring, buf, buf_size);
+	ice_debug_cq(hw, (void *)desc_on_ring, buf, buf_size);
 
 
 	(cq->sq.next_to_use)++;
@@ -950,7 +998,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	ice_debug(hw, ICE_DBG_AQ_MSG,
 		  "ATQ: desc and buffer writeback:\n");
 
-	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, buf, buf_size);
+	ice_debug_cq(hw, (void *)desc, buf, buf_size);
 
 
 	/* save writeback AQ if requested */
@@ -1055,7 +1103,7 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 
 	ice_debug(hw, ICE_DBG_AQ_MSG, "ARQ: desc and buffer:\n");
 
-	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, e->msg_buf,
+	ice_debug_cq(hw, (void *)desc, e->msg_buf,
 		     cq->rq_buf_size);
 
 
-- 
2.17.1



* [dpdk-dev] [PATCH v2 20/66] net/ice/base: separate out control queue lock creation
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (18 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 19/66] net/ice/base: move and redefine ice debug cq API Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 21/66] net/ice/base: add helper functions for PHY caching Leyi Rong
                     ` (46 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Currently, the ice_init_all_ctrlq and ice_shutdown_all_ctrlq functions
also create and destroy the locks used to protect the send and receive
processing of each control queue. Separate lock creation and
destruction into new ice_create_all_ctrlq and ice_destroy_all_ctrlq
functions, so that control queues can be shut down and re-initialized
at runtime (such as across a reset) without recreating their locks.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c   |   6 +-
 drivers/net/ice/base/ice_common.h   |   2 +
 drivers/net/ice/base/ice_controlq.c | 112 +++++++++++++++++++++-------
 3 files changed, 91 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index c1af24322..5b4a13a41 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -853,7 +853,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	ice_get_itr_intrl_gran(hw);
 
 
-	status = ice_init_all_ctrlq(hw);
+	status = ice_create_all_ctrlq(hw);
 	if (status)
 		goto err_unroll_cqinit;
 
@@ -981,7 +981,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	ice_free(hw, hw->port_info);
 	hw->port_info = NULL;
 err_unroll_cqinit:
-	ice_shutdown_all_ctrlq(hw);
+	ice_destroy_all_ctrlq(hw);
 	return status;
 }
 
@@ -1010,7 +1010,7 @@ void ice_deinit_hw(struct ice_hw *hw)
 
 	/* Attempt to disable FW logging before shutting down control queues */
 	ice_cfg_fw_log(hw, false);
-	ice_shutdown_all_ctrlq(hw);
+	ice_destroy_all_ctrlq(hw);
 
 	/* Clear VSI contexts if not already cleared */
 	ice_clear_all_vsi_ctx(hw);
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 58f22b0d3..4cd87fc1e 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -25,8 +25,10 @@ void ice_deinit_hw(struct ice_hw *hw);
 enum ice_status ice_check_reset(struct ice_hw *hw);
 enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
 
+enum ice_status ice_create_all_ctrlq(struct ice_hw *hw);
 enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
 void ice_shutdown_all_ctrlq(struct ice_hw *hw);
+void ice_destroy_all_ctrlq(struct ice_hw *hw);
 enum ice_status
 ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		  struct ice_rq_event_info *e, u16 *pending);
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index 90dec0156..6d893e2f2 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -283,7 +283,7 @@ ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
  * @cq: pointer to the specific Control queue
  *
  * This is the main initialization routine for the Control Send Queue
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure:
  *     - cq->num_sq_entries
  *     - cq->sq_buf_size
@@ -342,7 +342,7 @@ static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
  * @cq: pointer to the specific Control queue
  *
  * The main initialization routine for the Admin Receive (Event) Queue.
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure:
  *     - cq->num_rq_entries
  *     - cq->rq_buf_size
@@ -535,14 +535,8 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
 	return ICE_SUCCESS;
 
 init_ctrlq_free_rq:
-	if (cq->rq.count) {
-		ice_shutdown_rq(hw, cq);
-		ice_destroy_lock(&cq->rq_lock);
-	}
-	if (cq->sq.count) {
-		ice_shutdown_sq(hw, cq);
-		ice_destroy_lock(&cq->sq_lock);
-	}
+	ice_shutdown_rq(hw, cq);
+	ice_shutdown_sq(hw, cq);
 	return status;
 }
 
@@ -551,12 +545,14 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
  * @hw: pointer to the hardware structure
  * @q_type: specific Control queue type
  *
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure:
  *     - cq->num_sq_entries
  *     - cq->num_rq_entries
  *     - cq->rq_buf_size
  *     - cq->sq_buf_size
+ *
+ * NOTE: this function does not initialize the controlq locks
  */
 static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 {
@@ -582,8 +578,6 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 	    !cq->rq_buf_size || !cq->sq_buf_size) {
 		return ICE_ERR_CFG;
 	}
-	ice_init_lock(&cq->sq_lock);
-	ice_init_lock(&cq->rq_lock);
 
 	/* setup SQ command write back timeout */
 	cq->sq_cmd_timeout = ICE_CTL_Q_SQ_CMD_TIMEOUT;
@@ -591,7 +585,7 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 	/* allocate the ATQ */
 	ret_code = ice_init_sq(hw, cq);
 	if (ret_code)
-		goto init_ctrlq_destroy_locks;
+		return ret_code;
 
 	/* allocate the ARQ */
 	ret_code = ice_init_rq(hw, cq);
@@ -603,9 +597,6 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 
 init_ctrlq_free_sq:
 	ice_shutdown_sq(hw, cq);
-init_ctrlq_destroy_locks:
-	ice_destroy_lock(&cq->sq_lock);
-	ice_destroy_lock(&cq->rq_lock);
 	return ret_code;
 }
 
@@ -613,12 +604,14 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
  * ice_init_all_ctrlq - main initialization routine for all control queues
  * @hw: pointer to the hardware structure
  *
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure for all control queues:
  *     - cq->num_sq_entries
  *     - cq->num_rq_entries
  *     - cq->rq_buf_size
  *     - cq->sq_buf_size
+ *
+ * NOTE: this function does not initialize the controlq locks.
  */
 enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
 {
@@ -637,10 +630,48 @@ enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
 	return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX);
 }
 
+/**
+ * ice_init_ctrlq_locks - Initialize locks for a control queue
+ * @cq: pointer to the control queue
+ *
+ * Initializes the send and receive queue locks for a given control queue.
+ */
+static void ice_init_ctrlq_locks(struct ice_ctl_q_info *cq)
+{
+	ice_init_lock(&cq->sq_lock);
+	ice_init_lock(&cq->rq_lock);
+}
+
+/**
+ * ice_create_all_ctrlq - main initialization routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, the driver *MUST* set the following fields
+ * in the cq->structure for all control queues:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ *
+ * This function creates all the control queue locks and then calls
+ * ice_init_all_ctrlq. It should be called once during driver load. If the
+ * driver needs to re-initialize control queues at run time it should call
+ * ice_init_all_ctrlq instead.
+ */
+enum ice_status ice_create_all_ctrlq(struct ice_hw *hw)
+{
+	ice_init_ctrlq_locks(&hw->adminq);
+	ice_init_ctrlq_locks(&hw->mailboxq);
+
+	return ice_init_all_ctrlq(hw);
+}
+
 /**
  * ice_shutdown_ctrlq - shutdown routine for any control queue
  * @hw: pointer to the hardware structure
  * @q_type: specific Control queue type
+ *
+ * NOTE: this function does not destroy the control queue locks.
  */
 static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 {
@@ -659,19 +690,17 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 		return;
 	}
 
-	if (cq->sq.count) {
-		ice_shutdown_sq(hw, cq);
-		ice_destroy_lock(&cq->sq_lock);
-	}
-	if (cq->rq.count) {
-		ice_shutdown_rq(hw, cq);
-		ice_destroy_lock(&cq->rq_lock);
-	}
+	ice_shutdown_sq(hw, cq);
+	ice_shutdown_rq(hw, cq);
 }
 
 /**
  * ice_shutdown_all_ctrlq - shutdown routine for all control queues
  * @hw: pointer to the hardware structure
+ *
+ * NOTE: this function does not destroy the control queue locks. The driver
+ * may call this at runtime to shutdown and later restart control queues, such
+ * as in response to a reset event.
  */
 void ice_shutdown_all_ctrlq(struct ice_hw *hw)
 {
@@ -681,6 +710,37 @@ void ice_shutdown_all_ctrlq(struct ice_hw *hw)
 	ice_shutdown_ctrlq(hw, ICE_CTL_Q_MAILBOX);
 }
 
+/**
+ * ice_destroy_ctrlq_locks - Destroy locks for a control queue
+ * @cq: pointer to the control queue
+ *
+ * Destroys the send and receive queue locks for a given control queue.
+ */
+static void
+ice_destroy_ctrlq_locks(struct ice_ctl_q_info *cq)
+{
+	ice_destroy_lock(&cq->sq_lock);
+	ice_destroy_lock(&cq->rq_lock);
+}
+
+/**
+ * ice_destroy_all_ctrlq - exit routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * This function shuts down all the control queues and then destroys the
+ * control queue locks. It should be called once during driver unload. The
+ * driver should call ice_shutdown_all_ctrlq if it needs to shut down and
+ * reinitialize control queues, such as in response to a reset event.
+ */
+void ice_destroy_all_ctrlq(struct ice_hw *hw)
+{
+	/* shut down all the control queues first */
+	ice_shutdown_all_ctrlq(hw);
+
+	ice_destroy_ctrlq_locks(&hw->adminq);
+	ice_destroy_ctrlq_locks(&hw->mailboxq);
+}
+
 /**
  * ice_clean_sq - cleans Admin send queue (ATQ)
  * @hw: pointer to the hardware structure
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread
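The create/init versus shutdown/destroy pairing in the patch above can be sketched as follows. This is a minimal illustration of the lock lifecycle only, using hypothetical types rather than the real ice_ctl_q_info structures: locks are created once at driver load and destroyed once at unload, while init/shutdown may run many times in between (e.g. around a reset).

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a control queue; only tracks lifecycle state. */
struct ctrlq {
	bool lock_created;
	bool initialized;
};

static void create_ctrlq(struct ctrlq *cq)   /* driver load: once */
{
	cq->lock_created = true;
	cq->initialized = true;              /* create also performs init */
}

static void init_ctrlq(struct ctrlq *cq)     /* runtime re-init: many times */
{
	assert(cq->lock_created);            /* locks must already exist */
	cq->initialized = true;
}

static void shutdown_ctrlq(struct ctrlq *cq) /* runtime shutdown: many times */
{
	cq->initialized = false;             /* locks intentionally survive */
}

static void destroy_ctrlq(struct ctrlq *cq)  /* driver unload: once */
{
	shutdown_ctrlq(cq);
	cq->lock_created = false;
}
```

The point of the split is that a reset handler can call shutdown followed by init without ever touching the locks, which stay valid across the cycle.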

* [dpdk-dev] [PATCH v2 21/66] net/ice/base: add helper functions for PHY caching
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (19 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 20/66] net/ice/base: separate out control queue lock creation Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 16:26     ` Stillwell Jr, Paul M
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 22/66] net/ice/base: added sibling head to parse nodes Leyi Rong
                     ` (45 subsequent siblings)
  66 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Tony Nguyen, Paul M Stillwell Jr

Add additional functions to aid in caching PHY configuration.
In order to cache the initial modes, we need to determine the
operating mode based on capabilities. Add helper functions
for flow control and FEC to take a set of capabilities and
return the operating mode matching those capabilities. Also
add a helper function to determine whether a PHY capability
matches a PHY configuration.
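The capabilities-to-operating-mode mapping described above can be sketched as below. The bit values and names here are illustrative stand-ins, not the real ICE_AQC_PHY_EN_*_LINK_PAUSE definitions from ice_adminq_cmd.h; the shape of the logic matches the flow-control helper.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical pause-capability bits (stand-ins for the real AQ defines). */
#define EN_TX_PAUSE 0x1u
#define EN_RX_PAUSE 0x2u

enum fc_mode { FC_NONE, FC_RX_PAUSE, FC_TX_PAUSE, FC_FULL };

static enum fc_mode caps_to_fc_mode(uint8_t caps)
{
	/* check the combined case first, then each direction alone */
	if ((caps & EN_TX_PAUSE) && (caps & EN_RX_PAUSE))
		return FC_FULL;
	if (caps & EN_TX_PAUSE)
		return FC_TX_PAUSE;
	if (caps & EN_RX_PAUSE)
		return FC_RX_PAUSE;
	return FC_NONE;
}
```

The FEC helper follows the same pattern, except the auto-FEC capability bit is checked first and the BASE-R/RS decision is made from the separate FEC options byte.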

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h |  1 +
 drivers/net/ice/base/ice_common.c     | 83 +++++++++++++++++++++++++++
 drivers/net/ice/base/ice_common.h     |  9 ++-
 3 files changed, 91 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 739f79e88..77f93b950 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -1594,6 +1594,7 @@ struct ice_aqc_get_link_status_data {
 #define ICE_AQ_LINK_PWR_QSFP_CLASS_3	2
 #define ICE_AQ_LINK_PWR_QSFP_CLASS_4	3
 	__le16 link_speed;
+#define ICE_AQ_LINK_SPEED_M		0x7FF
 #define ICE_AQ_LINK_SPEED_10MB		BIT(0)
 #define ICE_AQ_LINK_SPEED_100MB		BIT(1)
 #define ICE_AQ_LINK_SPEED_1000MB	BIT(2)
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 5b4a13a41..7f7f4dad0 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -2552,6 +2552,53 @@ ice_cache_phy_user_req(struct ice_port_info *pi,
 	}
 }
 
+/**
+ * ice_caps_to_fc_mode
+ * @caps: PHY capabilities
+ *
+ * Convert PHY FC capabilities to ice FC mode
+ */
+enum ice_fc_mode ice_caps_to_fc_mode(u8 caps)
+{
+	if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE &&
+	    caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
+		return ICE_FC_FULL;
+
+	if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE)
+		return ICE_FC_TX_PAUSE;
+
+	if (caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
+		return ICE_FC_RX_PAUSE;
+
+	return ICE_FC_NONE;
+}
+
+/**
+ * ice_caps_to_fec_mode
+ * @caps: PHY capabilities
+ * @fec_options: Link FEC options
+ *
+ * Convert PHY FEC capabilities to ice FEC mode
+ */
+enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options)
+{
+	if (caps & ICE_AQC_PHY_EN_AUTO_FEC)
+		return ICE_FEC_AUTO;
+
+	if (fec_options & (ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
+			   ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
+			   ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN |
+			   ICE_AQC_PHY_FEC_25G_KR_REQ))
+		return ICE_FEC_BASER;
+
+	if (fec_options & (ICE_AQC_PHY_FEC_25G_RS_528_REQ |
+			   ICE_AQC_PHY_FEC_25G_RS_544_REQ |
+			   ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN))
+		return ICE_FEC_RS;
+
+	return ICE_FEC_NONE;
+}
+
 /**
  * ice_set_fc
  * @pi: port information structure
@@ -2658,6 +2705,42 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 	return status;
 }
 
+/**
+ * ice_phy_caps_equals_cfg
+ * @phy_caps: PHY capabilities
+ * @phy_cfg: PHY configuration
+ *
+ * Helper function to determine if the PHY capabilities match the PHY
+ * configuration
+ */
+bool
+ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *phy_caps,
+			struct ice_aqc_set_phy_cfg_data *phy_cfg)
+{
+	u8 caps_mask, cfg_mask;
+
+	if (!phy_caps || !phy_cfg)
+		return false;
+
+	/* These bits are not common between capabilities and configuration.
+	 * Do not use them to determine equality.
+	 */
+	caps_mask = ICE_AQC_PHY_CAPS_MASK & ~(ICE_AQC_PHY_AN_MODE |
+					      ICE_AQC_PHY_EN_MOD_QUAL);
+	cfg_mask = ICE_AQ_PHY_ENA_VALID_MASK & ~ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
+
+	if (phy_caps->phy_type_low != phy_cfg->phy_type_low ||
+	    phy_caps->phy_type_high != phy_cfg->phy_type_high ||
+	    ((phy_caps->caps & caps_mask) != (phy_cfg->caps & cfg_mask)) ||
+	    phy_caps->low_power_ctrl != phy_cfg->low_power_ctrl ||
+	    phy_caps->eee_cap != phy_cfg->eee_cap ||
+	    phy_caps->eeer_value != phy_cfg->eeer_value ||
+	    phy_caps->link_fec_options != phy_cfg->link_fec_opt)
+		return false;
+
+	return true;
+}
+
 /**
  * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
 * @caps: PHY ability structure to copy data from
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 4cd87fc1e..10131b473 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -136,14 +136,19 @@ enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
 enum ice_status
 ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
 		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
+enum ice_fc_mode ice_caps_to_fc_mode(u8 caps);
+enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options);
 enum ice_status
 ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
 	   bool ena_auto_link_update);
-void
-ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
+bool
+ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *caps,
+			struct ice_aqc_set_phy_cfg_data *cfg);
 void
 ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
 			 struct ice_aqc_set_phy_cfg_data *cfg);
+void
+ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec);
 enum ice_status
 ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
 			   struct ice_sq_cd *cd);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 22/66] net/ice/base: added sibling head to parse nodes
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (20 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 21/66] net/ice/base: add helper functions for PHY caching Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 23/66] net/ice/base: add and fix debuglogs Leyi Rong
                     ` (44 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Victor Raj, Paul M Stillwell Jr

There was a bug in the previous code which never traversed all the
children to get the first node of the requested layer.

Added a sibling head pointer that points to the first node of each
layer per TC. This makes the traversal easier and quicker and also
removes the recursion and complexity from the code.
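The sibling-head idea can be sketched as below: each (TC, layer) slot keeps a pointer to the first node of that layer, so the lookup becomes an O(1) array read instead of a recursive walk of the tree. The types and bounds are illustrative, not the real ice_sched_node/ice_port_info definitions.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_TC    8
#define MAX_LAYER 9

struct node {
	struct node *sibling; /* next node on the same layer */
};

struct port {
	/* first node of each layer, per traffic class */
	struct node *sib_head[MAX_TC][MAX_LAYER];
};

/* append a node to a layer, initializing the head on first insert */
static void add_node(struct port *pi, int tc, int layer, struct node *n)
{
	struct node *prev = pi->sib_head[tc][layer];

	n->sibling = NULL;
	if (!prev) {
		pi->sib_head[tc][layer] = n;
		return;
	}
	while (prev->sibling)
		prev = prev->sibling;
	prev->sibling = n;
}

/* the replacement for the old recursive first-node search */
static struct node *first_node(struct port *pi, int tc, int layer)
{
	return pi->sib_head[tc][layer];
}
```

On removal the real patch also has to repoint the head when the head node itself is freed, which is the `sib_head[...] = node->sibling` hunk in the diff below.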

Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 61 ++++++++++++--------------------
 drivers/net/ice/base/ice_type.h  |  2 ++
 2 files changed, 25 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 855e3848c..0c1c18ba1 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -260,33 +260,17 @@ ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
 
 /**
  * ice_sched_get_first_node - get the first node of the given layer
- * @hw: pointer to the HW struct
+ * @pi: port information structure
  * @parent: pointer the base node of the subtree
  * @layer: layer number
  *
  * This function retrieves the first node of the given layer from the subtree
  */
 static struct ice_sched_node *
-ice_sched_get_first_node(struct ice_hw *hw, struct ice_sched_node *parent,
-			 u8 layer)
+ice_sched_get_first_node(struct ice_port_info *pi,
+			 struct ice_sched_node *parent, u8 layer)
 {
-	u8 i;
-
-	if (layer < hw->sw_entry_point_layer)
-		return NULL;
-	for (i = 0; i < parent->num_children; i++) {
-		struct ice_sched_node *node = parent->children[i];
-
-		if (node) {
-			if (node->tx_sched_layer == layer)
-				return node;
-			/* this recursion is intentional, and wouldn't
-			 * go more than 9 calls
-			 */
-			return ice_sched_get_first_node(hw, node, layer);
-		}
-	}
-	return NULL;
+	return pi->sib_head[parent->tc_num][layer];
 }
 
 /**
@@ -342,7 +326,7 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 	parent = node->parent;
 	/* root has no parent */
 	if (parent) {
-		struct ice_sched_node *p, *tc_node;
+		struct ice_sched_node *p;
 
 		/* update the parent */
 		for (i = 0; i < parent->num_children; i++)
@@ -354,16 +338,7 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 				break;
 			}
 
-		/* search for previous sibling that points to this node and
-		 * remove the reference
-		 */
-		tc_node = ice_sched_get_tc_node(pi, node->tc_num);
-		if (!tc_node) {
-			ice_debug(hw, ICE_DBG_SCHED,
-				  "Invalid TC number %d\n", node->tc_num);
-			goto err_exit;
-		}
-		p = ice_sched_get_first_node(hw, tc_node, node->tx_sched_layer);
+		p = ice_sched_get_first_node(pi, node, node->tx_sched_layer);
 		while (p) {
 			if (p->sibling == node) {
 				p->sibling = node->sibling;
@@ -371,8 +346,13 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 			}
 			p = p->sibling;
 		}
+
+		/* update the sibling head if head is getting removed */
+		if (pi->sib_head[node->tc_num][node->tx_sched_layer] == node)
+			pi->sib_head[node->tc_num][node->tx_sched_layer] =
+				node->sibling;
 	}
-err_exit:
+
 	/* leaf nodes have no children */
 	if (node->children)
 		ice_free(hw, node->children);
@@ -979,13 +959,17 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 
 		/* add it to previous node sibling pointer */
 		/* Note: siblings are not linked across branches */
-		prev = ice_sched_get_first_node(hw, tc_node, layer);
+		prev = ice_sched_get_first_node(pi, tc_node, layer);
 		if (prev && prev != new_node) {
 			while (prev->sibling)
 				prev = prev->sibling;
 			prev->sibling = new_node;
 		}
 
+		/* initialize the sibling head */
+		if (!pi->sib_head[tc_node->tc_num][layer])
+			pi->sib_head[tc_node->tc_num][layer] = new_node;
+
 		if (i == 0)
 			*first_node_teid = teid;
 	}
@@ -1451,7 +1435,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 		goto lan_q_exit;
 
 	/* get the first queue group node from VSI sub-tree */
-	qgrp_node = ice_sched_get_first_node(pi->hw, vsi_node, qgrp_layer);
+	qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
 	while (qgrp_node) {
 		/* make sure the qgroup node is part of the VSI subtree */
 		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
@@ -1482,7 +1466,7 @@ ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
 	u8 vsi_layer;
 
 	vsi_layer = ice_sched_get_vsi_layer(hw);
-	node = ice_sched_get_first_node(hw, tc_node, vsi_layer);
+	node = ice_sched_get_first_node(hw->port_info, tc_node, vsi_layer);
 
 	/* Check whether it already exists */
 	while (node) {
@@ -1511,7 +1495,7 @@ ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
 	u8 agg_layer;
 
 	agg_layer = ice_sched_get_agg_layer(hw);
-	node = ice_sched_get_first_node(hw, tc_node, agg_layer);
+	node = ice_sched_get_first_node(hw->port_info, tc_node, agg_layer);
 
 	/* Check whether it already exists */
 	while (node) {
@@ -1663,7 +1647,8 @@ ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
 			/* If intermediate nodes are reached max children
 			 * then add a new one.
 			 */
-			node = ice_sched_get_first_node(hw, tc_node, (u8)i);
+			node = ice_sched_get_first_node(hw->port_info, tc_node,
+							(u8)i);
 			/* scan all the siblings */
 			while (node) {
 				if (node->num_children < hw->max_children[i])
@@ -2528,7 +2513,7 @@ ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
 	 * intermediate node on those layers
 	 */
 	for (i = hw->sw_entry_point_layer; i < aggl; i++) {
-		parent = ice_sched_get_first_node(hw, tc_node, i);
+		parent = ice_sched_get_first_node(pi, tc_node, i);
 
 		/* scan all the siblings */
 		while (parent) {
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 5da267f1b..3523b0c35 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -679,6 +679,8 @@ struct ice_port_info {
 	struct ice_mac_info mac;
 	struct ice_phy_info phy;
 	struct ice_lock sched_lock;	/* protect access to TXSched tree */
+	struct ice_sched_node *
+		sib_head[ICE_MAX_TRAFFIC_CLASS][ICE_AQC_TOPO_MAX_LEVEL_NUM];
 	/* List contain profile ID(s) and other params per layer */
 	struct LIST_HEAD_TYPE rl_prof_list[ICE_AQC_TOPO_MAX_LEVEL_NUM];
 	struct ice_dcbx_cfg local_dcbx_cfg;	/* Oper/Local Cfg */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 23/66] net/ice/base: add and fix debuglogs
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (21 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 22/66] net/ice/base: added sibling head to parse nodes Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 24/66] net/ice/base: add support for reading REPC statistics Leyi Rong
                     ` (43 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Marta Plantykow, Paul M Stillwell Jr

Add missing debug logs and fix existing ones.

Signed-off-by: Marta Plantykow <marta.a.plantykow@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c    | 16 +++----
 drivers/net/ice/base/ice_controlq.c  | 19 ++++++++
 drivers/net/ice/base/ice_flex_pipe.c | 72 +++++++++++++++++++++++++++-
 drivers/net/ice/base/ice_flex_pipe.h |  1 +
 drivers/net/ice/base/ice_nvm.c       | 14 +++---
 5 files changed, 106 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 7f7f4dad0..da72434d3 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -833,7 +833,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	u16 mac_buf_len;
 	void *mac_buf;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 
 	/* Set MAC type based on DeviceID */
@@ -1623,7 +1623,7 @@ ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 	struct ice_aq_desc desc;
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd_resp = &desc.params.res_owner;
 
@@ -1692,7 +1692,7 @@ ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
 	struct ice_aqc_req_res *cmd;
 	struct ice_aq_desc desc;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.res_owner;
 
@@ -1722,7 +1722,7 @@ ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 	u32 time_left = timeout;
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
 
@@ -1780,7 +1780,7 @@ void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
 	enum ice_status status;
 	u32 total_delay = 0;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_aq_release_res(hw, res, 0, NULL);
 
@@ -1814,7 +1814,7 @@ ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
 	struct ice_aqc_alloc_free_res_cmd *cmd;
 	struct ice_aq_desc desc;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.sw_res_ctrl;
 
@@ -3189,7 +3189,7 @@ ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
 	struct ice_aqc_add_txqs *cmd;
 	struct ice_aq_desc desc;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.add_txqs;
 
@@ -3245,7 +3245,7 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
 	enum ice_status status;
 	u16 i, sz = 0;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 	cmd = &desc.params.dis_txqs;
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
 
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index 6d893e2f2..4cb6df113 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -35,6 +35,8 @@ static void ice_adminq_init_regs(struct ice_hw *hw)
 {
 	struct ice_ctl_q_info *cq = &hw->adminq;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	ICE_CQ_INIT_REGS(cq, PF_FW);
 }
 
@@ -295,6 +297,8 @@ static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	if (cq->sq.count > 0) {
 		/* queue already initialized */
 		ret_code = ICE_ERR_NOT_READY;
@@ -354,6 +358,8 @@ static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	if (cq->rq.count > 0) {
 		/* queue already initialized */
 		ret_code = ICE_ERR_NOT_READY;
@@ -422,6 +428,8 @@ ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code = ICE_SUCCESS;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	ice_acquire_lock(&cq->sq_lock);
 
 	if (!cq->sq.count) {
@@ -485,6 +493,8 @@ ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code = ICE_SUCCESS;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	ice_acquire_lock(&cq->rq_lock);
 
 	if (!cq->rq.count) {
@@ -521,6 +531,8 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
 	struct ice_ctl_q_info *cq = &hw->adminq;
 	enum ice_status status;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 
 	status = ice_aq_get_fw_ver(hw, NULL);
 	if (status)
@@ -559,6 +571,8 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 	struct ice_ctl_q_info *cq;
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	switch (q_type) {
 	case ICE_CTL_Q_ADMIN:
 		ice_adminq_init_regs(hw);
@@ -617,6 +631,8 @@ enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
 {
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 
 	/* Init FW admin queue */
 	ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN);
@@ -677,6 +693,8 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 {
 	struct ice_ctl_q_info *cq;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	switch (q_type) {
 	case ICE_CTL_Q_ADMIN:
 		cq = &hw->adminq;
@@ -704,6 +722,7 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
  */
 void ice_shutdown_all_ctrlq(struct ice_hw *hw)
 {
+	ice_debug(hw, ICE_DBG_TRACE, "ice_shutdown_all_ctrlq\n");
 	/* Shutdown FW admin queue */
 	ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
 	/* Shutdown PF-VF Mailbox */
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index b569b91a7..69d65c53e 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1142,7 +1142,7 @@ ice_get_pkg_info(struct ice_hw *hw)
 	u16 size;
 	u32 i;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_init_pkg_info\n");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	size = sizeof(*pkg_info) + (sizeof(pkg_info->pkg_info[0]) *
 				    (ICE_PKG_CNT - 1));
@@ -2417,6 +2417,11 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
 		ice_free(hw, del);
 	}
 
+	/* if VSIG characteristic list was cleared for reset
+	 * re-initialize the list head
+	 */
+	INIT_LIST_HEAD(&hw->blk[blk].xlt2.vsig_tbl[idx].prop_lst);
+
 	return ICE_SUCCESS;
 }
 
@@ -3138,6 +3143,71 @@ static void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx)
 	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
+/**
+ * ice_clear_hw_tbls - clear HW tables and flow profiles
+ * @hw: pointer to the hardware structure
+ */
+void ice_clear_hw_tbls(struct ice_hw *hw)
+{
+	u8 i;
+
+	for (i = 0; i < ICE_BLK_COUNT; i++) {
+		struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
+		struct ice_prof_tcam *prof = &hw->blk[i].prof;
+		struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
+		struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
+		struct ice_es *es = &hw->blk[i].es;
+
+		if (hw->blk[i].is_list_init) {
+			struct ice_prof_map *del, *tmp;
+
+			ice_acquire_lock(&es->prof_map_lock);
+			LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &es->prof_map,
+						 ice_prof_map, list) {
+				LIST_DEL(&del->list);
+				ice_free(hw, del);
+			}
+			INIT_LIST_HEAD(&es->prof_map);
+			ice_release_lock(&es->prof_map_lock);
+
+			ice_acquire_lock(&hw->fl_profs_locks[i]);
+			ice_free_flow_profs(hw, i);
+			ice_release_lock(&hw->fl_profs_locks[i]);
+		}
+
+		ice_free_vsig_tbl(hw, (enum ice_block)i);
+
+		ice_memset(xlt1->ptypes, 0, xlt1->count * sizeof(*xlt1->ptypes),
+			   ICE_NONDMA_MEM);
+		ice_memset(xlt1->ptg_tbl, 0,
+			   ICE_MAX_PTGS * sizeof(*xlt1->ptg_tbl),
+			   ICE_NONDMA_MEM);
+		ice_memset(xlt1->t, 0, xlt1->count * sizeof(*xlt1->t),
+			   ICE_NONDMA_MEM);
+
+		ice_memset(xlt2->vsis, 0, xlt2->count * sizeof(*xlt2->vsis),
+			   ICE_NONDMA_MEM);
+		ice_memset(xlt2->vsig_tbl, 0,
+			   xlt2->count * sizeof(*xlt2->vsig_tbl),
+			   ICE_NONDMA_MEM);
+		ice_memset(xlt2->t, 0, xlt2->count * sizeof(*xlt2->t),
+			   ICE_NONDMA_MEM);
+
+		ice_memset(prof->t, 0, prof->count * sizeof(*prof->t),
+			   ICE_NONDMA_MEM);
+		ice_memset(prof_redir->t, 0,
+			   prof_redir->count * sizeof(*prof_redir->t),
+			   ICE_NONDMA_MEM);
+
+		ice_memset(es->t, 0, es->count * sizeof(*es->t),
+			   ICE_NONDMA_MEM);
+		ice_memset(es->ref_count, 0, es->count * sizeof(*es->ref_count),
+			   ICE_NONDMA_MEM);
+		ice_memset(es->written, 0, es->count * sizeof(*es->written),
+			   ICE_NONDMA_MEM);
+	}
+}
+
 /**
  * ice_init_hw_tbls - init hardware table memory
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 375758c8d..df8eac05b 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -107,6 +107,7 @@ ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
 enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
 void ice_free_seg(struct ice_hw *hw);
 void ice_fill_blk_tbls(struct ice_hw *hw);
+void ice_clear_hw_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
 enum ice_status
 ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index b770abfd0..fa9c348ce 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -24,7 +24,7 @@ ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
 	struct ice_aq_desc desc;
 	struct ice_aqc_nvm *cmd;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_read_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.nvm;
 
@@ -95,7 +95,7 @@ ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data,
 {
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_aq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_check_sr_access_params(hw, offset, words);
 
@@ -123,7 +123,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 {
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_word_aq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_read_sr_aq(hw, offset, 1, data, true);
 	if (!status)
@@ -152,7 +152,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 	u16 words_read = 0;
 	u16 i = 0;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_buf_aq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	do {
 		u16 read_size, off_w;
@@ -202,7 +202,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 static enum ice_status
 ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
 {
-	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm\n");
 
 	if (hw->nvm.blank_nvm_mode)
 		return ICE_SUCCESS;
@@ -218,7 +218,7 @@ ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
  */
 static void ice_release_nvm(struct ice_hw *hw)
 {
-	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm\n");
 
 	if (hw->nvm.blank_nvm_mode)
 		return;
@@ -263,7 +263,7 @@ enum ice_status ice_init_nvm(struct ice_hw *hw)
 	u32 fla, gens_stat;
 	u8 sr_size;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_init_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	/* The SR size is stored regardless of the NVM programming mode
 	 * as the blank mode may be used in the factory line.
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 24/66] net/ice/base: add support for reading REPC statistics
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (22 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 23/66] net/ice/base: add and fix debuglogs Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 16:28     ` Stillwell Jr, Paul M
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 25/66] net/ice/base: move VSI to VSI group Leyi Rong
                     ` (42 subsequent siblings)
  66 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Add a new ice_stat_update_repc function which will read the GLV_REPC
register and
increment the appropriate statistics in the ice_eth_stats structure.
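The reason this needs a dedicated helper is that the register packs two 16-bit saturating counters into one 32-bit word, so each half must be masked and shifted out separately. A sketch of that extraction is below; the mask/shift values and names are illustrative stand-ins, not the real GLV_REPC_* definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical field layout: no-descriptor count in the low 16 bits,
 * error count in the high 16 bits.
 */
#define NO_DESC_CNT_S 0
#define NO_DESC_CNT_M (0xFFFFu << NO_DESC_CNT_S)
#define ERROR_CNT_S   16
#define ERROR_CNT_M   (0xFFFFu << ERROR_CNT_S)

struct eth_stats {
	uint64_t rx_no_desc;
	uint64_t rx_errors;
};

static void stat_update_repc(uint32_t repc, struct eth_stats *cur)
{
	/* split the register into its two 16-bit halves */
	uint16_t no_desc = (repc & NO_DESC_CNT_M) >> NO_DESC_CNT_S;
	uint16_t errors  = (repc & ERROR_CNT_M) >> ERROR_CNT_S;

	/* the real code clears the register at this point, because the
	 * hardware counters stick at 0xFFFF rather than wrapping
	 */
	cur->rx_no_desc += no_desc;
	cur->rx_errors += errors;
}
```

Because the counters saturate rather than wrap, the accumulate-then-clear scheme replaces the usual previous-value subtraction done by ice_stat_update32.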

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 51 +++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_common.h |  3 ++
 drivers/net/ice/base/ice_type.h   |  2 ++
 3 files changed, 56 insertions(+)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index da72434d3..b4a9172b9 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -4138,6 +4138,57 @@ ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
 		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
 }
 
+/**
+ * ice_stat_update_repc - read GLV_REPC stats from chip and update stat values
+ * @hw: ptr to the hardware info
+ * @vsi_handle: VSI handle
+ * @prev_stat_loaded: bool to specify if the previous stat values are loaded
+ * @cur_stats: ptr to current stats structure
+ *
+ * The GLV_REPC statistic register actually tracks two 16bit statistics, and
+ * thus cannot be read using the normal ice_stat_update32 function.
+ *
+ * Read the GLV_REPC register associated with the given VSI, and update the
+ * rx_no_desc and rx_error values in the ice_eth_stats structure.
+ *
+ * Because the statistics in GLV_REPC stick at 0xFFFF, the register must be
+ * cleared each time it's read.
+ *
+ * Note that the GLV_RDPC register also counts the causes that would trigger
+ * GLV_REPC. However, it does not give the finer grained detail about why the
+ * packets are being dropped. The GLV_REPC values can be used to distinguish
+ * whether Rx packets are dropped due to errors or due to no available
+ * descriptors.
+ */
+void
+ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
+		     struct ice_eth_stats *cur_stats)
+{
+	u16 vsi_num, no_desc, error_cnt;
+	u32 repc;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return;
+
+	vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	/* If we haven't loaded stats yet, just clear the current value */
+	if (!prev_stat_loaded) {
+		wr32(hw, GLV_REPC(vsi_num), 0);
+		return;
+	}
+
+	repc = rd32(hw, GLV_REPC(vsi_num));
+	no_desc = (repc & GLV_REPC_NO_DESC_CNT_M) >> GLV_REPC_NO_DESC_CNT_S;
+	error_cnt = (repc & GLV_REPC_ERROR_CNT_M) >> GLV_REPC_ERROR_CNT_S;
+
+	/* Clear the count by writing to the stats register */
+	wr32(hw, GLV_REPC(vsi_num), 0);
+
+	cur_stats->rx_no_desc += no_desc;
+	cur_stats->rx_errors += error_cnt;
+}
+
 
 /**
  * ice_sched_query_elem - query element information from HW
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 10131b473..2ea4a6e8e 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -205,6 +205,9 @@ ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
 void
 ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
 		  u64 *prev_stat, u64 *cur_stat);
+void
+ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
+		     struct ice_eth_stats *cur_stats);
 enum ice_status
 ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
 		     struct ice_aqc_get_elem *buf);
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 3523b0c35..477f34595 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -853,6 +853,8 @@ struct ice_eth_stats {
 	u64 rx_broadcast;		/* bprc */
 	u64 rx_discards;		/* rdpc */
 	u64 rx_unknown_protocol;	/* rupp */
+	u64 rx_no_desc;			/* repc */
+	u64 rx_errors;			/* repc */
 	u64 tx_bytes;			/* gotc */
 	u64 tx_unicast;			/* uptc */
 	u64 tx_multicast;		/* mptc */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 25/66] net/ice/base: move VSI to VSI group
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (23 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 24/66] net/ice/base: add support for reading REPC statistics Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 26/66] net/ice/base: forbid VSI to remove unassociated ucast filter Leyi Rong
                     ` (41 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

Add a function to add a VSI to a given VSIG and update the package with
this entry. The usual flow in XLT management iterates through all
characteristics of the input VSI, creating a new VSIG and TCAMs, until a
matching characteristic is found; the VSI is then moved into the
matching VSIG and entries are collapsed, leading to additional package
update calls. This function serves as an optimization when we know
beforehand that the input VSI has the same characteristics as a VSI
previously added to a VSIG. This is particularly useful for VF VSIs,
which are usually all programmed with the same configuration.
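The fast path above can be modeled in a few lines: record the pending move in a change list, apply it, then free the list with a safe-iteration idiom analogous to LIST_FOR_EACH_ENTRY_SAFE. This is a hypothetical, simplified sketch (the names and the VSI-to-VSIG table are illustrative, not the driver's actual structures):

```c
#include <stdint.h>
#include <stdlib.h>

#define NUM_VSI		8
#define DEFAULT_VSIG	0	/* moving into the default VSIG is invalid */

struct chg_entry {
	uint16_t vsi;
	uint16_t vsig;
	struct chg_entry *next;
};

static uint16_t vsi_to_vsig[NUM_VSI];	/* current VSIG of each VSI */

/* Model of ice_add_vsi_flow: validate, record the move, "commit" it,
 * then free the change list whether or not the commit succeeded.
 */
static int add_vsi_flow(uint16_t vsi, uint16_t vsig)
{
	struct chg_entry *head, *cur, *tmp;

	if (vsig == DEFAULT_VSIG || vsi >= NUM_VSI)
		return -1;	/* like ICE_ERR_PARAM */

	head = malloc(sizeof(*head));
	if (!head)
		return -1;
	head->vsi = vsi;
	head->vsig = vsig;
	head->next = NULL;

	/* "update hardware" on success: apply the recorded changes */
	for (cur = head; cur; cur = cur->next)
		vsi_to_vsig[cur->vsi] = cur->vsig;

	/* safe deletion: grab next before freeing the current entry */
	for (cur = head; cur; cur = tmp) {
		tmp = cur->next;
		free(cur);
	}
	return 0;
}
```

The real function differs in that the commit (ice_upd_prof_hw) can fail, in which case the change list is still freed and the error propagated.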

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 41 ++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_flex_pipe.h |  2 ++
 drivers/net/ice/base/ice_flow.c      | 28 +++++++++++++++++++
 drivers/net/ice/base/ice_flow.h      |  4 ++-
 4 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 69d65c53e..c2d1be484 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -4897,6 +4897,47 @@ ice_find_prof_vsig(struct ice_hw *hw, enum ice_block blk, u64 hdl, u16 *vsig)
 	return status == ICE_SUCCESS;
 }
 
+/**
+ * ice_add_vsi_flow - add VSI flow
+ * @hw: pointer to the HW struct
+ * @blk: hardware block
+ * @vsi: input VSI
+ * @vsig: target VSIG to include the input VSI
+ *
+ * Calling this function will add the VSI to a given VSIG and
+ * update the HW tables accordingly. This call can be used to
+ * add multiple VSIs to a VSIG if we know beforehand that those
+ * VSIs have the same characteristics of the VSIG. This will
+ * save time in generating a new VSIG and TCAMs till a match is
+ * found and subsequent rollback when a matching VSIG is found.
+ */
+enum ice_status
+ice_add_vsi_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
+{
+	struct ice_chs_chg *tmp, *del;
+	struct LIST_HEAD_TYPE chg;
+	enum ice_status status;
+
+	/* if target VSIG is default the move is invalid */
+	if ((vsig & ICE_VSIG_IDX_M) == ICE_DEFAULT_VSIG)
+		return ICE_ERR_PARAM;
+
+	INIT_LIST_HEAD(&chg);
+
+	/* move VSI to the VSIG that matches */
+	status = ice_move_vsi(hw, blk, vsi, vsig, &chg);
+	/* update hardware if success */
+	if (!status)
+		status = ice_upd_prof_hw(hw, blk, &chg);
+
+	LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &chg, ice_chs_chg, list_entry) {
+		LIST_DEL(&del->list_entry);
+		ice_free(hw, del);
+	}
+
+	return status;
+}
+
 /**
  * ice_add_prof_id_flow - add profile flow
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index df8eac05b..4714fe646 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -93,6 +93,8 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 struct ice_prof_map *
 ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id);
 enum ice_status
+ice_add_vsi_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
+enum ice_status
 ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
 enum ice_status
 ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 9f2a794bc..1ec49fcd9 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1109,6 +1109,34 @@ ice_flow_rem_prof_sync(struct ice_hw *hw, enum ice_block blk,
 	return status;
 }
 
+/**
+ * ice_flow_assoc_vsig_vsi - associate a VSI with VSIG
+ * @hw: pointer to the hardware structure
+ * @blk: classification stage
+ * @vsi_handle: software VSI handle
+ * @vsig: target VSI group
+ *
+ * Assumption: the caller has already verified that the VSI to
+ * be added has the same characteristics as the VSIG and will
+ * thereby have access to all resources added to that VSIG.
+ */
+enum ice_status
+ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle,
+			u16 vsig)
+{
+	enum ice_status status;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle) || blk >= ICE_BLK_COUNT)
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&hw->fl_profs_locks[blk]);
+	status = ice_add_vsi_flow(hw, blk, ice_get_hw_vsi_num(hw, vsi_handle),
+				  vsig);
+	ice_release_lock(&hw->fl_profs_locks[blk]);
+
+	return status;
+}
+
 /**
  * ice_flow_assoc_prof - associate a VSI with a flow profile
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index 4fa13064e..57514a078 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -319,7 +319,9 @@ ice_flow_add_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir,
 		  struct ice_flow_prof **prof);
 enum ice_status
 ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id);
-
+enum ice_status
+ice_flow_assoc_vsig_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi_handle,
+			u16 vsig);
 enum ice_status
 ice_flow_get_hw_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 		     u8 *hw_prof);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 26/66] net/ice/base: forbid VSI to remove unassociated ucast filter
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (24 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 25/66] net/ice/base: move VSI to VSI group Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 27/66] net/ice/base: add some minor features Leyi Rong
                     ` (40 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Akeem G Abodunrin, Paul M Stillwell Jr

If a VSI is not using a particular unicast filter, or did not configure
that filter itself, the driver should not allow the filter to be removed
by the rogue VSI.
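The ownership check boils down to requiring an exact match on MAC address, owning VSI, and filter flags before a removal is honored. A hypothetical, self-contained model of that lookup (simplified types, fixed-size rule table instead of the driver's linked list):

```c
#include <stdint.h>
#include <string.h>

struct fltr_info {
	uint8_t mac[6];
	uint16_t hw_vsi_id;
	uint8_t flag;
};

#define MAX_RULES 4
static struct fltr_info rules[MAX_RULES];
static int num_rules;

static int add_rule(const struct fltr_info *f)
{
	if (num_rules >= MAX_RULES)
		return -1;
	rules[num_rules++] = *f;
	return 0;
}

/* Model of ice_find_ucast_rule_entry: a rule matches only if the MAC,
 * the owning VSI, and the flags all agree, so one VSI cannot delete a
 * unicast filter that belongs to another VSI.
 */
static int find_ucast_rule(const struct fltr_info *f)
{
	int i;

	for (i = 0; i < num_rules; i++)
		if (!memcmp(rules[i].mac, f->mac, sizeof(f->mac)) &&
		    rules[i].hw_vsi_id == f->hw_vsi_id &&
		    rules[i].flag == f->flag)
			return i;
	return -1;	/* like ICE_ERR_DOES_NOT_EXIST */
}
```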

Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 57 +++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 7cccaf4d3..faaedd4c8 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -3180,6 +3180,39 @@ ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
 	return status;
 }
 
+/**
+ * ice_find_ucast_rule_entry - Search for a unicast MAC filter rule entry
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which the specified rule needs to be searched
+ * @f_info: rule information
+ *
+ * Helper function to search for a unicast rule entry - this is to be used
+ * to remove unicast MAC filter that is not shared with other VSIs on the
+ * PF switch.
+ *
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_fltr_mgmt_list_entry *
+ice_find_ucast_rule_entry(struct ice_hw *hw, u8 recp_id,
+			  struct ice_fltr_info *f_info)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (!memcmp(&f_info->l_data, &list_itr->fltr_info.l_data,
+			    sizeof(f_info->l_data)) &&
+		    f_info->fwd_id.hw_vsi_id ==
+		    list_itr->fltr_info.fwd_id.hw_vsi_id &&
+		    f_info->flag == list_itr->fltr_info.flag)
+			return list_itr;
+	}
+	return NULL;
+}
+
 /**
  * ice_remove_mac - remove a MAC address based filter rule
  * @hw: pointer to the hardware structure
@@ -3197,16 +3230,40 @@ enum ice_status
 ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
 {
 	struct ice_fltr_list_entry *list_itr, *tmp;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
 
 	if (!m_list)
 		return ICE_ERR_PARAM;
 
+	rule_lock = &hw->switch_info->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
 	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, m_list, ice_fltr_list_entry,
 				 list_entry) {
 		enum ice_sw_lkup_type l_type = list_itr->fltr_info.lkup_type;
+		u8 *add = &list_itr->fltr_info.l_data.mac.mac_addr[0];
+		u16 vsi_handle;
 
 		if (l_type != ICE_SW_LKUP_MAC)
 			return ICE_ERR_PARAM;
+
+		vsi_handle = list_itr->fltr_info.vsi_handle;
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			return ICE_ERR_PARAM;
+
+		list_itr->fltr_info.fwd_id.hw_vsi_id =
+					ice_get_hw_vsi_num(hw, vsi_handle);
+		if (IS_UNICAST_ETHER_ADDR(add) && !hw->ucast_shared) {
+			/* Don't remove the unicast address that belongs to
+			 * another VSI on the switch, since it is not being
+			 * shared...
+			 */
+			ice_acquire_lock(rule_lock);
+			if (!ice_find_ucast_rule_entry(hw, ICE_SW_LKUP_MAC,
+						       &list_itr->fltr_info)) {
+				ice_release_lock(rule_lock);
+				return ICE_ERR_DOES_NOT_EXIST;
+			}
+			ice_release_lock(rule_lock);
+		}
 		list_itr->status = ice_remove_rule_internal(hw,
 							    ICE_SW_LKUP_MAC,
 							    list_itr);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 27/66] net/ice/base: add some minor features
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (25 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 26/66] net/ice/base: forbid VSI to remove unassociated ucast filter Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 16:30     ` Stillwell Jr, Paul M
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 28/66] net/ice/base: add hweight32 support Leyi Rong
                     ` (39 subsequent siblings)
  66 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

1. Add loopback reporting to the get link status response.
2. Add infrastructure for NVM Write/Write Activate calls.
3. Add opcodes for the NVM Save Factory Settings and NVM Update EMPR
   commands.
4. Add LAN overflow event to ice_aq_desc.
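A quick sanity check on the new Write Activate selection flags: ICE_AQC_NVM_ACTIV_SEL_MASK must cover exactly bits 3..5, i.e. the union of the three SEL bits. This sketch assumes MAKEMASK is a plain shift, as in the base code's osdep layer:

```c
#include <stdint.h>

#define BIT(n)		(1u << (n))
#define MAKEMASK(m, s)	((m) << (s))	/* assumed equivalent to osdep's macro */

#define ICE_AQC_NVM_ACTIV_SEL_NVM	BIT(3)
#define ICE_AQC_NVM_ACTIV_SEL_OROM	BIT(4)
#define ICE_AQC_NVM_ACTIV_SEL_EXT_TLV	BIT(5)
#define ICE_AQC_NVM_ACTIV_SEL_MASK	MAKEMASK(0x7, 3)

/* combined mask of all three selection bits */
static uint32_t activ_sel_union(void)
{
	return ICE_AQC_NVM_ACTIV_SEL_NVM | ICE_AQC_NVM_ACTIV_SEL_OROM |
	       ICE_AQC_NVM_ACTIV_SEL_EXT_TLV;
}
```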

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 47 ++++++++++++++++++---------
 1 file changed, 32 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 77f93b950..4e6bce18c 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -110,6 +110,7 @@ struct ice_aqc_list_caps {
 struct ice_aqc_list_caps_elem {
 	__le16 cap;
 #define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
+#define ICE_AQC_MAX_VALID_FUNCTIONS			0x8
 #define ICE_AQC_CAPS_VSI				0x0017
 #define ICE_AQC_CAPS_DCB				0x0018
 #define ICE_AQC_CAPS_RSS				0x0040
@@ -143,11 +144,9 @@ struct ice_aqc_manage_mac_read {
 #define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
 #define ICE_AQC_MAN_MAC_READ_S			4
 #define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
-	u8 lport_num;
-	u8 lport_num_valid;
-#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID	BIT(0)
+	u8 rsvd[2];
 	u8 num_addr; /* Used in response */
-	u8 reserved[3];
+	u8 rsvd1[3];
 	__le32 addr_high;
 	__le32 addr_low;
 };
@@ -165,7 +164,7 @@ struct ice_aqc_manage_mac_read_resp {
 
 /* Manage MAC address, write command - direct (0x0108) */
 struct ice_aqc_manage_mac_write {
-	u8 port_num;
+	u8 rsvd;
 	u8 flags;
 #define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
 #define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
@@ -481,8 +480,8 @@ struct ice_aqc_vsi_props {
 #define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
 #define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
 #define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
-#define ICE_AQ_VSI_VLAN_EMOD_S	3
-#define ICE_AQ_VSI_VLAN_EMOD_M	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_S		3
+#define ICE_AQ_VSI_VLAN_EMOD_M		(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
 #define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
 #define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
 #define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
@@ -1425,6 +1424,7 @@ struct ice_aqc_get_phy_caps_data {
 #define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
 #define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
 #define ICE_AQC_PHY_FEC_MASK				MAKEMASK(0xdf, 0)
+	u8 rsvd1;	/* Byte 35 reserved */
 	u8 extended_compliance_code;
 #define ICE_MODULE_TYPE_TOTAL_BYTE			3
 	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
@@ -1439,13 +1439,14 @@ struct ice_aqc_get_phy_caps_data {
 #define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
 #define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
 	u8 qualified_module_count;
+	u8 rsvd2[7];	/* Bytes 47:41 reserved */
 #define ICE_AQC_QUAL_MOD_COUNT_MAX			16
 	struct {
 		u8 v_oui[3];
 		u8 rsvd3;
 		u8 v_part[16];
 		__le32 v_rev;
-		__le64 rsvd8;
+		__le64 rsvd4;
 	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
 };
 
@@ -1571,7 +1572,12 @@ struct ice_aqc_get_link_status_data {
 #define ICE_AQ_LINK_TX_ACTIVE		0
 #define ICE_AQ_LINK_TX_DRAINED		1
 #define ICE_AQ_LINK_TX_FLUSHED		3
-	u8 reserved2;
+	u8 lb_status;
+#define ICE_AQ_LINK_LB_PHY_LCL		BIT(0)
+#define ICE_AQ_LINK_LB_PHY_RMT		BIT(1)
+#define ICE_AQ_LINK_LB_MAC_LCL		BIT(2)
+#define ICE_AQ_LINK_LB_PHY_IDX_S	3
+#define ICE_AQ_LINK_LB_PHY_IDX_M	(0x7 << ICE_AQ_LB_PHY_IDX_S)
 	__le16 max_frame_size;
 	u8 cfg;
 #define ICE_AQ_LINK_25G_KR_FEC_EN	BIT(0)
@@ -1659,20 +1665,26 @@ struct ice_aqc_set_port_id_led {
 
 /* NVM Read command (indirect 0x0701)
  * NVM Erase commands (direct 0x0702)
- * NVM Update commands (indirect 0x0703)
+ * NVM Write commands (indirect 0x0703)
+ * NVM Write Activate commands (direct 0x0707)
+ * NVM Shadow RAM Dump commands (direct 0x0707)
  */
 struct ice_aqc_nvm {
 	__le16 offset_low;
 	u8 offset_high;
 	u8 cmd_flags;
 #define ICE_AQC_NVM_LAST_CMD		BIT(0)
-#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Update reply */
-#define ICE_AQC_NVM_PRESERVATION_S	1
+#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Write reply */
+#define ICE_AQC_NVM_PRESERVATION_S	1 /* Used by NVM Write Activate only */
 #define ICE_AQC_NVM_PRESERVATION_M	(3 << ICE_AQC_NVM_PRESERVATION_S)
 #define ICE_AQC_NVM_NO_PRESERVATION	(0 << ICE_AQC_NVM_PRESERVATION_S)
 #define ICE_AQC_NVM_PRESERVE_ALL	BIT(1)
 #define ICE_AQC_NVM_FACTORY_DEFAULT	(2 << ICE_AQC_NVM_PRESERVATION_S)
 #define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
+#define ICE_AQC_NVM_ACTIV_SEL_NVM	BIT(3) /* Write Activate/SR Dump only */
+#define ICE_AQC_NVM_ACTIV_SEL_OROM	BIT(4)
+#define ICE_AQC_NVM_ACTIV_SEL_EXT_TLV	BIT(5)
+#define ICE_AQC_NVM_ACTIV_SEL_MASK	MAKEMASK(0x7, 3)
 #define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
 	__le16 module_typeid;
 	__le16 length;
@@ -1832,7 +1844,7 @@ struct ice_aqc_get_cee_dcb_cfg_resp {
 };
 
 /* Set Local LLDP MIB (indirect 0x0A08)
- * Used to replace the local MIB of a given LLDP agent. e.g. DCBx
+ * Used to replace the local MIB of a given LLDP agent. e.g. DCBX
  */
 struct ice_aqc_lldp_set_local_mib {
 	u8 type;
@@ -1857,7 +1869,7 @@ struct ice_aqc_lldp_set_local_mib_resp {
 };
 
 /* Stop/Start LLDP Agent (direct 0x0A09)
- * Used for stopping/starting specific LLDP agent. e.g. DCBx.
+ * Used for stopping/starting specific LLDP agent. e.g. DCBX.
  * The same structure is used for the response, with the command field
  * being used as the status field.
  */
@@ -2321,6 +2333,7 @@ struct ice_aq_desc {
 		struct ice_aqc_set_mac_cfg set_mac_cfg;
 		struct ice_aqc_set_event_mask set_event_mask;
 		struct ice_aqc_get_link_status get_link_status;
+		struct ice_aqc_event_lan_overflow lan_overflow;
 	} params;
 };
 
@@ -2492,10 +2505,14 @@ enum ice_adminq_opc {
 	/* NVM commands */
 	ice_aqc_opc_nvm_read				= 0x0701,
 	ice_aqc_opc_nvm_erase				= 0x0702,
-	ice_aqc_opc_nvm_update				= 0x0703,
+	ice_aqc_opc_nvm_write				= 0x0703,
 	ice_aqc_opc_nvm_cfg_read			= 0x0704,
 	ice_aqc_opc_nvm_cfg_write			= 0x0705,
 	ice_aqc_opc_nvm_checksum			= 0x0706,
+	ice_aqc_opc_nvm_write_activate			= 0x0707,
+	ice_aqc_opc_nvm_sr_dump				= 0x0707,
+	ice_aqc_opc_nvm_save_factory_settings		= 0x0708,
+	ice_aqc_opc_nvm_update_empr			= 0x0709,
 
 	/* LLDP commands */
 	ice_aqc_opc_lldp_get_mib			= 0x0A00,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 28/66] net/ice/base: add hweight32 support
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (26 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 27/66] net/ice/base: add some minor features Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 29/66] net/ice/base: call out dev/func caps when printing Leyi Rong
                     ` (38 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong

Add API support for hweight32.
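The added helper is a straightforward loop-based population count. A standalone copy of the same logic, with the driver's u8/u32 typedefs aliased locally for illustration:

```c
#include <stdint.h>

typedef uint32_t u32;
typedef uint8_t u8;

/* Count the set bits in a 32-bit word, one bit per iteration,
 * mirroring the ice_hweight32 added by this patch.
 */
static inline u8 ice_hweight32(u32 num)
{
	u8 bits = 0;
	u32 i;

	for (i = 0; i < 32; i++) {
		bits += (u8)(num & 0x1);
		num >>= 1;
	}
	return bits;
}
```

A branch-per-bit loop is slower than a builtin popcount, but the base code is OS/compiler agnostic, so it avoids relying on compiler intrinsics.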

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_osdep.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
index d2d9238c7..ede893fc9 100644
--- a/drivers/net/ice/base/ice_osdep.h
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -267,6 +267,20 @@ ice_hweight8(u32 num)
 	return bits;
 }
 
+static inline u8
+ice_hweight32(u32 num)
+{
+	u8 bits = 0;
+	u32 i;
+
+	for (i = 0; i < 32; i++) {
+		bits += (u8)(num & 0x1);
+		num >>= 1;
+	}
+
+	return bits;
+}
+
 #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
 #define DELAY(x) rte_delay_us(x)
 #define ice_usec_delay(x) rte_delay_us(x)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 29/66] net/ice/base: call out dev/func caps when printing
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (27 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 28/66] net/ice/base: add hweight32 support Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 30/66] net/ice/base: add some minor features Leyi Rong
                     ` (37 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Anirudh Venkataramanan, Paul M Stillwell Jr

Add a "func cap" prefix when printing function capabilities, and a
"dev cap" prefix when printing device capabilities.
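Beyond the prefixes, the diff also recalculates port-dependent capabilities: when the valid-functions bitmap reports more than 4 functions, the advertised TC count is clamped to 4. A hypothetical, pared-down sketch of that adjustment:

```c
#include <stdint.h>

/* bit-count helper equivalent to ice_hweight32 */
static unsigned int hweight32(uint32_t num)
{
	unsigned int bits = 0;

	while (num) {
		bits += num & 1u;
		num >>= 1;
	}
	return bits;
}

/* Devices with more than 4 ports support at most 4 TCs per port, so
 * clamp maxtc based on the number of set bits in valid_functions.
 */
static uint8_t adjust_maxtc(uint32_t valid_functions, uint8_t maxtc)
{
	if (hweight32(valid_functions) > 4)
		return 4;
	return maxtc;
}
```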

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 75 ++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index b4a9172b9..6e5a60a38 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1948,6 +1948,7 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 	struct ice_hw_func_caps *func_p = NULL;
 	struct ice_hw_dev_caps *dev_p = NULL;
 	struct ice_hw_common_caps *caps;
+	char const *prefix;
 	u32 i;
 
 	if (!buf)
@@ -1958,9 +1959,11 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 	if (opc == ice_aqc_opc_list_dev_caps) {
 		dev_p = &hw->dev_caps;
 		caps = &dev_p->common_cap;
+		prefix = "dev cap";
 	} else if (opc == ice_aqc_opc_list_func_caps) {
 		func_p = &hw->func_caps;
 		caps = &func_p->common_cap;
+		prefix = "func cap";
 	} else {
 		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
 		return;
@@ -1976,21 +1979,25 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 		case ICE_AQC_CAPS_VALID_FUNCTIONS:
 			caps->valid_functions = number;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Valid Functions = %d\n",
+				  "%s: valid functions = %d\n", prefix,
 				  caps->valid_functions);
 			break;
 		case ICE_AQC_CAPS_VSI:
 			if (dev_p) {
 				dev_p->num_vsi_allocd_to_host = number;
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Dev.VSI cnt = %d\n",
+					  "%s: num VSI alloc to host = %d\n",
+					  prefix,
 					  dev_p->num_vsi_allocd_to_host);
 			} else if (func_p) {
 				func_p->guar_num_vsi =
 					ice_get_num_per_func(hw, ICE_MAX_VSI);
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Func.VSI cnt = %d\n",
-					  number);
+					  "%s: num guaranteed VSI (fw) = %d\n",
+					  prefix, number);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "%s: num guaranteed VSI = %d\n",
+					  prefix, func_p->guar_num_vsi);
 			}
 			break;
 		case ICE_AQC_CAPS_DCB:
@@ -1998,49 +2005,51 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 			caps->active_tc_bitmap = logical_id;
 			caps->maxtc = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: DCB = %d\n", caps->dcb);
+				  "%s: DCB = %d\n", prefix, caps->dcb);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Active TC bitmap = %d\n",
+				  "%s: active TC bitmap = %d\n", prefix,
 				  caps->active_tc_bitmap);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: TC Max = %d\n", caps->maxtc);
+				  "%s: TC max = %d\n", prefix, caps->maxtc);
 			break;
 		case ICE_AQC_CAPS_RSS:
 			caps->rss_table_size = number;
 			caps->rss_table_entry_width = logical_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: RSS table size = %d\n",
+				  "%s: RSS table size = %d\n", prefix,
 				  caps->rss_table_size);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: RSS table width = %d\n",
+				  "%s: RSS table width = %d\n", prefix,
 				  caps->rss_table_entry_width);
 			break;
 		case ICE_AQC_CAPS_RXQS:
 			caps->num_rxq = number;
 			caps->rxq_first_id = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+				  "%s: num Rx queues = %d\n", prefix,
+				  caps->num_rxq);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Rx first queue ID = %d\n",
+				  "%s: Rx first queue ID = %d\n", prefix,
 				  caps->rxq_first_id);
 			break;
 		case ICE_AQC_CAPS_TXQS:
 			caps->num_txq = number;
 			caps->txq_first_id = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+				  "%s: num Tx queues = %d\n", prefix,
+				  caps->num_txq);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Tx first queue ID = %d\n",
+				  "%s: Tx first queue ID = %d\n", prefix,
 				  caps->txq_first_id);
 			break;
 		case ICE_AQC_CAPS_MSIX:
 			caps->num_msix_vectors = number;
 			caps->msix_vector_first_id = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: MSIX vector count = %d\n",
+				  "%s: MSIX vector count = %d\n", prefix,
 				  caps->num_msix_vectors);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: MSIX first vector index = %d\n",
+				  "%s: MSIX first vector index = %d\n", prefix,
 				  caps->msix_vector_first_id);
 			break;
 		case ICE_AQC_CAPS_FD:
@@ -2050,7 +2059,7 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 			if (dev_p) {
 				dev_p->num_flow_director_fltr = number;
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Dev.fd_fltr =%d\n",
+					  "%s: num FD filters = %d\n", prefix,
 					  dev_p->num_flow_director_fltr);
 			}
 			if (func_p) {
@@ -2063,32 +2072,38 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 				      GLQF_FD_SIZE_FD_BSIZE_S;
 				func_p->fd_fltr_best_effort = val;
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW:func.fd_fltr guar= %d\n",
-					  func_p->fd_fltr_guar);
+					  "%s: num guaranteed FD filters = %d\n",
+					  prefix, func_p->fd_fltr_guar);
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW:func.fd_fltr best effort=%d\n",
-					  func_p->fd_fltr_best_effort);
+					  "%s: num best effort FD filters = %d\n",
+					  prefix, func_p->fd_fltr_best_effort);
 			}
 			break;
 		}
 		case ICE_AQC_CAPS_MAX_MTU:
 			caps->max_mtu = number;
-			if (dev_p)
-				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Dev.MaxMTU = %d\n",
-					  caps->max_mtu);
-			else if (func_p)
-				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: func.MaxMTU = %d\n",
-					  caps->max_mtu);
+			ice_debug(hw, ICE_DBG_INIT, "%s: max MTU = %d\n",
+				  prefix, caps->max_mtu);
 			break;
 		default:
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
-				  cap);
+				  "%s: unknown capability[%d]: 0x%x\n", prefix,
+				  i, cap);
 			break;
 		}
 	}
+
+	/* Re-calculate capabilities that are dependent on the number of
+	 * physical ports; i.e. some features are not supported or function
+	 * differently on devices with more than 4 ports.
+	 */
+	if (caps && (ice_hweight32(caps->valid_functions) > 4)) {
+		/* Max 4 TCs per port */
+		caps->maxtc = 4;
+		ice_debug(hw, ICE_DBG_INIT,
+			  "%s: TC max = %d (based on #ports)\n", prefix,
+			  caps->maxtc);
+	}
 }
 
 /**
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 30/66] net/ice/base: add some minor features
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (28 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 29/66] net/ice/base: call out dev/func caps when printing Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 16:30     ` Stillwell Jr, Paul M
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 31/66] net/ice/base: cleanup update link info Leyi Rong
                     ` (36 subsequent siblings)
  66 siblings, 1 reply; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

1. Disable the Tx pacing option.
2. Use a different ICE_DBG bit for firmware log messages.
3. Always set prefena when configuring an Rx queue.
4. Make FDID available for the Flex Descriptor.
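The prefena change in ice_write_rxq_ctx reduces to two steps: reject a NULL context, then force descriptor prefetch on before packing the context for hardware. A hypothetical minimal model (the struct is trimmed to the relevant field; the real function also packs and copies the context to HW registers):

```c
#include <stddef.h>
#include <stdint.h>

struct rlan_ctx {
	uint16_t lrxqthresh;
	uint8_t prefena;	/* normally must be set to 1 at init */
};

/* Model of the patched ice_write_rxq_ctx: NULL-check the context and
 * enable prefetch so the hardware fetches descriptors ahead of demand.
 */
static int write_rxq_ctx(struct rlan_ctx *ctx)
{
	if (!ctx)
		return -1;	/* like ICE_ERR_BAD_PTR */

	ctx->prefena = 1;
	/* ...convert the sparse context to the dense HW layout here... */
	return 0;
}
```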

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c    | 44 +++++++++++++---------------
 drivers/net/ice/base/ice_fdir.c      |  2 +-
 drivers/net/ice/base/ice_lan_tx_rx.h |  3 +-
 drivers/net/ice/base/ice_type.h      |  2 +-
 4 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 6e5a60a38..89c922bed 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -449,11 +449,7 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd)
 {
 	u16 fc_threshold_val, tx_timer_val;
 	struct ice_aqc_set_mac_cfg *cmd;
-	struct ice_port_info *pi;
 	struct ice_aq_desc desc;
-	enum ice_status status;
-	u8 port_num = 0;
-	bool link_up;
 	u32 reg_val;
 
 	cmd = &desc.params.set_mac_cfg;
@@ -465,21 +461,6 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd)
 
 	cmd->max_frame_size = CPU_TO_LE16(max_frame_size);
 
-	/* Retrieve the current data_pacing value in FW*/
-	pi = &hw->port_info[port_num];
-
-	/* We turn on the get_link_info so that ice_update_link_info(...)
-	 * can be called.
-	 */
-	pi->phy.get_link_info = 1;
-
-	status = ice_get_link_status(pi, &link_up);
-
-	if (status)
-		return status;
-
-	cmd->params = pi->phy.link_info.pacing;
-
 	/* We read back the transmit timer and fc threshold value of
 	 * LFC. Thus, we will use index =
 	 * PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX.
@@ -544,7 +525,15 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
 	}
 	recps = hw->switch_info->recp_list;
 	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		struct ice_recp_grp_entry *rg_entry, *tmprg_entry;
+
 		recps[i].root_rid = i;
+		LIST_FOR_EACH_ENTRY_SAFE(rg_entry, tmprg_entry,
+					 &recps[i].rg_list, ice_recp_grp_entry,
+					 l_entry) {
+			LIST_DEL(&rg_entry->l_entry);
+			ice_free(hw, rg_entry);
+		}
 
 		if (recps[i].adv_rule) {
 			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
@@ -571,6 +560,8 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
 				ice_free(hw, lst_itr);
 			}
 		}
+		if (recps[i].root_buf)
+			ice_free(hw, recps[i].root_buf);
 	}
 	ice_rm_all_sw_replay_rule_info(hw);
 	ice_free(hw, sw->recp_list);
@@ -789,10 +780,10 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
  */
 void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
 {
-	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
-	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+	ice_debug(hw, ICE_DBG_FW_LOG, "[ FW Log Msg Start ]\n");
+	ice_debug_array(hw, ICE_DBG_FW_LOG, 16, 1, (u8 *)buf,
 			LE16_TO_CPU(desc->datalen));
-	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+	ice_debug(hw, ICE_DBG_FW_LOG, "[ FW Log Msg End ]\n");
 }
 
 /**
@@ -1213,6 +1204,7 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
 	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
 	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
 	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
+	ICE_CTX_STORE(ice_rlan_ctx, prefena,		1,	201),
 	{ 0 }
 };
 
@@ -1223,7 +1215,8 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
  * @rxq_index: the index of the Rx queue
  *
  * Converts rxq context from sparse to dense structure and then writes
- * it to HW register space
+ * it to HW register space and enables the hardware to prefetch descriptors
+ * instead of only fetching them on demand
  */
 enum ice_status
 ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
@@ -1231,6 +1224,11 @@ ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
 {
 	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
 
+	if (!rlan_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	rlan_ctx->prefena = 1;
+
 	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
 	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
 }
diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index 4bc8e6dcb..bde676a8f 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -186,7 +186,7 @@ ice_set_dflt_val_fd_desc(struct ice_fd_fltr_desc_ctx *fd_fltr_ctx)
 	fd_fltr_ctx->desc_prof_prio = ICE_FXD_FLTR_QW1_PROF_PRIO_ZERO;
 	fd_fltr_ctx->desc_prof = ICE_FXD_FLTR_QW1_PROF_ZERO;
 	fd_fltr_ctx->swap = ICE_FXD_FLTR_QW1_SWAP_SET;
-	fd_fltr_ctx->fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ZERO;
+	fd_fltr_ctx->fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ONE;
 	fd_fltr_ctx->fdid_mdid = ICE_FXD_FLTR_QW1_FDID_MDID_FD;
 	fd_fltr_ctx->fdid = ICE_FXD_FLTR_QW1_FDID_ZERO;
 }
diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
index 8c9902994..fa2309bf1 100644
--- a/drivers/net/ice/base/ice_lan_tx_rx.h
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -162,7 +162,7 @@ struct ice_fltr_desc {
 
 #define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
 #define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
-#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
+#define ICE_FXD_FLTR_QW1_FDID_PRI_ONE	0x1ULL
 
 #define ICE_FXD_FLTR_QW1_FDID_MDID_S	28
 #define ICE_FXD_FLTR_QW1_FDID_MDID_M	(0xFULL << ICE_FXD_FLTR_QW1_FDID_MDID_S)
@@ -807,6 +807,7 @@ struct ice_rlan_ctx {
 	u8 tphdata_ena;
 	u8 tphhead_ena;
 	u16 lrxqthresh; /* bigger than needed, see above for reason */
+	u8 prefena;	/* NOTE: normally must be set to 1 at init */
 };
 
 struct ice_ctx_ele {
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 477f34595..116cfe647 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -82,7 +82,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 /* debug masks - set these bits in hw->debug_mask to control output */
 #define ICE_DBG_INIT		BIT_ULL(1)
 #define ICE_DBG_RELEASE		BIT_ULL(2)
-
+#define ICE_DBG_FW_LOG		BIT_ULL(3)
 #define ICE_DBG_LINK		BIT_ULL(4)
 #define ICE_DBG_PHY		BIT_ULL(5)
 #define ICE_DBG_QCTX		BIT_ULL(6)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 31/66] net/ice/base: cleanup update link info
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (29 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 30/66] net/ice/base: add some minor features Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 32/66] net/ice/base: add rd64 support Leyi Rong
                     ` (35 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Chinh T Cao, Paul M Stillwell Jr

1. Do not unnecessarily initialize a local variable.
2. Clean up ice_update_link_info.
3. Don't clear the auto FEC bit in ice_cfg_phy_fec.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 52 ++++++++++++++-----------------
 1 file changed, 24 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 89c922bed..db3acc040 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -2414,10 +2414,10 @@ void
 ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
 		    u16 link_speeds_bitmap)
 {
-	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
 	u64 pt_high;
 	u64 pt_low;
 	int index;
+	u16 speed;
 
 	/* We first check with low part of phy_type */
 	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
@@ -2498,38 +2498,38 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
  */
 enum ice_status ice_update_link_info(struct ice_port_info *pi)
 {
-	struct ice_aqc_get_phy_caps_data *pcaps;
-	struct ice_phy_info *phy_info;
+	struct ice_link_status *li;
 	enum ice_status status;
-	struct ice_hw *hw;
 
 	if (!pi)
 		return ICE_ERR_PARAM;
 
-	hw = pi->hw;
-
-	pcaps = (struct ice_aqc_get_phy_caps_data *)
-		ice_malloc(hw, sizeof(*pcaps));
-	if (!pcaps)
-		return ICE_ERR_NO_MEMORY;
+	li = &pi->phy.link_info;
 
-	phy_info = &pi->phy;
 	status = ice_aq_get_link_info(pi, true, NULL, NULL);
 	if (status)
-		goto out;
+		return status;
+
+	if (li->link_info & ICE_AQ_MEDIA_AVAILABLE) {
+		struct ice_aqc_get_phy_caps_data *pcaps;
+		struct ice_hw *hw;
 
-	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
-		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+		hw = pi->hw;
+		pcaps = (struct ice_aqc_get_phy_caps_data *)
+			ice_malloc(hw, sizeof(*pcaps));
+		if (!pcaps)
+			return ICE_ERR_NO_MEMORY;
+
+		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP,
 					     pcaps, NULL);
-		if (status)
-			goto out;
+		if (status == ICE_SUCCESS)
+			ice_memcpy(li->module_type, &pcaps->module_type,
+				   sizeof(li->module_type),
+				   ICE_NONDMA_TO_NONDMA);
 
-		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
-			   sizeof(phy_info->link_info.module_type),
-			   ICE_NONDMA_TO_NONDMA);
+		ice_free(hw, pcaps);
 	}
-out:
-	ice_free(hw, pcaps);
+
 	return status;
 }
 
@@ -2792,27 +2792,24 @@ ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec)
 {
 	switch (fec) {
 	case ICE_FEC_BASER:
-		/* Clear auto FEC and RS bits, and AND BASE-R ability
+		/* Clear RS bits, and AND BASE-R ability
 		 * bits and OR request bits.
 		 */
-		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
 		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
 				     ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
 		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
 				     ICE_AQC_PHY_FEC_25G_KR_REQ;
 		break;
 	case ICE_FEC_RS:
-		/* Clear auto FEC and BASE-R bits, and AND RS ability
+		/* Clear BASE-R bits, and AND RS ability
 		 * bits and OR request bits.
 		 */
-		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
 		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
 		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
 				     ICE_AQC_PHY_FEC_25G_RS_544_REQ;
 		break;
 	case ICE_FEC_NONE:
-		/* Clear auto FEC and all FEC option bits. */
-		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		/* Clear all FEC option bits. */
 		cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
 		break;
 	case ICE_FEC_AUTO:
@@ -3912,7 +3909,6 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
 	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
 		return ICE_ERR_CFG;
 
-
 	if (!num_queues) {
 		/* if queue is disabled already yet the disable queue command
 		 * has to be sent to complete the VF reset, then call
-- 
2.17.1


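The shape of the ice_update_link_info cleanup above (validate first, return early on error, confine the capability-buffer allocation to the one branch that needs it) can be sketched in isolation. The types, status codes, and helper below are illustrative stand-ins, not the driver's own API:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-ins for the driver's status codes. */
enum status { OK = 0, ERR_PARAM = -1, ERR_NO_MEMORY = -2 };

struct caps { int module_type; };

static enum status update_link_info(int *port, int media_available)
{
	if (!port)
		return ERR_PARAM;	/* validate before doing any work */

	if (media_available) {
		/* allocate only in the branch that uses the buffer */
		struct caps *pcaps = malloc(sizeof(*pcaps));

		if (!pcaps)
			return ERR_NO_MEMORY;

		*port = 1;		/* stand-in for copying module_type */
		free(pcaps);		/* freed on the same path; no goto label needed */
	}

	return OK;
}
```

Because every error path returns before the allocation happens, the shared `out:` label and unconditional `ice_free` of the original become unnecessary.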

* [dpdk-dev] [PATCH v2 32/66] net/ice/base: add rd64 support
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (30 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 31/66] net/ice/base: cleanup update link info Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 33/66] net/ice/base: track HW stat registers past rollover Leyi Rong
                     ` (34 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong

Add API support for rd64.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_osdep.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
index ede893fc9..35a17b941 100644
--- a/drivers/net/ice/base/ice_osdep.h
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -126,11 +126,19 @@ do {									\
 #define ICE_PCI_REG(reg)     rte_read32(reg)
 #define ICE_PCI_REG_ADDR(a, reg) \
 	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+#define ICE_PCI_REG64(reg)     rte_read64(reg)
+#define ICE_PCI_REG_ADDR64(a, reg) \
+	((volatile uint64_t *)((char *)(a)->hw_addr + (reg)))
 static inline uint32_t ice_read_addr(volatile void *addr)
 {
 	return rte_le_to_cpu_32(ICE_PCI_REG(addr));
 }
 
+static inline uint64_t ice_read_addr64(volatile void *addr)
+{
+	return rte_le_to_cpu_64(ICE_PCI_REG64(addr));
+}
+
 #define ICE_PCI_REG_WRITE(reg, value) \
 	rte_write32((rte_cpu_to_le_32(value)), reg)
 
@@ -145,6 +153,7 @@ static inline uint32_t ice_read_addr(volatile void *addr)
 	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((a), (reg)), (value))
 #define flush(a) ice_read_addr(ICE_PCI_REG_ADDR((a), (GLGEN_STAT)))
 #define div64_long(n, d) ((n) / (d))
+#define rd64(a, reg) ice_read_addr64(ICE_PCI_REG_ADDR64((a), (reg)))
 
 #define BITS_PER_BYTE       8
 
-- 
2.17.1



* [dpdk-dev] [PATCH v2 33/66] net/ice/base: track HW stat registers past rollover
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (31 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 32/66] net/ice/base: add rd64 support Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 34/66] net/ice/base: implement LLDP persistent settings Leyi Rong
                     ` (33 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Modify ice_stat_update40 to use rd64 instead of two calls to rd32.
Additionally, drop the now unnecessary hireg function parameter.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 57 +++++++++++++++++++------------
 drivers/net/ice/base/ice_common.h |  8 ++---
 2 files changed, 38 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index db3acc040..f9a5d43e6 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -4087,40 +4087,44 @@ void ice_replay_post(struct ice_hw *hw)
 /**
  * ice_stat_update40 - read 40 bit stat from the chip and update stat values
  * @hw: ptr to the hardware info
- * @hireg: high 32 bit HW register to read from
- * @loreg: low 32 bit HW register to read from
+ * @reg: offset of 64 bit HW register to read from
  * @prev_stat_loaded: bool to specify if previous stats are loaded
  * @prev_stat: ptr to previous loaded stat value
  * @cur_stat: ptr to current stat value
  */
 void
-ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
-		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
+ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat)
 {
-	u64 new_data;
-
-	new_data = rd32(hw, loreg);
-	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
+	u64 new_data = rd64(hw, reg) & (BIT_ULL(40) - 1);
 
 	/* device stats are not reset at PFR, they likely will not be zeroed
-	 * when the driver starts. So save the first values read and use them as
-	 * offsets to be subtracted from the raw values in order to report stats
-	 * that count from zero.
+	 * when the driver starts. Thus, save the value from the first read
+	 * without adding to the statistic value so that we report stats which
+	 * count up from zero.
 	 */
-	if (!prev_stat_loaded)
+	if (!prev_stat_loaded) {
 		*prev_stat = new_data;
+		return;
+	}
+
+	/* Calculate the difference between the new and old values, and then
+	 * add it to the software stat value.
+	 */
 	if (new_data >= *prev_stat)
-		*cur_stat = new_data - *prev_stat;
+		*cur_stat += new_data - *prev_stat;
 	else
 		/* to manage the potential roll-over */
-		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
-	*cur_stat &= 0xFFFFFFFFFFULL;
+		*cur_stat += (new_data + BIT_ULL(40)) - *prev_stat;
+
+	/* Update the previously stored value to prepare for next read */
+	*prev_stat = new_data;
 }
 
 /**
  * ice_stat_update32 - read 32 bit stat from the chip and update stat values
  * @hw: ptr to the hardware info
- * @reg: HW register to read from
+ * @reg: offset of HW register to read from
  * @prev_stat_loaded: bool to specify if previous stats are loaded
  * @prev_stat: ptr to previous loaded stat value
  * @cur_stat: ptr to current stat value
@@ -4134,17 +4138,26 @@ ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
 	new_data = rd32(hw, reg);
 
 	/* device stats are not reset at PFR, they likely will not be zeroed
-	 * when the driver starts. So save the first values read and use them as
-	 * offsets to be subtracted from the raw values in order to report stats
-	 * that count from zero.
+	 * when the driver starts. Thus, save the value from the first read
+	 * without adding to the statistic value so that we report stats which
+	 * count up from zero.
 	 */
-	if (!prev_stat_loaded)
+	if (!prev_stat_loaded) {
 		*prev_stat = new_data;
+		return;
+	}
+
+	/* Calculate the difference between the new and old values, and then
+	 * add it to the software stat value.
+	 */
 	if (new_data >= *prev_stat)
-		*cur_stat = new_data - *prev_stat;
+		*cur_stat += new_data - *prev_stat;
 	else
 		/* to manage the potential roll-over */
-		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
+		*cur_stat += (new_data + BIT_ULL(32)) - *prev_stat;
+
+	/* Update the previously stored value to prepare for next read */
+	*prev_stat = new_data;
 }
 
 /**
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 2ea4a6e8e..32d0e90c3 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -6,7 +6,6 @@
 #define _ICE_COMMON_H_
 
 #include "ice_type.h"
-
 #include "ice_flex_pipe.h"
 #include "ice_switch.h"
 #include "ice_fdir.h"
@@ -34,8 +33,7 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		  struct ice_rq_event_info *e, u16 *pending);
 enum ice_status
 ice_get_link_status(struct ice_port_info *pi, bool *link_up);
-enum ice_status
-ice_update_link_info(struct ice_port_info *pi);
+enum ice_status ice_update_link_info(struct ice_port_info *pi);
 enum ice_status
 ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 		enum ice_aq_res_access_type access, u32 timeout);
@@ -200,8 +198,8 @@ ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
 enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
 void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
 void
-ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
-		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
+ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat);
 void
 ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
 		  u64 *prev_stat, u64 *cur_stat);
-- 
2.17.1


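For readers following the diff, the rollover-safe accumulation that ice_stat_update40 performs after this patch can be modeled in isolation. The names below are illustrative stand-ins, not the driver's API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define STAT40_WRAP (1ULL << 40)	/* 40-bit hardware counter period */

static void stat_update40(uint64_t reg_val, bool prev_stat_loaded,
			  uint64_t *prev_stat, uint64_t *cur_stat)
{
	uint64_t new_data = reg_val & (STAT40_WRAP - 1);

	/* First read after load: seed the baseline, report nothing yet. */
	if (!prev_stat_loaded) {
		*prev_stat = new_data;
		return;
	}

	/* Add the delta since the last read; if the 40-bit hardware
	 * counter wrapped, new_data < *prev_stat and we compensate.
	 */
	if (new_data >= *prev_stat)
		*cur_stat += new_data - *prev_stat;
	else
		*cur_stat += (new_data + STAT40_WRAP) - *prev_stat;

	/* Remember this read for the next delta. */
	*prev_stat = new_data;
}
```

The key behavioral change versus the pre-patch code: the software counter is accumulated (`+=`) across reads rather than recomputed from the first baseline, so it keeps counting correctly past any number of hardware rollovers.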

* [dpdk-dev] [PATCH v2 34/66] net/ice/base: implement LLDP persistent settings
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (32 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 33/66] net/ice/base: track HW stat registers past rollover Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 35/66] net/ice/base: check new FD filter duplicate location Leyi Rong
                     ` (32 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jaroslaw Ilgiewicz, Paul M Stillwell Jr

This patch implements persistent (across reboots) start and stop of
the LLDP agent. An additional function parameter has been added to
ice_aq_start_lldp and ice_aq_stop_lldp.

Signed-off-by: Jaroslaw Ilgiewicz <jaroslaw.ilgiewicz@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 33 ++++++++++++++++++++++-----------
 drivers/net/ice/base/ice_dcb.h |  9 ++++-----
 drivers/net/ice/ice_ethdev.c   |  2 +-
 3 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index 100c4bb0f..008c7a110 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -83,12 +83,14 @@ ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
  * @hw: pointer to the HW struct
  * @shutdown_lldp_agent: True if LLDP Agent needs to be Shutdown
  *			 False if LLDP Agent needs to be Stopped
+ * @persist: True if Stop/Shutdown of LLDP Agent needs to be persistent across
+ *	     reboots
  * @cd: pointer to command details structure or NULL
  *
  * Stop or Shutdown the embedded LLDP Agent (0x0A05)
  */
 enum ice_status
-ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
+ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
 		 struct ice_sq_cd *cd)
 {
 	struct ice_aqc_lldp_stop *cmd;
@@ -101,17 +103,22 @@ ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
 	if (shutdown_lldp_agent)
 		cmd->command |= ICE_AQ_LLDP_AGENT_SHUTDOWN;
 
+	if (persist)
+		cmd->command |= ICE_AQ_LLDP_AGENT_PERSIST_DIS;
+
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
 /**
  * ice_aq_start_lldp
  * @hw: pointer to the HW struct
+ * @persist: True if Start of LLDP Agent needs to be persistent across reboots
  * @cd: pointer to command details structure or NULL
  *
  * Start the embedded LLDP Agent on all ports. (0x0A06)
  */
-enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd)
+enum ice_status
+ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd)
 {
 	struct ice_aqc_lldp_start *cmd;
 	struct ice_aq_desc desc;
@@ -122,6 +129,9 @@ enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd)
 
 	cmd->command = ICE_AQ_LLDP_AGENT_START;
 
+	if (persist)
+		cmd->command |= ICE_AQ_LLDP_AGENT_PERSIST_ENA;
+
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
@@ -615,7 +625,8 @@ ice_parse_org_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
  *
  * Parse DCB configuration from the LLDPDU
  */
-enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
+enum ice_status
+ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
 {
 	struct ice_lldp_org_tlv *tlv;
 	enum ice_status ret = ICE_SUCCESS;
@@ -659,7 +670,7 @@ enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
 /**
  * ice_aq_get_dcb_cfg
  * @hw: pointer to the HW struct
- * @mib_type: mib type for the query
+ * @mib_type: MIB type for the query
  * @bridgetype: bridge type for the query (remote)
  * @dcbcfg: store for LLDPDU data
  *
@@ -690,13 +701,13 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 }
 
 /**
- * ice_aq_start_stop_dcbx - Start/Stop DCBx service in FW
+ * ice_aq_start_stop_dcbx - Start/Stop DCBX service in FW
  * @hw: pointer to the HW struct
- * @start_dcbx_agent: True if DCBx Agent needs to be started
- *		      False if DCBx Agent needs to be stopped
- * @dcbx_agent_status: FW indicates back the DCBx agent status
- *		       True if DCBx Agent is active
- *		       False if DCBx Agent is stopped
+ * @start_dcbx_agent: True if DCBX Agent needs to be started
+ *		      False if DCBX Agent needs to be stopped
+ * @dcbx_agent_status: FW indicates back the DCBX agent status
+ *		       True if DCBX Agent is active
+ *		       False if DCBX Agent is stopped
  * @cd: pointer to command details structure or NULL
  *
  * Start/Stop the embedded dcbx Agent. In case that this wrapper function
@@ -1236,7 +1247,7 @@ ice_add_dcb_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg,
 /**
  * ice_dcb_cfg_to_lldp - Convert DCB configuration to MIB format
  * @lldpmib: pointer to the HW struct
- * @miblen: length of LLDP mib
+ * @miblen: length of LLDP MIB
  * @dcbcfg: Local store which holds the DCB Config
  *
  * Convert the DCB configuration to MIB format
diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h
index 65d2bafef..47127096b 100644
--- a/drivers/net/ice/base/ice_dcb.h
+++ b/drivers/net/ice/base/ice_dcb.h
@@ -114,7 +114,6 @@ struct ice_lldp_org_tlv {
 	__be32 ouisubtype;
 	u8 tlvinfo[1];
 };
-
 #pragma pack()
 
 struct ice_cee_tlv_hdr {
@@ -147,7 +146,6 @@ struct ice_cee_app_prio {
 	__be16 lower_oui;
 	u8 prio_map;
 };
-
 #pragma pack()
 
 /* TODO: The below structures related LLDP/DCBX variables
@@ -190,8 +188,8 @@ enum ice_status
 ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
 		       struct ice_aqc_get_cee_dcb_cfg_resp *buff,
 		       struct ice_sq_cd *cd);
-u8 ice_get_dcbx_status(struct ice_hw *hw);
 enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg);
+u8 ice_get_dcbx_status(struct ice_hw *hw);
 enum ice_status
 ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 		   struct ice_dcbx_cfg *dcbcfg);
@@ -211,9 +209,10 @@ enum ice_status
 ice_update_port_tc_tree_cfg(struct ice_port_info *pi,
 			    struct ice_aqc_port_ets_elem *buf);
 enum ice_status
-ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
+ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
 		 struct ice_sq_cd *cd);
-enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd);
 enum ice_status
 ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
 		       bool *dcbx_agent_status, struct ice_sq_cd *cd);
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 962d506a1..b14bc7102 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1447,7 +1447,7 @@ ice_dev_init(struct rte_eth_dev *dev)
 	/* Disable double vlan by default */
 	ice_vsi_config_double_vlan(vsi, FALSE);
 
-	ret = ice_aq_stop_lldp(hw, TRUE, NULL);
+	ret = ice_aq_stop_lldp(hw, TRUE, FALSE, NULL);
 	if (ret != ICE_SUCCESS)
 		PMD_INIT_LOG(DEBUG, "lldp has already stopped\n");
 
-- 
2.17.1



* [dpdk-dev] [PATCH v2 35/66] net/ice/base: check new FD filter duplicate location
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (33 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 34/66] net/ice/base: implement LLDP persistent settings Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 36/66] net/ice/base: correct UDP/TCP PTYPE assignments Leyi Rong
                     ` (31 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Karol Kolacinski, Paul M Stillwell Jr

The ice_fdir_is_dup_fltr function tests whether a new Flow Director
rule duplicates an existing one, now taking the filter's location into
account: a rule with the same filter ID but a different queue index is
not treated as a duplicate.

Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_fdir.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index bde676a8f..9ef91b3b8 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -692,8 +692,13 @@ bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input)
 				ret = ice_fdir_comp_rules(rule, input, false);
 			else
 				ret = ice_fdir_comp_rules(rule, input, true);
-			if (ret)
-				break;
+			if (ret) {
+				if (rule->fltr_id == input->fltr_id &&
+				    rule->q_index != input->q_index)
+					ret = false;
+				else
+					break;
+			}
 		}
 	}
 
-- 
2.17.1


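The refined duplicate test in the hunk above can be sketched as a standalone comparator. The struct layout and helper name here are illustrative, not the driver's types:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct fdir_rule {
	uint32_t match_key;	/* stands in for the compared match fields */
	uint32_t fltr_id;
	uint16_t q_index;
};

static bool fdir_is_dup(const struct fdir_rule *rule,
			const struct fdir_rule *input)
{
	if (rule->match_key != input->match_key)
		return false;	/* match fields differ: not a duplicate */

	/* Same filter ID steered to a different queue: this is the same
	 * filter being re-programmed, so do not report a duplicate.
	 */
	if (rule->fltr_id == input->fltr_id &&
	    rule->q_index != input->q_index)
		return false;

	return true;
}
```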

* [dpdk-dev] [PATCH v2 36/66] net/ice/base: correct UDP/TCP PTYPE assignments
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (34 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 35/66] net/ice/base: check new FD filter duplicate location Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 37/66] net/ice/base: calculate rate limit burst size correctly Leyi Rong
                     ` (30 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

1. Use the UDP-IL PTYPE list when processing packet segments, as it
contains all UDP PTYPEs; this allows packets to be forwarded to the
associated VSIs, since switch rules are based on outer IPs.
2. Add PTYPE 0x088 to the TCP PTYPE bitmap list.

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 1ec49fcd9..36657a1a3 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -195,21 +195,11 @@ static const u32 ice_ptypes_arp_of[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for packets with an Outermost/First UDP header */
-static const u32 ice_ptypes_udp_of[] = {
-	0x81000000, 0x00000000, 0x04000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-};
-
-/* Packet types for packets with an Innermost/Last UDP header */
+/* UDP Packet types for non-tunneled packets or tunneled
+ * packets with inner UDP.
+ */
 static const u32 ice_ptypes_udp_il[] = {
-	0x80000000, 0x20204040, 0x00081010, 0x80810102,
+	0x81000000, 0x20204040, 0x04081010, 0x80810102,
 	0x00204040, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -222,7 +212,7 @@ static const u32 ice_ptypes_udp_il[] = {
 /* Packet types for packets with an Innermost/Last TCP header */
 static const u32 ice_ptypes_tcp_il[] = {
 	0x04000000, 0x80810102, 0x10204040, 0x42040408,
-	0x00810002, 0x00000000, 0x00000000, 0x00000000,
+	0x00810102, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -473,8 +463,7 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 				       ICE_FLOW_PTYPE_MAX);
 			hdrs &= ~ICE_FLOW_SEG_HDR_ICMP;
 		} else if (hdrs & ICE_FLOW_SEG_HDR_UDP) {
-			src = !i ? (const ice_bitmap_t *)ice_ptypes_udp_of :
-				(const ice_bitmap_t *)ice_ptypes_udp_il;
+			src = (const ice_bitmap_t *)ice_ptypes_udp_il;
 			ice_and_bitmap(params->ptypes, params->ptypes, src,
 				       ICE_FLOW_PTYPE_MAX);
 			hdrs &= ~ICE_FLOW_SEG_HDR_UDP;
-- 
2.17.1


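For reference, the ice_ptypes_* tables above are bitmaps: each 32-bit word covers 32 packet types, so PTYPE n lives at bit (n % 32) of word (n / 32). PTYPE 0x088 (136) is word 4, bit 8, which is exactly the 0x00810002 to 0x00810102 change in the TCP hunk. The helper and sample array below are illustrative, not driver code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* First 8 words of the patched TCP-IL bitmap, copied from the hunk. */
static const uint32_t sample_tcp_il[8] = {
	0x04000000, 0x80810102, 0x10204040, 0x42040408,
	0x00810102, 0x00000000, 0x00000000, 0x00000000,
};

/* Test whether a packet type is present in such a bitmap. */
static bool ptype_test(const uint32_t *bitmap, unsigned int ptype)
{
	return (bitmap[ptype / 32] >> (ptype % 32)) & 1u;
}
```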

* [dpdk-dev] [PATCH v2 37/66] net/ice/base: calculate rate limit burst size correctly
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (35 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 36/66] net/ice/base: correct UDP/TCP PTYPE assignments Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 38/66] net/ice/base: add lock around profile map list Leyi Rong
                     ` (29 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Ben Shelton, Paul M Stillwell Jr

When the MSB is not set, the lower 11 bits do not represent bytes, but
chunks of 64 bytes. Adjust the rate limit burst size calculation
accordingly, and update the comments to indicate the way the hardware
actually works.

Signed-off-by: Ben Shelton <benjamin.h.shelton@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 17 ++++++++---------
 drivers/net/ice/base/ice_sched.h | 14 ++++++++------
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 0c1c18ba1..a72e72982 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -5060,16 +5060,15 @@ enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes)
 	if (bytes < ICE_MIN_BURST_SIZE_ALLOWED ||
 	    bytes > ICE_MAX_BURST_SIZE_ALLOWED)
 		return ICE_ERR_PARAM;
-	if (bytes <= ICE_MAX_BURST_SIZE_BYTE_GRANULARITY) {
-		/* byte granularity case */
+	if (ice_round_to_num(bytes, 64) <=
+	    ICE_MAX_BURST_SIZE_64_BYTE_GRANULARITY) {
+		/* 64 byte granularity case */
 		/* Disable MSB granularity bit */
-		burst_size_to_prog = ICE_BYTE_GRANULARITY;
-		/* round number to nearest 256 granularity */
-		bytes = ice_round_to_num(bytes, 256);
-		/* check rounding doesn't go beyond allowed */
-		if (bytes > ICE_MAX_BURST_SIZE_BYTE_GRANULARITY)
-			bytes = ICE_MAX_BURST_SIZE_BYTE_GRANULARITY;
-		burst_size_to_prog |= (u16)bytes;
+		burst_size_to_prog = ICE_64_BYTE_GRANULARITY;
+		/* round number to nearest 64 byte granularity */
+		bytes = ice_round_to_num(bytes, 64);
+		/* The value is in 64 byte chunks */
+		burst_size_to_prog |= (u16)(bytes / 64);
 	} else {
 		/* k bytes granularity case */
 		/* Enable MSB granularity bit */
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 56f9977ab..e444dc880 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -13,14 +13,16 @@
 #define ICE_SCHED_INVAL_LAYER_NUM	0xFF
 /* Burst size is a 12 bits register that is configured while creating the RL
  * profile(s). MSB is a granularity bit and tells the granularity type
- * 0 - LSB bits are in bytes granularity
+ * 0 - LSB bits are in 64 bytes granularity
  * 1 - LSB bits are in 1K bytes granularity
  */
-#define ICE_BYTE_GRANULARITY			0
-#define ICE_KBYTE_GRANULARITY			0x800
-#define ICE_MIN_BURST_SIZE_ALLOWED		1 /* In Bytes */
-#define ICE_MAX_BURST_SIZE_ALLOWED		(2047 * 1024) /* In Bytes */
-#define ICE_MAX_BURST_SIZE_BYTE_GRANULARITY	2047 /* In Bytes */
+#define ICE_64_BYTE_GRANULARITY			0
+#define ICE_KBYTE_GRANULARITY			BIT(11)
+#define ICE_MIN_BURST_SIZE_ALLOWED		64 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_ALLOWED \
+	((BIT(11) - 1) * 1024) /* In Bytes */
+#define ICE_MAX_BURST_SIZE_64_BYTE_GRANULARITY \
+	((BIT(11) - 1) * 64) /* In Bytes */
 #define ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY	ICE_MAX_BURST_SIZE_ALLOWED
 
 #define ICE_RL_PROF_FREQUENCY 446000000
-- 
2.17.1


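The corrected 12-bit register encoding can be modeled as follows: bit 11 selects the granularity (0 = 64-byte chunks, 1 = 1 KB chunks) and bits 10:0 hold the chunk count. The helper and its rounding are a sketch of the logic in ice_cfg_rl_burst_size, not the driver function itself:

```c
#include <assert.h>
#include <stdint.h>

#define GRAN_KB_BIT	(1u << 11)		/* MSB: 1 KB granularity */
#define MAX_CHUNKS	((1u << 11) - 1)	/* 11-bit chunk count */

static uint16_t burst_size_encode(uint32_t bytes)
{
	/* round to the nearest 64-byte boundary first */
	uint32_t chunks = (bytes + 32) / 64;

	if (chunks <= MAX_CHUNKS)
		return (uint16_t)chunks;	/* 64-byte granularity, MSB clear */

	/* too large for 64-byte chunks: switch to 1 KB granularity */
	chunks = (bytes + 512) / 1024;
	if (chunks > MAX_CHUNKS)
		chunks = MAX_CHUNKS;
	return (uint16_t)(GRAN_KB_BIT | chunks);
}
```

This is the bug the patch fixes: with the MSB clear the low bits count 64-byte chunks, so the old code, which programmed a raw byte value, over-stated the burst size by a factor of 64.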

* [dpdk-dev] [PATCH v2 38/66] net/ice/base: add lock around profile map list
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (36 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 37/66] net/ice/base: calculate rate limit burst size correctly Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 39/66] net/ice/base: fix Flow Director VSI count Leyi Rong
                     ` (28 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add locking mechanism around profile map list.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 31 +++++++++++++++++-----------
 1 file changed, 19 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index c2d1be484..f7d6d9194 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -3973,6 +3973,8 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 	u32 byte = 0;
 	u8 prof_id;
 
+	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
+
 	/* search for existing profile */
 	status = ice_find_prof_id(hw, blk, es, &prof_id);
 	if (status) {
@@ -4044,11 +4046,12 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 		bytes--;
 		byte++;
 	}
-	LIST_ADD(&prof->list, &hw->blk[blk].es.prof_map);
 
-	return ICE_SUCCESS;
+	LIST_ADD(&prof->list, &hw->blk[blk].es.prof_map);
+	status = ICE_SUCCESS;
 
 err_ice_add_prof:
+	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
 	return status;
 }
 
@@ -4350,29 +4353,33 @@ ice_rem_flow_all(struct ice_hw *hw, enum ice_block blk, u64 id)
  */
 enum ice_status ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id)
 {
-	enum ice_status status;
 	struct ice_prof_map *pmap;
+	enum ice_status status;
 
-	pmap = ice_search_prof_id(hw, blk, id);
-	if (!pmap)
-		return ICE_ERR_DOES_NOT_EXIST;
+	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
+
+	pmap = ice_search_prof_id_low(hw, blk, id);
+	if (!pmap) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto err_ice_rem_prof;
+	}
 
 	/* remove all flows with this profile */
 	status = ice_rem_flow_all(hw, blk, pmap->profile_cookie);
 	if (status)
-		return status;
+		goto err_ice_rem_prof;
 
-	/* remove profile */
-	status = ice_free_prof_id(hw, blk, pmap->prof_id);
-	if (status)
-		return status;
 	/* dereference profile, and possibly remove */
 	ice_prof_dec_ref(hw, blk, pmap->prof_id);
 
 	LIST_DEL(&pmap->list);
 	ice_free(hw, pmap);
 
-	return ICE_SUCCESS;
+	status = ICE_SUCCESS;
+
+err_ice_rem_prof:
+	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
+	return status;
 }
 
 /**
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 39/66] net/ice/base: fix Flow Director VSI count
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (37 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 38/66] net/ice/base: add lock around profile map list Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 40/66] net/ice/base: use more efficient structures Leyi Rong
                     ` (27 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Henry Tieman, Paul M Stillwell Jr

Flow director keeps a list of VSIs for each flow type (TCP4, UDP6, etc.).
This list varies in length depending on the number of traffic classes
(ADQ). This patch uses the max traffic class define to calculate the size
of the VSI array.
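
With the driver's ICE_MAX_TRAFFIC_CLASS of 8, the new define works out to
10 entries per filter. A small compile-time sketch of the sizing; the values
are mirrored from the patch below, not taken from the driver headers
themselves:

```c
#include <stdint.h>

#define ICE_MAX_TRAFFIC_CLASS	8	/* from ice_type.h */

/* 1 ICE_VSI_PF + 1 ICE_VSI_CTRL + one VSI per traffic class (ADQ) */
#define ICE_MAX_FDIR_VSI_PER_FILTER	(2 + ICE_MAX_TRAFFIC_CLASS)

/* The per-flow-type VSI array is sized from the define instead of a
 * hard-coded 6, so it cannot overflow when all TCs are in use. */
struct fd_prof {
	uint16_t vsi_h[ICE_MAX_FDIR_VSI_PER_FILTER];
};
```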

Fixes: bd984f155f49 ("net/ice/base: support FDIR")

Signed-off-by: Henry Tieman <henry.w.tieman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_type.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 116cfe647..919ca7fa8 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -273,8 +273,13 @@ enum ice_fltr_ptype {
 	ICE_FLTR_PTYPE_MAX,
 };
 
-/* 6 VSI = 1 ICE_VSI_PF + 1 ICE_VSI_CTRL + 4 ICE_VSI_CHNL */
-#define ICE_MAX_FDIR_VSI_PER_FILTER	6
+#ifndef ADQ_SUPPORT
+/* 2 VSI = 1 ICE_VSI_PF + 1 ICE_VSI_CTRL */
+#define ICE_MAX_FDIR_VSI_PER_FILTER	2
+#else
+/* 1 ICE_VSI_PF + 1 ICE_VSI_CTRL + ICE_MAX_TRAFFIC_CLASS */
+#define ICE_MAX_FDIR_VSI_PER_FILTER	(2 + ICE_MAX_TRAFFIC_CLASS)
+#endif /* !ADQ_SUPPORT */
 
 struct ice_fd_hw_prof {
 	struct ice_flow_seg_info *fdir_seg;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 40/66] net/ice/base: use more efficient structures
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (38 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 39/66] net/ice/base: fix Flow Director VSI count Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 41/66] net/ice/base: silent semantic parser warnings Leyi Rong
                     ` (26 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jesse Brandeburg, Paul M Stillwell Jr

Move a number of structure members around to make more efficient use of
memory, eliminating holes where possible. None of these members are on
the hot path, so cache line alignment is not very important here.
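
The holes come from the compiler inserting alignment padding between
misaligned members; grouping pointers first and narrow fields last removes
them. A hedged sketch of the effect with generic field names, assuming a
typical LP64 target:

```c
#include <stdint.h>

/* A u8 before and after a pointer forces padding on both sides:
 * 1 + 7(pad) + 8 + 1 + 7(pad) = 24 bytes on a typical LP64 target. */
struct before {
	uint8_t in_use;
	void *first_ptype;
	uint8_t ptg;
};

/* Pointer first, narrow members packed at the end:
 * 8 + 1 + 1 + 6(pad) = 16 bytes -- same fields, fewer holes. */
struct after {
	void *first_ptype;
	uint8_t in_use;
	uint8_t ptg;
};
```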

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_controlq.h  |  4 +--
 drivers/net/ice/base/ice_flex_type.h | 38 +++++++++++++---------------
 drivers/net/ice/base/ice_flow.c      |  4 +--
 drivers/net/ice/base/ice_flow.h      | 18 ++++++-------
 4 files changed, 29 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
index 182db6754..21c8722e5 100644
--- a/drivers/net/ice/base/ice_controlq.h
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -81,6 +81,7 @@ struct ice_rq_event_info {
 /* Control Queue information */
 struct ice_ctl_q_info {
 	enum ice_ctl_q qtype;
+	enum ice_aq_err rq_last_status;	/* last status on receive queue */
 	struct ice_ctl_q_ring rq;	/* receive queue */
 	struct ice_ctl_q_ring sq;	/* send queue */
 	u32 sq_cmd_timeout;		/* send queue cmd write back timeout */
@@ -88,10 +89,9 @@ struct ice_ctl_q_info {
 	u16 num_sq_entries;		/* send queue depth */
 	u16 rq_buf_size;		/* receive queue buffer size */
 	u16 sq_buf_size;		/* send queue buffer size */
+	enum ice_aq_err sq_last_status;	/* last status on send queue */
 	struct ice_lock sq_lock;		/* Send queue lock */
 	struct ice_lock rq_lock;		/* Receive queue lock */
-	enum ice_aq_err sq_last_status;	/* last status on send queue */
-	enum ice_aq_err rq_last_status;	/* last status on receive queue */
 };
 
 #endif /* _ICE_CONTROLQ_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index d23b2ae82..dca5cf285 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -5,7 +5,7 @@
 #ifndef _ICE_FLEX_TYPE_H_
 #define _ICE_FLEX_TYPE_H_
 
-#define ICE_FV_OFFSET_INVAL    0x1FF
+#define ICE_FV_OFFSET_INVAL	0x1FF
 
 #pragma pack(1)
 /* Extraction Sequence (Field Vector) Table */
@@ -14,7 +14,6 @@ struct ice_fv_word {
 	u16 off;		/* Offset within the protocol header */
 	u8 resvrd;
 };
-
 #pragma pack()
 
 #define ICE_MAX_FV_WORDS 48
@@ -367,7 +366,6 @@ struct ice_boost_key_value {
 	__le16 hv_src_port_key;
 	u8 tcam_search_key;
 };
-
 #pragma pack()
 
 struct ice_boost_key {
@@ -406,7 +404,6 @@ struct ice_xlt1_section {
 	__le16 offset;
 	u8 value[1];
 };
-
 #pragma pack()
 
 #define ICE_XLT1_SIZE(n)	(sizeof(struct ice_xlt1_section) + \
@@ -467,19 +464,19 @@ struct ice_tunnel_type_scan {
 
 struct ice_tunnel_entry {
 	enum ice_tunnel_type type;
-	u8 valid;
-	u8 in_use;
-	u8 marked;
 	u16 boost_addr;
 	u16 port;
 	struct ice_boost_tcam_entry *boost_entry;
+	u8 valid;
+	u8 in_use;
+	u8 marked;
 };
 
 #define ICE_TUNNEL_MAX_ENTRIES	16
 
 struct ice_tunnel_table {
-	u16 count;
 	struct ice_tunnel_entry tbl[ICE_TUNNEL_MAX_ENTRIES];
+	u16 count;
 };
 
 struct ice_pkg_es {
@@ -511,13 +508,13 @@ struct ice_es {
 #define ICE_DEFAULT_PTG	0
 
 struct ice_ptg_entry {
-	u8 in_use;
 	struct ice_ptg_ptype *first_ptype;
+	u8 in_use;
 };
 
 struct ice_ptg_ptype {
-	u8 ptg;
 	struct ice_ptg_ptype *next_ptype;
+	u8 ptg;
 };
 
 #define ICE_MAX_TCAM_PER_PROFILE	8
@@ -535,9 +532,9 @@ struct ice_prof_map {
 #define ICE_INVALID_TCAM	0xFFFF
 
 struct ice_tcam_inf {
+	u16 tcam_idx;
 	u8 ptg;
 	u8 prof_id;
-	u16 tcam_idx;
 	u8 in_use;
 };
 
@@ -550,16 +547,16 @@ struct ice_vsig_prof {
 };
 
 struct ice_vsig_entry {
-	u8 in_use;
 	struct LIST_HEAD_TYPE prop_lst;
 	struct ice_vsig_vsi *first_vsi;
+	u8 in_use;
 };
 
 struct ice_vsig_vsi {
+	struct ice_vsig_vsi *next_vsi;
+	u32 prop_mask;
 	u16 changed;
 	u16 vsig;
-	u32 prop_mask;
-	struct ice_vsig_vsi *next_vsi;
 };
 
 #define ICE_XLT1_CNT	1024
@@ -567,11 +564,11 @@ struct ice_vsig_vsi {
 
 /* XLT1 Table */
 struct ice_xlt1 {
-	u32 sid;
-	u16 count;
 	struct ice_ptg_entry *ptg_tbl;
 	struct ice_ptg_ptype *ptypes;
 	u8 *t;
+	u32 sid;
+	u16 count;
 };
 
 #define ICE_XLT2_CNT	768
@@ -591,11 +588,11 @@ struct ice_xlt1 {
 
 /* XLT2 Table */
 struct ice_xlt2 {
-	u32 sid;
-	u16 count;
 	struct ice_vsig_entry *vsig_tbl;
 	struct ice_vsig_vsi *vsis;
 	u16 *t;
+	u32 sid;
+	u16 count;
 };
 
 /* Extraction sequence - list of match fields:
@@ -641,21 +638,20 @@ struct ice_prof_id_section {
 	__le16 count;
 	struct ice_prof_tcam_entry entry[1];
 };
-
 #pragma pack()
 
 struct ice_prof_tcam {
 	u32 sid;
 	u16 count;
 	u16 max_prof_id;
-	u8 cdid_bits; /* # cdid bits to use in key, 0, 2, 4, or 8 */
 	struct ice_prof_tcam_entry *t;
+	u8 cdid_bits; /* # cdid bits to use in key, 0, 2, 4, or 8 */
 };
 
 struct ice_prof_redir {
+	u8 *t;
 	u32 sid;
 	u16 count;
-	u8 *t;
 };
 
 /* Tables per block */
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 36657a1a3..795abe98f 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -284,10 +284,10 @@ static const u32 ice_ptypes_mac_il[] = {
 /* Manage parameters and info. used during the creation of a flow profile */
 struct ice_flow_prof_params {
 	enum ice_block blk;
-	struct ice_flow_prof *prof;
-
 	u16 entry_length; /* # of bytes formatted entry will require */
 	u8 es_cnt;
+	struct ice_flow_prof *prof;
+
 	/* For ACL, the es[0] will have the data of ICE_RX_MDID_PKT_FLAGS_15_0
 	 * This will give us the direction flags.
 	 */
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index 57514a078..715fd7471 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -225,20 +225,18 @@ struct ice_flow_entry {
 	struct LIST_ENTRY_TYPE l_entry;
 
 	u64 id;
-	u16 vsi_handle;
-	enum ice_flow_priority priority;
 	struct ice_flow_prof *prof;
-
+	/* Action list */
+	struct ice_flow_action *acts;
 	/* Flow entry's content */
-	u16 entry_sz;
 	void *entry;
-
-	/* Action list */
+	enum ice_flow_priority priority;
+	u16 vsi_handle;
+	u16 entry_sz;
 	u8 acts_cnt;
-	struct ice_flow_action *acts;
 };
 
-#define ICE_FLOW_ENTRY_HNDL(e)	((unsigned long)e)
+#define ICE_FLOW_ENTRY_HNDL(e)	((u64)e)
 #define ICE_FLOW_ENTRY_PTR(h)	((struct ice_flow_entry *)(h))
 
 struct ice_flow_prof {
@@ -246,12 +244,13 @@ struct ice_flow_prof {
 
 	u64 id;
 	enum ice_flow_dir dir;
+	u8 segs_cnt;
+	u8 acts_cnt;
 
 	/* Keep track of flow entries associated with this flow profile */
 	struct ice_lock entries_lock;
 	struct LIST_HEAD_TYPE entries;
 
-	u8 segs_cnt;
 	struct ice_flow_seg_info segs[ICE_FLOW_SEG_MAX];
 
 	/* software VSI handles referenced by this flow profile */
@@ -264,7 +263,6 @@ struct ice_flow_prof {
 	} cfg;
 
 	/* Default actions */
-	u8 acts_cnt;
 	struct ice_flow_action *acts;
 };
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 41/66] net/ice/base: silent semantic parser warnings
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (39 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 40/66] net/ice/base: use more efficient structures Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 42/66] net/ice/base: fix for signed package download Leyi Rong
                     ` (25 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Kevin Scott, Paul M Stillwell Jr

Eliminate some semantic parser warnings and static analysis warnings.
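
One of the hunks below replaces a `do { ... } while (si >= 0)` loop with a
pre-checked `while (si >= 0)`, since the `do` form runs the body once even
when the index starts out negative, which is the kind of issue static
analyzers flag. A standalone illustration of the difference (not driver
code):

```c
/* do/while: the body executes at least once, even when the
 * condition is already false on entry. */
static int iterations_do_while(int si)
{
	int n = 0;

	do {
		n++;
		si--;
	} while (si >= 0);
	return n;
}

/* Pre-checked while: zero iterations when si starts below zero. */
static int iterations_while(int si)
{
	int n = 0;

	while (si >= 0) {
		n++;
		si--;
	}
	return n;
}
```

For non-negative starting values the two forms agree; they differ only on
the degenerate entry case the analyzer warned about.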

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Kevin Scott <kevin.c.scott@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c     | 8 ++++----
 drivers/net/ice/base/ice_flow.c          | 5 +----
 drivers/net/ice/base/ice_nvm.c           | 4 ++--
 drivers/net/ice/base/ice_protocol_type.h | 1 +
 drivers/net/ice/base/ice_switch.c        | 8 +++-----
 drivers/net/ice/base/ice_type.h          | 7 ++++++-
 6 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index f7d6d9194..d5f0707b6 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -134,7 +134,7 @@ static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg)
 	nvms = (struct ice_nvm_table *)(ice_seg->device_table +
 		LE32_TO_CPU(ice_seg->device_table_count));
 
-	return (struct ice_buf_table *)
+	return (_FORCE_ struct ice_buf_table *)
 		(nvms->vers + LE32_TO_CPU(nvms->table_count));
 }
 
@@ -2937,7 +2937,7 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
 		case ICE_SID_XLT2_ACL:
 		case ICE_SID_XLT2_PE:
 			xlt2 = (struct ice_xlt2_section *)sect;
-			src = (u8 *)xlt2->value;
+			src = (_FORCE_ u8 *)xlt2->value;
 			sect_len = LE16_TO_CPU(xlt2->count) *
 				sizeof(*hw->blk[block_id].xlt2.t);
 			dst = (u8 *)hw->blk[block_id].xlt2.t;
@@ -3889,7 +3889,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 
 	/* fill in the swap array */
 	si = hw->blk[ICE_BLK_FD].es.fvw - 1;
-	do {
+	while (si >= 0) {
 		u8 indexes_used = 1;
 
 		/* assume flat at this index */
@@ -3921,7 +3921,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 		}
 
 		si -= indexes_used;
-	} while (si >= 0);
+	}
 
 	/* for each set of 4 swap indexes, write the appropriate register */
 	for (j = 0; j < hw->blk[ICE_BLK_FD].es.fvw / 4; j++) {
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 795abe98f..48cfa897d 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -415,9 +415,6 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 		const ice_bitmap_t *src;
 		u32 hdrs;
 
-		if (i > 0 && (i + 1) < prof->segs_cnt)
-			continue;
-
 		hdrs = prof->segs[i].hdrs;
 
 		if (hdrs & ICE_FLOW_SEG_HDR_ETH) {
@@ -1425,7 +1422,7 @@ enum ice_status ice_flow_rem_entry(struct ice_hw *hw, u64 entry_h)
 	if (entry_h == ICE_FLOW_ENTRY_HANDLE_INVAL)
 		return ICE_ERR_PARAM;
 
-	entry = ICE_FLOW_ENTRY_PTR((unsigned long)entry_h);
+	entry = ICE_FLOW_ENTRY_PTR(entry_h);
 
 	/* Retain the pointer to the flow profile as the entry will be freed */
 	prof = entry->prof;
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index fa9c348ce..76cfedb29 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -127,7 +127,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 
 	status = ice_read_sr_aq(hw, offset, 1, data, true);
 	if (!status)
-		*data = LE16_TO_CPU(*(__le16 *)data);
+		*data = LE16_TO_CPU(*(_FORCE_ __le16 *)data);
 
 	return status;
 }
@@ -185,7 +185,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 	} while (words_read < *words);
 
 	for (i = 0; i < *words; i++)
-		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
+		data[i] = LE16_TO_CPU(((_FORCE_ __le16 *)data)[i]);
 
 read_nvm_buf_aq_exit:
 	*words = words_read;
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index e572dd320..82822fb74 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -189,6 +189,7 @@ struct ice_udp_tnl_hdr {
 	u16 field;
 	u16 proto_type;
 	u16 vni;
+	u16 reserved;
 };
 
 struct ice_nvgre {
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index faaedd4c8..77bcf9aa8 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -859,7 +859,7 @@ ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi,
 			return ICE_ERR_PARAM;
 
 		buf_size = count * sizeof(__le16);
-		mr_list = (__le16 *)ice_malloc(hw, buf_size);
+		mr_list = (_FORCE_ __le16 *)ice_malloc(hw, buf_size);
 		if (!mr_list)
 			return ICE_ERR_NO_MEMORY;
 		break;
@@ -1459,7 +1459,6 @@ static int ice_ilog2(u64 n)
 	return -1;
 }
 
-
 /**
  * ice_fill_sw_rule - Helper function to fill switch rule structure
  * @hw: pointer to the hardware structure
@@ -1479,7 +1478,6 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 	__be16 *off;
 	u8 q_rgn;
 
-
 	if (opc == ice_aqc_opc_remove_sw_rules) {
 		s_rule->pdata.lkup_tx_rx.act = 0;
 		s_rule->pdata.lkup_tx_rx.index =
@@ -1555,7 +1553,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 		daddr = f_info->l_data.ethertype_mac.mac_addr;
 		/* fall-through */
 	case ICE_SW_LKUP_ETHERTYPE:
-		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
 		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
 		break;
 	case ICE_SW_LKUP_MAC_VLAN:
@@ -1586,7 +1584,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 			   ICE_NONDMA_TO_NONDMA);
 
 	if (!(vlan_id > ICE_MAX_VLAN_ID)) {
-		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
 		*off = CPU_TO_BE16(vlan_id);
 	}
 
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 919ca7fa8..e0820b679 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -14,6 +14,10 @@
 
 #define BITS_PER_BYTE	8
 
+#ifndef _FORCE_
+#define _FORCE_
+#endif
+
 #define ICE_BYTES_PER_WORD	2
 #define ICE_BYTES_PER_DWORD	4
 #define ICE_MAX_TRAFFIC_CLASS	8
@@ -35,7 +39,7 @@
 #endif
 
 #ifndef IS_ASCII
-#define IS_ASCII(_ch)  ((_ch) < 0x80)
+#define IS_ASCII(_ch)	((_ch) < 0x80)
 #endif
 
 #include "ice_status.h"
@@ -745,6 +749,7 @@ struct ice_hw {
 	u8 pf_id;		/* device profile info */
 
 	u16 max_burst_size;	/* driver sets this value */
+
 	/* Tx Scheduler values */
 	u16 num_tx_sched_layers;
 	u16 num_tx_sched_phys_layers;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 42/66] net/ice/base: fix for signed package download
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (40 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 41/66] net/ice/base: silent semantic parser warnings Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 43/66] net/ice/base: add new API to dealloc flow entry Leyi Rong
                     ` (24 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

In order to properly support signed packages, we always have
to send the complete buffer to firmware, regardless of any
unused space at the end. This is because the SHA hash value
is computed over the entire buffer.
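
Any hash that walks the full buffer changes value when trailing bytes are
dropped, even if those bytes are zero, which is why `data_end` can no longer
be used as the download length. A sketch with FNV-1a standing in for the
firmware's SHA; the hash choice and buffer size here are illustrative
assumptions, not the device's actual scheme:

```c
#include <stdint.h>
#include <stddef.h>

#define PKG_BUF_SIZE 4096	/* stand-in for ICE_PKG_BUF_SIZE */

/* FNV-1a mixes every byte it is given, so hashing only the used prefix
 * (up to data_end) yields a different digest than hashing the complete
 * buffer the package signature was computed over. */
static uint64_t fnv1a(const uint8_t *buf, size_t len)
{
	uint64_t h = 0xcbf29ce484222325ULL;
	size_t i;

	for (i = 0; i < len; i++) {
		h ^= buf[i];
		h *= 0x100000001b3ULL;
	}
	return h;
}
```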

Fixes: 51d04e4933e3 ("net/ice/base: add flexible pipeline module")

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index d5f0707b6..fdc9eb6eb 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1005,9 +1005,8 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 
 		bh = (struct ice_buf_hdr *)(bufs + i);
 
-		status = ice_aq_download_pkg(hw, bh, LE16_TO_CPU(bh->data_end),
-					     last, &offset, &info, NULL);
-
+		status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last,
+					     &offset, &info, NULL);
 		if (status) {
 			ice_debug(hw, ICE_DBG_PKG,
 				  "Pkg download failed: err %d off %d inf %d\n",
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 43/66] net/ice/base: add new API to dealloc flow entry
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (41 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 42/66] net/ice/base: fix for signed package download Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 44/66] net/ice/base: check RSS flow profile list Leyi Rong
                     ` (23 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

Decouple ice_dealloc_flow_entry from ice_flow_rem_entry_sync.
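
The split separates "free this entry's memory" from "unlink it from its
profile list, then free it", so callers that manage entries themselves (as
ACL does in the last hunk) can deallocate without touching a list. A hedged
sketch of the shape, using a simplified list and illustrative names:

```c
#include <stdlib.h>
#include <stddef.h>

struct flow_entry {
	struct flow_entry *next;
	void *entry;		/* formatted entry buffer */
};

/* Deallocate only -- usable for entries that were never linked into a
 * profile's entry list (mirrors ice_dealloc_flow_entry). */
static void dealloc_flow_entry(struct flow_entry *e)
{
	if (!e)
		return;
	free(e->entry);
	free(e);
}

/* Unlink from the profile list, then reuse the deallocator
 * (mirrors ice_flow_rem_entry_sync). */
static int rem_entry_sync(struct flow_entry **head, struct flow_entry *e)
{
	struct flow_entry **pp;

	if (!e)
		return -1;
	for (pp = head; *pp; pp = &(*pp)->next) {
		if (*pp == e) {
			*pp = e->next;
			break;
		}
	}
	dealloc_flow_entry(e);
	return 0;
}
```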

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 35 ++++++++++++++++++++++++---------
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 48cfa897d..0dad62010 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -932,17 +932,15 @@ ice_flow_find_prof_id(struct ice_hw *hw, enum ice_block blk, u64 prof_id)
 }
 
 /**
- * ice_flow_rem_entry_sync - Remove a flow entry
+ * ice_dealloc_flow_entry - Deallocate flow entry memory
  * @hw: pointer to the HW struct
  * @entry: flow entry to be removed
  */
-static enum ice_status
-ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
+static void
+ice_dealloc_flow_entry(struct ice_hw *hw, struct ice_flow_entry *entry)
 {
 	if (!entry)
-		return ICE_ERR_BAD_PTR;
-
-	LIST_DEL(&entry->l_entry);
+		return;
 
 	if (entry->entry)
 		ice_free(hw, entry->entry);
@@ -954,6 +952,22 @@ ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
 	}
 
 	ice_free(hw, entry);
+}
+
+/**
+ * ice_flow_rem_entry_sync - Remove a flow entry
+ * @hw: pointer to the HW struct
+ * @entry: flow entry to be removed
+ */
+static enum ice_status
+ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
+{
+	if (!entry)
+		return ICE_ERR_BAD_PTR;
+
+	LIST_DEL(&entry->l_entry);
+
+	ice_dealloc_flow_entry(hw, entry);
 
 	return ICE_SUCCESS;
 }
@@ -1392,9 +1406,12 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 		goto out;
 	}
 
-	ice_acquire_lock(&prof->entries_lock);
-	LIST_ADD(&e->l_entry, &prof->entries);
-	ice_release_lock(&prof->entries_lock);
+	if (blk != ICE_BLK_ACL) {
+		/* ACL will handle the entry management */
+		ice_acquire_lock(&prof->entries_lock);
+		LIST_ADD(&e->l_entry, &prof->entries);
+		ice_release_lock(&prof->entries_lock);
+	}
 
 	*entry_h = ICE_FLOW_ENTRY_HNDL(e);
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 44/66] net/ice/base: check RSS flow profile list
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (42 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 43/66] net/ice/base: add new API to dealloc flow entry Leyi Rong
@ 2019-06-11 15:51   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 45/66] net/ice/base: protect list add with lock Leyi Rong
                     ` (22 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:51 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

Minor change to check if there are any RSS flow profiles to remove.
This will avoid flow profile lock acquisition and release
if the list is empty.
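
The pattern is a cheap unlocked emptiness peek before paying for the mutex;
if a profile is added concurrently after the peek, the outcome is the same as
if the removal had run just before the add, so no correctness is lost. A
pthread sketch with illustrative names:

```c
#include <pthread.h>
#include <stddef.h>

struct prof {
	struct prof *next;
};

static struct prof *fl_profs;	/* per-block flow profile list head */
static pthread_mutex_t fl_profs_lock = PTHREAD_MUTEX_INITIALIZER;
static int lock_taken;		/* instrumentation for this sketch only */

static int rem_all_profs(void)
{
	/* Early exit without the lock when there is nothing to remove. */
	if (!fl_profs)
		return 0;

	pthread_mutex_lock(&fl_profs_lock);
	lock_taken++;
	while (fl_profs)
		fl_profs = fl_profs->next;	/* real code also frees each node */
	pthread_mutex_unlock(&fl_profs_lock);
	return 0;
}
```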

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 0dad62010..b25f30c3f 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1690,6 +1690,9 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 	if (!ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
+	if (LIST_EMPTY(&hw->fl_profs[blk]))
+		return ICE_SUCCESS;
+
 	ice_acquire_lock(&hw->fl_profs_locks[blk]);
 	LIST_FOR_EACH_ENTRY_SAFE(p, t, &hw->fl_profs[blk], ice_flow_prof,
 				 l_entry) {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 45/66] net/ice/base: protect list add with lock
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (43 preceding siblings ...)
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 44/66] net/ice/base: check RSS flow profile list Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 46/66] net/ice/base: fix Rx functionality for ethertype filters Leyi Rong
                     ` (21 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Tarun Singh, Paul M Stillwell Jr

Function ice_add_rule_internal needs to call ice_create_pkt_fwd_rule with
the rule lock held, because that function uses LIST_ADD to modify the
filter rule list; the list must be protected while it is being modified.

Signed-off-by: Tarun Singh <tarun.k.singh@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 77bcf9aa8..cdecc39b0 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2287,14 +2287,15 @@ ice_add_rule_internal(struct ice_hw *hw, u8 recp_id,
 
 	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
 	if (!m_entry) {
-		ice_release_lock(rule_lock);
-		return ice_create_pkt_fwd_rule(hw, f_entry);
+		status = ice_create_pkt_fwd_rule(hw, f_entry);
+		goto exit_add_rule_internal;
 	}
 
 	cur_fltr = &m_entry->fltr_info;
 	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
-	ice_release_lock(rule_lock);
 
+exit_add_rule_internal:
+	ice_release_lock(rule_lock);
 	return status;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 46/66] net/ice/base: fix Rx functionality for ethertype filters
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (44 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 45/66] net/ice/base: protect list add with lock Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 47/66] net/ice/base: introduce some new macros Leyi Rong
                     ` (20 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dave Ertman, Paul M Stillwell Jr

In the function ice_add_eth_mac(), there is a line that
hard-codes the filter info flag to TX. This is redundant
and inaccurate: the flag is already set by the calling
function that built the list of filters to add, and
hard-coding it eliminates the Rx functionality of this
code. The paired function ice_remove_eth_mac() does not
do this, making the two a mismatched pair.

Fixes: 157d00901f97 ("net/ice/base: add functions for ethertype filter")

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index cdecc39b0..2c376321b 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2974,12 +2974,19 @@ ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
  * ice_add_eth_mac - Add ethertype and MAC based filter rule
  * @hw: pointer to the hardware structure
  * @em_list: list of ether type MAC filter, MAC is optional
+ *
+ * This function requires the caller to populate the entries in
+ * the filter list with the necessary fields (including flags to
+ * indicate Tx or Rx rules).
  */
 enum ice_status
 ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
 {
 	struct ice_fltr_list_entry *em_list_itr;
 
+	if (!em_list || !hw)
+		return ICE_ERR_PARAM;
+
 	LIST_FOR_EACH_ENTRY(em_list_itr, em_list, ice_fltr_list_entry,
 			    list_entry) {
 		enum ice_sw_lkup_type l_type =
@@ -2989,7 +2996,6 @@ ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
 		    l_type != ICE_SW_LKUP_ETHERTYPE)
 			return ICE_ERR_PARAM;
 
-		em_list_itr->fltr_info.flag = ICE_FLTR_TX;
 		em_list_itr->status = ice_add_rule_internal(hw, l_type,
 							    em_list_itr);
 		if (em_list_itr->status)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 47/66] net/ice/base: introduce some new macros
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (45 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 46/66] net/ice/base: fix Rx functionality for ethertype filters Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 48/66] net/ice/base: add init for SW recipe member rg list Leyi Rong
                     ` (19 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

Introduce more new macros, such as ICE_VSI_LB.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c   |  4 +++-
 drivers/net/ice/base/ice_switch.h | 14 +++++---------
 drivers/net/ice/base/ice_type.h   |  6 +++++-
 3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index b25f30c3f..f31557eac 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -844,6 +844,7 @@ ice_flow_proc_segs(struct ice_hw *hw, struct ice_flow_prof_params *params)
 
 #define ICE_FLOW_FIND_PROF_CHK_FLDS	0x00000001
 #define ICE_FLOW_FIND_PROF_CHK_VSI	0x00000002
+#define ICE_FLOW_FIND_PROF_NOT_CHK_DIR	0x00000004
 
 /**
  * ice_flow_find_prof_conds - Find a profile matching headers and conditions
@@ -863,7 +864,8 @@ ice_flow_find_prof_conds(struct ice_hw *hw, enum ice_block blk,
 	struct ice_flow_prof *p;
 
 	LIST_FOR_EACH_ENTRY(p, &hw->fl_profs[blk], ice_flow_prof, l_entry) {
-		if (p->dir == dir && segs_cnt && segs_cnt == p->segs_cnt) {
+		if ((p->dir == dir || conds & ICE_FLOW_FIND_PROF_NOT_CHK_DIR) &&
+		    segs_cnt && segs_cnt == p->segs_cnt) {
 			u8 i;
 
 			/* Check for profile-VSI association if specified */
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 2f140a86d..05b1170c9 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -11,6 +11,9 @@
 #define ICE_SW_CFG_MAX_BUF_LEN 2048
 #define ICE_MAX_SW 256
 #define ICE_DFLT_VSI_INVAL 0xff
+#define ICE_FLTR_RX BIT(0)
+#define ICE_FLTR_TX BIT(1)
+#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
 
 
 /* Worst case buffer length for ice_aqc_opc_get_res_alloc */
@@ -77,9 +80,6 @@ struct ice_fltr_info {
 	/* rule ID returned by firmware once filter rule is created */
 	u16 fltr_rule_id;
 	u16 flag;
-#define ICE_FLTR_RX		BIT(0)
-#define ICE_FLTR_TX		BIT(1)
-#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
 
 	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
 	u16 src;
@@ -145,10 +145,6 @@ struct ice_sw_act_ctrl {
 	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
 	u16 src;
 	u16 flag;
-#define ICE_FLTR_RX             BIT(0)
-#define ICE_FLTR_TX             BIT(1)
-#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
-
 	enum ice_sw_fwd_act_type fltr_act;
 	/* Depending on filter action */
 	union {
@@ -368,6 +364,8 @@ ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries,
 		     struct ice_sq_cd *cd);
 enum ice_status
 ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
 void ice_rem_all_sw_rules_info(struct ice_hw *hw);
 enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
 enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
@@ -375,8 +373,6 @@ enum ice_status
 ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
 enum ice_status
 ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
-enum ice_status
-ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
 #ifndef NO_MACVLAN_SUPPORT
 enum ice_status
 ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index e0820b679..f4e151c55 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -84,6 +84,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 #define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
 
 /* debug masks - set these bits in hw->debug_mask to control output */
+#define ICE_DBG_TRACE		BIT_ULL(0) /* for function-trace only */
 #define ICE_DBG_INIT		BIT_ULL(1)
 #define ICE_DBG_RELEASE		BIT_ULL(2)
 #define ICE_DBG_FW_LOG		BIT_ULL(3)
@@ -203,6 +204,7 @@ enum ice_vsi_type {
 #ifdef ADQ_SUPPORT
 	ICE_VSI_CHNL = 4,
 #endif /* ADQ_SUPPORT */
+	ICE_VSI_LB = 6,
 };
 
 struct ice_link_status {
@@ -722,6 +724,8 @@ struct ice_fw_log_cfg {
 #define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
 #define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
 #define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_ALL	(ICE_FW_LOG_EVNT_INFO | ICE_FW_LOG_EVNT_INIT | \
+				 ICE_FW_LOG_EVNT_FLOW | ICE_FW_LOG_EVNT_ERR)
 	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
 };
 
@@ -953,7 +957,6 @@ enum ice_sw_fwd_act_type {
 #define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
 #define ICE_SR_MNG_CFG_PTR			0x0E
 #define ICE_SR_EMP_MODULE_PTR			0x0F
-#define ICE_SR_PBA_FLAGS			0x15
 #define ICE_SR_PBA_BLOCK_PTR			0x16
 #define ICE_SR_BOOT_CFG_PTR			0x17
 #define ICE_SR_NVM_WOL_CFG			0x19
@@ -999,6 +1002,7 @@ enum ice_sw_fwd_act_type {
 #define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
 #define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
 #define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
+#define ICE_SR_POR_REGISTERS_AUTOLOAD_PTR	0x118
 
 /* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
 #define ICE_SR_VPD_SIZE_WORDS		512
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 48/66] net/ice/base: add init for SW recipe member rg list
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (46 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 47/66] net/ice/base: introduce some new macros Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 49/66] net/ice/base: code clean up Leyi Rong
                     ` (18 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong

Initialize ice_sw_recipe member rg_list in ice_init_def_sw_recp.

Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 2c376321b..373acb7a6 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -279,6 +279,7 @@ enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
 		recps[i].root_rid = i;
 		INIT_LIST_HEAD(&recps[i].filt_rules);
 		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
+		INIT_LIST_HEAD(&recps[i].rg_list);
 		ice_init_lock(&recps[i].filt_rule_lock);
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 49/66] net/ice/base: code clean up
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (47 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 48/66] net/ice/base: add init for SW recipe member rg list Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 50/66] net/ice/base: cleanup ice flex pipe files Leyi Rong
                     ` (17 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

Clean up unused code.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_controlq.c  | 62 +---------------------------
 drivers/net/ice/base/ice_fdir.h      |  4 --
 drivers/net/ice/base/ice_flex_pipe.c |  7 ++--
 drivers/net/ice/base/ice_sched.c     |  4 +-
 drivers/net/ice/base/ice_switch.c    |  8 ----
 drivers/net/ice/base/ice_switch.h    |  2 -
 drivers/net/ice/base/ice_type.h      | 22 ++--------
 7 files changed, 11 insertions(+), 98 deletions(-)

diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index 4cb6df113..3ef07e094 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -262,7 +262,7 @@ ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
  * @hw: pointer to the hardware structure
  * @cq: pointer to the specific Control queue
  *
- * Configure base address and length registers for the receive (event q)
+ * Configure base address and length registers for the receive (event queue)
  */
 static enum ice_status
 ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
@@ -772,9 +772,6 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 	struct ice_ctl_q_ring *sq = &cq->sq;
 	u16 ntc = sq->next_to_clean;
 	struct ice_sq_cd *details;
-#if 0
-	struct ice_aq_desc desc_cb;
-#endif
 	struct ice_aq_desc *desc;
 
 	desc = ICE_CTL_Q_DESC(*sq, ntc);
@@ -783,15 +780,6 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 	while (rd32(hw, cq->sq.head) != ntc) {
 		ice_debug(hw, ICE_DBG_AQ_MSG,
 			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
-#if 0
-		if (details->callback) {
-			ICE_CTL_Q_CALLBACK cb_func =
-				(ICE_CTL_Q_CALLBACK)details->callback;
-			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
-				   ICE_DMA_TO_DMA);
-			cb_func(hw, &desc_cb);
-		}
-#endif
 		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
 		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
 		ntc++;
@@ -941,38 +929,8 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
 	if (cd)
 		*details = *cd;
-#if 0
-		/* FIXME: if/when this block gets enabled (when the #if 0
-		 * is removed), add braces to both branches of the surrounding
-		 * conditional expression. The braces have been removed to
-		 * prevent checkpatch complaining.
-		 */
-
-		/* If the command details are defined copy the cookie. The
-		 * CPU_TO_LE32 is not needed here because the data is ignored
-		 * by the FW, only used by the driver
-		 */
-		if (details->cookie) {
-			desc->cookie_high =
-				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
-			desc->cookie_low =
-				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
-		}
-#endif
 	else
 		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
-#if 0
-	/* clear requested flags and then set additional flags if defined */
-	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
-	desc->flags |= CPU_TO_LE16(details->flags_ena);
-
-	if (details->postpone && !details->async) {
-		ice_debug(hw, ICE_DBG_AQ_MSG,
-			  "Async flag not set along with postpone flag\n");
-		status = ICE_ERR_PARAM;
-		goto sq_send_command_error;
-	}
-#endif
 
 	/* Call clean and check queue available function to reclaim the
 	 * descriptors that were processed by FW/MBX; the function returns the
@@ -1019,20 +977,8 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	(cq->sq.next_to_use)++;
 	if (cq->sq.next_to_use == cq->sq.count)
 		cq->sq.next_to_use = 0;
-#if 0
-	/* FIXME - handle this case? */
-	if (!details->postpone)
-#endif
 	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
 
-#if 0
-	/* if command details are not defined or async flag is not set,
-	 * we need to wait for desc write back
-	 */
-	if (!details->async && !details->postpone) {
-		/* FIXME - handle this case? */
-	}
-#endif
 	do {
 		if (ice_sq_done(hw, cq))
 			break;
@@ -1087,9 +1033,6 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 
 	/* update the error if time out occurred */
 	if (!cmd_completed) {
-#if 0
-	    (!details->async && !details->postpone)) {
-#endif
 		ice_debug(hw, ICE_DBG_AQ_MSG,
 			  "Control Send Queue Writeback timeout.\n");
 		status = ICE_ERR_AQ_TIMEOUT;
@@ -1208,9 +1151,6 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	cq->rq.next_to_clean = ntc;
 	cq->rq.next_to_use = ntu;
 
-#if 0
-	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
-#endif
 clean_rq_elem_out:
 	/* Set pending if needed, unlock and return */
 	if (pending) {
diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h
index 2ecb147f1..8490fac61 100644
--- a/drivers/net/ice/base/ice_fdir.h
+++ b/drivers/net/ice/base/ice_fdir.h
@@ -163,9 +163,6 @@ struct ice_fdir_fltr {
 
 	/* filter control */
 	u16 q_index;
-#ifdef ADQ_SUPPORT
-	u16 orig_q_index;
-#endif /* ADQ_SUPPORT */
 	u16 dest_vsi;
 	u8 dest_ctl;
 	u8 fltr_status;
@@ -173,7 +170,6 @@ struct ice_fdir_fltr {
 	u32 fltr_id;
 };
 
-
 /* Dummy packet filter definition structure. */
 struct ice_fdir_base_pkt {
 	enum ice_fltr_ptype flow;
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index fdc9eb6eb..8f19b9fef 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -398,7 +398,7 @@ ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr,
  * Handles enumeration of individual label entries.
  */
 static void *
-ice_label_enum_handler(u32 __always_unused sect_type, void *section, u32 index,
+ice_label_enum_handler(u32 __ALWAYS_UNUSED sect_type, void *section, u32 index,
 		       u32 *offset)
 {
 	struct ice_label_section *labels;
@@ -640,7 +640,7 @@ static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max)
  * @size: the size of the complete key in bytes (must be even)
  * @val: array of 8-bit values that makes up the value portion of the key
  * @upd: array of 8-bit masks that determine what key portion to update
- * @dc: array of 8-bit masks that make up the dont' care mask
+ * @dc: array of 8-bit masks that make up the don't care mask
  * @nm: array of 8-bit masks that make up the never match mask
  * @off: the offset of the first byte in the key to update
  * @len: the number of bytes in the key update
@@ -897,7 +897,7 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
 	u32 i;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
-	ice_debug(hw, ICE_DBG_PKG, "Package version: %d.%d.%d.%d\n",
+	ice_debug(hw, ICE_DBG_PKG, "Package format version: %d.%d.%d.%d\n",
 		  pkg_hdr->format_ver.major, pkg_hdr->format_ver.minor,
 		  pkg_hdr->format_ver.update, pkg_hdr->format_ver.draft);
 
@@ -4544,6 +4544,7 @@ ice_move_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig,
 	status = ice_vsig_find_vsi(hw, blk, vsi, &orig_vsig);
 	if (!status)
 		status = ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
+
 	if (status) {
 		ice_free(hw, p);
 		return status;
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index a72e72982..fa3158a7b 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -1233,7 +1233,7 @@ enum ice_status ice_sched_init_port(struct ice_port_info *pi)
 		goto err_init_port;
 	}
 
-	/* If the last node is a leaf node then the index of the Q group
+	/* If the last node is a leaf node then the index of the queue group
 	 * layer is two less than the number of elements.
 	 */
 	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
@@ -3529,9 +3529,11 @@ ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
 		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
 				    ice_sched_agg_vsi_info, list_entry)
 			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				/* cppcheck-suppress unreadVariable */
 				vsi_handle_valid = true;
 				break;
 			}
+
 		if (!vsi_handle_valid)
 			goto exit_agg_priority_per_tc;
 
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 373acb7a6..c7fcd71a7 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2934,7 +2934,6 @@ ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
 	return ICE_SUCCESS;
 }
 
-#ifndef NO_MACVLAN_SUPPORT
 /**
  * ice_add_mac_vlan - Add MAC and VLAN pair based filter rule
  * @hw: pointer to the hardware structure
@@ -2969,7 +2968,6 @@ ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
 	}
 	return ICE_SUCCESS;
 }
-#endif
 
 /**
  * ice_add_eth_mac - Add ethertype and MAC based filter rule
@@ -3307,7 +3305,6 @@ ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
 	return ICE_SUCCESS;
 }
 
-#ifndef NO_MACVLAN_SUPPORT
 /**
  * ice_remove_mac_vlan - Remove MAC VLAN based filter rule
  * @hw: pointer to the hardware structure
@@ -3335,7 +3332,6 @@ ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
 	}
 	return ICE_SUCCESS;
 }
-#endif /* !NO_MACVLAN_SUPPORT */
 
 /**
  * ice_vsi_uses_fltr - Determine if given VSI uses specified filter
@@ -3850,11 +3846,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
 		ice_remove_promisc(hw, lkup, &remove_list_head);
 		break;
 	case ICE_SW_LKUP_MAC_VLAN:
-#ifndef NO_MACVLAN_SUPPORT
 		ice_remove_mac_vlan(hw, &remove_list_head);
-#else
-		ice_debug(hw, ICE_DBG_SW, "MAC VLAN look up is not supported yet\n");
-#endif /* !NO_MACVLAN_SUPPORT */
 		break;
 	case ICE_SW_LKUP_ETHERTYPE:
 	case ICE_SW_LKUP_ETHERTYPE_MAC:
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 05b1170c9..b788aa7ec 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -373,12 +373,10 @@ enum ice_status
 ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
 enum ice_status
 ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
-#ifndef NO_MACVLAN_SUPPORT
 enum ice_status
 ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
 enum ice_status
 ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
-#endif /* !NO_MACVLAN_SUPPORT */
 
 enum ice_status
 ice_add_mac_with_sw_marker(struct ice_hw *hw, struct ice_fltr_info *f_info,
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index f4e151c55..8de5cc097 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -22,7 +22,6 @@
 #define ICE_BYTES_PER_DWORD	4
 #define ICE_MAX_TRAFFIC_CLASS	8
 
-#ifndef ROUND_UP
 /**
  * ROUND_UP - round up to next arbitrary multiple (not a power of 2)
  * @a: value to round up
@@ -32,15 +31,10 @@
  * Note, when b is a power of 2 use ICE_ALIGN() instead.
  */
 #define ROUND_UP(a, b)	((b) * DIVIDE_AND_ROUND_UP((a), (b)))
-#endif
 
-#ifndef MIN_T
 #define MIN_T(_t, _a, _b)	min((_t)(_a), (_t)(_b))
-#endif
 
-#ifndef IS_ASCII
 #define IS_ASCII(_ch)	((_ch) < 0x80)
-#endif
 
 #include "ice_status.h"
 #include "ice_hw_autogen.h"
@@ -57,9 +51,7 @@ static inline bool ice_is_tc_ena(ice_bitmap_t bitmap, u8 tc)
 	return ice_is_bit_set(&bitmap, tc);
 }
 
-#ifndef DIV_64BIT
 #define DIV_64BIT(n, d) ((n) / (d))
-#endif /* DIV_64BIT */
 
 static inline u64 round_up_64bit(u64 a, u32 b)
 {
@@ -114,6 +106,9 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 #define ICE_DBG_USER		BIT_ULL(31)
 #define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
 
+#ifndef __ALWAYS_UNUSED
+#define __ALWAYS_UNUSED
+#endif
 
 
 
@@ -201,9 +196,6 @@ enum ice_media_type {
 enum ice_vsi_type {
 	ICE_VSI_PF = 0,
 	ICE_VSI_CTRL = 3,	/* equates to ICE_VSI_PF with 1 queue pair */
-#ifdef ADQ_SUPPORT
-	ICE_VSI_CHNL = 4,
-#endif /* ADQ_SUPPORT */
 	ICE_VSI_LB = 6,
 };
 
@@ -279,13 +271,8 @@ enum ice_fltr_ptype {
 	ICE_FLTR_PTYPE_MAX,
 };
 
-#ifndef ADQ_SUPPORT
 /* 2 VSI = 1 ICE_VSI_PF + 1 ICE_VSI_CTRL */
 #define ICE_MAX_FDIR_VSI_PER_FILTER	2
-#else
-/* 1 ICE_VSI_PF + 1 ICE_VSI_CTRL + ICE_MAX_TRAFFIC_CLASS */
-#define ICE_MAX_FDIR_VSI_PER_FILTER	(2 + ICE_MAX_TRAFFIC_CLASS)
-#endif /* !ADQ_SUPPORT */
 
 struct ice_fd_hw_prof {
 	struct ice_flow_seg_info *fdir_seg;
@@ -930,9 +917,6 @@ struct ice_hw_port_stats {
 	/* flow director stats */
 	u32 fd_sb_status;
 	u64 fd_sb_match;
-#ifdef ADQ_SUPPORT
-	u64 ch_atr_match;
-#endif /* ADQ_SUPPORT */
 };
 
 enum ice_sw_fwd_act_type {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 50/66] net/ice/base: cleanup ice flex pipe files
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (48 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 49/66] net/ice/base: code clean up Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 51/66] net/ice/base: refactor VSI node sched code Leyi Rong
                     ` (16 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Paul M Stillwell Jr

Make functions static where possible. Remove code that is not
currently called.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 579 ++++-----------------------
 drivers/net/ice/base/ice_flex_pipe.h |  59 ---
 2 files changed, 78 insertions(+), 560 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 8f19b9fef..c136c03db 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -461,7 +461,7 @@ ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum *state,
  * since the first call to ice_enum_labels requires a pointer to an actual
  * ice_seg structure.
  */
-void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
+static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
 {
 	struct ice_pkg_enum state;
 	char *label_name;
@@ -808,27 +808,6 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
 	return status;
 }
 
-/**
- * ice_aq_upload_section
- * @hw: pointer to the hardware structure
- * @pkg_buf: the package buffer which will receive the section
- * @buf_size: the size of the package buffer
- * @cd: pointer to command details structure or NULL
- *
- * Upload Section (0x0C41)
- */
-enum ice_status
-ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
-		      u16 buf_size, struct ice_sq_cd *cd)
-{
-	struct ice_aq_desc desc;
-
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_upload_section");
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section);
-	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
-
-	return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd);
-}
 
 /**
  * ice_aq_update_pkg
@@ -890,7 +869,7 @@ ice_aq_update_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, u16 buf_size,
  * success it returns a pointer to the segment header, otherwise it will
  * return NULL.
  */
-struct ice_generic_seg_hdr *
+static struct ice_generic_seg_hdr *
 ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
 		    struct ice_pkg_hdr *pkg_hdr)
 {
@@ -1052,7 +1031,8 @@ ice_aq_get_pkg_info_list(struct ice_hw *hw,
  *
  * Handles the download of a complete package.
  */
-enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
+static enum ice_status
+ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 {
 	struct ice_buf_table *ice_buf_tbl;
 
@@ -1081,7 +1061,7 @@ enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
  *
  * Saves off the package details into the HW structure.
  */
-enum ice_status
+static enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 {
 	struct ice_global_metadata_seg *meta_seg;
@@ -1133,8 +1113,7 @@ ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
  *
  * Store details of the package currently loaded in HW into the HW structure.
  */
-enum ice_status
-ice_get_pkg_info(struct ice_hw *hw)
+static enum ice_status ice_get_pkg_info(struct ice_hw *hw)
 {
 	struct ice_aqc_get_pkg_info_resp *pkg_info;
 	enum ice_status status;
@@ -1187,40 +1166,6 @@ ice_get_pkg_info(struct ice_hw *hw)
 	return status;
 }
 
-/**
- * ice_find_label_value
- * @ice_seg: pointer to the ice segment (non-NULL)
- * @name: name of the label to search for
- * @type: the section type that will contain the label
- * @value: pointer to a value that will return the label's value if found
- *
- * Finds a label's value given the label name and the section type to search.
- * The ice_seg parameter must not be NULL since the first call to
- * ice_enum_labels requires a pointer to an actual ice_seg structure.
- */
-enum ice_status
-ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type,
-		     u16 *value)
-{
-	struct ice_pkg_enum state;
-	char *label_name;
-	u16 val;
-
-	if (!ice_seg)
-		return ICE_ERR_PARAM;
-
-	do {
-		label_name = ice_enum_labels(ice_seg, type, &state, &val);
-		if (label_name && !strcmp(label_name, name)) {
-			*value = val;
-			return ICE_SUCCESS;
-		}
-
-		ice_seg = NULL;
-	} while (label_name);
-
-	return ICE_ERR_CFG;
-}
 
 /**
  * ice_verify_pkg - verify package
@@ -1499,7 +1444,7 @@ enum ice_status ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len)
  * Allocates a package buffer and returns a pointer to the buffer header.
  * Note: all package contents must be in Little Endian form.
  */
-struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw)
+static struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw)
 {
 	struct ice_buf_build *bld;
 	struct ice_buf_hdr *buf;
@@ -1623,40 +1568,15 @@ ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt,
 }
 
 /**
- * ice_pkg_buf_alloc_single_section
+ * ice_pkg_buf_free
  * @hw: pointer to the HW structure
- * @type: the section type value
- * @size: the size of the section to reserve (in bytes)
- * @section: returns pointer to the section
+ * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
  *
- * Allocates a package buffer with a single section.
- * Note: all package contents must be in Little Endian form.
+ * Frees a package buffer
  */
-static struct ice_buf_build *
-ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size,
-				 void **section)
+static void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
 {
-	struct ice_buf_build *buf;
-
-	if (!section)
-		return NULL;
-
-	buf = ice_pkg_buf_alloc(hw);
-	if (!buf)
-		return NULL;
-
-	if (ice_pkg_buf_reserve_section(buf, 1))
-		goto ice_pkg_buf_alloc_single_section_err;
-
-	*section = ice_pkg_buf_alloc_section(buf, type, size);
-	if (!*section)
-		goto ice_pkg_buf_alloc_single_section_err;
-
-	return buf;
-
-ice_pkg_buf_alloc_single_section_err:
-	ice_pkg_buf_free(hw, buf);
-	return NULL;
+	ice_free(hw, bld);
 }
 
 /**
@@ -1672,7 +1592,7 @@ ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size,
  * result in some wasted space in the buffer.
  * Note: all package contents must be in Little Endian form.
  */
-enum ice_status
+static enum ice_status
 ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count)
 {
 	struct ice_buf_hdr *buf;
@@ -1700,48 +1620,6 @@ ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count)
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_pkg_buf_unreserve_section
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- * @count: the number of sections to unreserve
- *
- * Unreserves one or more section table entries in a package buffer, releasing
- * space that can be used for section data. This routine can be called
- * multiple times as long as they are made before calling
- * ice_pkg_buf_alloc_section(). Once ice_pkg_buf_alloc_section()
- * is called once, the number of sections that can be allocated will not be able
- * to be increased; not using all reserved sections is fine, but this will
- * result in some wasted space in the buffer.
- * Note: all package contents must be in Little Endian form.
- */
-enum ice_status
-ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count)
-{
-	struct ice_buf_hdr *buf;
-	u16 section_count;
-	u16 data_end;
-
-	if (!bld)
-		return ICE_ERR_PARAM;
-
-	buf = (struct ice_buf_hdr *)&bld->buf;
-
-	/* already an active section, can't decrease table size */
-	section_count = LE16_TO_CPU(buf->section_count);
-	if (section_count > 0)
-		return ICE_ERR_CFG;
-
-	if (count > bld->reserved_section_table_entries)
-		return ICE_ERR_CFG;
-	bld->reserved_section_table_entries -= count;
-
-	data_end = LE16_TO_CPU(buf->data_end) -
-		   (count * sizeof(buf->section_entry[0]));
-	buf->data_end = CPU_TO_LE16(data_end);
-
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_pkg_buf_alloc_section
  * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
@@ -1754,7 +1632,7 @@ ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count)
  * section contents.
  * Note: all package contents must be in Little Endian form.
  */
-void *
+static void *
 ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size)
 {
 	struct ice_buf_hdr *buf;
@@ -1795,23 +1673,8 @@ ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size)
 	return NULL;
 }
 
-/**
- * ice_pkg_buf_get_free_space
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- *
- * Returns the number of free bytes remaining in the buffer.
- * Note: all package contents must be in Little Endian form.
- */
-u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld)
-{
-	struct ice_buf_hdr *buf;
 
-	if (!bld)
-		return 0;
 
-	buf = (struct ice_buf_hdr *)&bld->buf;
-	return ICE_MAX_S_DATA_END - LE16_TO_CPU(buf->data_end);
-}
 
 /**
  * ice_pkg_buf_get_active_sections
@@ -1823,7 +1686,7 @@ u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld)
  * not be used.
  * Note: all package contents must be in Little Endian form.
  */
-u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
+static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
 {
 	struct ice_buf_hdr *buf;
 
@@ -1840,7 +1703,7 @@ u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
  *
  * Return a pointer to the buffer's header
  */
-struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
+static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
 {
 	if (!bld)
 		return NULL;
@@ -1848,17 +1711,6 @@ struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
 	return &bld->buf;
 }
 
-/**
- * ice_pkg_buf_free
- * @hw: pointer to the HW structure
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- *
- * Frees a package buffer
- */
-void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
-{
-	ice_free(hw, bld);
-}
 
 /**
  * ice_find_prot_off - find prot ID and offset pair, based on prof and FV index
@@ -1891,38 +1743,6 @@ ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
 
 /* PTG Management */
 
-/**
- * ice_ptg_update_xlt1 - Updates packet type groups in HW via XLT1 table
- * @hw: pointer to the hardware structure
- * @blk: HW block
- *
- * This function will update the XLT1 hardware table to reflect the new
- * packet type group configuration.
- */
-enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk)
-{
-	struct ice_xlt1_section *sect;
-	struct ice_buf_build *bld;
-	enum ice_status status;
-	u16 index;
-
-	bld = ice_pkg_buf_alloc_single_section(hw, ice_sect_id(blk, ICE_XLT1),
-					       ICE_XLT1_SIZE(ICE_XLT1_CNT),
-					       (void **)&sect);
-	if (!bld)
-		return ICE_ERR_NO_MEMORY;
-
-	sect->count = CPU_TO_LE16(ICE_XLT1_CNT);
-	sect->offset = CPU_TO_LE16(0);
-	for (index = 0; index < ICE_XLT1_CNT; index++)
-		sect->value[index] = hw->blk[blk].xlt1.ptypes[index].ptg;
-
-	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
-
-	ice_pkg_buf_free(hw, bld);
-
-	return status;
-}
 
 /**
  * ice_ptg_find_ptype - Search for packet type group using packet type (ptype)
@@ -1935,7 +1755,7 @@ enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk)
  * PTG ID that contains it through the ptg parameter, with the value of
  * ICE_DEFAULT_PTG (0) meaning it is part the default PTG.
  */
-enum ice_status
+static enum ice_status
 ice_ptg_find_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 *ptg)
 {
 	if (ptype >= ICE_XLT1_CNT || !ptg)
@@ -1969,7 +1789,7 @@ void ice_ptg_alloc_val(struct ice_hw *hw, enum ice_block blk, u8 ptg)
  * that 0 is the default packet type group, so successfully created PTGs will
  * have a non-zero ID value; which means a 0 return value indicates an error.
  */
-u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk)
+static u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk)
 {
 	u16 i;
 
@@ -1984,30 +1804,6 @@ u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk)
 	return 0;
 }
 
-/**
- * ice_ptg_free - Frees a packet type group
- * @hw: pointer to the hardware structure
- * @blk: HW block
- * @ptg: the ptg ID to free
- *
- * This function frees a packet type group, and returns all the current ptypes
- * within it to the default PTG.
- */
-void ice_ptg_free(struct ice_hw *hw, enum ice_block blk, u8 ptg)
-{
-	struct ice_ptg_ptype *p, *temp;
-
-	hw->blk[blk].xlt1.ptg_tbl[ptg].in_use = false;
-	p = hw->blk[blk].xlt1.ptg_tbl[ptg].first_ptype;
-	while (p) {
-		p->ptg = ICE_DEFAULT_PTG;
-		temp = p->next_ptype;
-		p->next_ptype = NULL;
-		p = temp;
-	}
-
-	hw->blk[blk].xlt1.ptg_tbl[ptg].first_ptype = NULL;
-}
 
 /**
  * ice_ptg_remove_ptype - Removes ptype from a particular packet type group
@@ -2066,7 +1862,7 @@ ice_ptg_remove_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg)
  * a destination PTG ID of ICE_DEFAULT_PTG (0) will move the ptype to the
  * default PTG.
  */
-enum ice_status
+static enum ice_status
 ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg)
 {
 	enum ice_status status;
@@ -2202,70 +1998,6 @@ ice_match_prop_lst(struct LIST_HEAD_TYPE *list1, struct LIST_HEAD_TYPE *list2)
 
 /* VSIG Management */
 
-/**
- * ice_vsig_update_xlt2_sect - update one section of XLT2 table
- * @hw: pointer to the hardware structure
- * @blk: HW block
- * @vsi: HW VSI number to program
- * @vsig: vsig for the VSI
- *
- * This function will update the XLT2 hardware table with the input VSI
- * group configuration.
- */
-static enum ice_status
-ice_vsig_update_xlt2_sect(struct ice_hw *hw, enum ice_block blk, u16 vsi,
-			  u16 vsig)
-{
-	struct ice_xlt2_section *sect;
-	struct ice_buf_build *bld;
-	enum ice_status status;
-
-	bld = ice_pkg_buf_alloc_single_section(hw, ice_sect_id(blk, ICE_XLT2),
-					       sizeof(struct ice_xlt2_section),
-					       (void **)&sect);
-	if (!bld)
-		return ICE_ERR_NO_MEMORY;
-
-	sect->count = CPU_TO_LE16(1);
-	sect->offset = CPU_TO_LE16(vsi);
-	sect->value[0] = CPU_TO_LE16(vsig);
-
-	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
-
-	ice_pkg_buf_free(hw, bld);
-
-	return status;
-}
-
-/**
- * ice_vsig_update_xlt2 - update XLT2 table with VSIG configuration
- * @hw: pointer to the hardware structure
- * @blk: HW block
- *
- * This function will update the XLT2 hardware table with the input VSI
- * group configuration of used vsis.
- */
-enum ice_status ice_vsig_update_xlt2(struct ice_hw *hw, enum ice_block blk)
-{
-	u16 vsi;
-
-	for (vsi = 0; vsi < ICE_MAX_VSI; vsi++) {
-		/* update only vsis that have been changed */
-		if (hw->blk[blk].xlt2.vsis[vsi].changed) {
-			enum ice_status status;
-			u16 vsig;
-
-			vsig = hw->blk[blk].xlt2.vsis[vsi].vsig;
-			status = ice_vsig_update_xlt2_sect(hw, blk, vsi, vsig);
-			if (status)
-				return status;
-
-			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
-		}
-	}
-
-	return ICE_SUCCESS;
-}
 
 /**
  * ice_vsig_find_vsi - find a VSIG that contains a specified VSI
@@ -2346,7 +2078,7 @@ static u16 ice_vsig_alloc(struct ice_hw *hw, enum ice_block blk)
  * for, the list must match exactly, including the order in which the
  * characteristics are listed.
  */
-enum ice_status
+static enum ice_status
 ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
 			struct LIST_HEAD_TYPE *chs, u16 *vsig)
 {
@@ -2373,7 +2105,7 @@ ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
  * The function will remove all VSIs associated with the input VSIG and move
  * them to the DEFAULT_VSIG and mark the VSIG available.
  */
-enum ice_status
+static enum ice_status
 ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
 {
 	struct ice_vsig_prof *dtmp, *del;
@@ -2424,6 +2156,62 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
 	return ICE_SUCCESS;
 }
 
+/**
+ * ice_vsig_remove_vsi - remove VSI from VSIG
+ * @hw: pointer to the hardware structure
+ * @blk: HW block
+ * @vsi: VSI to remove
+ * @vsig: VSI group to remove from
+ *
+ * The function will remove the input VSI from its VSI group and move it
+ * to the DEFAULT_VSIG.
+ */
+static enum ice_status
+ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
+{
+	struct ice_vsig_vsi **vsi_head, *vsi_cur, *vsi_tgt;
+	u16 idx;
+
+	idx = vsig & ICE_VSIG_IDX_M;
+
+	if (vsi >= ICE_MAX_VSI || idx >= ICE_MAX_VSIGS)
+		return ICE_ERR_PARAM;
+
+	if (!hw->blk[blk].xlt2.vsig_tbl[idx].in_use)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* entry already in default VSIG, don't have to remove */
+	if (idx == ICE_DEFAULT_VSIG)
+		return ICE_SUCCESS;
+
+	vsi_head = &hw->blk[blk].xlt2.vsig_tbl[idx].first_vsi;
+	if (!(*vsi_head))
+		return ICE_ERR_CFG;
+
+	vsi_tgt = &hw->blk[blk].xlt2.vsis[vsi];
+	vsi_cur = (*vsi_head);
+
+	/* iterate the VSI list, skip over the entry to be removed */
+	while (vsi_cur) {
+		if (vsi_tgt == vsi_cur) {
+			(*vsi_head) = vsi_cur->next_vsi;
+			break;
+		}
+		vsi_head = &vsi_cur->next_vsi;
+		vsi_cur = vsi_cur->next_vsi;
+	}
+
+	/* verify if VSI was removed from group list */
+	if (!vsi_cur)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	vsi_cur->vsig = ICE_DEFAULT_VSIG;
+	vsi_cur->changed = 1;
+	vsi_cur->next_vsi = NULL;
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_vsig_add_mv_vsi - add or move a VSI to a VSI group
  * @hw: pointer to the hardware structure
@@ -2436,7 +2224,7 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
  * move the entry to the DEFAULT_VSIG, update the original VSIG and
  * then move entry to the new VSIG.
  */
-enum ice_status
+static enum ice_status
 ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
 {
 	struct ice_vsig_vsi *tmp;
@@ -2487,62 +2275,6 @@ ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_vsig_remove_vsi - remove VSI from VSIG
- * @hw: pointer to the hardware structure
- * @blk: HW block
- * @vsi: VSI to remove
- * @vsig: VSI group to remove from
- *
- * The function will remove the input VSI from its VSI group and move it
- * to the DEFAULT_VSIG.
- */
-enum ice_status
-ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
-{
-	struct ice_vsig_vsi **vsi_head, *vsi_cur, *vsi_tgt;
-	u16 idx;
-
-	idx = vsig & ICE_VSIG_IDX_M;
-
-	if (vsi >= ICE_MAX_VSI || idx >= ICE_MAX_VSIGS)
-		return ICE_ERR_PARAM;
-
-	if (!hw->blk[blk].xlt2.vsig_tbl[idx].in_use)
-		return ICE_ERR_DOES_NOT_EXIST;
-
-	/* entry already in default VSIG, don't have to remove */
-	if (idx == ICE_DEFAULT_VSIG)
-		return ICE_SUCCESS;
-
-	vsi_head = &hw->blk[blk].xlt2.vsig_tbl[idx].first_vsi;
-	if (!(*vsi_head))
-		return ICE_ERR_CFG;
-
-	vsi_tgt = &hw->blk[blk].xlt2.vsis[vsi];
-	vsi_cur = (*vsi_head);
-
-	/* iterate the VSI list, skip over the entry to be removed */
-	while (vsi_cur) {
-		if (vsi_tgt == vsi_cur) {
-			(*vsi_head) = vsi_cur->next_vsi;
-			break;
-		}
-		vsi_head = &vsi_cur->next_vsi;
-		vsi_cur = vsi_cur->next_vsi;
-	}
-
-	/* verify if VSI was removed from group list */
-	if (!vsi_cur)
-		return ICE_ERR_DOES_NOT_EXIST;
-
-	vsi_cur->vsig = ICE_DEFAULT_VSIG;
-	vsi_cur->changed = 1;
-	vsi_cur->next_vsi = NULL;
-
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_find_prof_id - find profile ID for a given field vector
  * @hw: pointer to the hardware structure
@@ -3142,70 +2874,6 @@ static void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx)
 	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
-/**
- * ice_clear_hw_tbls - clear HW tables and flow profiles
- * @hw: pointer to the hardware structure
- */
-void ice_clear_hw_tbls(struct ice_hw *hw)
-{
-	u8 i;
-
-	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
-		struct ice_prof_tcam *prof = &hw->blk[i].prof;
-		struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
-		struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
-		struct ice_es *es = &hw->blk[i].es;
-
-		if (hw->blk[i].is_list_init) {
-			struct ice_prof_map *del, *tmp;
-
-			ice_acquire_lock(&es->prof_map_lock);
-			LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &es->prof_map,
-						 ice_prof_map, list) {
-				LIST_DEL(&del->list);
-				ice_free(hw, del);
-			}
-			INIT_LIST_HEAD(&es->prof_map);
-			ice_release_lock(&es->prof_map_lock);
-
-			ice_acquire_lock(&hw->fl_profs_locks[i]);
-			ice_free_flow_profs(hw, i);
-			ice_release_lock(&hw->fl_profs_locks[i]);
-		}
-
-		ice_free_vsig_tbl(hw, (enum ice_block)i);
-
-		ice_memset(xlt1->ptypes, 0, xlt1->count * sizeof(*xlt1->ptypes),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt1->ptg_tbl, 0,
-			   ICE_MAX_PTGS * sizeof(*xlt1->ptg_tbl),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt1->t, 0, xlt1->count * sizeof(*xlt1->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(xlt2->vsis, 0, xlt2->count * sizeof(*xlt2->vsis),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt2->vsig_tbl, 0,
-			   xlt2->count * sizeof(*xlt2->vsig_tbl),
-			   ICE_NONDMA_MEM);
-		ice_memset(xlt2->t, 0, xlt2->count * sizeof(*xlt2->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(prof->t, 0, prof->count * sizeof(*prof->t),
-			   ICE_NONDMA_MEM);
-		ice_memset(prof_redir->t, 0,
-			   prof_redir->count * sizeof(*prof_redir->t),
-			   ICE_NONDMA_MEM);
-
-		ice_memset(es->t, 0, es->count * sizeof(*es->t),
-			   ICE_NONDMA_MEM);
-		ice_memset(es->ref_count, 0, es->count * sizeof(*es->ref_count),
-			   ICE_NONDMA_MEM);
-		ice_memset(es->written, 0, es->count * sizeof(*es->written),
-			   ICE_NONDMA_MEM);
-	}
-}
 
 /**
  * ice_init_hw_tbls - init hardware table memory
@@ -4100,43 +3768,6 @@ ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
 	return entry;
 }
 
-/**
- * ice_set_prof_context - Set context for a given profile
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @id: profile tracking ID
- * @cntxt: context
- */
-struct ice_prof_map *
-ice_set_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 cntxt)
-{
-	struct ice_prof_map *entry;
-
-	entry = ice_search_prof_id(hw, blk, id);
-	if (entry)
-		entry->context = cntxt;
-
-	return entry;
-}
-
-/**
- * ice_get_prof_context - Get context for a given profile
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @id: profile tracking ID
- * @cntxt: pointer to variable to receive the context
- */
-struct ice_prof_map *
-ice_get_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 *cntxt)
-{
-	struct ice_prof_map *entry;
-
-	entry = ice_search_prof_id(hw, blk, id);
-	if (entry)
-		*cntxt = entry->context;
-
-	return entry;
-}
 
 /**
  * ice_vsig_prof_id_count - count profiles in a VSIG
@@ -5094,33 +4725,6 @@ ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl)
 	return status;
 }
 
-/**
- * ice_add_flow - add flow
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @vsi: array of VSIs to enable with the profile specified by ID
- * @count: number of elements in the VSI array
- * @id: profile tracking ID
- *
- * Calling this function will update the hardware tables to enable the
- * profile indicated by the ID parameter for the VSIs specified in the VSI
- * array. Once successfully called, the flow will be enabled.
- */
-enum ice_status
-ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id)
-{
-	enum ice_status status;
-	u16 i;
-
-	for (i = 0; i < count; i++) {
-		status = ice_add_prof_id_flow(hw, blk, vsi[i], id);
-		if (status)
-			return status;
-	}
-
-	return ICE_SUCCESS;
-}
 
 /**
  * ice_rem_prof_from_list - remove a profile from list
@@ -5276,30 +4880,3 @@ ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl)
 	return status;
 }
 
-/**
- * ice_rem_flow - remove flow
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @vsi: array of VSIs from which to remove the profile specified by ID
- * @count: number of elements in the VSI array
- * @id: profile tracking ID
- *
- * The function will remove flows from the specified VSIs that were enabled
- * using ice_add_flow. The ID value will indicated which profile will be
- * removed. Once successfully called, the flow will be disabled.
- */
-enum ice_status
-ice_rem_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id)
-{
-	enum ice_status status;
-	u16 i;
-
-	for (i = 0; i < count; i++) {
-		status = ice_rem_prof_id_flow(hw, blk, vsi[i], id);
-		if (status)
-			return status;
-	}
-
-	return ICE_SUCCESS;
-}
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 4714fe646..7142ae7fe 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -27,66 +27,18 @@ void ice_release_change_lock(struct ice_hw *hw);
 enum ice_status
 ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
 		  u8 *prot, u16 *off);
-struct ice_generic_seg_hdr *
-ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
-		    struct ice_pkg_hdr *pkg_hdr);
-enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg);
-
-enum ice_status
-ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_header);
-enum ice_status
-ice_get_pkg_info(struct ice_hw *hw);
-
-void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg);
-
 enum ice_status
 ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type,
 		     u16 *value);
 enum ice_status
 ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt,
 		   struct LIST_HEAD_TYPE *fv_list);
-enum ice_status
-ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
-		      u16 buf_size, struct ice_sq_cd *cd);
 
-enum ice_status
-ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count);
-u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld);
-u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld);
-
-/* package buffer building routines */
-
-struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw);
-enum ice_status
-ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count);
-void *ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size);
-struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld);
-void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld);
-
-/* XLT1/PType group functions */
-enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk);
-enum ice_status
-ice_ptg_find_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 *ptg);
-u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk);
-void ice_ptg_free(struct ice_hw *hw, enum ice_block blk, u8 ptg);
-enum ice_status
-ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg);
 
 /* XLT2/VSI group functions */
-enum ice_status ice_vsig_update_xlt2(struct ice_hw *hw, enum ice_block blk);
 enum ice_status
 ice_vsig_find_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 *vsig);
-enum ice_status
-ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
-			struct LIST_HEAD_TYPE *chs, u16 *vsig);
 
-enum ice_status
-ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
-enum ice_status ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig);
-enum ice_status
-ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
-enum ice_status
-ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
 enum ice_status
 ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 	     struct ice_fv_word *es);
@@ -98,10 +50,6 @@ enum ice_status
 ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
 enum ice_status
 ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
-struct ice_prof_map *
-ice_set_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 cntxt);
-struct ice_prof_map *
-ice_get_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 *cntxt);
 enum ice_status
 ice_init_pkg(struct ice_hw *hw, u8 *buff, u32 len);
 enum ice_status
@@ -109,15 +57,8 @@ ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
 enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
 void ice_free_seg(struct ice_hw *hw);
 void ice_fill_blk_tbls(struct ice_hw *hw);
-void ice_clear_hw_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
 enum ice_status
-ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id);
-enum ice_status
-ice_rem_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id);
-enum ice_status
 ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id);
 
 enum ice_status
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 51/66] net/ice/base: refactor VSI node sched code
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (49 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 50/66] net/ice/base: cleanup ice flex pipe files Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 52/66] net/ice/base: add some minor new defines Leyi Rong
                     ` (15 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Grzegorz Nitka, Paul M Stillwell Jr

Refactored VSI node sched code to use port_info ptr as call arg.

The declaration of the VSI node getter function has been modified to take a
pointer to the ice_port_info structure instead of a pointer to the hw
structure. This way the suitable port_info structure is used to find the
VSI node.
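The motivation can be sketched in a few lines. This is a minimal standalone model, not the driver's real types: the struct names and the `get_vsi_node()` helper below are simplified stand-ins for `ice_port_info`/`ice_sched_node` and `ice_sched_get_vsi_node()`. The point is that the callee can still reach the hw struct through `pi->hw`, while walking the sibling list that belongs to this specific port rather than whatever port `hw->port_info` happens to point at.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the driver's scheduler types. */
struct sched_node {
	int vsi_handle;
	struct sched_node *sibling;
};

struct hw_s {
	int max_children; /* hw-wide data remains reachable via pi->hw */
};

struct port_info {
	struct hw_s *hw;
	struct sched_node *first; /* per-port entry point into the tree */
};

/* Analogous to ice_sched_get_vsi_node(pi, tc_node, vsi_handle):
 * scan the siblings of this port's VSI layer for a matching handle. */
static struct sched_node *
get_vsi_node(struct port_info *pi, int vsi_handle)
{
	struct sched_node *node = pi->first;

	while (node) {
		if (node->vsi_handle == vsi_handle)
			return node;
		node = node->sibling;
	}
	return NULL;
}
```

With the old `hw`-based signature, a multi-port device would always traverse `hw->port_info`'s tree; passing `pi` makes the lookup explicit per port.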

Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 47 ++++++++++++++++----------------
 drivers/net/ice/base/ice_sched.h |  2 +-
 2 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index fa3158a7b..0f4153146 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -1451,7 +1451,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 
 /**
  * ice_sched_get_vsi_node - Get a VSI node based on VSI ID
- * @hw: pointer to the HW struct
+ * @pi: pointer to the port information structure
  * @tc_node: pointer to the TC node
  * @vsi_handle: software VSI handle
  *
@@ -1459,14 +1459,14 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
  * TC branch
  */
 struct ice_sched_node *
-ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+ice_sched_get_vsi_node(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		       u16 vsi_handle)
 {
 	struct ice_sched_node *node;
 	u8 vsi_layer;
 
-	vsi_layer = ice_sched_get_vsi_layer(hw);
-	node = ice_sched_get_first_node(hw->port_info, tc_node, vsi_layer);
+	vsi_layer = ice_sched_get_vsi_layer(pi->hw);
+	node = ice_sched_get_first_node(pi, tc_node, vsi_layer);
 
 	/* Check whether it already exists */
 	while (node) {
@@ -1587,7 +1587,7 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 
 	qgl = ice_sched_get_qgrp_layer(hw);
 	vsil = ice_sched_get_vsi_layer(hw);
-	parent = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	parent = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 	for (i = vsil + 1; i <= qgl; i++) {
 		if (!parent)
 			return ICE_ERR_CFG;
@@ -1620,7 +1620,7 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 
 /**
  * ice_sched_calc_vsi_support_nodes - calculate number of VSI support nodes
- * @hw: pointer to the HW struct
+ * @pi: pointer to the port info structure
  * @tc_node: pointer to TC node
  * @num_nodes: pointer to num nodes array
  *
@@ -1629,15 +1629,15 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
  * layers
  */
 static void
-ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
+ice_sched_calc_vsi_support_nodes(struct ice_port_info *pi,
 				 struct ice_sched_node *tc_node, u16 *num_nodes)
 {
 	struct ice_sched_node *node;
 	u8 vsil;
 	int i;
 
-	vsil = ice_sched_get_vsi_layer(hw);
-	for (i = vsil; i >= hw->sw_entry_point_layer; i--)
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	for (i = vsil; i >= pi->hw->sw_entry_point_layer; i--)
 		/* Add intermediate nodes if TC has no children and
 		 * need at least one node for VSI
 		 */
@@ -1647,11 +1647,11 @@ ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
 			/* If intermediate nodes are reached max children
 			 * then add a new one.
 			 */
-			node = ice_sched_get_first_node(hw->port_info, tc_node,
-							(u8)i);
+			node = ice_sched_get_first_node(pi, tc_node, (u8)i);
 			/* scan all the siblings */
 			while (node) {
-				if (node->num_children < hw->max_children[i])
+				if (node->num_children <
+				    pi->hw->max_children[i])
 					break;
 				node = node->sibling;
 			}
@@ -1731,14 +1731,13 @@ ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
 {
 	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
 	struct ice_sched_node *tc_node;
-	struct ice_hw *hw = pi->hw;
 
 	tc_node = ice_sched_get_tc_node(pi, tc);
 	if (!tc_node)
 		return ICE_ERR_PARAM;
 
 	/* calculate number of supported nodes needed for this VSI */
-	ice_sched_calc_vsi_support_nodes(hw, tc_node, num_nodes);
+	ice_sched_calc_vsi_support_nodes(pi, tc_node, num_nodes);
 
 	/* add VSI supported nodes to TC subtree */
 	return ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
@@ -1771,7 +1770,7 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 	if (!tc_node)
 		return ICE_ERR_CFG;
 
-	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 	if (!vsi_node)
 		return ICE_ERR_CFG;
 
@@ -1834,7 +1833,7 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
 	if (!vsi_ctx)
 		return ICE_ERR_PARAM;
-	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 
 	/* suspend the VSI if TC is not enabled */
 	if (!enable) {
@@ -1855,7 +1854,7 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 		if (status)
 			return status;
 
-		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			return ICE_ERR_CFG;
 
@@ -1966,7 +1965,7 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -2256,7 +2255,7 @@ ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
 	if (!agg_node)
 		return ICE_ERR_DOES_NOT_EXIST;
 
-	vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+	vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 	if (!vsi_node)
 		return ICE_ERR_DOES_NOT_EXIST;
 
@@ -3537,7 +3536,7 @@ ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
 		if (!vsi_handle_valid)
 			goto exit_agg_priority_per_tc;
 
-		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			goto exit_agg_priority_per_tc;
 
@@ -3593,7 +3592,7 @@ ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -4805,7 +4804,7 @@ ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle)
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -4864,7 +4863,7 @@ ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -5368,7 +5367,7 @@ ice_sched_replay_vsi_bw(struct ice_hw *hw, u16 vsi_handle,
 		tc_node = ice_sched_get_tc_node(pi, tc);
 		if (!tc_node)
 			continue;
-		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 		bw_t_info = &vsi_ctx->sched.bw_t_info[tc];
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index e444dc880..38f8f93d2 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -107,7 +107,7 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 		  u8 owner, bool enable);
 enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
 struct ice_sched_node *
-ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+ice_sched_get_vsi_node(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		       u16 vsi_handle);
 bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node);
 enum ice_status
-- 
2.17.1



* [dpdk-dev] [PATCH v2 52/66] net/ice/base: add some minor new defines
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (50 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 51/66] net/ice/base: refactor VSI node sched code Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 53/66] net/ice/base: add 16-byte Flex Rx Descriptor Leyi Rong
                     ` (14 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacek Naczyk, Faerman Lev, Paul M Stillwell Jr

1. Add defines for Link Topology Netlist Section.
2. Add missing Read MAC command response bits.
3. Add AQ error 29.

Signed-off-by: Jacek Naczyk <jacek.naczyk@intel.com>
Signed-off-by: Faerman Lev <lev.faerman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 5 ++++-
 drivers/net/ice/base/ice_type.h       | 2 ++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 4e6bce18c..f2c492d62 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -142,6 +142,8 @@ struct ice_aqc_manage_mac_read {
 #define ICE_AQC_MAN_MAC_SAN_ADDR_VALID		BIT(5)
 #define ICE_AQC_MAN_MAC_PORT_ADDR_VALID		BIT(6)
 #define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
+#define ICE_AQC_MAN_MAC_MC_MAG_EN		BIT(8)
+#define ICE_AQC_MAN_MAC_WOL_PRESERVE_ON_PFR	BIT(9)
 #define ICE_AQC_MAN_MAC_READ_S			4
 #define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
 	u8 rsvd[2];
@@ -1683,7 +1685,7 @@ struct ice_aqc_nvm {
 #define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
 #define ICE_AQC_NVM_ACTIV_SEL_NVM	BIT(3) /* Write Activate/SR Dump only */
 #define ICE_AQC_NVM_ACTIV_SEL_OROM	BIT(4)
-#define ICE_AQC_NVM_ACTIV_SEL_EXT_TLV	BIT(5)
+#define ICE_AQC_NVM_ACTIV_SEL_NETLIST	BIT(5)
 #define ICE_AQC_NVM_ACTIV_SEL_MASK	MAKEMASK(0x7, 3)
 #define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
 	__le16 module_typeid;
@@ -2402,6 +2404,7 @@ enum ice_aq_err {
 	ICE_AQ_RC_ESVN		= 26, /* SVN number prohibits this package */
 	ICE_AQ_RC_EBADMAN	= 27, /* Manifest hash mismatch */
 	ICE_AQ_RC_EBADBUF	= 28, /* Buffer hash mismatches manifest */
+	ICE_AQ_RC_EACCES_BMCU	= 29, /* BMC Update in progress */
 };
 
 /* Admin Queue command opcodes */
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 8de5cc097..4755621bb 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -983,6 +983,8 @@ enum ice_sw_fwd_act_type {
 #define ICE_SR_NVM_BANK_SIZE			0x43
 #define ICE_SR_1ND_OROM_BANK_PTR		0x44
 #define ICE_SR_OROM_BANK_SIZE			0x45
+#define ICE_SR_NETLIST_BANK_PTR			0x46
+#define ICE_SR_NETLIST_BANK_SIZE		0x47
 #define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
 #define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
 #define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
-- 
2.17.1



* [dpdk-dev] [PATCH v2 53/66] net/ice/base: add 16-byte Flex Rx Descriptor
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (51 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 52/66] net/ice/base: add some minor new defines Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 54/66] net/ice/base: add vxlan/generic tunnel management Leyi Rong
                     ` (13 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add 16-byte Flex Rx descriptor structure definition.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_lan_tx_rx.h | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
index fa2309bf1..147185212 100644
--- a/drivers/net/ice/base/ice_lan_tx_rx.h
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -373,10 +373,34 @@ enum ice_rx_prog_status_desc_error_bits {
 	ICE_RX_PROG_STATUS_DESC_NO_FD_ENTRY_S	= 1,
 };
 
-/* Rx Flex Descriptor
- * This descriptor is used instead of the legacy version descriptor when
+/* Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors when
  * ice_rlan_ctx.adv_desc is set
  */
+union ice_16b_rx_flex_desc {
+	struct {
+		__le64 pkt_addr; /* Packet buffer address */
+		__le64 hdr_addr; /* Header buffer address */
+				 /* bit 0 of hdr_addr is DD bit */
+	} read;
+	struct {
+		/* Qword 0 */
+		u8 rxdid; /* descriptor builder profile ID */
+		u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+		__le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+		__le16 pkt_len; /* [15:14] are reserved */
+		__le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+						/* sph=[11:11] */
+						/* ff1/ext=[15:12] */
+
+		/* Qword 1 */
+		__le16 status_error0;
+		__le16 l2tag1;
+		__le16 flex_meta0;
+		__le16 flex_meta1;
+	} wb; /* writeback */
+};
+
 union ice_32b_rx_flex_desc {
 	struct {
 		__le64 pkt_addr; /* Packet buffer address */
-- 
2.17.1



* [dpdk-dev] [PATCH v2 54/66] net/ice/base: add vxlan/generic tunnel management
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (52 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 53/66] net/ice/base: add 16-byte Flex Rx Descriptor Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 55/66] net/ice/base: enable additional switch rules Leyi Rong
                     ` (12 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Added routines for handling tunnel management:
	- ice_tunnel_port_in_use()
	- ice_tunnel_get_type()
	- ice_find_free_tunnel_entry()
	- ice_create_tunnel()
	- ice_destroy_tunnel()
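The lookup helpers above share one pattern: a bounded scan of the boost TCAM tunnel table, matching on the `valid`/`in_use` flags and either a port or a tunnel type. The sketch below models just that scan with simplified stand-in types; `tunnel_port_in_use()` and `find_free_entry()` are reduced analogues of `ice_tunnel_port_in_use()` and `ice_find_free_tunnel_entry()`, not the driver's real structures.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TNL_MAX 4 /* stands in for ICE_TUNNEL_MAX_ENTRIES */

struct tnl_entry {
	bool valid;    /* slot exists in the package boost TCAM */
	bool in_use;   /* a port has been programmed into it */
	uint16_t port;
	int type;      /* stands in for enum ice_tunnel_type */
};

struct tnl_tbl {
	uint16_t count;
	struct tnl_entry tbl[TNL_MAX];
};

/* Analogous to ice_tunnel_port_in_use(): match an active entry by port. */
static bool
tunnel_port_in_use(const struct tnl_tbl *t, uint16_t port, uint16_t *index)
{
	uint16_t i;

	for (i = 0; i < t->count && i < TNL_MAX; i++)
		if (t->tbl[i].in_use && t->tbl[i].port == port) {
			if (index)
				*index = i;
			return true;
		}
	return false;
}

/* Analogous to ice_find_free_tunnel_entry(): first valid, unused slot
 * of the requested tunnel type. */
static bool
find_free_entry(const struct tnl_tbl *t, int type, uint16_t *index)
{
	uint16_t i;

	for (i = 0; i < t->count && i < TNL_MAX; i++)
		if (t->tbl[i].valid && !t->tbl[i].in_use &&
		    t->tbl[i].type == type) {
			if (index)
				*index = i;
			return true;
		}
	return false;
}
```

`ice_create_tunnel()` composes these two checks before building the package buffer: a duplicate port returns `ICE_ERR_ALREADY_EXISTS`, and exhausting the free entries of that type returns `ICE_ERR_OUT_OF_RANGE`.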

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 228 +++++++++++++++++++++++++++
 drivers/net/ice/base/ice_flex_pipe.h |   6 +
 2 files changed, 234 insertions(+)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index c136c03db..4206458e5 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1711,6 +1711,234 @@ static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
 	return &bld->buf;
 }
 
+/**
+ * ice_tunnel_port_in_use
+ * @hw: pointer to the HW structure
+ * @port: port to search for
+ * @index: optionally returns index
+ *
+ * Returns whether a port is already in use as a tunnel, and optionally its
+ * index
+ */
+bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index)
+{
+	u16 i;
+
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].in_use && hw->tnl.tbl[i].port == port) {
+			if (index)
+				*index = i;
+			return true;
+		}
+
+	return false;
+}
+
+/**
+ * ice_tunnel_get_type
+ * @hw: pointer to the HW structure
+ * @port: port to search for
+ * @type: returns tunnel index
+ *
+ * For a given port number, will return the type of tunnel.
+ */
+bool
+ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type)
+{
+	u16 i;
+
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].in_use && hw->tnl.tbl[i].port == port) {
+			*type = hw->tnl.tbl[i].type;
+			return true;
+		}
+
+	return false;
+}
+
+/**
+ * ice_find_free_tunnel_entry
+ * @hw: pointer to the HW structure
+ * @type: tunnel type
+ * @index: optionally returns index
+ *
+ * Returns whether there is a free tunnel entry, and optionally its index
+ */
+static bool
+ice_find_free_tunnel_entry(struct ice_hw *hw, enum ice_tunnel_type type,
+			   u16 *index)
+{
+	u16 i;
+
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].valid && !hw->tnl.tbl[i].in_use &&
+		    hw->tnl.tbl[i].type == type) {
+			if (index)
+				*index = i;
+			return true;
+		}
+
+	return false;
+}
+
+/**
+ * ice_create_tunnel
+ * @hw: pointer to the HW structure
+ * @type: type of tunnel
+ * @port: port to use for vxlan tunnel
+ *
+ * Creates a tunnel
+ */
+enum ice_status
+ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port)
+{
+	struct ice_boost_tcam_section *sect_rx, *sect_tx;
+	enum ice_status status = ICE_ERR_MAX_LIMIT;
+	struct ice_buf_build *bld;
+	u16 index;
+
+	if (ice_tunnel_port_in_use(hw, port, NULL))
+		return ICE_ERR_ALREADY_EXISTS;
+
+	if (!ice_find_free_tunnel_entry(hw, type, &index))
+		return ICE_ERR_OUT_OF_RANGE;
+
+	bld = ice_pkg_buf_alloc(hw);
+	if (!bld)
+		return ICE_ERR_NO_MEMORY;
+
+	/* allocate 2 sections, one for RX parser, one for TX parser */
+	if (ice_pkg_buf_reserve_section(bld, 2))
+		goto ice_create_tunnel_err;
+
+	sect_rx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM,
+					  sizeof(*sect_rx));
+	if (!sect_rx)
+		goto ice_create_tunnel_err;
+	sect_rx->count = CPU_TO_LE16(1);
+
+	sect_tx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM,
+					  sizeof(*sect_tx));
+	if (!sect_tx)
+		goto ice_create_tunnel_err;
+	sect_tx->count = CPU_TO_LE16(1);
+
+	/* copy original boost entry to update package buffer */
+	ice_memcpy(sect_rx->tcam, hw->tnl.tbl[index].boost_entry,
+		   sizeof(*sect_rx->tcam), ICE_NONDMA_TO_NONDMA);
+
+	/* over-write the never-match dest port key bits with the encoded port
+	 * bits
+	 */
+	ice_set_key((u8 *)&sect_rx->tcam[0].key, sizeof(sect_rx->tcam[0].key),
+		    (u8 *)&port, NULL, NULL, NULL,
+		    offsetof(struct ice_boost_key_value, hv_dst_port_key),
+		    sizeof(sect_rx->tcam[0].key.key.hv_dst_port_key));
+
+	/* exact copy of entry to TX section entry */
+	ice_memcpy(sect_tx->tcam, sect_rx->tcam, sizeof(*sect_tx->tcam),
+		   ICE_NONDMA_TO_NONDMA);
+
+	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
+	if (!status) {
+		hw->tnl.tbl[index].port = port;
+		hw->tnl.tbl[index].in_use = true;
+	}
+
+ice_create_tunnel_err:
+	ice_pkg_buf_free(hw, bld);
+
+	return status;
+}
+
+/**
+ * ice_destroy_tunnel
+ * @hw: pointer to the HW structure
+ * @port: port of tunnel to destroy (ignored if the all parameter is true)
+ * @all: flag that states to destroy all tunnels
+ *
+ * Destroys a tunnel or all tunnels by creating an update package buffer
+ * targeting the specific updates requested and then performing an update
+ * package.
+ */
+enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all)
+{
+	struct ice_boost_tcam_section *sect_rx, *sect_tx;
+	enum ice_status status = ICE_ERR_MAX_LIMIT;
+	struct ice_buf_build *bld;
+	u16 count = 0;
+	u16 size;
+	u16 i;
+
+	/* determine count */
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].in_use &&
+		    (all || hw->tnl.tbl[i].port == port))
+			count++;
+
+	if (!count)
+		return ICE_ERR_PARAM;
+
+	/* size of section - there is at least one entry */
+	size = (count - 1) * sizeof(*sect_rx->tcam) + sizeof(*sect_rx);
+
+	bld = ice_pkg_buf_alloc(hw);
+	if (!bld)
+		return ICE_ERR_NO_MEMORY;
+
+	/* allocate 2 sections, one for RX parser, one for TX parser */
+	if (ice_pkg_buf_reserve_section(bld, 2))
+		goto ice_destroy_tunnel_err;
+
+	sect_rx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM,
+					  size);
+	if (!sect_rx)
+		goto ice_destroy_tunnel_err;
+	sect_rx->count = CPU_TO_LE16(1);
+
+	sect_tx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM,
+					  size);
+	if (!sect_tx)
+		goto ice_destroy_tunnel_err;
+	sect_tx->count = CPU_TO_LE16(1);
+
+	/* copy original boost entry to update package buffer, one copy to RX
+	 * section, another copy to the TX section
+	 */
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].in_use &&
+		    (all || hw->tnl.tbl[i].port == port)) {
+			ice_memcpy(sect_rx->tcam + i,
+				   hw->tnl.tbl[i].boost_entry,
+				   sizeof(*sect_rx->tcam),
+				   ICE_NONDMA_TO_NONDMA);
+			ice_memcpy(sect_tx->tcam + i,
+				   hw->tnl.tbl[i].boost_entry,
+				   sizeof(*sect_tx->tcam),
+				   ICE_NONDMA_TO_NONDMA);
+			hw->tnl.tbl[i].marked = true;
+		}
+
+	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
+	if (!status)
+		for (i = 0; i < hw->tnl.count &&
+		     i < ICE_TUNNEL_MAX_ENTRIES; i++)
+			if (hw->tnl.tbl[i].marked) {
+				hw->tnl.tbl[i].port = 0;
+				hw->tnl.tbl[i].in_use = false;
+				hw->tnl.tbl[i].marked = false;
+			}
+
+ice_destroy_tunnel_err:
+	ice_pkg_buf_free(hw, bld);
+
+	return status;
+}
+
 
 /**
  * ice_find_prot_off - find prot ID and offset pair, based on prof and FV index
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 7142ae7fe..13066808c 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -33,6 +33,12 @@ ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type,
 enum ice_status
 ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt,
 		   struct LIST_HEAD_TYPE *fv_list);
+enum ice_status
+ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port);
+enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all);
+bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index);
+bool
+ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type);
 
 
 /* XLT2/VSI group functions */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 55/66] net/ice/base: enable additional switch rules
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (53 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 54/66] net/ice/base: add vxlan/generic tunnel management Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 56/66] net/ice/base: allow forward to Q groups in switch rule Leyi Rong
                     ` (11 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add capability to create inner IP and inner TCP switch recipes and
rules. Change UDP tunnel dummy packet to accommodate the training of
these new rules.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_protocol_type.h |   8 +-
 drivers/net/ice/base/ice_switch.c        | 361 ++++++++++++-----------
 drivers/net/ice/base/ice_switch.h        |   1 +
 3 files changed, 203 insertions(+), 167 deletions(-)

diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index 82822fb74..38bed7a79 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -35,6 +35,7 @@ enum ice_protocol_type {
 	ICE_IPV6_IL,
 	ICE_IPV6_OFOS,
 	ICE_TCP_IL,
+	ICE_UDP_OF,
 	ICE_UDP_ILOS,
 	ICE_SCTP_IL,
 	ICE_VXLAN,
@@ -112,6 +113,7 @@ enum ice_prot_id {
 #define ICE_IPV6_OFOS_HW	40
 #define ICE_IPV6_IL_HW		41
 #define ICE_TCP_IL_HW		49
+#define ICE_UDP_OF_HW		52
 #define ICE_UDP_ILOS_HW		53
 #define ICE_SCTP_IL_HW		96
 
@@ -188,8 +190,7 @@ struct ice_l4_hdr {
 struct ice_udp_tnl_hdr {
 	u16 field;
 	u16 proto_type;
-	u16 vni;
-	u16 reserved;
+	u32 vni;	/* only use lower 24-bits */
 };
 
 struct ice_nvgre {
@@ -225,6 +226,7 @@ struct ice_prot_lkup_ext {
 	u8 n_val_words;
 	/* create a buffer to hold max words per recipe */
 	u16 field_off[ICE_MAX_CHAIN_WORDS];
+	u16 field_mask[ICE_MAX_CHAIN_WORDS];
 
 	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
 
@@ -235,6 +237,7 @@ struct ice_prot_lkup_ext {
 struct ice_pref_recipe_group {
 	u8 n_val_pairs;		/* Number of valid pairs */
 	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
+	u16 mask[ICE_NUM_WORDS_RECIPE];
 };
 
 struct ice_recp_grp_entry {
@@ -244,6 +247,7 @@ struct ice_recp_grp_entry {
 	u16 rid;
 	u8 chain_idx;
 	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
+	u16 fv_mask[ICE_NUM_WORDS_RECIPE];
 	struct ice_pref_recipe_group r_group;
 };
 #endif /* _ICE_PROTOCOL_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index c7fcd71a7..0dae1b609 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -53,60 +53,109 @@ static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
 	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
 	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
 
+static const struct ice_dummy_pkt_offsets {
+	enum ice_protocol_type type;
+	u16 offset; /* ICE_PROTOCOL_LAST indicates end of list */
+} dummy_gre_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_VXLAN,		34 },
+	{ ICE_MAC_IL,		42 },
+	{ ICE_IPV4_IL,		54 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
 static const
-u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* Ether starts */
+u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
-			  0x08, 0,		/* Ether ends */
-			  0x45, 0, 0, 0x3E,	/* IP starts */
+			  0x08, 0,
+			  0x45, 0, 0, 0x3E,	/* ICE_IPV4_OFOS 14 */
 			  0, 0, 0, 0,
 			  0, 0x2F, 0, 0,
 			  0, 0, 0, 0,
-			  0, 0, 0, 0,		/* IP ends */
-			  0x80, 0, 0x65, 0x58,	/* GRE starts */
-			  0, 0, 0, 0,		/* GRE ends */
-			  0, 0, 0, 0,		/* Ether starts */
-			  0, 0, 0, 0,
-			  0, 0, 0, 0,
-			  0x08, 0,		/* Ether ends */
-			  0x45, 0, 0, 0x14,	/* IP starts */
 			  0, 0, 0, 0,
+			  0x80, 0, 0x65, 0x58,	/* ICE_VXLAN_GRE 34 */
 			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* ICE_MAC_IL 42 */
 			  0, 0, 0, 0,
-			  0, 0, 0, 0		/* IP ends */
-			};
-
-static const u8
-dummy_udp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
-			  0, 0, 0, 0,
-			  0, 0, 0, 0,
-			  0x08, 0,		/* Ether ends */
-			  0x45, 0, 0, 0x32,	/* IP starts */
 			  0, 0, 0, 0,
-			  0, 0x11, 0, 0,
+			  0x08, 0,
+			  0x45, 0, 0, 0x14,	/* ICE_IPV4_IL 54 */
 			  0, 0, 0, 0,
-			  0, 0, 0, 0,		/* IP ends */
-			  0, 0, 0x12, 0xB5,	/* UDP start*/
-			  0, 0x1E, 0, 0,	/* UDP end*/
-			  0, 0, 0, 0,		/* VXLAN start */
-			  0, 0, 0, 0,		/* VXLAN end*/
-			  0, 0, 0, 0,		/* Ether starts */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
-			  0, 0			/* Ether ends */
+			  0, 0, 0, 0
 			};
 
+static const
+struct ice_dummy_pkt_offsets dummy_udp_tun_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_UDP_OF,		34 },
+	{ ICE_VXLAN,		42 },
+	{ ICE_MAC_IL,		50 },
+	{ ICE_IPV4_IL,		64 },
+	{ ICE_TCP_IL,		84 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
+static const
+u8 dummy_udp_tun_packet[] = {
+	0x00, 0x00, 0x00, 0x00,  /* ICE_MAC_OFOS 0 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x08, 0x00,
+
+	0x45, 0x00, 0x00, 0x5a, /* ICE_IPV4_OFOS 14 */
+	0x00, 0x01, 0x00, 0x00,
+	0x40, 0x11, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x12, 0xb5, /* ICE_UDP_OF 34 */
+	0x00, 0x46, 0x00, 0x00,
+
+	0x04, 0x00, 0x00, 0x03, /* ICE_VXLAN 42 */
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_IL 50 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x08, 0x00,
+
+	0x45, 0x00, 0x00, 0x28, /* ICE_IPV4_IL 64 */
+	0x00, 0x01, 0x00, 0x00,
+	0x40, 0x06, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 84 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x50, 0x02, 0x20, 0x00,
+	0x00, 0x00, 0x00, 0x00
+};
+
+static const
+struct ice_dummy_pkt_offsets dummy_tcp_tun_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_TCP_IL,		34 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
 static const u8
-dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
+dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
-			  0x08, 0,              /* Ether ends */
-			  0x45, 0, 0, 0x28,     /* IP starts */
+			  0x08, 0,
+			  0x45, 0, 0, 0x28,     /* ICE_IPV4_OFOS 14 */
 			  0, 0x01, 0, 0,
 			  0x40, 0x06, 0xF5, 0x69,
 			  0, 0, 0, 0,
-			  0, 0, 0, 0,   /* IP ends */
 			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* ICE_TCP_IL 34 */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
 			  0x50, 0x02, 0x20,
@@ -184,6 +233,9 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
 			u8 lkup_indx = root_bufs.content.lkup_indx[i + 1];
 
 			rg_entry->fv_idx[i] = lkup_indx;
+			rg_entry->fv_mask[i] =
+				LE16_TO_CPU(root_bufs.content.mask[i + 1]);
+
 			/* If the recipe is a chained recipe then all its
 			 * child recipe's result will have a result index.
 			 * To fill fv_words we should not use those result
@@ -4246,10 +4298,11 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
 	{ ICE_IPV6_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
 				 26, 28, 30, 32, 34, 36, 38 } },
 	{ ICE_TCP_IL,		{ 0, 2 } },
+	{ ICE_UDP_OF,		{ 0, 2 } },
 	{ ICE_UDP_ILOS,		{ 0, 2 } },
 	{ ICE_SCTP_IL,		{ 0, 2 } },
-	{ ICE_VXLAN,		{ 8, 10, 12 } },
-	{ ICE_GENEVE,		{ 8, 10, 12 } },
+	{ ICE_VXLAN,		{ 8, 10, 12, 14 } },
+	{ ICE_GENEVE,		{ 8, 10, 12, 14 } },
 	{ ICE_VXLAN_GPE,	{ 0, 2, 4 } },
 	{ ICE_NVGRE,		{ 0, 2 } },
 	{ ICE_PROTOCOL_LAST,	{ 0 } }
@@ -4262,11 +4315,14 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
  */
 static const struct ice_pref_recipe_group ice_recipe_pack[] = {
 	{3, { { ICE_MAC_OFOS_HW, 0, 0 }, { ICE_MAC_OFOS_HW, 2, 0 },
-	      { ICE_MAC_OFOS_HW, 4, 0 } } },
+	      { ICE_MAC_OFOS_HW, 4, 0 } }, { 0xffff, 0xffff, 0xffff, 0xffff } },
 	{4, { { ICE_MAC_IL_HW, 0, 0 }, { ICE_MAC_IL_HW, 2, 0 },
-	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } } },
-	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } } },
-	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } } },
+	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } },
+		{ 0xffff, 0xffff, 0xffff, 0xffff } },
+	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } },
+		{ 0xffff, 0xffff, 0xffff, 0xffff } },
+	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } },
+		{ 0xffff, 0xffff, 0xffff, 0xffff } },
 };
 
 static const struct ice_protocol_entry ice_prot_id_tbl[] = {
@@ -4277,6 +4333,7 @@ static const struct ice_protocol_entry ice_prot_id_tbl[] = {
 	{ ICE_IPV6_OFOS,	ICE_IPV6_OFOS_HW },
 	{ ICE_IPV6_IL,		ICE_IPV6_IL_HW },
 	{ ICE_TCP_IL,		ICE_TCP_IL_HW },
+	{ ICE_UDP_OF,		ICE_UDP_OF_HW },
 	{ ICE_UDP_ILOS,		ICE_UDP_ILOS_HW },
 	{ ICE_SCTP_IL,		ICE_SCTP_IL_HW },
 	{ ICE_VXLAN,		ICE_UDP_OF_HW },
@@ -4395,7 +4452,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
 	word = lkup_exts->n_val_words;
 
 	for (j = 0; j < sizeof(rule->m_u) / sizeof(u16); j++)
-		if (((u16 *)&rule->m_u)[j] == 0xffff &&
+		if (((u16 *)&rule->m_u)[j] &&
 		    rule->type < ARRAY_SIZE(ice_prot_ext)) {
 			/* No more space to accommodate */
 			if (word >= ICE_MAX_CHAIN_WORDS)
@@ -4404,6 +4461,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
 				ice_prot_ext[rule->type].offs[j];
 			lkup_exts->fv_words[word].prot_id =
 				ice_prot_id_tbl[rule->type].protocol_id;
+			lkup_exts->field_mask[word] = ((u16 *)&rule->m_u)[j];
 			word++;
 		}
 
@@ -4527,6 +4585,7 @@ ice_create_first_fit_recp_def(struct ice_hw *hw,
 				lkup_exts->fv_words[j].prot_id;
 			grp->pairs[grp->n_val_pairs].off =
 				lkup_exts->fv_words[j].off;
+			grp->mask[grp->n_val_pairs] = lkup_exts->field_mask[j];
 			grp->n_val_pairs++;
 		}
 
@@ -4561,14 +4620,22 @@ ice_fill_fv_word_index(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list,
 
 		for (i = 0; i < rg->r_group.n_val_pairs; i++) {
 			struct ice_fv_word *pr;
+			u16 mask;
 			u8 j;
 
 			pr = &rg->r_group.pairs[i];
+			mask = rg->r_group.mask[i];
+
 			for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
 				if (fv_ext[j].prot_id == pr->prot_id &&
 				    fv_ext[j].off == pr->off) {
 					/* Store index of field vector */
 					rg->fv_idx[i] = j;
+					/* Mask is given by caller as big
+					 * endian, but sent to FW as little
+					 * endian
+					 */
+					rg->fv_mask[i] = mask << 8 | mask >> 8;
 					break;
 				}
 		}
@@ -4666,7 +4733,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
 
 		for (i = 0; i < entry->r_group.n_val_pairs; i++) {
 			buf[recps].content.lkup_indx[i + 1] = entry->fv_idx[i];
-			buf[recps].content.mask[i + 1] = CPU_TO_LE16(0xFFFF);
+			buf[recps].content.mask[i + 1] =
+				CPU_TO_LE16(entry->fv_mask[i]);
 		}
 
 		if (rm->n_grp_count > 1) {
@@ -4888,6 +4956,8 @@ ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm,
 		rm->n_ext_words = lkup_exts->n_val_words;
 		ice_memcpy(&rm->ext_words, lkup_exts->fv_words,
 			   sizeof(rm->ext_words), ICE_NONDMA_TO_NONDMA);
+		ice_memcpy(rm->word_masks, lkup_exts->field_mask,
+			   sizeof(rm->word_masks), ICE_NONDMA_TO_NONDMA);
 		goto out;
 	}
 
@@ -5089,16 +5159,8 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	return status;
 }
 
-#define ICE_MAC_HDR_OFFSET	0
-#define ICE_IP_HDR_OFFSET	14
-#define ICE_GRE_HDR_OFFSET	34
-#define ICE_MAC_IL_HDR_OFFSET	42
-#define ICE_IP_IL_HDR_OFFSET	56
-#define ICE_L4_HDR_OFFSET	34
-#define ICE_UDP_TUN_HDR_OFFSET	42
-
 /**
- * ice_find_dummy_packet - find dummy packet with given match criteria
+ * ice_find_dummy_packet - find dummy packet by tunnel type
  *
  * @lkups: lookup elements or match criteria for the advanced recipe, one
  *	   structure per protocol header
@@ -5106,17 +5168,20 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
  * @tun_type: tunnel type from the match criteria
  * @pkt: dummy packet to fill according to filter match criteria
  * @pkt_len: packet length of dummy packet
+ * @offsets: pointer to receive the pointer to the offsets for the packet
  */
 static void
 ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 		      enum ice_sw_tunnel_type tun_type, const u8 **pkt,
-		      u16 *pkt_len)
+		      u16 *pkt_len,
+		      const struct ice_dummy_pkt_offsets **offsets)
 {
 	u16 i;
 
 	if (tun_type == ICE_SW_TUN_NVGRE || tun_type == ICE_ALL_TUNNELS) {
 		*pkt = dummy_gre_packet;
 		*pkt_len = sizeof(dummy_gre_packet);
+		*offsets = dummy_gre_packet_offsets;
 		return;
 	}
 
@@ -5124,6 +5189,7 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 	    tun_type == ICE_SW_TUN_VXLAN_GPE) {
 		*pkt = dummy_udp_tun_packet;
 		*pkt_len = sizeof(dummy_udp_tun_packet);
+		*offsets = dummy_udp_tun_packet_offsets;
 		return;
 	}
 
@@ -5131,12 +5197,14 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 		if (lkups[i].type == ICE_UDP_ILOS) {
 			*pkt = dummy_udp_tun_packet;
 			*pkt_len = sizeof(dummy_udp_tun_packet);
+			*offsets = dummy_udp_tun_packet_offsets;
 			return;
 		}
 	}
 
 	*pkt = dummy_tcp_tun_packet;
 	*pkt_len = sizeof(dummy_tcp_tun_packet);
+	*offsets = dummy_tcp_tun_packet_offsets;
 }
 
 /**
@@ -5145,16 +5213,16 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
  * @lkups: lookup elements or match criteria for the advanced recipe, one
  *	   structure per protocol header
  * @lkups_cnt: number of protocols
- * @tun_type: to know if the dummy packet is supposed to be tunnel packet
  * @s_rule: stores rule information from the match criteria
  * @dummy_pkt: dummy packet to fill according to filter match criteria
  * @pkt_len: packet length of dummy packet
+ * @offsets: offset info for the dummy packet
  */
-static void
+static enum ice_status
 ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
-			  enum ice_sw_tunnel_type tun_type,
 			  struct ice_aqc_sw_rules_elem *s_rule,
-			  const u8 *dummy_pkt, u16 pkt_len)
+			  const u8 *dummy_pkt, u16 pkt_len,
+			  const struct ice_dummy_pkt_offsets *offsets)
 {
 	u8 *pkt;
 	u16 i;
@@ -5167,124 +5235,74 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 	ice_memcpy(pkt, dummy_pkt, pkt_len, ICE_NONDMA_TO_NONDMA);
 
 	for (i = 0; i < lkups_cnt; i++) {
-		u32 len, pkt_off, hdr_size, field_off;
+		enum ice_protocol_type type;
+		u16 offset = 0, len = 0, j;
+		bool found = false;
+
+		/* find the start of this layer; it should be found since this
+		 * was already checked when searching for the dummy packet
+		 */
+		type = lkups[i].type;
+		for (j = 0; offsets[j].type != ICE_PROTOCOL_LAST; j++) {
+			if (type == offsets[j].type) {
+				offset = offsets[j].offset;
+				found = true;
+				break;
+			}
+		}
+		/* this should never happen in a correct calling sequence */
+		if (!found)
+			return ICE_ERR_PARAM;
 
 		switch (lkups[i].type) {
 		case ICE_MAC_OFOS:
 		case ICE_MAC_IL:
-			pkt_off = offsetof(struct ice_ether_hdr, dst_addr) +
-				((lkups[i].type == ICE_MAC_IL) ?
-				 ICE_MAC_IL_HDR_OFFSET : 0);
-			len = sizeof(lkups[i].h_u.eth_hdr.dst_addr);
-			if ((tun_type == ICE_SW_TUN_VXLAN ||
-			     tun_type == ICE_SW_TUN_GENEVE ||
-			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
-			     lkups[i].type == ICE_MAC_IL) {
-				pkt_off += sizeof(struct ice_udp_tnl_hdr);
-			}
-
-			ice_memcpy(&pkt[pkt_off],
-				   &lkups[i].h_u.eth_hdr.dst_addr, len,
-				   ICE_NONDMA_TO_NONDMA);
-			pkt_off = offsetof(struct ice_ether_hdr, src_addr) +
-				((lkups[i].type == ICE_MAC_IL) ?
-				 ICE_MAC_IL_HDR_OFFSET : 0);
-			len = sizeof(lkups[i].h_u.eth_hdr.src_addr);
-			if ((tun_type == ICE_SW_TUN_VXLAN ||
-			     tun_type == ICE_SW_TUN_GENEVE ||
-			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
-			     lkups[i].type == ICE_MAC_IL) {
-				pkt_off += sizeof(struct ice_udp_tnl_hdr);
-			}
-			ice_memcpy(&pkt[pkt_off],
-				   &lkups[i].h_u.eth_hdr.src_addr, len,
-				   ICE_NONDMA_TO_NONDMA);
-			if (lkups[i].h_u.eth_hdr.ethtype_id) {
-				pkt_off = offsetof(struct ice_ether_hdr,
-						   ethtype_id) +
-					((lkups[i].type == ICE_MAC_IL) ?
-					 ICE_MAC_IL_HDR_OFFSET : 0);
-				len = sizeof(lkups[i].h_u.eth_hdr.ethtype_id);
-				if ((tun_type == ICE_SW_TUN_VXLAN ||
-				     tun_type == ICE_SW_TUN_GENEVE ||
-				     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
-				     lkups[i].type == ICE_MAC_IL) {
-					pkt_off +=
-						sizeof(struct ice_udp_tnl_hdr);
-				}
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.eth_hdr.ethtype_id,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
+			len = sizeof(struct ice_ether_hdr);
 			break;
 		case ICE_IPV4_OFOS:
-			hdr_size = sizeof(struct ice_ipv4_hdr);
-			if (lkups[i].h_u.ipv4_hdr.dst_addr) {
-				pkt_off = ICE_IP_HDR_OFFSET +
-					   offsetof(struct ice_ipv4_hdr,
-						    dst_addr);
-				field_off = offsetof(struct ice_ipv4_hdr,
-						     dst_addr);
-				len = hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.ipv4_hdr.dst_addr,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
-			if (lkups[i].h_u.ipv4_hdr.src_addr) {
-				pkt_off = ICE_IP_HDR_OFFSET +
-					   offsetof(struct ice_ipv4_hdr,
-						    src_addr);
-				field_off = offsetof(struct ice_ipv4_hdr,
-						     src_addr);
-				len = hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.ipv4_hdr.src_addr,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
-			break;
 		case ICE_IPV4_IL:
+			len = sizeof(struct ice_ipv4_hdr);
 			break;
 		case ICE_TCP_IL:
+		case ICE_UDP_OF:
 		case ICE_UDP_ILOS:
+			len = sizeof(struct ice_l4_hdr);
+			break;
 		case ICE_SCTP_IL:
-			hdr_size = sizeof(struct ice_udp_tnl_hdr);
-			if (lkups[i].h_u.l4_hdr.dst_port) {
-				pkt_off = ICE_L4_HDR_OFFSET +
-					   offsetof(struct ice_l4_hdr,
-						    dst_port);
-				field_off = offsetof(struct ice_l4_hdr,
-						     dst_port);
-				len =  hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.l4_hdr.dst_port,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
-			if (lkups[i].h_u.l4_hdr.src_port) {
-				pkt_off = ICE_L4_HDR_OFFSET +
-					offsetof(struct ice_l4_hdr, src_port);
-				field_off = offsetof(struct ice_l4_hdr,
-						     src_port);
-				len =  hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.l4_hdr.src_port,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
+			len = sizeof(struct ice_sctp_hdr);
 			break;
 		case ICE_VXLAN:
 		case ICE_GENEVE:
 		case ICE_VXLAN_GPE:
-			pkt_off = ICE_UDP_TUN_HDR_OFFSET +
-				   offsetof(struct ice_udp_tnl_hdr, vni);
-			field_off = offsetof(struct ice_udp_tnl_hdr, vni);
-			len =  sizeof(struct ice_udp_tnl_hdr) - field_off;
-			ice_memcpy(&pkt[pkt_off], &lkups[i].h_u.tnl_hdr.vni,
-				   len, ICE_NONDMA_TO_NONDMA);
+			len = sizeof(struct ice_udp_tnl_hdr);
 			break;
 		default:
-			break;
+			return ICE_ERR_PARAM;
 		}
+
+		/* the length should be a word multiple */
+		if (len % ICE_BYTES_PER_WORD)
+			return ICE_ERR_CFG;
+
+		/* We have the offset to the header start, the length, the
+		 * caller's header values and mask. Use this information to
+		 * copy the data into the dummy packet appropriately based on
+		 * the mask. Note that we need to only write the bits as
+		 * indicated by the mask to make sure we don't improperly write
+		 * over any significant packet data.
+		 */
+		for (j = 0; j < len / sizeof(u16); j++)
+			if (((u16 *)&lkups[i].m_u)[j])
+				((u16 *)(pkt + offset))[j] =
+					(((u16 *)(pkt + offset))[j] &
+					 ~((u16 *)&lkups[i].m_u)[j]) |
+					(((u16 *)&lkups[i].h_u)[j] &
+					 ((u16 *)&lkups[i].m_u)[j]);
 	}
+
 	s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(pkt_len);
+
+	return ICE_SUCCESS;
 }
 
 /**
@@ -5438,7 +5456,7 @@ ice_adv_add_update_vsi_list(struct ice_hw *hw,
 }
 
 /**
- * ice_add_adv_rule - create an advanced switch rule
+ * ice_add_adv_rule - helper function to create an advanced switch rule
  * @hw: pointer to the hardware structure
  * @lkups: information on the words that needs to be looked up. All words
  * together makes one recipe
@@ -5462,11 +5480,13 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 {
 	struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL;
 	u16 rid = 0, i, pkt_len, rule_buf_sz, vsi_handle;
-	struct ice_aqc_sw_rules_elem *s_rule;
+	const struct ice_dummy_pkt_offsets *pkt_offsets;
+	struct ice_aqc_sw_rules_elem *s_rule = NULL;
 	struct LIST_HEAD_TYPE *rule_head;
 	struct ice_switch_info *sw;
 	enum ice_status status;
 	const u8 *pkt = NULL;
+	bool found = false;
 	u32 act = 0;
 
 	if (!lkups_cnt)
@@ -5475,13 +5495,25 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	for (i = 0; i < lkups_cnt; i++) {
 		u16 j, *ptr;
 
-		/* Validate match masks to make sure they match complete 16-bit
-		 * words.
+		/* Validate match masks to make sure that there is something
+		 * to match.
 		 */
-		ptr = (u16 *)&lkups->m_u;
+		ptr = (u16 *)&lkups[i].m_u;
 		for (j = 0; j < sizeof(lkups->m_u) / sizeof(u16); j++)
-			if (ptr[j] != 0 && ptr[j] != 0xffff)
-				return ICE_ERR_PARAM;
+			if (ptr[j] != 0) {
+				found = true;
+				break;
+			}
+	}
+	if (!found)
+		return ICE_ERR_PARAM;
+
+	/* make sure that we can locate a dummy packet */
+	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt, &pkt_len,
+			      &pkt_offsets);
+	if (!pkt) {
+		status = ICE_ERR_PARAM;
+		goto err_ice_add_adv_rule;
 	}
 
 	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
@@ -5522,8 +5554,6 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		}
 		return status;
 	}
-	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
-			      &pkt_len);
 	rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
 	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rule_buf_sz);
 	if (!s_rule)
@@ -5568,8 +5598,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid);
 	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
 
-	ice_fill_adv_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, s_rule,
-				  pkt, pkt_len);
+	ice_fill_adv_dummy_packet(lkups, lkups_cnt, s_rule, pkt, pkt_len,
+				  pkt_offsets);
 
 	status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
 				 rule_buf_sz, 1, ice_aqc_opc_add_sw_rules,
@@ -5745,11 +5775,12 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo)
 {
 	struct ice_adv_fltr_mgmt_list_entry *list_elem;
+	const struct ice_dummy_pkt_offsets *offsets;
 	struct ice_prot_lkup_ext lkup_exts;
 	u16 rule_buf_sz, pkt_len, i, rid;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
 	enum ice_status status = ICE_SUCCESS;
 	bool remove_rule = false;
-	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
 	const u8 *pkt = NULL;
 	u16 vsi_handle;
 
@@ -5797,7 +5828,7 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		struct ice_aqc_sw_rules_elem *s_rule;
 
 		ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
-				      &pkt_len);
+				      &pkt_len, &offsets);
 		rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
 		s_rule =
 			(struct ice_aqc_sw_rules_elem *)ice_malloc(hw,
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index b788aa7ec..4c34bc2ea 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -192,6 +192,7 @@ struct ice_sw_recipe {
 	 * recipe
 	 */
 	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
+	u16 word_masks[ICE_MAX_CHAIN_WORDS];
 
 	/* if this recipe is a collection of other recipe */
 	u8 big_recp;
-- 
2.17.1



* [dpdk-dev] [PATCH v2 56/66] net/ice/base: allow forward to Q groups in switch rule
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (54 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 55/66] net/ice/base: enable additional switch rules Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 57/66] net/ice/base: changes for reducing ice add adv rule time Leyi Rong
                     ` (10 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Enable forward to Q group action in ice_add_adv_rule.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 0dae1b609..9f47ae96b 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -5488,6 +5488,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	const u8 *pkt = NULL;
 	bool found = false;
 	u32 act = 0;
+	u8 q_rgn;
 
 	if (!lkups_cnt)
 		return ICE_ERR_PARAM;
@@ -5518,6 +5519,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 
 	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
 	      rinfo->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	      rinfo->sw_act.fltr_act == ICE_FWD_TO_QGRP ||
 	      rinfo->sw_act.fltr_act == ICE_DROP_PACKET))
 		return ICE_ERR_CFG;
 
@@ -5570,6 +5572,15 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
 		       ICE_SINGLE_ACT_Q_INDEX_M;
 		break;
+	case ICE_FWD_TO_QGRP:
+		q_rgn = rinfo->sw_act.qgrp_size > 0 ?
+			(u8)ice_ilog2(rinfo->sw_act.qgrp_size) : 0;
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+		       ICE_SINGLE_ACT_Q_INDEX_M;
+		act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) &
+		       ICE_SINGLE_ACT_Q_REGION_M;
+		break;
 	case ICE_DROP_PACKET:
 		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
 		       ICE_SINGLE_ACT_VALID_BIT;
-- 
2.17.1



* [dpdk-dev] [PATCH v2 57/66] net/ice/base: changes for reducing ice add adv rule time
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (55 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 56/66] net/ice/base: allow forward to Q groups in switch rule Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 58/66] net/ice/base: deduce TSA value in the CEE mode Leyi Rong
                     ` (9 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Shivanshu Shukla, Paul M Stillwell Jr

ice_find_recp was calling ice_get_recp_to_prof_map on every invocation.
ice_get_recp_to_prof_map is a very expensive operation, so the number of
calls to it should be kept to a minimum. Move the call into
ice_get_recp_frm_fw, since a fresh recipe-to-profile mapping is only
needed when checking FW to see whether the recipe we are trying to add
already exists there.

Signed-off-by: Shivanshu Shukla <shivanshu.shukla@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 9f47ae96b..fe4d344a4 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -168,6 +168,8 @@ static ice_declare_bitmap(recipe_to_profile[ICE_MAX_NUM_RECIPES],
 			  ICE_MAX_NUM_PROFILES);
 static ice_declare_bitmap(available_result_ids, ICE_CHAIN_FV_INDEX_START + 1);
 
+static void ice_get_recp_to_prof_map(struct ice_hw *hw);
+
 /**
  * ice_get_recp_frm_fw - update SW bookkeeping from FW recipe entries
  * @hw: pointer to hardware structure
@@ -189,6 +191,10 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
 	struct ice_prot_lkup_ext *lkup_exts;
 	enum ice_status status;
 
+	/* Get recipe to profile map so that we can get the fv from
+	 * lkups that we read for a recipe from FW.
+	 */
+	ice_get_recp_to_prof_map(hw);
 	/* we need a buffer big enough to accommodate all the recipes */
 	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
 		ICE_MAX_NUM_RECIPES, sizeof(*tmp));
@@ -4355,7 +4361,6 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
 	struct ice_sw_recipe *recp;
 	u16 i;
 
-	ice_get_recp_to_prof_map(hw);
 	/* Initialize available_result_ids which tracks available result idx */
 	for (i = 0; i <= ICE_CHAIN_FV_INDEX_START; i++)
 		ice_set_bit(ICE_CHAIN_FV_INDEX_START - i,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 58/66] net/ice/base: deduce TSA value in the CEE mode
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (56 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 57/66] net/ice/base: changes for reducing ice add adv rule time Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 59/66] net/ice/base: rework API for ice zero bitmap Leyi Rong
                     ` (8 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

In CEE mode, the TSA information can be derived from the reported
priority value.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index 008c7a110..a6fbedd18 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -445,9 +445,15 @@ ice_parse_cee_pgcfg_tlv(struct ice_cee_feat_tlv *tlv,
 	 *        |pg0|pg1|pg2|pg3|pg4|pg5|pg6|pg7|
 	 *        ---------------------------------
 	 */
-	ice_for_each_traffic_class(i)
+	ice_for_each_traffic_class(i) {
 		etscfg->tcbwtable[i] = buf[offset++];
 
+		if (etscfg->prio_table[i] == ICE_CEE_PGID_STRICT)
+			dcbcfg->etscfg.tsatable[i] = ICE_IEEE_TSA_STRICT;
+		else
+			dcbcfg->etscfg.tsatable[i] = ICE_IEEE_TSA_ETS;
+	}
+
 	/* Number of TCs supported (1 octet) */
 	etscfg->maxtcs = buf[offset];
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread
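The mapping in the hunk above is small enough to state as a standalone helper. This sketch assumes the IEEE 802.1Q TSA encodings (strict priority = 0, ETS = 2) and the CEE strict priority-group ID of 15; the driver's real constants are defined in ice_dcb.h:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone version of the priority-to-TSA deduction. The constants
 * assume IEEE 802.1Q TSA encodings (strict = 0, ETS = 2) and the CEE
 * strict PGID of 15; the driver's values live in ice_dcb.h. */
#define CEE_PGID_STRICT 15
#define IEEE_TSA_STRICT 0
#define IEEE_TSA_ETS    2

static uint8_t cee_pgid_to_tsa(uint8_t pgid)
{
	/* a TC whose reported PGID is "strict" gets strict-priority
	 * scheduling; everything else is treated as ETS */
	return pgid == CEE_PGID_STRICT ? IEEE_TSA_STRICT : IEEE_TSA_ETS;
}
```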

* [dpdk-dev] [PATCH v2 59/66] net/ice/base: rework API for ice zero bitmap
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (57 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 58/66] net/ice/base: deduce TSA value in the CEE mode Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 60/66] net/ice/base: rework API for ice cp bitmap Leyi Rong
                     ` (7 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Fix ice_zero_bitmap to zero the entire storage.

Fixes: c9e37832c95f ("net/ice/base: rework on bit ops")

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_bitops.h | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/base/ice_bitops.h b/drivers/net/ice/base/ice_bitops.h
index f52713021..aca5529a6 100644
--- a/drivers/net/ice/base/ice_bitops.h
+++ b/drivers/net/ice/base/ice_bitops.h
@@ -147,22 +147,15 @@ ice_test_and_set_bit(u16 nr, ice_bitmap_t *bitmap)
  * @bmp: bitmap to set zeros
  * @size: Size of the bitmaps in bits
  *
- * This function sets bits of a bitmap to zero.
+ * Set all of the bits in a bitmap to zero. Note that this function assumes it
+ * operates on an ice_bitmap_t which was declared using ice_declare_bitmap. It
+ * will zero every bit in the last chunk, even if those bits are beyond the
+ * size.
  */
 static inline void ice_zero_bitmap(ice_bitmap_t *bmp, u16 size)
 {
-	ice_bitmap_t mask;
-	u16 i;
-
-	/* Handle all but last chunk*/
-	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
-		bmp[i] = 0;
-	/* For the last chunk, we want to take care of not to modify bits
-	 * outside the size boundary. ~mask take care of all the bits outside
-	 * the boundary.
-	 */
-	mask = LAST_CHUNK_MASK(size);
-	bmp[i] &= ~mask;
+	ice_memset(bmp, 0, BITS_TO_CHUNKS(size) * sizeof(ice_bitmap_t),
+		   ICE_NONDMA_MEM);
 }
 
 /**
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread
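The behavioural difference this patch relies on can be demonstrated in isolation. A minimal sketch, assuming a 64-bit chunk type standing in for ice_bitmap_t:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t bitmap_t; /* stand-in for ice_bitmap_t (assumption) */
#define BITS_PER_CHUNK 64u
#define BITS_TO_CHUNKS(n) (((n) + BITS_PER_CHUNK - 1) / BITS_PER_CHUNK)

/* Old behaviour: clear only the bits inside `size`, leaving anything
 * past the boundary in the last chunk untouched. */
static void zero_bitmap_partial(bitmap_t *bmp, uint16_t size)
{
	uint16_t i;

	for (i = 0; i < size; i++)
		bmp[i / BITS_PER_CHUNK] &= ~((bitmap_t)1 << (i % BITS_PER_CHUNK));
}

/* New behaviour from the patch: memset the whole declared storage,
 * including the bits past `size` in the final chunk. */
static void zero_bitmap_full(bitmap_t *bmp, uint16_t size)
{
	memset(bmp, 0, BITS_TO_CHUNKS(size) * sizeof(bitmap_t));
}
```

Starting from an all-ones chunk and zeroing only 10 bits, the partial version leaves bits 10..63 stale while the memset version clears the whole chunk.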

* [dpdk-dev] [PATCH v2 60/66] net/ice/base: rework API for ice cp bitmap
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (58 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 59/66] net/ice/base: rework API for ice zero bitmap Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 61/66] net/ice/base: use ice zero bitmap instead of ice memset Leyi Rong
                     ` (6 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Fix ice_cp_bitmap to copy the entire storage.

Fixes: c9e37832c95f ("net/ice/base: rework on bit ops")

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_bitops.h | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/base/ice_bitops.h b/drivers/net/ice/base/ice_bitops.h
index aca5529a6..c74407d9d 100644
--- a/drivers/net/ice/base/ice_bitops.h
+++ b/drivers/net/ice/base/ice_bitops.h
@@ -306,21 +306,14 @@ static inline bool ice_is_any_bit_set(ice_bitmap_t *bitmap, u16 size)
  * @src: bitmap to copy from
  * @size: Size of the bitmaps in bits
  *
- * This function copy bitmap from src to dst.
+ * This function copies the bitmap from src to dst. Note that this function
+ * assumes it is operating on a bitmap declared using ice_declare_bitmap. It
+ * will copy the entire last chunk even if it contains bits beyond the size.
  */
 static inline void ice_cp_bitmap(ice_bitmap_t *dst, ice_bitmap_t *src, u16 size)
 {
-	ice_bitmap_t mask;
-	u16 i;
-
-	/* Handle all but last chunk*/
-	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
-		dst[i] = src[i];
-
-	/* We want to only copy bits within the size.*/
-	mask = LAST_CHUNK_MASK(size);
-	dst[i] &= ~mask;
-	dst[i] |= src[i] & mask;
+	ice_memcpy(dst, src, BITS_TO_CHUNKS(size) * sizeof(ice_bitmap_t),
+		   ICE_NONDMA_TO_NONDMA);
 }
 
 /**
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 61/66] net/ice/base: use ice zero bitmap instead of ice memset
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (59 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 60/66] net/ice/base: rework API for ice cp bitmap Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 62/66] net/ice/base: use the specified size for ice zero bitmap Leyi Rong
                     ` (5 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

A few places in the code used ice_memset instead of ice_zero_bitmap to
initialize a bitmap to zeros.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 4206458e5..41d45e3f2 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -3718,7 +3718,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 	u32 mask_sel = 0;
 	u8 i, j, k;
 
-	ice_memset(pair_list, 0, sizeof(pair_list), ICE_NONDMA_MEM);
+	ice_zero_bitmap(pair_list, ICE_FD_SRC_DST_PAIR_COUNT);
 
 	ice_init_fd_mask_regs(hw);
 
@@ -4495,7 +4495,7 @@ ice_adj_prof_priorities(struct ice_hw *hw, enum ice_block blk, u16 vsig,
 	enum ice_status status;
 	u16 idx;
 
-	ice_memset(ptgs_used, 0, sizeof(ptgs_used), ICE_NONDMA_MEM);
+	ice_zero_bitmap(ptgs_used, ICE_XLT1_CNT);
 	idx = vsig & ICE_VSIG_IDX_M;
 
 	/* Priority is based on the order in which the profiles are added. The
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 62/66] net/ice/base: use the specified size for ice zero bitmap
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (60 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 61/66] net/ice/base: use ice zero bitmap instead of ice memset Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 63/66] net/ice/base: fix potential memory leak in destroy tunnel Leyi Rong
                     ` (4 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

A couple of places in the code use a 'sizeof(bitmap) * BITS_PER_BYTE'
construction to calculate the size of the bitmap when calling
ice_zero_bitmap. Instead of doing this, just use the same value as in
the ice_declare_bitmap declaration.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 0f4153146..2d80af731 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -5259,8 +5259,7 @@ void ice_sched_replay_agg(struct ice_hw *hw)
 					   ICE_MAX_TRAFFIC_CLASS);
 			enum ice_status status;
 
-			ice_zero_bitmap(replay_bitmap,
-					sizeof(replay_bitmap) * BITS_PER_BYTE);
+			ice_zero_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
 			ice_sched_get_ena_tc_bitmap(pi,
 						    agg_info->replay_tc_bitmap,
 						    replay_bitmap);
@@ -5396,7 +5395,7 @@ ice_sched_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
 	struct ice_sched_agg_info *agg_info;
 	enum ice_status status;
 
-	ice_zero_bitmap(replay_bitmap, sizeof(replay_bitmap) * BITS_PER_BYTE);
+	ice_zero_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
 	if (!ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 	agg_info = ice_get_vsi_agg_info(hw, vsi_handle);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread
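The mismatch this patch removes is easy to see with stand-ins for the ice bitmap macros: the declaration rounds storage up to whole chunks, so sizeof(bitmap) * BITS_PER_BYTE reports the rounded-up capacity rather than the declared bit count. The macro names below are assumptions modelled on the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the ice bitmap declaration macros (assumptions):
 * storage rounds up to whole 64-bit chunks, so sizeof() overstates
 * the logical size passed to ice_declare_bitmap. */
typedef uint64_t bitmap_t;
#define BITS_PER_BYTE 8u
#define BITS_PER_CHUNK (sizeof(bitmap_t) * BITS_PER_BYTE)
#define BITS_TO_CHUNKS(n) (((n) + BITS_PER_CHUNK - 1) / BITS_PER_CHUNK)
#define DECLARE_BITMAP(name, n) bitmap_t name[BITS_TO_CHUNKS(n)]

#define MAX_TRAFFIC_CLASS 8
static DECLARE_BITMAP(replay_bitmap, MAX_TRAFFIC_CLASS);
```

Here sizeof(replay_bitmap) * BITS_PER_BYTE is 64, not 8, which is why passing the declared size (ICE_MAX_TRAFFIC_CLASS) is the consistent choice.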

* [dpdk-dev] [PATCH v2 63/66] net/ice/base: fix potential memory leak in destroy tunnel
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (61 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 62/66] net/ice/base: use the specified size for ice zero bitmap Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 64/66] net/ice/base: correct NVGRE header structure Leyi Rong
                     ` (3 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Fix an issue where an error path in the ice_destroy_tunnel function
would leak the allocated package buffer.

Fixes: 8c518712ef9f ("net/ice/base: add vxlan/generic tunnel management")

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 41d45e3f2..974b43b22 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1933,9 +1933,9 @@ enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all)
 				hw->tnl.tbl[i].marked = false;
 			}
 
+ice_destroy_tunnel_err:
 	ice_pkg_buf_free(hw, bld);
 
-ice_destroy_tunnel_err:
 	return status;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread
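The fix is the classic goto-cleanup pattern: moving the label above the free makes the error path release the package buffer too. A hypothetical sketch (names, the `fail` knob and return codes are illustrative, not the driver's API):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the fix: the cleanup label now sits before the free, so
 * the error path releases the buffer too. Names and return codes are
 * illustrative, not the driver's API. */
static int live_allocs; /* counts buffers not yet freed */

static int destroy_tunnel(int fail)
{
	int status = 0;
	void *bld = malloc(64); /* stands in for the package buffer */

	if (!bld)
		return -1;
	live_allocs++;

	if (fail) {
		status = -2;
		goto destroy_tunnel_err;
	}

	/* ... clear tunnel table entries on success ... */

destroy_tunnel_err:
	free(bld); /* reached on success and on error alike */
	live_allocs--;
	return status;
}
```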

* [dpdk-dev] [PATCH v2 64/66] net/ice/base: correct NVGRE header structure
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (62 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 63/66] net/ice/base: fix potential memory leak in destroy tunnel Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 65/66] net/ice/base: add link event defines Leyi Rong
                     ` (2 subsequent siblings)
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Correct NVGRE header structure and its field offsets.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_protocol_type.h | 5 +++--
 drivers/net/ice/base/ice_switch.c        | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index 38bed7a79..033efdb5a 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -194,8 +194,9 @@ struct ice_udp_tnl_hdr {
 };
 
 struct ice_nvgre {
-	u16 tni;
-	u16 flow_id;
+	u16 flags;
+	u16 protocol;
+	u32 tni_flow;
 };
 
 union ice_prot_hdr {
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index fe4d344a4..636b43d69 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -4310,7 +4310,7 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
 	{ ICE_VXLAN,		{ 8, 10, 12, 14 } },
 	{ ICE_GENEVE,		{ 8, 10, 12, 14 } },
 	{ ICE_VXLAN_GPE,	{ 0, 2, 4 } },
-	{ ICE_NVGRE,		{ 0, 2 } },
+	{ ICE_NVGRE,		{ 0, 2, 4, 6 } },
 	{ ICE_PROTOCOL_LAST,	{ 0 } }
 };
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread
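The corrected layout (2-byte flags, 2-byte protocol, then a 4-byte combined TNI/flow-ID word) is what yields the 16-bit extraction offsets { 0, 2, 4, 6 } in ice_prot_ext. A packed stand-in struct makes the offsets checkable:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Packed stand-in mirroring the corrected struct ice_nvgre:
 * 2-byte flags, 2-byte protocol, 4-byte TNI/flow word, giving
 * 16-bit word offsets 0, 2, 4 and 6. */
#pragma pack(push, 1)
struct nvgre_hdr {
	uint16_t flags;
	uint16_t protocol;
	uint32_t tni_flow;
};
#pragma pack(pop)
```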

* [dpdk-dev] [PATCH v2 65/66] net/ice/base: add link event defines
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (63 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 64/66] net/ice/base: correct NVGRE header structure Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 66/66] net/ice/base: reduce calls to get profile associations Leyi Rong
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Doug Dziggel, Paul M Stillwell Jr

Add support to mask the topology and media conflict events.

Signed-off-by: Doug Dziggel <douglas.a.dziggel@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index f2c492d62..5334f1982 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -1635,6 +1635,8 @@ struct ice_aqc_set_event_mask {
 #define ICE_AQ_LINK_EVENT_AN_COMPLETED		BIT(7)
 #define ICE_AQ_LINK_EVENT_MODULE_QUAL_FAIL	BIT(8)
 #define ICE_AQ_LINK_EVENT_PORT_TX_SUSPENDED	BIT(9)
+#define ICE_AQ_LINK_EVENT_TOPO_CONFLICT		BIT(10)
+#define ICE_AQ_LINK_EVENT_MEDIA_CONFLICT	BIT(11)
 	u8	reserved1[6];
 };
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v2 66/66] net/ice/base: reduce calls to get profile associations
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (64 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 65/66] net/ice/base: add link event defines Leyi Rong
@ 2019-06-11 15:52   ` Leyi Rong
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
  66 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-11 15:52 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Shivanshu Shukla, Paul M Stillwell Jr

Add a refresh_required flag to determine whether the recipe-to-profile
mapping cache needs to be updated. This reduces the number of calls
made to refresh the profile map.

Signed-off-by: Shivanshu Shukla <shivanshu.shukla@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 636b43d69..7f4edd274 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -175,13 +175,15 @@ static void ice_get_recp_to_prof_map(struct ice_hw *hw);
  * @hw: pointer to hardware structure
  * @recps: struct that we need to populate
  * @rid: recipe ID that we are populating
+ * @refresh_required: true if we should get recipe to profile mapping from FW
  *
  * This function is used to populate all the necessary entries into our
  * bookkeeping so that we have a current list of all the recipes that are
  * programmed in the firmware.
  */
 static enum ice_status
-ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
+ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid,
+		    bool *refresh_required)
 {
 	u16 i, sub_recps, fv_word_idx = 0, result_idx = 0;
 	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_PROFILES);
@@ -191,10 +193,6 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
 	struct ice_prot_lkup_ext *lkup_exts;
 	enum ice_status status;
 
-	/* Get recipe to profile map so that we can get the fv from
-	 * lkups that we read for a recipe from FW.
-	 */
-	ice_get_recp_to_prof_map(hw);
 	/* we need a buffer big enough to accommodate all the recipes */
 	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
 		ICE_MAX_NUM_RECIPES, sizeof(*tmp));
@@ -206,6 +204,19 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
 	/* non-zero status meaning recipe doesn't exist */
 	if (status)
 		goto err_unroll;
+
+	/* Get recipe to profile map so that we can get the fv from lkups that
+	 * we read for a recipe from FW. Since we want to minimize the number of
+	 * times we make this FW call, just make one call and cache the copy
+	 * until a new recipe is added. This operation is only required the
+	 * first time to get the changes from FW. Then to search existing
+	 * entries we don't need to update the cache again until another recipe
+	 * gets added.
+	 */
+	if (*refresh_required) {
+		ice_get_recp_to_prof_map(hw);
+		*refresh_required = false;
+	}
 	lkup_exts = &recps[rid].lkup_exts;
 	/* start populating all the entries for recps[rid] based on lkups from
 	 * firmware
@@ -4358,6 +4369,7 @@ static const struct ice_protocol_entry ice_prot_id_tbl[] = {
  */
 static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
 {
+	bool refresh_required = true;
 	struct ice_sw_recipe *recp;
 	u16 i;
 
@@ -4376,7 +4388,8 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
 		 */
 		if (!recp[i].recp_created)
 			if (ice_get_recp_frm_fw(hw,
-						hw->switch_info->recp_list, i))
+						hw->switch_info->recp_list, i,
+						&refresh_required))
 				continue;
 
 		/* if number of words we are looking for match */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread
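The caching scheme in this patch reduces to a small pattern: a per-search flag guards the expensive call so it runs at most once per ice_find_recp invocation, and only when a recipe actually has to be read back from FW. An illustrative sketch:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative reduction of the refresh_required pattern: the costly
 * query runs at most once per search. */
static int fw_calls; /* counts the recipe-to-profile queries */

static void get_recp_to_prof_map(void)
{
	fw_calls++; /* stands in for the FW/admin-queue round trip */
}

static void get_recp_frm_fw(bool *refresh_required)
{
	if (*refresh_required) {
		get_recp_to_prof_map();
		*refresh_required = false;
	}
	/* ... populate bookkeeping for this recipe ... */
}

static void find_recp(int n_recipes)
{
	bool refresh_required = true;
	int i;

	for (i = 0; i < n_recipes; i++)
		get_recp_frm_fw(&refresh_required);
}
```

Scanning 64 recipes triggers a single refresh; each new search refreshes once more, picking up any recipe added in between.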

* Re: [dpdk-dev] [PATCH v2 17/66] net/ice/base: add API to init FW logging
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 17/66] net/ice/base: add API to init FW logging Leyi Rong
@ 2019-06-11 16:23     ` Stillwell Jr, Paul M
  2019-06-12 14:38       ` Rong, Leyi
  0 siblings, 1 reply; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-11 16:23 UTC (permalink / raw)
  To: Rong, Leyi, Zhang, Qi Z; +Cc: dev, Nowlin, Dan

> -----Original Message-----
> From: Rong, Leyi
> Sent: Tuesday, June 11, 2019 8:52 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>; Nowlin, Dan
> <dan.nowlin@intel.com>; Stillwell Jr, Paul M <paul.m.stillwell.jr@intel.com>
> Subject: [PATCH v2 17/66] net/ice/base: add API to init FW logging
> 
> In order to initialize the current status of the FW logging, the api
> ice_get_fw_log_cfg is added. The function retrieves the current setting of
> the FW logging from HW and updates the ice_hw structure accordingly.
> 
> Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>  drivers/net/ice/base/ice_adminq_cmd.h |  1 +
>  drivers/net/ice/base/ice_common.c     | 48
> +++++++++++++++++++++++++++
>  2 files changed, 49 insertions(+)
> 
> diff --git a/drivers/net/ice/base/ice_adminq_cmd.h
> b/drivers/net/ice/base/ice_adminq_cmd.h
> index 7b0aa8aaa..739f79e88 100644
> --- a/drivers/net/ice/base/ice_adminq_cmd.h
> +++ b/drivers/net/ice/base/ice_adminq_cmd.h
> @@ -2196,6 +2196,7 @@ enum ice_aqc_fw_logging_mod {
>  	ICE_AQC_FW_LOG_ID_WATCHDOG,
>  	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
>  	ICE_AQC_FW_LOG_ID_MNG,
> +	ICE_AQC_FW_LOG_ID_SYNCE,
>  	ICE_AQC_FW_LOG_ID_MAX,
>  };
> 
> diff --git a/drivers/net/ice/base/ice_common.c
> b/drivers/net/ice/base/ice_common.c
> index 62c7fad0d..7093ee4f4 100644
> --- a/drivers/net/ice/base/ice_common.c
> +++ b/drivers/net/ice/base/ice_common.c
> @@ -582,6 +582,49 @@ static void ice_cleanup_fltr_mgmt_struct(struct
> ice_hw *hw)
>  #define ICE_FW_LOG_DESC_SIZE_MAX	\
>  	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
> 
> +/**
> + * ice_get_fw_log_cfg - get FW logging configuration
> + * @hw: pointer to the HW struct
> + */
> +static enum ice_status ice_get_fw_log_cfg(struct ice_hw *hw) {
> +	struct ice_aqc_fw_logging_data *config;
> +	struct ice_aq_desc desc;
> +	enum ice_status status;
> +	u16 size;
> +
> +	size = ICE_FW_LOG_DESC_SIZE_MAX;
> +	config = (struct ice_aqc_fw_logging_data *)ice_malloc(hw, size);
> +	if (!config)
> +		return ICE_ERR_NO_MEMORY;
> +
> +	ice_fill_dflt_direct_cmd_desc(&desc,
> ice_aqc_opc_fw_logging_info);
> +
> +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
> +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> +
> +	status = ice_aq_send_cmd(hw, &desc, config, size, NULL);
> +	if (!status) {
> +		u16 i;
> +
> +		/* Save fw logging information into the HW structure */
> +		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
> +			u16 v, m, flgs;
> +
> +			v = LE16_TO_CPU(config->entry[i]);
> +			m = (v & ICE_AQC_FW_LOG_ID_M) >>
> ICE_AQC_FW_LOG_ID_S;
> +			flgs = (v & ICE_AQC_FW_LOG_EN_M) >>
> ICE_AQC_FW_LOG_EN_S;
> +
> +			if (m < ICE_AQC_FW_LOG_ID_MAX)
> +				hw->fw_log.evnts[m].cur = flgs;
> +		}
> +	}
> +
> +	ice_free(hw, config);
> +
> +	return status;
> +}
> +
>  /**
>   * ice_cfg_fw_log - configure FW logging
>   * @hw: pointer to the HW struct
> @@ -636,6 +679,11 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw
> *hw, bool enable)

Is there code in DPDK that calls ice_cfg_fw_log()? If not then I would drop this patch.

>  	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw-
> >adminq)))
>  		return ICE_SUCCESS;
> 
> +	/* Get current FW log settings */
> +	status = ice_get_fw_log_cfg(hw);
> +	if (status)
> +		return status;
> +
>  	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
>  	cmd = &desc.params.fw_logging;
> 
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH v2 21/66] net/ice/base: add helper functions for PHY caching
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 21/66] net/ice/base: add helper functions for PHY caching Leyi Rong
@ 2019-06-11 16:26     ` Stillwell Jr, Paul M
  2019-06-12 14:45       ` Rong, Leyi
  0 siblings, 1 reply; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-11 16:26 UTC (permalink / raw)
  To: Rong, Leyi, Zhang, Qi Z; +Cc: dev, Nguyen, Anthony L

> -----Original Message-----
> From: Rong, Leyi
> Sent: Tuesday, June 11, 2019 8:52 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>; Nguyen, Anthony L
> <anthony.l.nguyen@intel.com>; Stillwell Jr, Paul M
> <paul.m.stillwell.jr@intel.com>
> Subject: [PATCH v2 21/66] net/ice/base: add helper functions for PHY caching
> 
> Add additional functions to aid in caching PHY configuration.
> In order to cache the initial modes, we need to determine the operating
> mode based on capabilities. Add helper functions for flow control and FEC to
> take a set of capabilities and return the operating mode matching those
> capabilities. Also add a helper function to determine whether a PHY capability
> matches a PHY configuration.
> 
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>  drivers/net/ice/base/ice_adminq_cmd.h |  1 +
>  drivers/net/ice/base/ice_common.c     | 83
> +++++++++++++++++++++++++++
>  drivers/net/ice/base/ice_common.h     |  9 ++-
>  3 files changed, 91 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_adminq_cmd.h
> b/drivers/net/ice/base/ice_adminq_cmd.h
> index 739f79e88..77f93b950 100644
> --- a/drivers/net/ice/base/ice_adminq_cmd.h
> +++ b/drivers/net/ice/base/ice_adminq_cmd.h
> @@ -1594,6 +1594,7 @@ struct ice_aqc_get_link_status_data {
>  #define ICE_AQ_LINK_PWR_QSFP_CLASS_3	2
>  #define ICE_AQ_LINK_PWR_QSFP_CLASS_4	3
>  	__le16 link_speed;
> +#define ICE_AQ_LINK_SPEED_M		0x7FF
>  #define ICE_AQ_LINK_SPEED_10MB		BIT(0)
>  #define ICE_AQ_LINK_SPEED_100MB		BIT(1)
>  #define ICE_AQ_LINK_SPEED_1000MB	BIT(2)
> diff --git a/drivers/net/ice/base/ice_common.c
> b/drivers/net/ice/base/ice_common.c
> index 5b4a13a41..7f7f4dad0 100644
> --- a/drivers/net/ice/base/ice_common.c
> +++ b/drivers/net/ice/base/ice_common.c
> @@ -2552,6 +2552,53 @@ ice_cache_phy_user_req(struct ice_port_info *pi,
>  	}
>  }
> 
> +/**
> + * ice_caps_to_fc_mode
> + * @caps: PHY capabilities
> + *
> + * Convert PHY FC capabilities to ice FC mode  */ enum ice_fc_mode
> +ice_caps_to_fc_mode(u8 caps) {
> +	if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE &&
> +	    caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
> +		return ICE_FC_FULL;
> +
> +	if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE)
> +		return ICE_FC_TX_PAUSE;
> +
> +	if (caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
> +		return ICE_FC_RX_PAUSE;
> +
> +	return ICE_FC_NONE;
> +}
> +
> +/**
> + * ice_caps_to_fec_mode
> + * @caps: PHY capabilities
> + * @fec_options: Link FEC options
> + *
> + * Convert PHY FEC capabilities to ice FEC mode  */ enum ice_fec_mode
> +ice_caps_to_fec_mode(u8 caps, u8 fec_options) {
> +	if (caps & ICE_AQC_PHY_EN_AUTO_FEC)
> +		return ICE_FEC_AUTO;
> +
> +	if (fec_options & (ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
> +			   ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
> +			   ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN |
> +			   ICE_AQC_PHY_FEC_25G_KR_REQ))
> +		return ICE_FEC_BASER;
> +
> +	if (fec_options & (ICE_AQC_PHY_FEC_25G_RS_528_REQ |
> +			   ICE_AQC_PHY_FEC_25G_RS_544_REQ |
> +			   ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN))
> +		return ICE_FEC_RS;
> +
> +	return ICE_FEC_NONE;
> +}
> +

Is there DPDK code to call the above functions? If not, then drop this patch.

>  /**
>   * ice_set_fc
>   * @pi: port information structure
> @@ -2658,6 +2705,42 @@ ice_set_fc(struct ice_port_info *pi, u8
> *aq_failures, bool ena_auto_link_update)
>  	return status;
>  }
> 
> +/**
> + * ice_phy_caps_equals_cfg
> + * @phy_caps: PHY capabilities
> + * @phy_cfg: PHY configuration
> + *
> + * Helper function to determine if PHY capabilities matches PHY
> + * configuration
> + */
> +bool
> +ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *phy_caps,
> +			struct ice_aqc_set_phy_cfg_data *phy_cfg) {
> +	u8 caps_mask, cfg_mask;
> +
> +	if (!phy_caps || !phy_cfg)
> +		return false;
> +
> +	/* These bits are not common between capabilities and
> configuration.
> +	 * Do not use them to determine equality.
> +	 */
> +	caps_mask = ICE_AQC_PHY_CAPS_MASK &
> ~(ICE_AQC_PHY_AN_MODE |
> +					      ICE_AQC_PHY_EN_MOD_QUAL);
> +	cfg_mask = ICE_AQ_PHY_ENA_VALID_MASK &
> ~ICE_AQ_PHY_ENA_AUTO_LINK_UPDT;
> +
> +	if (phy_caps->phy_type_low != phy_cfg->phy_type_low ||
> +	    phy_caps->phy_type_high != phy_cfg->phy_type_high ||
> +	    ((phy_caps->caps & caps_mask) != (phy_cfg->caps & cfg_mask)) ||
> +	    phy_caps->low_power_ctrl != phy_cfg->low_power_ctrl ||
> +	    phy_caps->eee_cap != phy_cfg->eee_cap ||
> +	    phy_caps->eeer_value != phy_cfg->eeer_value ||
> +	    phy_caps->link_fec_options != phy_cfg->link_fec_opt)
> +		return false;
> +
> +	return true;
> +}
> +
>  /**
>   * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data
>   * @caps: PHY ability structure to copy date from diff --git
> a/drivers/net/ice/base/ice_common.h
> b/drivers/net/ice/base/ice_common.h
> index 4cd87fc1e..10131b473 100644
> --- a/drivers/net/ice/base/ice_common.h
> +++ b/drivers/net/ice/base/ice_common.h
> @@ -136,14 +136,19 @@ enum ice_status ice_clear_pf_cfg(struct ice_hw
> *hw);  enum ice_status  ice_aq_set_phy_cfg(struct ice_hw *hw, struct
> ice_port_info *pi,
>  		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd
> *cd);
> +enum ice_fc_mode ice_caps_to_fc_mode(u8 caps); enum ice_fec_mode
> +ice_caps_to_fec_mode(u8 caps, u8 fec_options);
>  enum ice_status
>  ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
>  	   bool ena_auto_link_update);
> -void
> -ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum
> ice_fec_mode fec);
> +bool
> +ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *caps,
> +			struct ice_aqc_set_phy_cfg_data *cfg);
>  void
>  ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps,
>  			 struct ice_aqc_set_phy_cfg_data *cfg);
> +void
> +ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum
> ice_fec_mode
> +fec);
>  enum ice_status
>  ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link,
>  			   struct ice_sq_cd *cd);
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread
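[Editor's note] The masked comparison in the quoted ice_phy_caps_equals_cfg() can be sketched in isolation. This is a simplified model with hypothetical bit values (PHY_AN_MODE, PHY_EN_MOD_QUAL, PHY_ENA_AUTO_UPD and the two structs are stand-ins; the real ICE_AQC_* definitions live in ice_adminq_cmd.h):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the ICE_AQC_* bit definitions; the real
 * values live in ice_adminq_cmd.h. */
#define PHY_AN_MODE      0x04
#define PHY_EN_MOD_QUAL  0x08
#define PHY_CAPS_MASK    0xFF
#define PHY_ENA_AUTO_UPD 0x10
#define PHY_CFG_MASK     0xFF

struct phy_caps { uint64_t type_low; uint8_t caps; };
struct phy_cfg  { uint64_t type_low; uint8_t caps; };

/* Compare only the bits common to both encodings, as the quoted
 * ice_phy_caps_equals_cfg() does: bits that exist on one side only
 * (autoneg mode, module qualification, auto link update) are masked
 * off before comparing. */
bool caps_equals_cfg(const struct phy_caps *c, const struct phy_cfg *f)
{
	uint8_t caps_mask = PHY_CAPS_MASK & ~(PHY_AN_MODE | PHY_EN_MOD_QUAL);
	uint8_t cfg_mask = PHY_CFG_MASK & ~PHY_ENA_AUTO_UPD;

	return c->type_low == f->type_low &&
	       (c->caps & caps_mask) == (f->caps & cfg_mask);
}
```

The point of the two masks is that a capability report and a configuration request use the same byte but with a few fields that are valid on only one side, so those fields must not influence the equality test.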

* Re: [dpdk-dev] [PATCH v2 24/66] net/ice/base: add support for reading REPC statistics
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 24/66] net/ice/base: add support for reading REPC statistics Leyi Rong
@ 2019-06-11 16:28     ` Stillwell Jr, Paul M
  2019-06-12 14:48       ` Rong, Leyi
  0 siblings, 1 reply; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-11 16:28 UTC (permalink / raw)
  To: Rong, Leyi, Zhang, Qi Z; +Cc: dev, Keller, Jacob E

> -----Original Message-----
> From: Rong, Leyi
> Sent: Tuesday, June 11, 2019 8:52 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>; Keller, Jacob E
> <jacob.e.keller@intel.com>; Stillwell Jr, Paul M
> <paul.m.stillwell.jr@intel.com>
> Subject: [PATCH v2 24/66] net/ice/base: add support for reading REPC
> statistics
> 
> Add a new ice_stat_update_repc function which will read the register and
> increment the appropriate statistics in the ice_eth_stats structure.
> 
> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>  drivers/net/ice/base/ice_common.c | 51 +++++++++++++++++++++++++++++++
>  drivers/net/ice/base/ice_common.h |  3 ++
>  drivers/net/ice/base/ice_type.h   |  2 ++
>  3 files changed, 56 insertions(+)
> 
> diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
> index da72434d3..b4a9172b9 100644
> --- a/drivers/net/ice/base/ice_common.c
> +++ b/drivers/net/ice/base/ice_common.c
> @@ -4138,6 +4138,57 @@ ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
>  		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
>  }
> 
> +/**
> + * ice_stat_update_repc - read GLV_REPC stats from chip and update stat values
> + * @hw: ptr to the hardware info
> + * @vsi_handle: VSI handle
> + * @prev_stat_loaded: bool to specify if the previous stat values are loaded
> + * @cur_stats: ptr to current stats structure
> + *
> + * The GLV_REPC statistic register actually tracks two 16bit statistics, and
> + * thus cannot be read using the normal ice_stat_update32 function.
> + *
> + * Read the GLV_REPC register associated with the given VSI, and update the
> + * rx_no_desc and rx_error values in the ice_eth_stats structure.
> + *
> + * Because the statistics in GLV_REPC stick at 0xFFFF, the register must be
> + * cleared each time it's read.
> + *
> + * Note that the GLV_RDPC register also counts the causes that would trigger
> + * GLV_REPC. However, it does not give the finer grained detail about why the
> + * packets are being dropped. The GLV_REPC values can be used to distinguish
> + * whether Rx packets are dropped due to errors or due to no available
> + * descriptors.
> + */
> +void
> +ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
> +		     struct ice_eth_stats *cur_stats)
> +{
> +	u16 vsi_num, no_desc, error_cnt;
> +	u32 repc;
> +
> +	if (!ice_is_vsi_valid(hw, vsi_handle))
> +		return;
> +
> +	vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
> +
> +	/* If we haven't loaded stats yet, just clear the current value */
> +	if (!prev_stat_loaded) {
> +		wr32(hw, GLV_REPC(vsi_num), 0);
> +		return;
> +	}
> +
> +	repc = rd32(hw, GLV_REPC(vsi_num));
> +	no_desc = (repc & GLV_REPC_NO_DESC_CNT_M) >> GLV_REPC_NO_DESC_CNT_S;
> +	error_cnt = (repc & GLV_REPC_ERROR_CNT_M) >> GLV_REPC_ERROR_CNT_S;
> +
> +	/* Clear the count by writing to the stats register */
> +	wr32(hw, GLV_REPC(vsi_num), 0);
> +
> +	cur_stats->rx_no_desc += no_desc;
> +	cur_stats->rx_errors += error_cnt;
> +}
> +
> 

Is there code in DPDK to call these functions? If not then drop this patch.

>  /**
>   * ice_sched_query_elem - query element information from HW
> 
> diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
> index 10131b473..2ea4a6e8e 100644
> --- a/drivers/net/ice/base/ice_common.h
> +++ b/drivers/net/ice/base/ice_common.h
> @@ -205,6 +205,9 @@ ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
>  void
>  ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
>  		  u64 *prev_stat, u64 *cur_stat);
> +void
> +ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
> +		     struct ice_eth_stats *cur_stats);
>  enum ice_status
>  ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
>  		     struct ice_aqc_get_elem *buf);
> diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
> index 3523b0c35..477f34595 100644
> --- a/drivers/net/ice/base/ice_type.h
> +++ b/drivers/net/ice/base/ice_type.h
> @@ -853,6 +853,8 @@ struct ice_eth_stats {
>  	u64 rx_broadcast;		/* bprc */
>  	u64 rx_discards;		/* rdpc */
>  	u64 rx_unknown_protocol;	/* rupp */
> +	u64 rx_no_desc;			/* repc */
> +	u64 rx_errors;			/* repc */
>  	u64 tx_bytes;			/* gotc */
>  	u64 tx_unicast;			/* uptc */
>  	u64 tx_multicast;		/* mptc */
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread
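[Editor's note] The core of the quoted ice_stat_update_repc() is splitting one 32-bit register into two 16-bit counters and accumulating them. A minimal sketch, with hypothetical shift/mask values standing in for the real GLV_REPC_* register definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical field layout: low 16 bits count "no descriptor" drops,
 * high 16 bits count error drops. The real GLV_REPC_* shift/mask
 * values are in the ice register headers. */
#define REPC_NO_DESC_S 0
#define REPC_NO_DESC_M (0xFFFFu << REPC_NO_DESC_S)
#define REPC_ERROR_S   16
#define REPC_ERROR_M   (0xFFFFu << REPC_ERROR_S)

struct eth_stats { uint64_t rx_no_desc; uint64_t rx_errors; };

/* Split one register read into its two 16-bit counters and accumulate
 * them, mirroring ice_stat_update_repc(). The caller is expected to
 * clear the hardware register after each read, since the counters
 * stick at 0xFFFF rather than wrapping. */
void stat_update_repc(uint32_t repc, struct eth_stats *s)
{
	s->rx_no_desc += (repc & REPC_NO_DESC_M) >> REPC_NO_DESC_S;
	s->rx_errors += (repc & REPC_ERROR_M) >> REPC_ERROR_S;
}
```

This also shows why ice_stat_update32 cannot be reused here: that helper assumes a single monotonically increasing counter per register, while GLV_REPC packs two saturating counters into one word.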

* Re: [dpdk-dev] [PATCH v2 27/66] net/ice/base: add some minor features
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 27/66] net/ice/base: add some minor features Leyi Rong
@ 2019-06-11 16:30     ` Stillwell Jr, Paul M
  0 siblings, 0 replies; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-11 16:30 UTC (permalink / raw)
  To: Rong, Leyi, Zhang, Qi Z; +Cc: dev

> -----Original Message-----
> From: Rong, Leyi
> Sent: Tuesday, June 11, 2019 8:52 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>; Stillwell Jr, Paul M
> <paul.m.stillwell.jr@intel.com>
> Subject: [PATCH v2 27/66] net/ice/base: add some minor features
> 
> 1. Add loopback reporting to get link response.
> 2. Add infrastructure for NVM Write/Write Activate calls.
> 3. Add opcode for NVM save factory settings/NVM Update EMPR command.
> 4. Add lan overflow event to ice_aq_desc.
> 

This seems like it should be split into separate patches. You also have 2 patches with the same commit title (27/66 & 30/66) which seems like a bad idea.

> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>  drivers/net/ice/base/ice_adminq_cmd.h | 47 ++++++++++++++++++---------
>  1 file changed, 32 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
> index 77f93b950..4e6bce18c 100644
> --- a/drivers/net/ice/base/ice_adminq_cmd.h
> +++ b/drivers/net/ice/base/ice_adminq_cmd.h
> @@ -110,6 +110,7 @@ struct ice_aqc_list_caps {
>  struct ice_aqc_list_caps_elem {
>  	__le16 cap;
>  #define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
> +#define ICE_AQC_MAX_VALID_FUNCTIONS			0x8
>  #define ICE_AQC_CAPS_VSI				0x0017
>  #define ICE_AQC_CAPS_DCB				0x0018
>  #define ICE_AQC_CAPS_RSS				0x0040
> @@ -143,11 +144,9 @@ struct ice_aqc_manage_mac_read {
>  #define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
>  #define ICE_AQC_MAN_MAC_READ_S			4
>  #define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
> -	u8 lport_num;
> -	u8 lport_num_valid;
> -#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID	BIT(0)
> +	u8 rsvd[2];
>  	u8 num_addr; /* Used in response */
> -	u8 reserved[3];
> +	u8 rsvd1[3];
>  	__le32 addr_high;
>  	__le32 addr_low;
>  };
> @@ -165,7 +164,7 @@ struct ice_aqc_manage_mac_read_resp {
> 
>  /* Manage MAC address, write command - direct (0x0108) */
>  struct ice_aqc_manage_mac_write {
> -	u8 port_num;
> +	u8 rsvd;
>  	u8 flags;
>  #define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
>  #define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
> @@ -481,8 +480,8 @@ struct ice_aqc_vsi_props {
>  #define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
>  #define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
>  #define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
> -#define ICE_AQ_VSI_VLAN_EMOD_S	3
> -#define ICE_AQ_VSI_VLAN_EMOD_M	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
> +#define ICE_AQ_VSI_VLAN_EMOD_S		3
> +#define ICE_AQ_VSI_VLAN_EMOD_M		(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
>  #define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
>  #define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
>  #define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
> @@ -1425,6 +1424,7 @@ struct ice_aqc_get_phy_caps_data {
>  #define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
>  #define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
>  #define ICE_AQC_PHY_FEC_MASK			MAKEMASK(0xdf, 0)
> +	u8 rsvd1;	/* Byte 35 reserved */
>  	u8 extended_compliance_code;
>  #define ICE_MODULE_TYPE_TOTAL_BYTE			3
>  	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
> @@ -1439,13 +1439,14 @@ struct ice_aqc_get_phy_caps_data {
>  #define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
>  #define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
>  	u8 qualified_module_count;
> +	u8 rsvd2[7];	/* Bytes 47:41 reserved */
>  #define ICE_AQC_QUAL_MOD_COUNT_MAX			16
>  	struct {
>  		u8 v_oui[3];
>  		u8 rsvd3;
>  		u8 v_part[16];
>  		__le32 v_rev;
> -		__le64 rsvd8;
> +		__le64 rsvd4;
>  	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
>  };
> 
> @@ -1571,7 +1572,12 @@ struct ice_aqc_get_link_status_data {
>  #define ICE_AQ_LINK_TX_ACTIVE		0
>  #define ICE_AQ_LINK_TX_DRAINED		1
>  #define ICE_AQ_LINK_TX_FLUSHED		3
> -	u8 reserved2;
> +	u8 lb_status;
> +#define ICE_AQ_LINK_LB_PHY_LCL		BIT(0)
> +#define ICE_AQ_LINK_LB_PHY_RMT		BIT(1)
> +#define ICE_AQ_LINK_LB_MAC_LCL		BIT(2)
> +#define ICE_AQ_LINK_LB_PHY_IDX_S	3
> +#define ICE_AQ_LINK_LB_PHY_IDX_M	(0x7 << ICE_AQ_LB_PHY_IDX_S)
>  	__le16 max_frame_size;
>  	u8 cfg;
>  #define ICE_AQ_LINK_25G_KR_FEC_EN	BIT(0)
> @@ -1659,20 +1665,26 @@ struct ice_aqc_set_port_id_led {
> 
>  /* NVM Read command (indirect 0x0701)
>   * NVM Erase commands (direct 0x0702)
> - * NVM Update commands (indirect 0x0703)
> + * NVM Write commands (indirect 0x0703)
> + * NVM Write Activate commands (direct 0x0707)
> + * NVM Shadow RAM Dump commands (direct 0x0707)
>   */
>  struct ice_aqc_nvm {
>  	__le16 offset_low;
>  	u8 offset_high;
>  	u8 cmd_flags;
>  #define ICE_AQC_NVM_LAST_CMD		BIT(0)
> -#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Update reply */
> -#define ICE_AQC_NVM_PRESERVATION_S	1
> +#define ICE_AQC_NVM_PCIR_REQ		BIT(0)	/* Used by NVM Write reply */
> +#define ICE_AQC_NVM_PRESERVATION_S	1 /* Used by NVM Write Activate only */
>  #define ICE_AQC_NVM_PRESERVATION_M	(3 << ICE_AQC_NVM_PRESERVATION_S)
>  #define ICE_AQC_NVM_NO_PRESERVATION	(0 << ICE_AQC_NVM_PRESERVATION_S)
>  #define ICE_AQC_NVM_PRESERVE_ALL	BIT(1)
>  #define ICE_AQC_NVM_FACTORY_DEFAULT	(2 << ICE_AQC_NVM_PRESERVATION_S)
>  #define ICE_AQC_NVM_PRESERVE_SELECTED	(3 << ICE_AQC_NVM_PRESERVATION_S)
> +#define ICE_AQC_NVM_ACTIV_SEL_NVM	BIT(3) /* Write Activate/SR Dump only */
> +#define ICE_AQC_NVM_ACTIV_SEL_OROM	BIT(4)
> +#define ICE_AQC_NVM_ACTIV_SEL_EXT_TLV	BIT(5)
> +#define ICE_AQC_NVM_ACTIV_SEL_MASK	MAKEMASK(0x7, 3)
>  #define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
>  	__le16 module_typeid;
>  	__le16 length;
> @@ -1832,7 +1844,7 @@ struct ice_aqc_get_cee_dcb_cfg_resp {
>  };
> 
>  /* Set Local LLDP MIB (indirect 0x0A08)
> - * Used to replace the local MIB of a given LLDP agent. e.g. DCBx
> + * Used to replace the local MIB of a given LLDP agent. e.g. DCBX
>   */
>  struct ice_aqc_lldp_set_local_mib {
>  	u8 type;
> @@ -1857,7 +1869,7 @@ struct ice_aqc_lldp_set_local_mib_resp {
>  };
> 
>  /* Stop/Start LLDP Agent (direct 0x0A09)
> - * Used for stopping/starting specific LLDP agent. e.g. DCBx.
> + * Used for stopping/starting specific LLDP agent. e.g. DCBX.
>   * The same structure is used for the response, with the command field
>   * being used as the status field.
>   */
> @@ -2321,6 +2333,7 @@ struct ice_aq_desc {
>  		struct ice_aqc_set_mac_cfg set_mac_cfg;
>  		struct ice_aqc_set_event_mask set_event_mask;
>  		struct ice_aqc_get_link_status get_link_status;
> +		struct ice_aqc_event_lan_overflow lan_overflow;
>  	} params;
>  };
> 
> @@ -2492,10 +2505,14 @@ enum ice_adminq_opc {
>  	/* NVM commands */
>  	ice_aqc_opc_nvm_read				= 0x0701,
>  	ice_aqc_opc_nvm_erase				= 0x0702,
> -	ice_aqc_opc_nvm_update				= 0x0703,
> +	ice_aqc_opc_nvm_write				= 0x0703,
>  	ice_aqc_opc_nvm_cfg_read			= 0x0704,
>  	ice_aqc_opc_nvm_cfg_write			= 0x0705,
>  	ice_aqc_opc_nvm_checksum			= 0x0706,
> +	ice_aqc_opc_nvm_write_activate			= 0x0707,
> +	ice_aqc_opc_nvm_sr_dump				= 0x0707,
> +	ice_aqc_opc_nvm_save_factory_settings		= 0x0708,
> +	ice_aqc_opc_nvm_update_empr			= 0x0709,
> 
>  	/* LLDP commands */
>  	ice_aqc_opc_lldp_get_mib			= 0x0A00,
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* Re: [dpdk-dev] [PATCH v2 30/66] net/ice/base: add some minor features
  2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 30/66] net/ice/base: add some minor features Leyi Rong
@ 2019-06-11 16:30     ` Stillwell Jr, Paul M
  0 siblings, 0 replies; 225+ messages in thread
From: Stillwell Jr, Paul M @ 2019-06-11 16:30 UTC (permalink / raw)
  To: Rong, Leyi, Zhang, Qi Z; +Cc: dev

> -----Original Message-----
> From: Rong, Leyi
> Sent: Tuesday, June 11, 2019 8:52 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>; Stillwell Jr, Paul M
> <paul.m.stillwell.jr@intel.com>
> Subject: [PATCH v2 30/66] net/ice/base: add some minor features
> 
> 1. Disable TX pacing option.
> 2. Use a different ICE_DBG bit for firmware log messages.
> 3. Always set prefena when configuring a RX queue.
> 4. Make FDID available for FlexDescriptor.
> 

I think this should be split into separate patches.

> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> ---
>  drivers/net/ice/base/ice_common.c    | 44 +++++++++++++---------------
>  drivers/net/ice/base/ice_fdir.c      |  2 +-
>  drivers/net/ice/base/ice_lan_tx_rx.h |  3 +-
>  drivers/net/ice/base/ice_type.h      |  2 +-
>  4 files changed, 25 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
> index 6e5a60a38..89c922bed 100644
> --- a/drivers/net/ice/base/ice_common.c
> +++ b/drivers/net/ice/base/ice_common.c
> @@ -449,11 +449,7 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd)
>  {
>  	u16 fc_threshold_val, tx_timer_val;
>  	struct ice_aqc_set_mac_cfg *cmd;
> -	struct ice_port_info *pi;
>  	struct ice_aq_desc desc;
> -	enum ice_status status;
> -	u8 port_num = 0;
> -	bool link_up;
>  	u32 reg_val;
> 
>  	cmd = &desc.params.set_mac_cfg;
> @@ -465,21 +461,6 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd)
> 
>  	cmd->max_frame_size = CPU_TO_LE16(max_frame_size);
> 
> -	/* Retrieve the current data_pacing value in FW*/
> -	pi = &hw->port_info[port_num];
> -
> -	/* We turn on the get_link_info so that ice_update_link_info(...)
> -	 * can be called.
> -	 */
> -	pi->phy.get_link_info = 1;
> -
> -	status = ice_get_link_status(pi, &link_up);
> -
> -	if (status)
> -		return status;
> -
> -	cmd->params = pi->phy.link_info.pacing;
> -
>  	/* We read back the transmit timer and fc threshold value of
>  	 * LFC. Thus, we will use index =
>  	 * PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX.
> @@ -544,7 +525,15 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
>  	}
>  	recps = hw->switch_info->recp_list;
>  	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
> +		struct ice_recp_grp_entry *rg_entry, *tmprg_entry;
> +
>  		recps[i].root_rid = i;
> +		LIST_FOR_EACH_ENTRY_SAFE(rg_entry, tmprg_entry,
> +					 &recps[i].rg_list, ice_recp_grp_entry,
> +					 l_entry) {
> +			LIST_DEL(&rg_entry->l_entry);
> +			ice_free(hw, rg_entry);
> +		}
> 
>  		if (recps[i].adv_rule) {
>  			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
> @@ -571,6 +560,8 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
>  				ice_free(hw, lst_itr);
>  			}
>  		}
> +		if (recps[i].root_buf)
> +			ice_free(hw, recps[i].root_buf);
>  	}
>  	ice_rm_all_sw_replay_rule_info(hw);
>  	ice_free(hw, sw->recp_list);
> @@ -789,10 +780,10 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
>   */
>  void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
>  {
> -	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
> -	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
> +	ice_debug(hw, ICE_DBG_FW_LOG, "[ FW Log Msg Start ]\n");
> +	ice_debug_array(hw, ICE_DBG_FW_LOG, 16, 1, (u8 *)buf,
>  			LE16_TO_CPU(desc->datalen));
> -	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
> +	ice_debug(hw, ICE_DBG_FW_LOG, "[ FW Log Msg End ]\n");
>  }
> 
>  /**
> @@ -1213,6 +1204,7 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
>  	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
>  	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
>  	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
> +	ICE_CTX_STORE(ice_rlan_ctx, prefena,		1,	201),
>  	{ 0 }
>  };
> 
> @@ -1223,7 +1215,8 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
>   * @rxq_index: the index of the Rx queue
>   *
>   * Converts rxq context from sparse to dense structure and then writes
> - * it to HW register space
> + * it to HW register space and enables the hardware to prefetch descriptors
> + * instead of only fetching them on demand
>   */
>  enum ice_status
>  ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
> @@ -1231,6 +1224,11 @@ ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
>  {
>  	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
> 
> +	if (!rlan_ctx)
> +		return ICE_ERR_BAD_PTR;
> +
> +	rlan_ctx->prefena = 1;
> +
>  	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
>  	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
>  }
> 
> diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
> index 4bc8e6dcb..bde676a8f 100644
> --- a/drivers/net/ice/base/ice_fdir.c
> +++ b/drivers/net/ice/base/ice_fdir.c
> @@ -186,7 +186,7 @@ ice_set_dflt_val_fd_desc(struct ice_fd_fltr_desc_ctx *fd_fltr_ctx)
>  	fd_fltr_ctx->desc_prof_prio = ICE_FXD_FLTR_QW1_PROF_PRIO_ZERO;
>  	fd_fltr_ctx->desc_prof = ICE_FXD_FLTR_QW1_PROF_ZERO;
>  	fd_fltr_ctx->swap = ICE_FXD_FLTR_QW1_SWAP_SET;
> -	fd_fltr_ctx->fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ZERO;
> +	fd_fltr_ctx->fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ONE;
>  	fd_fltr_ctx->fdid_mdid = ICE_FXD_FLTR_QW1_FDID_MDID_FD;
>  	fd_fltr_ctx->fdid = ICE_FXD_FLTR_QW1_FDID_ZERO;
>  }
> 
> diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
> index 8c9902994..fa2309bf1 100644
> --- a/drivers/net/ice/base/ice_lan_tx_rx.h
> +++ b/drivers/net/ice/base/ice_lan_tx_rx.h
> @@ -162,7 +162,7 @@ struct ice_fltr_desc {
> 
>  #define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
>  #define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
> -#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
> +#define ICE_FXD_FLTR_QW1_FDID_PRI_ONE	0x1ULL
> 
>  #define ICE_FXD_FLTR_QW1_FDID_MDID_S	28
>  #define ICE_FXD_FLTR_QW1_FDID_MDID_M	(0xFULL << ICE_FXD_FLTR_QW1_FDID_MDID_S)
> @@ -807,6 +807,7 @@ struct ice_rlan_ctx {
>  	u8 tphdata_ena;
>  	u8 tphhead_ena;
>  	u16 lrxqthresh; /* bigger than needed, see above for reason */
> +	u8 prefena;	/* NOTE: normally must be set to 1 at init */
>  };
> 
>  struct ice_ctx_ele {
> diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
> index 477f34595..116cfe647 100644
> --- a/drivers/net/ice/base/ice_type.h
> +++ b/drivers/net/ice/base/ice_type.h
> @@ -82,7 +82,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
>  /* debug masks - set these bits in hw->debug_mask to control output */
>  #define ICE_DBG_INIT		BIT_ULL(1)
>  #define ICE_DBG_RELEASE		BIT_ULL(2)
> -
> +#define ICE_DBG_FW_LOG		BIT_ULL(3)
>  #define ICE_DBG_LINK		BIT_ULL(4)
>  #define ICE_DBG_PHY		BIT_ULL(5)
>  #define ICE_DBG_QCTX		BIT_ULL(6)
> --
> 2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread
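[Editor's note] The patch above touches ice_write_rxq_ctx(), which converts a sparse context struct into a dense buffer via an ICE_CTX_STORE table of (width, lsb) pairs, e.g. prefena at 1 bit, offset 201. A minimal sketch of that kind of bit packing (hypothetical helper, not the real ice_set_ctx(), and limited to fields of at most 8 bits):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Pack a small field at an arbitrary bit offset into a dense byte
 * buffer, in the spirit of the ICE_CTX_STORE table quoted above.
 * Handles widths up to 8 bits, possibly spanning a byte boundary;
 * the real helper also supports wider fields. */
void ctx_store(uint8_t *buf, uint8_t val, uint16_t width, uint16_t lsb)
{
	uint16_t byte = lsb / 8, shift = lsb % 8;
	uint16_t mask = (uint16_t)(((1u << width) - 1) << shift);
	uint16_t v = (uint16_t)((uint16_t)val << shift) & mask;

	buf[byte] = (uint8_t)((buf[byte] & ~mask) | (v & 0xFF));
	if (shift + width > 8)
		buf[byte + 1] = (uint8_t)((buf[byte + 1] & ~(mask >> 8)) |
					  (v >> 8));
}
```

With this model, setting prefena (1 bit at offset 201) lands in bit 1 of byte 25 of the dense context image, which is why the sparse struct can simply grow a `u8 prefena` member while the table controls the on-wire layout.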

* Re: [dpdk-dev] [PATCH v2 17/66] net/ice/base: add API to init FW logging
  2019-06-11 16:23     ` Stillwell Jr, Paul M
@ 2019-06-12 14:38       ` Rong, Leyi
  0 siblings, 0 replies; 225+ messages in thread
From: Rong, Leyi @ 2019-06-12 14:38 UTC (permalink / raw)
  To: Stillwell Jr, Paul M, Zhang, Qi Z; +Cc: dev, Nowlin, Dan


> -----Original Message-----
> From: Stillwell Jr, Paul M
> Sent: Wednesday, June 12, 2019 12:24 AM
> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Nowlin, Dan <dan.nowlin@intel.com>
> Subject: RE: [PATCH v2 17/66] net/ice/base: add API to init FW logging
> 
> > -----Original Message-----
> > From: Rong, Leyi
> > Sent: Tuesday, June 11, 2019 8:52 AM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>
> > Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>; Nowlin, Dan
> > <dan.nowlin@intel.com>; Stillwell Jr, Paul M
> > <paul.m.stillwell.jr@intel.com>
> > Subject: [PATCH v2 17/66] net/ice/base: add API to init FW logging
> >
> > In order to initialize the current status of the FW logging, the api
> > ice_get_fw_log_cfg is added. The function retrieves the current
> > setting of the FW logging from HW and updates the ice_hw structure accordingly.
> >
> > Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> > ---
> >  drivers/net/ice/base/ice_adminq_cmd.h |  1 +
> >  drivers/net/ice/base/ice_common.c     | 48 +++++++++++++++++++++++++++
> >  2 files changed, 49 insertions(+)
> >
> > diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
> > index 7b0aa8aaa..739f79e88 100644
> > --- a/drivers/net/ice/base/ice_adminq_cmd.h
> > +++ b/drivers/net/ice/base/ice_adminq_cmd.h
> > @@ -2196,6 +2196,7 @@ enum ice_aqc_fw_logging_mod {
> >  	ICE_AQC_FW_LOG_ID_WATCHDOG,
> >  	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
> >  	ICE_AQC_FW_LOG_ID_MNG,
> > +	ICE_AQC_FW_LOG_ID_SYNCE,
> >  	ICE_AQC_FW_LOG_ID_MAX,
> >  };
> >
> > diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
> > index 62c7fad0d..7093ee4f4 100644
> > --- a/drivers/net/ice/base/ice_common.c
> > +++ b/drivers/net/ice/base/ice_common.c
> > @@ -582,6 +582,49 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
> >  #define ICE_FW_LOG_DESC_SIZE_MAX	\
> >  	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
> >
> > +/**
> > + * ice_get_fw_log_cfg - get FW logging configuration
> > + * @hw: pointer to the HW struct
> > + */
> > +static enum ice_status ice_get_fw_log_cfg(struct ice_hw *hw)
> > +{
> > +	struct ice_aqc_fw_logging_data *config;
> > +	struct ice_aq_desc desc;
> > +	enum ice_status status;
> > +	u16 size;
> > +
> > +	size = ICE_FW_LOG_DESC_SIZE_MAX;
> > +	config = (struct ice_aqc_fw_logging_data *)ice_malloc(hw, size);
> > +	if (!config)
> > +		return ICE_ERR_NO_MEMORY;
> > +
> > +	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging_info);
> > +
> > +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
> > +	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
> > +
> > +	status = ice_aq_send_cmd(hw, &desc, config, size, NULL);
> > +	if (!status) {
> > +		u16 i;
> > +
> > +		/* Save fw logging information into the HW structure */
> > +		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
> > +			u16 v, m, flgs;
> > +
> > +			v = LE16_TO_CPU(config->entry[i]);
> > +			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
> > +			flgs = (v & ICE_AQC_FW_LOG_EN_M) >> ICE_AQC_FW_LOG_EN_S;
> > +
> > +			if (m < ICE_AQC_FW_LOG_ID_MAX)
> > +				hw->fw_log.evnts[m].cur = flgs;
> > +		}
> > +	}
> > +
> > +	ice_free(hw, config);
> > +
> > +	return status;
> > +}
> > +
> >  /**
> >   * ice_cfg_fw_log - configure FW logging
> >   * @hw: pointer to the HW struct
> > @@ -636,6 +679,11 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
> 
> Is there code in DPDK that calls ice_cfg_fw_log()? If not then I would drop this patch.
> 

Yes, ice_cfg_fw_log() can be called indirectly.
ice_dev_init() -> ice_init_hw() -> ice_cfg_fw_log()


^ permalink raw reply	[flat|nested] 225+ messages in thread
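[Editor's note] The entry-decoding loop in the quoted ice_get_fw_log_cfg() can be sketched standalone: each 16-bit entry carries a module ID and its enable flags, and out-of-range IDs are skipped. The shift/mask values below are hypothetical stand-ins for the real ICE_AQC_FW_LOG_* definitions in ice_adminq_cmd.h:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical encoding of one FW-logging entry: low bits carry the
 * module ID, high bits carry the enable flags. */
#define FW_LOG_ID_S   0
#define FW_LOG_ID_M   (0x0FFFu << FW_LOG_ID_S)
#define FW_LOG_EN_S   12
#define FW_LOG_EN_M   (0xFu << FW_LOG_EN_S)
#define FW_LOG_ID_MAX 16

/* Decode each entry and record the current flags per module, skipping
 * module IDs outside the known range, as the quoted loop does.
 * Returns the number of entries actually stored. */
int parse_fw_log_cfg(const uint16_t *entries, int n, uint8_t *cur)
{
	int stored = 0;

	for (int i = 0; i < n; i++) {
		uint16_t m = (entries[i] & FW_LOG_ID_M) >> FW_LOG_ID_S;
		uint16_t flgs = (entries[i] & FW_LOG_EN_M) >> FW_LOG_EN_S;

		if (m < FW_LOG_ID_MAX) {
			cur[m] = (uint8_t)flgs;
			stored++;
		}
	}
	return stored;
}
```

The range check matters because the firmware may report module IDs (such as the newly added SYNCE entry) that an older driver table does not know about; those are silently ignored rather than indexing past the array.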

* Re: [dpdk-dev] [PATCH v2 21/66] net/ice/base: add helper functions for PHY caching
  2019-06-11 16:26     ` Stillwell Jr, Paul M
@ 2019-06-12 14:45       ` Rong, Leyi
  0 siblings, 0 replies; 225+ messages in thread
From: Rong, Leyi @ 2019-06-12 14:45 UTC (permalink / raw)
  To: Stillwell Jr, Paul M, Zhang, Qi Z; +Cc: dev, Nguyen, Anthony L


> -----Original Message-----
> From: Stillwell Jr, Paul M
> Sent: Wednesday, June 12, 2019 12:27 AM
> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Nguyen, Anthony L <anthony.l.nguyen@intel.com>
> Subject: RE: [PATCH v2 21/66] net/ice/base: add helper functions for PHY caching
> 
> > -----Original Message-----
> > From: Rong, Leyi
> > Sent: Tuesday, June 11, 2019 8:52 AM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>
> > Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>; Nguyen, Anthony L
> > <anthony.l.nguyen@intel.com>; Stillwell Jr, Paul M
> > <paul.m.stillwell.jr@intel.com>
> > Subject: [PATCH v2 21/66] net/ice/base: add helper functions for PHY
> > caching
> >
> > Add additional functions to aide in caching PHY configuration.
> > In order to cache the initial modes, we need to determine the
> > operating mode based on capabilities. Add helper functions for flow
> > control and FEC to take a set of capabilities and return the operating
> > mode matching those capabilities. Also add a helper function to
> > determine whether a PHY capability matches a PHY configuration.
> >
> > Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> > ---
> >  drivers/net/ice/base/ice_adminq_cmd.h |  1 +
> >  drivers/net/ice/base/ice_common.c     | 83 +++++++++++++++++++++++++++
> >  drivers/net/ice/base/ice_common.h     |  9 ++-
> >  3 files changed, 91 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
> > index 739f79e88..77f93b950 100644
> > --- a/drivers/net/ice/base/ice_adminq_cmd.h
> > +++ b/drivers/net/ice/base/ice_adminq_cmd.h
> > @@ -1594,6 +1594,7 @@ struct ice_aqc_get_link_status_data {
> >  #define ICE_AQ_LINK_PWR_QSFP_CLASS_3	2
> >  #define ICE_AQ_LINK_PWR_QSFP_CLASS_4	3
> >  	__le16 link_speed;
> > +#define ICE_AQ_LINK_SPEED_M		0x7FF
> >  #define ICE_AQ_LINK_SPEED_10MB		BIT(0)
> >  #define ICE_AQ_LINK_SPEED_100MB		BIT(1)
> >  #define ICE_AQ_LINK_SPEED_1000MB	BIT(2)
> > diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
> > index 5b4a13a41..7f7f4dad0 100644
> > --- a/drivers/net/ice/base/ice_common.c
> > +++ b/drivers/net/ice/base/ice_common.c
> > @@ -2552,6 +2552,53 @@ ice_cache_phy_user_req(struct ice_port_info *pi,
> >  	}
> >  }
> >
> > +/**
> > + * ice_caps_to_fc_mode
> > + * @caps: PHY capabilities
> > + *
> > + * Convert PHY FC capabilities to ice FC mode
> > + */
> > +enum ice_fc_mode ice_caps_to_fc_mode(u8 caps)
> > +{
> > +	if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE &&
> > +	    caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
> > +		return ICE_FC_FULL;
> > +
> > +	if (caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE)
> > +		return ICE_FC_TX_PAUSE;
> > +
> > +	if (caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE)
> > +		return ICE_FC_RX_PAUSE;
> > +
> > +	return ICE_FC_NONE;
> > +}
> > +
> > +/**
> > + * ice_caps_to_fec_mode
> > + * @caps: PHY capabilities
> > + * @fec_options: Link FEC options
> > + *
> > + * Convert PHY FEC capabilities to ice FEC mode
> > + */
> > +enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options)
> > +{
> > +	if (caps & ICE_AQC_PHY_EN_AUTO_FEC)
> > +		return ICE_FEC_AUTO;
> > +
> > +	if (fec_options & (ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
> > +			   ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
> > +			   ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN |
> > +			   ICE_AQC_PHY_FEC_25G_KR_REQ))
> > +		return ICE_FEC_BASER;
> > +
> > +	if (fec_options & (ICE_AQC_PHY_FEC_25G_RS_528_REQ |
> > +			   ICE_AQC_PHY_FEC_25G_RS_544_REQ |
> > +			   ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN))
> > +		return ICE_FEC_RS;
> > +
> > +	return ICE_FEC_NONE;
> > +}
> > +
> 
> Is there DPDK code to call the above functions? If not, then drop this patch.
> 

They will not be called.

^ permalink raw reply	[flat|nested] 225+ messages in thread
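[Editor's note] The precedence used by the quoted ice_caps_to_fc_mode() (both pause bits → full, then TX-only, then RX-only, else none) can be sketched in isolation. The bit values and enum names below are hypothetical stand-ins; the real ICE_AQC_PHY_EN_TX/RX_LINK_PAUSE definitions are in ice_adminq_cmd.h:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical pause capability bits. */
#define PHY_EN_TX_PAUSE 0x01
#define PHY_EN_RX_PAUSE 0x02

enum fc_mode { FC_NONE, FC_RX_PAUSE, FC_TX_PAUSE, FC_FULL };

/* Map PHY pause capability bits to a flow-control mode, following the
 * precedence in the quoted helper: both bits set means full flow
 * control, a single bit selects the matching one-way mode. */
enum fc_mode caps_to_fc_mode(uint8_t caps)
{
	if ((caps & PHY_EN_TX_PAUSE) && (caps & PHY_EN_RX_PAUSE))
		return FC_FULL;
	if (caps & PHY_EN_TX_PAUSE)
		return FC_TX_PAUSE;
	if (caps & PHY_EN_RX_PAUSE)
		return FC_RX_PAUSE;
	return FC_NONE;
}
```

The companion ice_caps_to_fec_mode() follows the same pattern with three buckets (auto, BASE-R, RS) tested in priority order, so auto-FEC capability wins over any specific FEC option bit.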

* Re: [dpdk-dev] [PATCH v2 24/66] net/ice/base: add support for reading REPC statistics
  2019-06-11 16:28     ` Stillwell Jr, Paul M
@ 2019-06-12 14:48       ` Rong, Leyi
  0 siblings, 0 replies; 225+ messages in thread
From: Rong, Leyi @ 2019-06-12 14:48 UTC (permalink / raw)
  To: Stillwell Jr, Paul M, Zhang, Qi Z; +Cc: dev, Keller, Jacob E


> -----Original Message-----
> From: Stillwell Jr, Paul M
> Sent: Wednesday, June 12, 2019 12:28 AM
> To: Rong, Leyi <leyi.rong@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Keller, Jacob E <jacob.e.keller@intel.com>
> Subject: RE: [PATCH v2 24/66] net/ice/base: add support for reading REPC statistics
> 
> > -----Original Message-----
> > From: Rong, Leyi
> > Sent: Tuesday, June 11, 2019 8:52 AM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>
> > Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>; Keller, Jacob E
> > <jacob.e.keller@intel.com>; Stillwell Jr, Paul M
> > <paul.m.stillwell.jr@intel.com>
> > Subject: [PATCH v2 24/66] net/ice/base: add support for reading REPC
> > statistics
> >
> > Add a new ice_stat_update_repc function which will read the register
> > and increment the appropriate statistics in the ice_eth_stats structure.
> >
> > Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Signed-off-by: Leyi Rong <leyi.rong@intel.com>
> > ---
> >  drivers/net/ice/base/ice_common.c | 51 +++++++++++++++++++++++++++
> >  drivers/net/ice/base/ice_common.h |  3 ++
> >  drivers/net/ice/base/ice_type.h   |  2 ++
> >  3 files changed, 56 insertions(+)
> >
> > diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
> > index da72434d3..b4a9172b9 100644
> > --- a/drivers/net/ice/base/ice_common.c
> > +++ b/drivers/net/ice/base/ice_common.c
> > @@ -4138,6 +4138,57 @@ ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
> >  		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
> >  }
> >
> > +/**
> > + * ice_stat_update_repc - read GLV_REPC stats from chip and update stat values
> > + * @hw: ptr to the hardware info
> > + * @vsi_handle: VSI handle
> > + * @prev_stat_loaded: bool to specify if the previous stat values are loaded
> > + * @cur_stats: ptr to current stats structure
> > + *
> > + * The GLV_REPC statistic register actually tracks two 16bit statistics, and
> > + * thus cannot be read using the normal ice_stat_update32 function.
> > + *
> > + * Read the GLV_REPC register associated with the given VSI, and update the
> > + * rx_no_desc and rx_error values in the ice_eth_stats structure.
> > + *
> > + * Because the statistics in GLV_REPC stick at 0xFFFF, the register must be
> > + * cleared each time it's read.
> > + *
> > + * Note that the GLV_RDPC register also counts the causes that would trigger
> > + * GLV_REPC. However, it does not give the finer grained detail about why the
> > + * packets are being dropped. The GLV_REPC values can be used to distinguish
> > + * whether Rx packets are dropped due to errors or due to no available
> > + * descriptors.
> > + */
> > +void
> > +ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded,
> > +		     struct ice_eth_stats *cur_stats)
> > +{
> > +	u16 vsi_num, no_desc, error_cnt;
> > +	u32 repc;
> > +
> > +	if (!ice_is_vsi_valid(hw, vsi_handle))
> > +		return;
> > +
> > +	vsi_num = ice_get_hw_vsi_num(hw, vsi_handle);
> > +
> > +	/* If we haven't loaded stats yet, just clear the current value */
> > +	if (!prev_stat_loaded) {
> > +		wr32(hw, GLV_REPC(vsi_num), 0);
> > +		return;
> > +	}
> > +
> > +	repc = rd32(hw, GLV_REPC(vsi_num));
> > +	no_desc = (repc & GLV_REPC_NO_DESC_CNT_M) >>
> > GLV_REPC_NO_DESC_CNT_S;
> > +	error_cnt = (repc & GLV_REPC_ERROR_CNT_M) >>
> > GLV_REPC_ERROR_CNT_S;
> > +
> > +	/* Clear the count by writing to the stats register */
> > +	wr32(hw, GLV_REPC(vsi_num), 0);
> > +
> > +	cur_stats->rx_no_desc += no_desc;
> > +	cur_stats->rx_errors += error_cnt;
> > +}
> > +
> >
> 
> Is there code in DPDK to call these functions? If not then drop this patch.
> 

This function will not be called.

^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 00/69] shared code update
  2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
                     ` (65 preceding siblings ...)
  2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 66/66] net/ice/base: reduce calls to get profile associations Leyi Rong
@ 2019-06-19 15:17   ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 01/69] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
                       ` (69 more replies)
  66 siblings, 70 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong

Main changes:
1. Advanced switch rule support.
2. Add more APIs for tunnel management.
3. Add some minor features.
4. Code cleanup and bug fixes.

---
v3:
- Drop some patches which are not used.
- Split some patches which include irrelevant code.
- Squash some patches which need to be put together.
- Add some new patches from latest shared code release.

v2:
- Split [03/49] into 2 commits.
- Split [27/49] with a standalone commit for code change in ice_osdep.h.
- Split [39/48] by kind of changes.
- Remove [42/49].
- Add some new patches from latest shared code release.


Leyi Rong (69):
  net/ice/base: update standard extr seq to include DIR flag
  net/ice/base: add API to configure MIB
  net/ice/base: add another valid DCBx state
  net/ice/base: add more recipe commands
  net/ice/base: add funcs to create new switch recipe
  net/ice/base: programming a new switch recipe
  net/ice/base: replay advanced rule after reset
  net/ice/base: code for removing advanced rule
  net/ice/base: save and post reset replay q bandwidth
  net/ice/base: rollback AVF RSS configurations
  net/ice/base: move RSS replay list
  net/ice/base: cache the data of set PHY cfg AQ in SW
  net/ice/base: refactor HW table init function
  net/ice/base: add lock around profile map list
  net/ice/base: add compatibility check for package version
  net/ice/base: add API to init FW logging
  net/ice/base: use macro instead of magic 8
  net/ice/base: move and redefine ice debug cq API
  net/ice/base: separate out control queue lock creation
  net/ice/base: added sibling head to parse nodes
  net/ice/base: add and fix debuglogs
  net/ice/base: forbid VSI to remove unassociated ucast filter
  net/ice/base: update some defines
  net/ice/base: add hweight32 support
  net/ice/base: call out dev/func caps when printing
  net/ice/base: set the max number of TCs per port to 4
  net/ice/base: make FDID available for FlexDescriptor
  net/ice/base: use a different debug bit for FW log
  net/ice/base: always set prefena when configuring a Rx queue
  net/ice/base: disable Tx pacing option
  net/ice/base: delete the index for chaining other recipe
  net/ice/base: cleanup update link info
  net/ice/base: add rd64 support
  net/ice/base: track HW stat registers past rollover
  net/ice/base: implement LLDP persistent settings
  net/ice/base: check new FD filter duplicate location
  net/ice/base: correct UDP/TCP PTYPE assignments
  net/ice/base: calculate rate limit burst size correctly
  net/ice/base: fix Flow Director VSI count
  net/ice/base: use more efficient structures
  net/ice/base: silent semantic parser warnings
  net/ice/base: fix for signed package download
  net/ice/base: add new API to dealloc flow entry
  net/ice/base: check RSS flow profile list
  net/ice/base: protect list add with lock
  net/ice/base: fix Rx functionality for ethertype filters
  net/ice/base: introduce some new macros
  net/ice/base: new marker to mark func parameters unused
  net/ice/base: code clean up
  net/ice/base: cleanup ice flex pipe files
  net/ice/base: refactor VSI node sched code
  net/ice/base: add some minor new defines
  net/ice/base: add vxlan/generic tunnel management
  net/ice/base: enable additional switch rules
  net/ice/base: allow forward to Q groups in switch rule
  net/ice/base: changes for reducing ice add adv rule time
  net/ice/base: deduce TSA value in the CEE mode
  net/ice/base: rework API for ice zero bitmap
  net/ice/base: rework API for ice cp bitmap
  net/ice/base: use ice zero bitmap instead of ice memset
  net/ice/base: use the specified size for ice zero bitmap
  net/ice/base: correct NVGRE header structure
  net/ice/base: reduce calls to get profile associations
  net/ice/base: fix for chained recipe switch ID index
  net/ice/base: update driver unloading field
  net/ice/base: fix for UDP and TCP related switch rules
  net/ice/base: changes in flow and profile removal
  net/ice/base: update Tx context struct
  net/ice/base: fixes for GRE

 drivers/net/ice/base/ice_adminq_cmd.h    |  103 +-
 drivers/net/ice/base/ice_bitops.h        |   36 +-
 drivers/net/ice/base/ice_common.c        |  482 +++--
 drivers/net/ice/base/ice_common.h        |   18 +-
 drivers/net/ice/base/ice_controlq.c      |  247 ++-
 drivers/net/ice/base/ice_controlq.h      |    4 +-
 drivers/net/ice/base/ice_dcb.c           |   82 +-
 drivers/net/ice/base/ice_dcb.h           |   12 +-
 drivers/net/ice/base/ice_fdir.c          |   11 +-
 drivers/net/ice/base/ice_fdir.h          |    4 -
 drivers/net/ice/base/ice_flex_pipe.c     | 1198 +++++------
 drivers/net/ice/base/ice_flex_pipe.h     |   73 +-
 drivers/net/ice/base/ice_flex_type.h     |   54 +-
 drivers/net/ice/base/ice_flow.c          |  410 +++-
 drivers/net/ice/base/ice_flow.h          |   22 +-
 drivers/net/ice/base/ice_lan_tx_rx.h     |    4 +-
 drivers/net/ice/base/ice_nvm.c           |   18 +-
 drivers/net/ice/base/ice_osdep.h         |   23 +
 drivers/net/ice/base/ice_protocol_type.h |   12 +-
 drivers/net/ice/base/ice_sched.c         |  219 +-
 drivers/net/ice/base/ice_sched.h         |   24 +-
 drivers/net/ice/base/ice_switch.c        | 2498 +++++++++++++++++++++-
 drivers/net/ice/base/ice_switch.h        |   66 +-
 drivers/net/ice/base/ice_type.h          |   80 +-
 drivers/net/ice/ice_ethdev.c             |    4 +-
 25 files changed, 4303 insertions(+), 1401 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 01/69] net/ice/base: update standard extr seq to include DIR flag
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 02/69] net/ice/base: add API to configure MIB Leyi Rong
                       ` (68 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

Once upon a time, the ice_flow_create_xtrct_seq() function in ice_flow.c
extracted only protocol fields explicitly specified by the caller of the
ice_flow_add_prof() function via its struct ice_flow_seg_info instances.
However, to support different ingress and egress flow profiles with the
same matching criteria, it would be necessary to also match on the packet
Direction metadata. The primary reason was that there could not be more
than one HW profile with the same CDID, PTG, and VSIG, and the Direction
metadata was not a parameter used to select HW profile IDs.

Thus, for ACL, the direction flag would need to be added to the extraction
sequence. This information will be used later as one criterion for ACL
scenario entry matching.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 43 +++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index be819e0e9..f1bf5b5e7 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -495,6 +495,42 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 	return ICE_SUCCESS;
 }
 
+/**
+ * ice_flow_xtract_pkt_flags - Create an extr sequence entry for packet flags
+ * @hw: pointer to the HW struct
+ * @params: information about the flow to be processed
+ * @flags: The value of pkt_flags[x:x] in RX/TX MDID metadata.
+ *
+ * This function will allocate an extraction sequence entry for a DWORD size
+ * chunk of the packet flags.
+ */
+static enum ice_status
+ice_flow_xtract_pkt_flags(struct ice_hw *hw,
+			  struct ice_flow_prof_params *params,
+			  enum ice_flex_mdid_pkt_flags flags)
+{
+	u8 fv_words = hw->blk[params->blk].es.fvw;
+	u8 idx;
+
+	/* Make sure the number of extraction sequence entries required does not
+	 * exceed the block's capacity.
+	 */
+	if (params->es_cnt >= fv_words)
+		return ICE_ERR_MAX_LIMIT;
+
+	/* some blocks require a reversed field vector layout */
+	if (hw->blk[params->blk].es.reverse)
+		idx = fv_words - params->es_cnt - 1;
+	else
+		idx = params->es_cnt;
+
+	params->es[idx].prot_id = ICE_PROT_META_ID;
+	params->es[idx].off = flags;
+	params->es_cnt++;
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_flow_xtract_fld - Create an extraction sequence entry for the given field
  * @hw: pointer to the HW struct
@@ -744,6 +780,13 @@ ice_flow_create_xtrct_seq(struct ice_hw *hw,
 	enum ice_status status = ICE_SUCCESS;
 	u8 i;
 
+	/* For ACL, we also need to extract the direction bit (Rx,Tx) data from
+	 * packet flags
+	 */
+	if (params->blk == ICE_BLK_ACL)
+		ice_flow_xtract_pkt_flags(hw, params,
+					  ICE_RX_MDID_PKT_FLAGS_15_0);
+
 	for (i = 0; i < params->prof->segs_cnt; i++) {
 		u64 match = params->prof->segs[i].match;
 		u16 j;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 02/69] net/ice/base: add API to configure MIB
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 01/69] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 03/69] net/ice/base: add another valid DCBx state Leyi Rong
                       ` (67 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

Decouple ice_cfg_lldp_mib_change from the ice_init_dcb function call.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 38 ++++++++++++++++++++++++++++++----
 drivers/net/ice/base/ice_dcb.h |  3 ++-
 2 files changed, 36 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index a7810578d..4e213d4f9 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -927,10 +927,11 @@ enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
 /**
  * ice_init_dcb
  * @hw: pointer to the HW struct
+ * @enable_mib_change: enable MIB change event
  *
  * Update DCB configuration from the Firmware
  */
-enum ice_status ice_init_dcb(struct ice_hw *hw)
+enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
 {
 	struct ice_port_info *pi = hw->port_info;
 	enum ice_status ret = ICE_SUCCESS;
@@ -952,13 +953,42 @@ enum ice_status ice_init_dcb(struct ice_hw *hw)
 			return ret;
 	} else if (pi->dcbx_status == ICE_DCBX_STATUS_DIS) {
 		return ICE_ERR_NOT_READY;
-	} else if (pi->dcbx_status == ICE_DCBX_STATUS_MULTIPLE_PEERS) {
 	}
 
 	/* Configure the LLDP MIB change event */
-	ret = ice_aq_cfg_lldp_mib_change(hw, true, NULL);
+	if (enable_mib_change) {
+		ret = ice_aq_cfg_lldp_mib_change(hw, true, NULL);
+		if (!ret)
+			pi->is_sw_lldp = false;
+	}
+
+	return ret;
+}
+
+/**
+ * ice_cfg_lldp_mib_change
+ * @hw: pointer to the HW struct
+ * @ena_mib: enable/disable MIB change event
+ *
+ * Configure (disable/enable) MIB
+ */
+enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib)
+{
+	struct ice_port_info *pi = hw->port_info;
+	enum ice_status ret;
+
+	if (!hw->func_caps.common_cap.dcb)
+		return ICE_ERR_NOT_SUPPORTED;
+
+	/* Get DCBX status */
+	pi->dcbx_status = ice_get_dcbx_status(hw);
+
+	if (pi->dcbx_status == ICE_DCBX_STATUS_DIS)
+		return ICE_ERR_NOT_READY;
+
+	ret = ice_aq_cfg_lldp_mib_change(hw, ena_mib, NULL);
 	if (!ret)
-		pi->is_sw_lldp = false;
+		pi->is_sw_lldp = !ena_mib;
 
 	return ret;
 }
diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h
index d922c8a29..65d2bafef 100644
--- a/drivers/net/ice/base/ice_dcb.h
+++ b/drivers/net/ice/base/ice_dcb.h
@@ -197,7 +197,7 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 		   struct ice_dcbx_cfg *dcbcfg);
 enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi);
 enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi);
-enum ice_status ice_init_dcb(struct ice_hw *hw);
+enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change);
 void ice_dcb_cfg_to_lldp(u8 *lldpmib, u16 *miblen, struct ice_dcbx_cfg *dcbcfg);
 enum ice_status
 ice_query_port_ets(struct ice_port_info *pi,
@@ -217,6 +217,7 @@ enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd);
 enum ice_status
 ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
 		       bool *dcbx_agent_status, struct ice_sq_cd *cd);
+enum ice_status ice_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_mib);
 enum ice_status
 ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
 			   struct ice_sq_cd *cd);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 03/69] net/ice/base: add another valid DCBx state
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 01/69] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 02/69] net/ice/base: add API to configure MIB Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 04/69] net/ice/base: add more recipe commands Leyi Rong
                       ` (66 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dave Ertman, Paul M Stillwell Jr

When a port is not cabled but DCBx is enabled in the
firmware, the DCBx status will be NOT_STARTED. This is a
valid state when the FW DCBx agent is enabled, and should
not automatically be treated as is_fw_lldp being true.

Add code to treat NOT_STARTED as another valid state.

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index 4e213d4f9..100c4bb0f 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -945,7 +945,8 @@ enum ice_status ice_init_dcb(struct ice_hw *hw, bool enable_mib_change)
 	pi->dcbx_status = ice_get_dcbx_status(hw);
 
 	if (pi->dcbx_status == ICE_DCBX_STATUS_DONE ||
-	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS) {
+	    pi->dcbx_status == ICE_DCBX_STATUS_IN_PROGRESS ||
+	    pi->dcbx_status == ICE_DCBX_STATUS_NOT_STARTED) {
 		/* Get current DCBX configuration */
 		ret = ice_get_dcb_cfg(pi);
 		pi->is_sw_lldp = (hw->adminq.sq_last_status == ICE_AQ_RC_EPERM);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 04/69] net/ice/base: add more recipe commands
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (2 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 03/69] net/ice/base: add another valid DCBx state Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 05/69] net/ice/base: add funcs to create new switch recipe Leyi Rong
                       ` (65 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Lev Faerman, Paul M Stillwell Jr

Add the Add Recipe (0x0290), Recipe to Profile (0x0291), Get Recipe
(0x0292) and Get Recipe to Profile (0x0293) Commands.

Signed-off-by: Lev Faerman <lev.faerman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 73 +++++++++++++++++++++++++++
 1 file changed, 73 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index bbdca83fc..7b0aa8aaa 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -696,6 +696,72 @@ struct ice_aqc_storm_cfg {
 
 #define ICE_MAX_NUM_RECIPES 64
 
+/* Add/Get Recipe (indirect 0x0290/0x0292)*/
+struct ice_aqc_add_get_recipe {
+	__le16 num_sub_recipes;	/* Input in Add cmd, Output in Get cmd */
+	__le16 return_index;	/* Input, used for Get cmd only */
+	u8 reserved[4];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+struct ice_aqc_recipe_content {
+	u8 rid;
+#define ICE_AQ_RECIPE_ID_S		0
+#define ICE_AQ_RECIPE_ID_M		(0x3F << ICE_AQ_RECIPE_ID_S)
+#define ICE_AQ_RECIPE_ID_IS_ROOT	BIT(7)
+	u8 lkup_indx[5];
+#define ICE_AQ_RECIPE_LKUP_DATA_S	0
+#define ICE_AQ_RECIPE_LKUP_DATA_M	(0x3F << ICE_AQ_RECIPE_LKUP_DATA_S)
+#define ICE_AQ_RECIPE_LKUP_IGNORE	BIT(7)
+#define ICE_AQ_SW_ID_LKUP_MASK		0x00FF
+	__le16 mask[5];
+	u8 result_indx;
+#define ICE_AQ_RECIPE_RESULT_DATA_S	0
+#define ICE_AQ_RECIPE_RESULT_DATA_M	(0x3F << ICE_AQ_RECIPE_RESULT_DATA_S)
+#define ICE_AQ_RECIPE_RESULT_EN		BIT(7)
+	u8 rsvd0[3];
+	u8 act_ctrl_join_priority;
+	u8 act_ctrl_fwd_priority;
+#define ICE_AQ_RECIPE_FWD_PRIORITY_S	0
+#define ICE_AQ_RECIPE_FWD_PRIORITY_M	(0xF << ICE_AQ_RECIPE_FWD_PRIORITY_S)
+	u8 act_ctrl;
+#define ICE_AQ_RECIPE_ACT_NEED_PASS_L2	BIT(0)
+#define ICE_AQ_RECIPE_ACT_ALLOW_PASS_L2	BIT(1)
+#define ICE_AQ_RECIPE_ACT_INV_ACT	BIT(2)
+#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_S	4
+#define ICE_AQ_RECIPE_ACT_PRUNE_INDX_M	(0x3 << ICE_AQ_RECIPE_ACT_PRUNE_INDX_S)
+	u8 rsvd1;
+	__le32 dflt_act;
+#define ICE_AQ_RECIPE_DFLT_ACT_S	0
+#define ICE_AQ_RECIPE_DFLT_ACT_M	(0x7FFFF << ICE_AQ_RECIPE_DFLT_ACT_S)
+#define ICE_AQ_RECIPE_DFLT_ACT_VALID	BIT(31)
+};
+
+struct ice_aqc_recipe_data_elem {
+	u8 recipe_indx;
+	u8 resp_bits;
+#define ICE_AQ_RECIPE_WAS_UPDATED	BIT(0)
+	u8 rsvd0[2];
+	u8 recipe_bitmap[8];
+	u8 rsvd1[4];
+	struct ice_aqc_recipe_content content;
+	u8 rsvd2[20];
+};
+
+/* This struct contains a number of entries as per the
+ * num_sub_recipes in the command
+ */
+struct ice_aqc_add_get_recipe_data {
+	struct ice_aqc_recipe_data_elem recipe[1];
+};
+
+/* Set/Get Recipes to Profile Association (direct 0x0291/0x0293) */
+struct ice_aqc_recipe_to_profile {
+	__le16 profile_id;
+	u8 rsvd[6];
+	ice_declare_bitmap(recipe_assoc, ICE_MAX_NUM_RECIPES);
+};
 
 /* Add/Update/Remove/Get switch rules (indirect 0x02A0, 0x02A1, 0x02A2, 0x02A3)
  */
@@ -2210,6 +2276,8 @@ struct ice_aq_desc {
 		struct ice_aqc_get_sw_cfg get_sw_conf;
 		struct ice_aqc_sw_rules sw_rules;
 		struct ice_aqc_storm_cfg storm_conf;
+		struct ice_aqc_add_get_recipe add_get_recipe;
+		struct ice_aqc_recipe_to_profile recipe_to_profile;
 		struct ice_aqc_get_topo get_topo;
 		struct ice_aqc_sched_elem_cmd sched_elem_cmd;
 		struct ice_aqc_query_txsched_res query_sched_res;
@@ -2369,6 +2437,11 @@ enum ice_adminq_opc {
 	ice_aqc_opc_set_storm_cfg			= 0x0280,
 	ice_aqc_opc_get_storm_cfg			= 0x0281,
 
+	/* recipe commands */
+	ice_aqc_opc_add_recipe				= 0x0290,
+	ice_aqc_opc_recipe_to_profile			= 0x0291,
+	ice_aqc_opc_get_recipe				= 0x0292,
+	ice_aqc_opc_get_recipe_to_profile		= 0x0293,
 
 	/* switch rules population commands */
 	ice_aqc_opc_add_sw_rules			= 0x02A0,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 05/69] net/ice/base: add funcs to create new switch recipe
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (3 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 04/69] net/ice/base: add more recipe commands Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 06/69] net/ice/base: programming a " Leyi Rong
                       ` (64 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Grishma Kotecha, Paul M Stillwell Jr

Add functions to support following admin queue commands:
1. 0x0208: allocate a resource to hold a switch recipe. This is needed
when a new switch recipe needs to be created.
2. 0x0290: create a recipe with protocol header information and
other details that determine how this recipe filter works.
3. 0x0292: get details of an existing recipe.
4. 0x0291: associate a switch recipe to a profile.

Signed-off-by: Grishma Kotecha <grishma.kotecha@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/ice/base/ice_switch.c | 132 ++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h |  12 +++
 2 files changed, 144 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index a1c29d606..b84a07459 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -914,6 +914,138 @@ ice_aq_sw_rules(struct ice_hw *hw, void *rule_list, u16 rule_list_sz,
 	return ice_aq_send_cmd(hw, &desc, rule_list, rule_list_sz, cd);
 }
 
+/**
+ * ice_aq_add_recipe - add switch recipe
+ * @hw: pointer to the HW struct
+ * @s_recipe_list: pointer to switch rule population list
+ * @num_recipes: number of switch recipes in the list
+ * @cd: pointer to command details structure or NULL
+ *
+ * Add(0x0290)
+ */
+enum ice_status
+ice_aq_add_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 num_recipes, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_get_recipe *cmd;
+	struct ice_aq_desc desc;
+	u16 buf_size;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_recipe");
+	cmd = &desc.params.add_get_recipe;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_add_recipe);
+
+	cmd->num_sub_recipes = CPU_TO_LE16(num_recipes);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	buf_size = num_recipes * sizeof(*s_recipe_list);
+
+	return ice_aq_send_cmd(hw, &desc, s_recipe_list, buf_size, cd);
+}
+
+/**
+ * ice_aq_get_recipe - get switch recipe
+ * @hw: pointer to the HW struct
+ * @s_recipe_list: pointer to switch rule population list
+ * @num_recipes: pointer to the number of recipes (input and output)
+ * @recipe_root: root recipe number of recipe(s) to retrieve
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get(0x0292)
+ *
+ * On input, *num_recipes should equal the number of entries in s_recipe_list.
+ * On output, *num_recipes will equal the number of entries returned in
+ * s_recipe_list.
+ *
+ * The caller must supply enough space in s_recipe_list to hold all possible
+ * recipes and *num_recipes must equal ICE_MAX_NUM_RECIPES.
+ */
+enum ice_status
+ice_aq_get_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 *num_recipes, u16 recipe_root, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_add_get_recipe *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 buf_size;
+
+	if (*num_recipes != ICE_MAX_NUM_RECIPES)
+		return ICE_ERR_PARAM;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_get_recipe");
+	cmd = &desc.params.add_get_recipe;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_recipe);
+
+	cmd->return_index = CPU_TO_LE16(recipe_root);
+	cmd->num_sub_recipes = 0;
+
+	buf_size = *num_recipes * sizeof(*s_recipe_list);
+
+	status = ice_aq_send_cmd(hw, &desc, s_recipe_list, buf_size, cd);
+	/* cppcheck-suppress constArgument */
+	*num_recipes = LE16_TO_CPU(cmd->num_sub_recipes);
+
+	return status;
+}
+
+/**
+ * ice_aq_map_recipe_to_profile - Map recipe to packet profile
+ * @hw: pointer to the HW struct
+ * @profile_id: package profile ID to associate the recipe with
+ * @r_bitmap: Recipe bitmap filled in and need to be returned as response
+ * @cd: pointer to command details structure or NULL
+ * Recipe to profile association (0x0291)
+ */
+enum ice_status
+ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd)
+{
+	struct ice_aqc_recipe_to_profile *cmd;
+	struct ice_aq_desc desc;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_assoc_recipe_to_prof");
+	cmd = &desc.params.recipe_to_profile;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_recipe_to_profile);
+	cmd->profile_id = CPU_TO_LE16(profile_id);
+	/* Set the recipe ID bit in the bitmask to let the device know which
+	 * profile we are associating the recipe to
+	 */
+	ice_memcpy(cmd->recipe_assoc, r_bitmap, sizeof(cmd->recipe_assoc),
+		   ICE_NONDMA_TO_NONDMA);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_alloc_recipe - add recipe resource
+ * @hw: pointer to the hardware structure
+ * @rid: recipe ID returned as response to AQ call
+ */
+enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *rid)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	enum ice_status status;
+	u16 buf_len;
+
+	buf_len = sizeof(*sw_buf);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+
+	sw_buf->num_elems = CPU_TO_LE16(1);
+	sw_buf->res_type = CPU_TO_LE16((ICE_AQC_RES_TYPE_RECIPE <<
+					ICE_AQC_RES_TYPE_S) |
+					ICE_AQC_RES_TYPE_FLAG_SHARED);
+	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len,
+				       ice_aqc_opc_alloc_res, NULL);
+	if (!status)
+		*rid = LE16_TO_CPU(sw_buf->elem[0].e.sw_resp);
+	ice_free(hw, sw_buf);
+
+	return status;
+}
 
 /* ice_init_port_info - Initialize port_info with switch configuration data
  * @pi: pointer to port_info
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 13525d8d0..fd61c0eea 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -408,8 +408,20 @@ enum ice_status
 ice_get_vsi_vlan_promisc(struct ice_hw *hw, u16 vsi_handle, u8 *promisc_mask,
 			 u16 *vid);
 
+enum ice_status
+ice_aq_add_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 num_recipes, struct ice_sq_cd *cd);
 
+enum ice_status
+ice_aq_get_recipe(struct ice_hw *hw,
+		  struct ice_aqc_recipe_data_elem *s_recipe_list,
+		  u16 *num_recipes, u16 recipe_root, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd);
 
+enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *recipe_id);
 enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
 
 enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 06/69] net/ice/base: programming a new switch recipe
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (4 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 05/69] net/ice/base: add funcs to create new switch recipe Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 07/69] net/ice/base: replay advanced rule after reset Leyi Rong
                       ` (63 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Grishma Kotecha, Paul M Stillwell Jr

1. Add an interface to support adding advanced switch rules.
2. Advanced rules are provided in the form of protocol headers and values
to match, in addition to actions (a limited set of actions is currently
supported).
3. Retrieve field vectors from the ICE configuration package to determine
extracted fields and extraction locations for recipe creation.
4. Chain multiple recipes together to match multiple protocol headers.
5. Add a structure to manage the dynamic recipes.

Signed-off-by: Grishma Kotecha <grishma.kotecha@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c |   33 +-
 drivers/net/ice/base/ice_flex_pipe.h |    7 +-
 drivers/net/ice/base/ice_switch.c    | 1641 ++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h    |   21 +
 4 files changed, 1699 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 14e632fab..babad94f8 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -734,7 +734,7 @@ static void ice_release_global_cfg_lock(struct ice_hw *hw)
  *
  * This function will request ownership of the change lock.
  */
-static enum ice_status
+enum ice_status
 ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access)
 {
 	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_change_lock");
@@ -749,7 +749,7 @@ ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access)
  *
  * This function will release the change lock using the proper Admin Command.
  */
-static void ice_release_change_lock(struct ice_hw *hw)
+void ice_release_change_lock(struct ice_hw *hw)
 {
 	ice_debug(hw, ICE_DBG_TRACE, "ice_release_change_lock");
 
@@ -1801,6 +1801,35 @@ void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
 	ice_free(hw, bld);
 }
 
+/**
+ * ice_find_prot_off - find prot ID and offset pair, based on prof and FV index
+ * @hw: pointer to the hardware structure
+ * @blk: hardware block
+ * @prof: profile ID
+ * @fv_idx: field vector word index
+ * @prot: variable to receive the protocol ID
+ * @off: variable to receive the protocol offset
+ */
+enum ice_status
+ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
+		  u8 *prot, u16 *off)
+{
+	struct ice_fv_word *fv_ext;
+
+	if (prof >= hw->blk[blk].es.count)
+		return ICE_ERR_PARAM;
+
+	if (fv_idx >= hw->blk[blk].es.fvw)
+		return ICE_ERR_PARAM;
+
+	fv_ext = hw->blk[blk].es.t + (prof * hw->blk[blk].es.fvw);
+
+	*prot = fv_ext[fv_idx].prot_id;
+	*off = fv_ext[fv_idx].off;
+
+	return ICE_SUCCESS;
+}
+
 /* PTG Management */
 
 /**
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 00c2b6682..2710dded6 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -15,7 +15,12 @@
 
 enum ice_status
 ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count);
-
+enum ice_status
+ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access);
+void ice_release_change_lock(struct ice_hw *hw);
+enum ice_status
+ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
+		  u8 *prot, u16 *off);
 struct ice_generic_seg_hdr *
 ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
 		    struct ice_pkg_hdr *pkg_hdr);
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index b84a07459..30a908bc8 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -53,6 +53,210 @@ static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
 	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
 	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
 
+static const
+u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,		/* Ether ends */
+			  0x45, 0, 0, 0x3E,	/* IP starts */
+			  0, 0, 0, 0,
+			  0, 0x2F, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* IP ends */
+			  0x80, 0, 0x65, 0x58,	/* GRE starts */
+			  0, 0, 0, 0,		/* GRE ends */
+			  0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,		/* Ether ends */
+			  0x45, 0, 0, 0x14,	/* IP starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0		/* IP ends */
+			};
+
+static const u8
+dummy_udp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,		/* Ether ends */
+			  0x45, 0, 0, 0x32,	/* IP starts */
+			  0, 0, 0, 0,
+			  0, 0x11, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* IP ends */
+			  0, 0, 0x12, 0xB5,	/* UDP starts */
+			  0, 0x1E, 0, 0,	/* UDP ends */
+			  0, 0, 0, 0,		/* VXLAN starts */
+			  0, 0, 0, 0,		/* VXLAN ends */
+			  0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0			/* Ether ends */
+			};
+
+static const u8
+dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x08, 0,              /* Ether ends */
+			  0x45, 0, 0, 0x28,     /* IP starts */
+			  0, 0x01, 0, 0,
+			  0x40, 0x06, 0xF5, 0x69,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,   /* IP ends */
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0, 0, 0, 0,
+			  0x50, 0x02, 0x20,
+			  0, 0x9, 0x79, 0, 0,
+			  0, 0 /* 2 bytes of padding for 4-byte alignment */
+			};
+
+/* this is a recipe to profile bitmap association */
+static ice_declare_bitmap(recipe_to_profile[ICE_MAX_NUM_RECIPES],
+			  ICE_MAX_NUM_PROFILES);
+static ice_declare_bitmap(available_result_ids, ICE_CHAIN_FV_INDEX_START + 1);
+
+/**
+ * ice_get_recp_frm_fw - update SW bookkeeping from FW recipe entries
+ * @hw: pointer to hardware structure
+ * @recps: struct that we need to populate
+ * @rid: recipe ID that we are populating
+ *
+ * This function is used to populate all the necessary entries into our
+ * bookkeeping so that we have a current list of all the recipes that are
+ * programmed in the firmware.
+ */
+static enum ice_status
+ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
+{
+	u16 i, sub_recps, fv_word_idx = 0, result_idx = 0;
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_PROFILES);
+	u16 result_idxs[ICE_MAX_CHAIN_RECIPE] = { 0 };
+	struct ice_aqc_recipe_data_elem *tmp;
+	u16 num_recps = ICE_MAX_NUM_RECIPES;
+	struct ice_prot_lkup_ext *lkup_exts;
+	enum ice_status status;
+
+	/* we need a buffer big enough to accommodate all the recipes */
+	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
+		ICE_MAX_NUM_RECIPES, sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	tmp[0].recipe_indx = rid;
+	status = ice_aq_get_recipe(hw, tmp, &num_recps, rid, NULL);
+	/* a non-zero status means the recipe doesn't exist */
+	if (status)
+		goto err_unroll;
+	lkup_exts = &recps[rid].lkup_exts;
+	/* start populating all the entries for recps[rid] based on lkups from
+	 * firmware
+	 */
+	for (sub_recps = 0; sub_recps < num_recps; sub_recps++) {
+		struct ice_aqc_recipe_data_elem root_bufs = tmp[sub_recps];
+		struct ice_recp_grp_entry *rg_entry;
+		u8 prof_id, prot = 0;
+		u16 off = 0;
+
+		rg_entry = (struct ice_recp_grp_entry *)
+			ice_malloc(hw, sizeof(*rg_entry));
+		if (!rg_entry) {
+			status = ICE_ERR_NO_MEMORY;
+			goto err_unroll;
+		}
+		/* Mask off the 8th bit since it is the result enable bit */
+		result_idxs[result_idx] = root_bufs.content.result_indx &
+			~ICE_AQ_RECIPE_RESULT_EN;
+		/* Check if result enable bit is set */
+		if (root_bufs.content.result_indx & ICE_AQ_RECIPE_RESULT_EN)
+			ice_clear_bit(ICE_CHAIN_FV_INDEX_START -
+				      result_idxs[result_idx++],
+				      available_result_ids);
+		ice_memcpy(r_bitmap,
+			   recipe_to_profile[tmp[sub_recps].recipe_indx],
+			   sizeof(r_bitmap), ICE_NONDMA_TO_NONDMA);
+		/* get the first profile that is associated with rid */
+		prof_id = ice_find_first_bit(r_bitmap, ICE_MAX_NUM_PROFILES);
+		for (i = 0; i < ICE_NUM_WORDS_RECIPE; i++) {
+			u8 lkup_indx = root_bufs.content.lkup_indx[i + 1];
+
+			rg_entry->fv_idx[i] = lkup_indx;
+			/* If the recipe is a chained recipe then each of its
+			 * child recipes will have a result index. Those
+			 * result indexes should not be used to fill fv_words;
+			 * only the protocol IDs and offsets are needed, so
+			 * skip any fv_idx that stores a result index. Also
+			 * skip any fv_idx that is ICE_AQ_RECIPE_LKUP_IGNORE
+			 * or 0, since neither is a valid offset value.
+			 */
+			if (result_idxs[0] == rg_entry->fv_idx[i] ||
+			    result_idxs[1] == rg_entry->fv_idx[i] ||
+			    result_idxs[2] == rg_entry->fv_idx[i] ||
+			    result_idxs[3] == rg_entry->fv_idx[i] ||
+			    result_idxs[4] == rg_entry->fv_idx[i] ||
+			    rg_entry->fv_idx[i] == ICE_AQ_RECIPE_LKUP_IGNORE ||
+			    rg_entry->fv_idx[i] == 0)
+				continue;
+
+			ice_find_prot_off(hw, ICE_BLK_SW, prof_id,
+					  rg_entry->fv_idx[i], &prot, &off);
+			lkup_exts->fv_words[fv_word_idx].prot_id = prot;
+			lkup_exts->fv_words[fv_word_idx].off = off;
+			fv_word_idx++;
+		}
+		/* populate rg_list with the data from the child entry of this
+		 * recipe
+		 */
+		LIST_ADD(&rg_entry->l_entry, &recps[rid].rg_list);
+	}
+	lkup_exts->n_val_words = fv_word_idx;
+	recps[rid].n_grp_count = num_recps;
+	recps[rid].root_buf = (struct ice_aqc_recipe_data_elem *)
+		ice_calloc(hw, recps[rid].n_grp_count,
+			   sizeof(struct ice_aqc_recipe_data_elem));
+	if (!recps[rid].root_buf)
+		goto err_unroll;
+
+	ice_memcpy(recps[rid].root_buf, tmp, recps[rid].n_grp_count *
+		   sizeof(*recps[rid].root_buf), ICE_NONDMA_TO_NONDMA);
+	recps[rid].recp_created = true;
+	if (tmp[sub_recps].content.rid & ICE_AQ_RECIPE_ID_IS_ROOT)
+		recps[rid].root_rid = rid;
+err_unroll:
+	ice_free(hw, tmp);
+	return status;
+}
+
+/**
+ * ice_get_recp_to_prof_map - updates recipe to profile mapping
+ * @hw: pointer to hardware structure
+ *
+ * This function populates the recipe_to_profile matrix, where the index into
+ * the array is the recipe ID and each element is a bitmap of the profiles
+ * that the recipe is mapped to.
+ */
+static void
+ice_get_recp_to_prof_map(struct ice_hw *hw)
+{
+	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+	u16 i;
+
+	for (i = 0; i < ICE_MAX_NUM_PROFILES; i++) {
+		u16 j;
+
+		ice_zero_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+		if (ice_aq_get_recipe_to_profile(hw, i, (u8 *)r_bitmap, NULL))
+			continue;
+
+		for (j = 0; j < ICE_MAX_NUM_RECIPES; j++)
+			if (ice_is_bit_set(r_bitmap, j))
+				ice_set_bit(i, recipe_to_profile[j]);
+	}
+}
 
 /**
  * ice_init_def_sw_recp - initialize the recipe book keeping tables
@@ -75,6 +279,7 @@ enum ice_status ice_init_def_sw_recp(struct ice_hw *hw)
 		recps[i].root_rid = i;
 		INIT_LIST_HEAD(&recps[i].filt_rules);
 		INIT_LIST_HEAD(&recps[i].filt_replay_rules);
+		INIT_LIST_HEAD(&recps[i].rg_list);
 		ice_init_lock(&recps[i].filt_rule_lock);
 	}
 
@@ -1018,6 +1223,35 @@ ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
+/**
+ * ice_aq_get_recipe_to_profile - get recipe to profile associations
+ * @hw: pointer to the HW struct
+ * @profile_id: package profile ID to query
+ * @r_bitmap: recipe bitmap filled in as the response
+ * @cd: pointer to command details structure or NULL
+ * Get the recipes associated with the given profile ID (0x0293)
+ */
+enum ice_status
+ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd)
+{
+	struct ice_aqc_recipe_to_profile *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_get_recipe_to_prof");
+	cmd = &desc.params.recipe_to_profile;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_recipe_to_profile);
+	cmd->profile_id = CPU_TO_LE16(profile_id);
+
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	if (!status)
+		ice_memcpy(r_bitmap, cmd->recipe_assoc,
+			   sizeof(cmd->recipe_assoc), ICE_NONDMA_TO_NONDMA);
+
+	return status;
+}
+
 /**
  * ice_alloc_recipe - add recipe resource
  * @hw: pointer to the hardware structure
@@ -3899,6 +4133,1413 @@ ice_add_mac_with_counter(struct ice_hw *hw, struct ice_fltr_info *f_info)
 	return ret;
 }
 
+/* This mapping table maps every word within a given protocol structure to the
+ * real byte offset as per the specification of that protocol header.
+ * For example, the destination address in an Ethernet header is 3 words, at
+ * byte offsets 0, 2, and 4 in the actual packet header, and the source
+ * address is at byte offsets 6, 8, and 10.
+ * IMPORTANT: Every structure that is part of the "ice_prot_hdr" union should
+ * have a matching entry describing its fields. This table needs to be updated
+ * if a new structure is added to that union.
+ */
+static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
+	{ ICE_MAC_OFOS,		{ 0, 2, 4, 6, 8, 10, 12 } },
+	{ ICE_MAC_IL,		{ 0, 2, 4, 6, 8, 10, 12 } },
+	{ ICE_IPV4_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 } },
+	{ ICE_IPV4_IL,		{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18 } },
+	{ ICE_IPV6_IL,		{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
+				 26, 28, 30, 32, 34, 36, 38 } },
+	{ ICE_IPV6_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
+				 26, 28, 30, 32, 34, 36, 38 } },
+	{ ICE_TCP_IL,		{ 0, 2 } },
+	{ ICE_UDP_ILOS,		{ 0, 2 } },
+	{ ICE_SCTP_IL,		{ 0, 2 } },
+	{ ICE_VXLAN,		{ 8, 10, 12 } },
+	{ ICE_GENEVE,		{ 8, 10, 12 } },
+	{ ICE_VXLAN_GPE,	{ 0, 2, 4 } },
+	{ ICE_NVGRE,		{ 0, 2 } },
+	{ ICE_PROTOCOL_LAST,	{ 0 } }
+};
+
+/* The following table describes preferred grouping of recipes.
+ * If a recipe that needs to be programmed is a superset or matches one of the
+ * following combinations, then the recipe needs to be chained as per the
+ * following policy.
+ */
+static const struct ice_pref_recipe_group ice_recipe_pack[] = {
+	{3, { { ICE_MAC_OFOS_HW, 0, 0 }, { ICE_MAC_OFOS_HW, 2, 0 },
+	      { ICE_MAC_OFOS_HW, 4, 0 } } },
+	{4, { { ICE_MAC_IL_HW, 0, 0 }, { ICE_MAC_IL_HW, 2, 0 },
+	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } } },
+	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } } },
+	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } } },
+};
+
+static const struct ice_protocol_entry ice_prot_id_tbl[] = {
+	{ ICE_MAC_OFOS,		ICE_MAC_OFOS_HW },
+	{ ICE_MAC_IL,		ICE_MAC_IL_HW },
+	{ ICE_IPV4_OFOS,	ICE_IPV4_OFOS_HW },
+	{ ICE_IPV4_IL,		ICE_IPV4_IL_HW },
+	{ ICE_IPV6_OFOS,	ICE_IPV6_OFOS_HW },
+	{ ICE_IPV6_IL,		ICE_IPV6_IL_HW },
+	{ ICE_TCP_IL,		ICE_TCP_IL_HW },
+	{ ICE_UDP_ILOS,		ICE_UDP_ILOS_HW },
+	{ ICE_SCTP_IL,		ICE_SCTP_IL_HW },
+	{ ICE_VXLAN,		ICE_UDP_OF_HW },
+	{ ICE_GENEVE,		ICE_UDP_OF_HW },
+	{ ICE_VXLAN_GPE,	ICE_UDP_OF_HW },
+	{ ICE_NVGRE,		ICE_GRE_OF_HW },
+	{ ICE_PROTOCOL_LAST,	0 }
+};
+
+/**
+ * ice_find_recp - find a recipe
+ * @hw: pointer to the hardware structure
+ * @lkup_exts: extension sequence to match
+ *
+ * Returns index of matching recipe, or ICE_MAX_NUM_RECIPES if not found.
+ */
+static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
+{
+	struct ice_sw_recipe *recp;
+	u16 i;
+
+	ice_get_recp_to_prof_map(hw);
+	/* Initialize available_result_ids which tracks available result idx */
+	for (i = 0; i <= ICE_CHAIN_FV_INDEX_START; i++)
+		ice_set_bit(ICE_CHAIN_FV_INDEX_START - i,
+			    available_result_ids);
+
+	/* Walk through existing recipes to find a match */
+	recp = hw->switch_info->recp_list;
+	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		/* If a recipe was not created for this ID in SW bookkeeping,
+		 * check if FW has an entry for this recipe. If the FW has an
+		 * entry, update our SW bookkeeping and continue with the
+		 * matching.
+		 */
+		if (!recp[i].recp_created)
+			if (ice_get_recp_frm_fw(hw,
+						hw->switch_info->recp_list, i))
+				continue;
+
+		/* check if the number of words we are looking for matches */
+		if (lkup_exts->n_val_words == recp[i].lkup_exts.n_val_words) {
+			struct ice_fv_word *a = lkup_exts->fv_words;
+			struct ice_fv_word *b = recp[i].lkup_exts.fv_words;
+			bool found = true;
+			u8 p, q;
+
+			for (p = 0; p < lkup_exts->n_val_words; p++) {
+				for (q = 0; q < recp[i].lkup_exts.n_val_words;
+				     q++) {
+					if (a[p].off == b[q].off &&
+					    a[p].prot_id == b[q].prot_id)
+						/* Found the "p"th word in the
+						 * given recipe
+						 */
+						break;
+				}
+				/* After walking through all the words in the
+				 * "i"th recipe if "p"th word was not found then
+				 * this recipe is not what we are looking for.
+				 * So break out from this loop and try the next
+				 * recipe
+				 */
+				if (q >= recp[i].lkup_exts.n_val_words) {
+					found = false;
+					break;
+				}
+			}
+			/* If "found" was never set to false for the "i"th
+			 * recipe, then we have found our match
+			 */
+			if (found)
+				return i; /* Return the recipe ID */
+		}
+	}
+	return ICE_MAX_NUM_RECIPES;
+}
+
+/**
+ * ice_prot_type_to_id - get protocol ID from protocol type
+ * @type: protocol type
+ * @id: pointer to variable that will receive the ID
+ *
+ * Returns true if found, false otherwise
+ */
+static bool ice_prot_type_to_id(enum ice_protocol_type type, u16 *id)
+{
+	u16 i;
+
+	for (i = 0; ice_prot_id_tbl[i].type != ICE_PROTOCOL_LAST; i++)
+		if (ice_prot_id_tbl[i].type == type) {
+			*id = ice_prot_id_tbl[i].protocol_id;
+			return true;
+		}
+	return false;
+}
+
+/**
+ * ice_fill_valid_words - fill in the valid words from a lookup rule
+ * @rule: advanced rule with lookup information
+ * @lkup_exts: byte offset extractions of the words that are valid
+ *
+ * Calculate the valid words in a lookup rule using the mask value
+ */
+static u16
+ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
+		     struct ice_prot_lkup_ext *lkup_exts)
+{
+	u16 j, word = 0;
+	u16 prot_id;
+	u16 ret_val;
+
+	if (!ice_prot_type_to_id(rule->type, &prot_id))
+		return 0;
+
+	word = lkup_exts->n_val_words;
+
+	for (j = 0; j < sizeof(rule->m_u) / sizeof(u16); j++)
+		if (((u16 *)&rule->m_u)[j] == 0xffff &&
+		    rule->type < ARRAY_SIZE(ice_prot_ext)) {
+			/* No more space to accommodate */
+			if (word >= ICE_MAX_CHAIN_WORDS)
+				return 0;
+			lkup_exts->fv_words[word].off =
+				ice_prot_ext[rule->type].offs[j];
+			lkup_exts->fv_words[word].prot_id =
+				ice_prot_id_tbl[rule->type].protocol_id;
+			word++;
+		}
+
+	ret_val = word - lkup_exts->n_val_words;
+	lkup_exts->n_val_words = word;
+
+	return ret_val;
+}
+
+/**
+ * ice_find_prot_off_ind - check for specific ID and offset in rule
+ * @lkup_exts: an array of protocol header extractions
+ * @prot_type: protocol type to check
+ * @off: expected offset of the extraction
+ *
+ * Check if the prot_ext has given protocol ID and offset
+ */
+static u8
+ice_find_prot_off_ind(struct ice_prot_lkup_ext *lkup_exts, u8 prot_type,
+		      u16 off)
+{
+	u8 j;
+
+	for (j = 0; j < lkup_exts->n_val_words; j++)
+		if (lkup_exts->fv_words[j].off == off &&
+		    lkup_exts->fv_words[j].prot_id == prot_type)
+			return j;
+
+	return ICE_MAX_CHAIN_WORDS;
+}
+
+/**
+ * ice_is_recipe_subset - check if recipe group policy is a subset of lookup
+ * @lkup_exts: an array of protocol header extractions
+ * @r_policy: preferred recipe grouping policy
+ *
+ * Helper function to check if a given recipe group is a subset of the rule,
+ * i.e. whether all of the words described by the recipe group exist in the
+ * advanced rule's lookup information
+ */
+static bool
+ice_is_recipe_subset(struct ice_prot_lkup_ext *lkup_exts,
+		     const struct ice_pref_recipe_group *r_policy)
+{
+	u8 ind[ICE_NUM_WORDS_RECIPE];
+	u8 count = 0;
+	u8 i;
+
+	/* check if everything in the r_policy is part of the entire rule */
+	for (i = 0; i < r_policy->n_val_pairs; i++) {
+		u8 j;
+
+		j = ice_find_prot_off_ind(lkup_exts, r_policy->pairs[i].prot_id,
+					  r_policy->pairs[i].off);
+		if (j >= ICE_MAX_CHAIN_WORDS)
+			return false;
+
+		/* store the indexes temporarily found by the find function
+		 * this will be used to mark the words as 'done'
+		 */
+		ind[count++] = j;
+	}
+
+	/* If the entire policy recipe was a true match, then mark the words
+	 * covered by the recipe as 'done', meaning that these words will be
+	 * clumped together in one recipe.
+	 * "Done" here means that when a recipe group matches or is a subset
+	 * of the given rule, all of its corresponding offsets are marked as
+	 * found, so the remaining recipes are created from whatever words
+	 * are left.
+	 */
+	for (i = 0; i < count; i++) {
+		u8 in = ind[i];
+
+		ice_set_bit(in, lkup_exts->done);
+	}
+	return true;
+}
+
+/**
+ * ice_create_first_fit_recp_def - Create a recipe grouping
+ * @hw: pointer to the hardware structure
+ * @lkup_exts: an array of protocol header extractions
+ * @rg_list: pointer to a list that stores new recipe groups
+ * @recp_cnt: pointer to a variable that stores returned number of recipe groups
+ *
+ * Using a first-fit algorithm, take all the words that are still not done
+ * and start grouping them into 4-word groups. Each group makes up one
+ * recipe.
+ */
+static enum ice_status
+ice_create_first_fit_recp_def(struct ice_hw *hw,
+			      struct ice_prot_lkup_ext *lkup_exts,
+			      struct LIST_HEAD_TYPE *rg_list,
+			      u8 *recp_cnt)
+{
+	struct ice_pref_recipe_group *grp = NULL;
+	u8 j;
+
+	*recp_cnt = 0;
+
+	/* Walk through every word in the rule to check if it is not done. If so
+	 * then this word needs to be part of a new recipe.
+	 */
+	for (j = 0; j < lkup_exts->n_val_words; j++)
+		if (!ice_is_bit_set(lkup_exts->done, j)) {
+			if (!grp ||
+			    grp->n_val_pairs == ICE_NUM_WORDS_RECIPE) {
+				struct ice_recp_grp_entry *entry;
+
+				entry = (struct ice_recp_grp_entry *)
+					ice_malloc(hw, sizeof(*entry));
+				if (!entry)
+					return ICE_ERR_NO_MEMORY;
+				LIST_ADD(&entry->l_entry, rg_list);
+				grp = &entry->r_group;
+				(*recp_cnt)++;
+			}
+
+			grp->pairs[grp->n_val_pairs].prot_id =
+				lkup_exts->fv_words[j].prot_id;
+			grp->pairs[grp->n_val_pairs].off =
+				lkup_exts->fv_words[j].off;
+			grp->n_val_pairs++;
+		}
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_fill_fv_word_index - fill in the field vector indices for a recipe group
+ * @hw: pointer to the hardware structure
+ * @fv_list: field vector with the extraction sequence information
+ * @rg_list: recipe groupings with protocol-offset pairs
+ *
+ * Helper function to fill in the field vector indices for protocol-offset
+ * pairs. These indexes are then ultimately programmed into a recipe.
+ */
+static void
+ice_fill_fv_word_index(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list,
+		       struct LIST_HEAD_TYPE *rg_list)
+{
+	struct ice_sw_fv_list_entry *fv;
+	struct ice_recp_grp_entry *rg;
+	struct ice_fv_word *fv_ext;
+
+	if (LIST_EMPTY(fv_list))
+		return;
+
+	fv = LIST_FIRST_ENTRY(fv_list, struct ice_sw_fv_list_entry, list_entry);
+	fv_ext = fv->fv_ptr->ew;
+
+	LIST_FOR_EACH_ENTRY(rg, rg_list, ice_recp_grp_entry, l_entry) {
+		u8 i;
+
+		for (i = 0; i < rg->r_group.n_val_pairs; i++) {
+			struct ice_fv_word *pr;
+			u8 j;
+
+			pr = &rg->r_group.pairs[i];
+			for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
+				if (fv_ext[j].prot_id == pr->prot_id &&
+				    fv_ext[j].off == pr->off) {
+					/* Store index of field vector */
+					rg->fv_idx[i] = j;
+					break;
+				}
+		}
+	}
+}
+
+/**
+ * ice_add_sw_recipe - function to call AQ calls to create switch recipe
+ * @hw: pointer to hardware structure
+ * @rm: recipe management list entry
+ * @match_tun: if field vector index for tunnel needs to be programmed
+ */
+static enum ice_status
+ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
+		  bool match_tun)
+{
+	struct ice_aqc_recipe_data_elem *tmp;
+	struct ice_aqc_recipe_data_elem *buf;
+	struct ice_recp_grp_entry *entry;
+	enum ice_status status;
+	u16 recipe_count;
+	u8 chain_idx;
+	u8 recps = 0;
+
+	/* When more than one recipe is required, another recipe is needed to
+	 * chain them together. Matching a tunnel metadata ID takes up one of
+	 * the match fields in the chaining recipe, reducing by one the number
+	 * of recipes that can be chained.
+	 */
+	if (rm->n_grp_count > 1)
+		rm->n_grp_count++;
+	if (rm->n_grp_count > ICE_MAX_CHAIN_RECIPE ||
+	    (match_tun && rm->n_grp_count > (ICE_MAX_CHAIN_RECIPE - 1)))
+		return ICE_ERR_MAX_LIMIT;
+
+	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
+							    ICE_MAX_NUM_RECIPES,
+							    sizeof(*tmp));
+	if (!tmp)
+		return ICE_ERR_NO_MEMORY;
+
+	buf = (struct ice_aqc_recipe_data_elem *)
+		ice_calloc(hw, rm->n_grp_count, sizeof(*buf));
+	if (!buf) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_mem;
+	}
+
+	ice_zero_bitmap(rm->r_bitmap, ICE_MAX_NUM_RECIPES);
+	recipe_count = ICE_MAX_NUM_RECIPES;
+	status = ice_aq_get_recipe(hw, tmp, &recipe_count, ICE_SW_LKUP_MAC,
+				   NULL);
+	if (status || recipe_count == 0)
+		goto err_unroll;
+
+	/* Allocate the recipe resources, and configure them according to the
+	 * match fields from protocol headers and extracted field vectors.
+	 */
+	chain_idx = ICE_CHAIN_FV_INDEX_START -
+		ice_find_first_bit(available_result_ids,
+				   ICE_CHAIN_FV_INDEX_START + 1);
+	LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry, l_entry) {
+		u8 i;
+
+		status = ice_alloc_recipe(hw, &entry->rid);
+		if (status)
+			goto err_unroll;
+
+		/* Clear the result index of the located recipe, as this will be
+		 * updated, if needed, later in the recipe creation process.
+		 */
+		tmp[0].content.result_indx = 0;
+
+		buf[recps] = tmp[0];
+		buf[recps].recipe_indx = (u8)entry->rid;
+		/* If the recipe is a non-root recipe, its RID should be
+		 * programmed as 0 for the rules to be applied correctly.
+		 */
+		buf[recps].content.rid = 0;
+		ice_memset(&buf[recps].content.lkup_indx, 0,
+			   sizeof(buf[recps].content.lkup_indx),
+			   ICE_NONDMA_MEM);
+
+		/* All recipes use look-up field index 0 to match switch ID. */
+		buf[recps].content.lkup_indx[0] = 0;
+		buf[recps].content.mask[0] =
+			CPU_TO_LE16(ICE_AQ_SW_ID_LKUP_MASK);
+		/* Setup lkup_indx 1..4 to INVALID/ignore and set the mask
+		 * to be 0
+		 */
+		for (i = 1; i <= ICE_NUM_WORDS_RECIPE; i++) {
+			buf[recps].content.lkup_indx[i] = 0x80;
+			buf[recps].content.mask[i] = 0;
+		}
+
+		for (i = 0; i < entry->r_group.n_val_pairs; i++) {
+			buf[recps].content.lkup_indx[i + 1] = entry->fv_idx[i];
+			buf[recps].content.mask[i + 1] = CPU_TO_LE16(0xFFFF);
+		}
+
+		if (rm->n_grp_count > 1) {
+			entry->chain_idx = chain_idx;
+			buf[recps].content.result_indx =
+				ICE_AQ_RECIPE_RESULT_EN |
+				((chain_idx << ICE_AQ_RECIPE_RESULT_DATA_S) &
+				 ICE_AQ_RECIPE_RESULT_DATA_M);
+			ice_clear_bit(ICE_CHAIN_FV_INDEX_START - chain_idx,
+				      available_result_ids);
+			chain_idx = ICE_CHAIN_FV_INDEX_START -
+				ice_find_first_bit(available_result_ids,
+						   ICE_CHAIN_FV_INDEX_START +
+						   1);
+		}
+
+		/* fill recipe dependencies */
+		ice_zero_bitmap((ice_bitmap_t *)buf[recps].recipe_bitmap,
+				ICE_MAX_NUM_RECIPES);
+		ice_set_bit(buf[recps].recipe_indx,
+			    (ice_bitmap_t *)buf[recps].recipe_bitmap);
+		buf[recps].content.act_ctrl_fwd_priority = rm->priority;
+		recps++;
+	}
+
+	if (rm->n_grp_count == 1) {
+		rm->root_rid = buf[0].recipe_indx;
+		ice_set_bit(buf[0].recipe_indx, rm->r_bitmap);
+		buf[0].content.rid = rm->root_rid | ICE_AQ_RECIPE_ID_IS_ROOT;
+		if (sizeof(buf[0].recipe_bitmap) >= sizeof(rm->r_bitmap)) {
+			ice_memcpy(buf[0].recipe_bitmap, rm->r_bitmap,
+				   sizeof(buf[0].recipe_bitmap),
+				   ICE_NONDMA_TO_NONDMA);
+		} else {
+			status = ICE_ERR_BAD_PTR;
+			goto err_unroll;
+		}
+		/* Applicable only for a ROOT_RECIPE: set the fwd_priority for
+		 * the recipe being created, if specified by the user. Any
+		 * advanced switch filter that results in a new extraction
+		 * sequence usually ends up creating a new recipe of type
+		 * ROOT, and recipes are usually associated with profiles. A
+		 * switch rule referring to the newly created recipe needs to
+		 * have either 'fwd' or 'join' priority, otherwise switch rule
+		 * evaluation will not happen correctly. In other words, for a
+		 * switch rule to be evaluated on a priority basis, the recipe
+		 * needs a priority; otherwise it will be evaluated last.
+		 */
+		buf[0].content.act_ctrl_fwd_priority = rm->priority;
+	} else {
+		struct ice_recp_grp_entry *last_chain_entry;
+		u16 rid, i = 0;
+
+		/* Allocate the last recipe that will chain the outcomes of the
+		 * other recipes together
+		 */
+		status = ice_alloc_recipe(hw, &rid);
+		if (status)
+			goto err_unroll;
+
+		buf[recps].recipe_indx = (u8)rid;
+		buf[recps].content.rid = (u8)rid;
+		buf[recps].content.rid |= ICE_AQ_RECIPE_ID_IS_ROOT;
+		/* the newly created entry should also be part of rg_list to
+		 * make sure we have a complete recipe
+		 */
+		last_chain_entry = (struct ice_recp_grp_entry *)ice_malloc(hw,
+			sizeof(*last_chain_entry));
+		if (!last_chain_entry) {
+			status = ICE_ERR_NO_MEMORY;
+			goto err_unroll;
+		}
+		last_chain_entry->rid = rid;
+		ice_memset(&buf[recps].content.lkup_indx, 0,
+			   sizeof(buf[recps].content.lkup_indx),
+			   ICE_NONDMA_MEM);
+		buf[recps].content.lkup_indx[i] = hw->port_info->sw_id;
+		buf[recps].content.mask[i] =
+			CPU_TO_LE16(ICE_AQ_SW_ID_LKUP_MASK);
+		for (i = 1; i <= ICE_NUM_WORDS_RECIPE; i++) {
+			buf[recps].content.lkup_indx[i] =
+				ICE_AQ_RECIPE_LKUP_IGNORE;
+			buf[recps].content.mask[i] = 0;
+		}
+
+		i = 1;
+		/* update r_bitmap with the recp that is used for chaining */
+		ice_set_bit(rid, rm->r_bitmap);
+		/* this is the recipe that chains all the other recipes, so it
+		 * should not itself have a chaining ID
+		 */
+		last_chain_entry->chain_idx = ICE_INVAL_CHAIN_IND;
+		LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry,
+				    l_entry) {
+			last_chain_entry->fv_idx[i] = entry->chain_idx;
+			buf[recps].content.lkup_indx[i] = entry->chain_idx;
+			buf[recps].content.mask[i++] = CPU_TO_LE16(0xFFFF);
+			ice_set_bit(entry->rid, rm->r_bitmap);
+		}
+		LIST_ADD(&last_chain_entry->l_entry, &rm->rg_list);
+		if (sizeof(buf[recps].recipe_bitmap) >=
+		    sizeof(rm->r_bitmap)) {
+			ice_memcpy(buf[recps].recipe_bitmap, rm->r_bitmap,
+				   sizeof(buf[recps].recipe_bitmap),
+				   ICE_NONDMA_TO_NONDMA);
+		} else {
+			status = ICE_ERR_BAD_PTR;
+			goto err_unroll;
+		}
+		buf[recps].content.act_ctrl_fwd_priority = rm->priority;
+
+		/* To differentiate among different UDP tunnels, a metadata ID
+		 * flag is used.
+		 */
+		if (match_tun) {
+			buf[recps].content.lkup_indx[i] = ICE_TUN_FLAG_FV_IND;
+			buf[recps].content.mask[i] =
+				CPU_TO_LE16(ICE_TUN_FLAG_MASK);
+		}
+
+		recps++;
+		rm->root_rid = (u8)rid;
+	}
+	status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
+	if (status)
+		goto err_unroll;
+
+	status = ice_aq_add_recipe(hw, buf, rm->n_grp_count, NULL);
+	ice_release_change_lock(hw);
+	if (status)
+		goto err_unroll;
+
+	/* Add every recipe that was just created to the recipe
+	 * bookkeeping list
+	 */
+	LIST_FOR_EACH_ENTRY(entry, &rm->rg_list, ice_recp_grp_entry, l_entry) {
+		struct ice_switch_info *sw = hw->switch_info;
+		struct ice_sw_recipe *recp;
+
+		recp = &sw->recp_list[entry->rid];
+		recp->root_rid = entry->rid;
+		ice_memcpy(&recp->ext_words, entry->r_group.pairs,
+			   entry->r_group.n_val_pairs *
+			   sizeof(struct ice_fv_word),
+			   ICE_NONDMA_TO_NONDMA);
+
+		recp->n_ext_words = entry->r_group.n_val_pairs;
+		recp->chain_idx = entry->chain_idx;
+		recp->recp_created = true;
+		recp->big_recp = false;
+	}
+	rm->root_buf = buf;
+	ice_free(hw, tmp);
+	return status;
+
+err_unroll:
+err_mem:
+	ice_free(hw, tmp);
+	ice_free(hw, buf);
+	return status;
+}
+
+/**
+ * ice_create_recipe_group - creates recipe group
+ * @hw: pointer to hardware structure
+ * @rm: recipe management list entry
+ * @lkup_exts: lookup elements
+ */
+static enum ice_status
+ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm,
+			struct ice_prot_lkup_ext *lkup_exts)
+{
+	struct ice_recp_grp_entry *entry;
+	struct ice_recp_grp_entry *tmp;
+	enum ice_status status;
+	u8 recp_count = 0;
+	u16 groups, i;
+
+	rm->n_grp_count = 0;
+
+	/* Each switch recipe can match up to 5 words or metadata. One word in
+	 * each recipe is used to match the switch ID. Four words are left for
+	 * matching other values. If the new advanced recipe requires more than
+	 * 4 words, it needs to be split into multiple recipes which are chained
+	 * together using the intermediate result that each produces as input to
+	 * the other recipes in the sequence.
+	 */
+	groups = ARRAY_SIZE(ice_recipe_pack);
+
+	/* Check if any of the preferred recipes from the grouping policy
+	 * matches.
+	 */
+	for (i = 0; i < groups; i++)
+		/* Check if the recipe from the preferred grouping matches
+		 * or is a subset of the fields that needs to be looked up.
+		 */
+		if (ice_is_recipe_subset(lkup_exts, &ice_recipe_pack[i])) {
+			/* This recipe can be used by itself or grouped with
+			 * other recipes.
+			 */
+			entry = (struct ice_recp_grp_entry *)
+				ice_malloc(hw, sizeof(*entry));
+			if (!entry) {
+				status = ICE_ERR_NO_MEMORY;
+				goto err_unroll;
+			}
+			entry->r_group = ice_recipe_pack[i];
+			LIST_ADD(&entry->l_entry, &rm->rg_list);
+			rm->n_grp_count++;
+		}
+
+	/* Create recipes for words that are marked not done by packing them
+	 * as best fit.
+	 */
+	status = ice_create_first_fit_recp_def(hw, lkup_exts,
+					       &rm->rg_list, &recp_count);
+	if (!status) {
+		rm->n_grp_count += recp_count;
+		rm->n_ext_words = lkup_exts->n_val_words;
+		ice_memcpy(&rm->ext_words, lkup_exts->fv_words,
+			   sizeof(rm->ext_words), ICE_NONDMA_TO_NONDMA);
+		goto out;
+	}
+
+err_unroll:
+	LIST_FOR_EACH_ENTRY_SAFE(entry, tmp, &rm->rg_list, ice_recp_grp_entry,
+				 l_entry) {
+		LIST_DEL(&entry->l_entry);
+		ice_free(hw, entry);
+	}
+
+out:
+	return status;
+}
+
+/**
+ * ice_get_fv - get field vectors/extraction sequences for spec. lookup types
+ * @hw: pointer to hardware structure
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @fv_list: pointer to a list that holds the returned field vectors
+ */
+static enum ice_status
+ice_get_fv(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
+	   struct LIST_HEAD_TYPE *fv_list)
+{
+	enum ice_status status;
+	u16 *prot_ids;
+	u16 i;
+
+	prot_ids = (u16 *)ice_calloc(hw, lkups_cnt, sizeof(*prot_ids));
+	if (!prot_ids)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < lkups_cnt; i++)
+		if (!ice_prot_type_to_id(lkups[i].type, &prot_ids[i])) {
+			status = ICE_ERR_CFG;
+			goto free_mem;
+		}
+
+	/* Find field vectors that include all specified protocol types */
+	status = ice_get_sw_fv_list(hw, prot_ids, lkups_cnt, fv_list);
+
+free_mem:
+	ice_free(hw, prot_ids);
+	return status;
+}
+
+/**
+ * ice_add_adv_recipe - Add an advanced recipe that is not part of the default
+ * @hw: pointer to hardware structure
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *  structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @rinfo: other information regarding the rule e.g. priority and action info
+ * @rid: return the recipe ID of the recipe created
+ */
+static enum ice_status
+ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		   u16 lkups_cnt, struct ice_adv_rule_info *rinfo, u16 *rid)
+{
+	struct ice_prot_lkup_ext *lkup_exts;
+	struct ice_recp_grp_entry *r_entry;
+	struct ice_sw_fv_list_entry *fvit;
+	struct ice_recp_grp_entry *r_tmp;
+	struct ice_sw_fv_list_entry *tmp;
+	enum ice_status status = ICE_SUCCESS;
+	struct ice_sw_recipe *rm;
+	bool match_tun = false;
+	u8 i;
+
+	if (!lkups_cnt)
+		return ICE_ERR_PARAM;
+
+	lkup_exts = (struct ice_prot_lkup_ext *)
+		ice_malloc(hw, sizeof(*lkup_exts));
+	if (!lkup_exts)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Determine the number of words to be matched and if it exceeds a
+	 * recipe's restrictions
+	 */
+	for (i = 0; i < lkups_cnt; i++) {
+		u16 count;
+
+		if (lkups[i].type >= ICE_PROTOCOL_LAST) {
+			status = ICE_ERR_CFG;
+			goto err_free_lkup_exts;
+		}
+
+		count = ice_fill_valid_words(&lkups[i], lkup_exts);
+		if (!count) {
+			status = ICE_ERR_CFG;
+			goto err_free_lkup_exts;
+		}
+	}
+
+	*rid = ice_find_recp(hw, lkup_exts);
+	if (*rid < ICE_MAX_NUM_RECIPES)
+		/* Success if found a recipe that matches the existing criteria */
+		goto err_free_lkup_exts;
+
+	/* Recipe we need does not exist, add a recipe */
+
+	rm = (struct ice_sw_recipe *)ice_malloc(hw, sizeof(*rm));
+	if (!rm) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_free_lkup_exts;
+	}
+
+	/* Get field vectors that contain fields extracted from all the protocol
+	 * headers being programmed.
+	 */
+	INIT_LIST_HEAD(&rm->fv_list);
+	INIT_LIST_HEAD(&rm->rg_list);
+
+	status = ice_get_fv(hw, lkups, lkups_cnt, &rm->fv_list);
+	if (status)
+		goto err_unroll;
+
+	/* Group match words into recipes using preferred recipe grouping
+	 * criteria.
+	 */
+	status = ice_create_recipe_group(hw, rm, lkup_exts);
+	if (status)
+		goto err_unroll;
+
+	/* There is only one profile for UDP tunnels. So, it is necessary to
+	 * use a metadata ID flag to differentiate between tunnel types. A
+	 * separate recipe needs to be used for the metadata.
+	 */
+	if ((rinfo->tun_type == ICE_SW_TUN_VXLAN_GPE ||
+	     rinfo->tun_type == ICE_SW_TUN_GENEVE ||
+	     rinfo->tun_type == ICE_SW_TUN_VXLAN) && rm->n_grp_count > 1)
+		match_tun = true;
+
+	/* set the recipe priority if specified */
+	rm->priority = rinfo->priority ? rinfo->priority : 0;
+
+	/* Find offsets from the field vector. Pick the first one for all the
+	 * recipes.
+	 */
+	ice_fill_fv_word_index(hw, &rm->fv_list, &rm->rg_list);
+	status = ice_add_sw_recipe(hw, rm, match_tun);
+	if (status)
+		goto err_unroll;
+
+	/* Associate all the recipes created with all the profiles in the
+	 * common field vector.
+	 */
+	LIST_FOR_EACH_ENTRY(fvit, &rm->fv_list, ice_sw_fv_list_entry,
+			    list_entry) {
+		ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_RECIPES);
+
+		status = ice_aq_get_recipe_to_profile(hw, fvit->profile_id,
+						      (u8 *)r_bitmap, NULL);
+		if (status)
+			goto err_unroll;
+
+		ice_or_bitmap(rm->r_bitmap, r_bitmap, rm->r_bitmap,
+			      ICE_MAX_NUM_RECIPES);
+		status = ice_acquire_change_lock(hw, ICE_RES_WRITE);
+		if (status)
+			goto err_unroll;
+
+		status = ice_aq_map_recipe_to_profile(hw, fvit->profile_id,
+						      (u8 *)rm->r_bitmap,
+						      NULL);
+		ice_release_change_lock(hw);
+
+		if (status)
+			goto err_unroll;
+	}
+
+	*rid = rm->root_rid;
+	ice_memcpy(&hw->switch_info->recp_list[*rid].lkup_exts,
+		   lkup_exts, sizeof(*lkup_exts), ICE_NONDMA_TO_NONDMA);
+err_unroll:
+	LIST_FOR_EACH_ENTRY_SAFE(r_entry, r_tmp, &rm->rg_list,
+				 ice_recp_grp_entry, l_entry) {
+		LIST_DEL(&r_entry->l_entry);
+		ice_free(hw, r_entry);
+	}
+
+	LIST_FOR_EACH_ENTRY_SAFE(fvit, tmp, &rm->fv_list, ice_sw_fv_list_entry,
+				 list_entry) {
+		LIST_DEL(&fvit->list_entry);
+		ice_free(hw, fvit);
+	}
+
+	if (rm->root_buf)
+		ice_free(hw, rm->root_buf);
+
+	ice_free(hw, rm);
+
+err_free_lkup_exts:
+	ice_free(hw, lkup_exts);
+
+	return status;
+}
+
+#define ICE_MAC_HDR_OFFSET	0
+#define ICE_IP_HDR_OFFSET	14
+#define ICE_GRE_HDR_OFFSET	34
+#define ICE_MAC_IL_HDR_OFFSET	42
+#define ICE_IP_IL_HDR_OFFSET	56
+#define ICE_L4_HDR_OFFSET	34
+#define ICE_UDP_TUN_HDR_OFFSET	42
+
+/**
+ * ice_find_dummy_packet - find dummy packet with given match criteria
+ *
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @tun_type: tunnel type from the match criteria
+ * @pkt: dummy packet to fill according to filter match criteria
+ * @pkt_len: packet length of dummy packet
+ */
+static void
+ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
+		      enum ice_sw_tunnel_type tun_type, const u8 **pkt,
+		      u16 *pkt_len)
+{
+	u16 i;
+
+	if (tun_type == ICE_SW_TUN_NVGRE || tun_type == ICE_ALL_TUNNELS) {
+		*pkt = dummy_gre_packet;
+		*pkt_len = sizeof(dummy_gre_packet);
+		return;
+	}
+
+	if (tun_type == ICE_SW_TUN_VXLAN || tun_type == ICE_SW_TUN_GENEVE ||
+	    tun_type == ICE_SW_TUN_VXLAN_GPE) {
+		*pkt = dummy_udp_tun_packet;
+		*pkt_len = sizeof(dummy_udp_tun_packet);
+		return;
+	}
+
+	for (i = 0; i < lkups_cnt; i++) {
+		if (lkups[i].type == ICE_UDP_ILOS) {
+			*pkt = dummy_udp_tun_packet;
+			*pkt_len = sizeof(dummy_udp_tun_packet);
+			return;
+		}
+	}
+
+	*pkt = dummy_tcp_tun_packet;
+	*pkt_len = sizeof(dummy_tcp_tun_packet);
+}
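The selection order above can be modeled in isolation: the tunnel type wins first, then any inner UDP lookup forces the UDP tunnel template, and the TCP tunnel template is the fallback. A simplified standalone mirror (hypothetical enum and function names; the driver returns actual packet buffers and lengths):

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for the tunnel types and
 * dummy-packet templates used by ice_find_dummy_packet(). */
enum tun { TUN_NONE, TUN_NVGRE, TUN_VXLAN, TUN_GENEVE, TUN_VXLAN_GPE, TUN_ALL };
enum pkt { PKT_GRE, PKT_UDP_TUN, PKT_TCP_TUN };

static enum pkt pick_dummy(enum tun t, int has_udp_ilos_lkup)
{
	if (t == TUN_NVGRE || t == TUN_ALL)
		return PKT_GRE;		/* GRE-encapsulated template */
	if (t == TUN_VXLAN || t == TUN_GENEVE || t == TUN_VXLAN_GPE)
		return PKT_UDP_TUN;	/* UDP tunnel template */
	if (has_udp_ilos_lkup)
		return PKT_UDP_TUN;	/* inner UDP lookup forces UDP template */
	return PKT_TCP_TUN;		/* default: TCP tunnel template */
}
```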
+
+/**
+ * ice_fill_adv_dummy_packet - fill a dummy packet with given match criteria
+ *
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @tun_type: indicates whether the dummy packet is supposed to be a tunnel packet
+ * @s_rule: stores rule information from the match criteria
+ * @dummy_pkt: dummy packet to fill according to filter match criteria
+ * @pkt_len: packet length of dummy packet
+ */
+static void
+ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
+			  enum ice_sw_tunnel_type tun_type,
+			  struct ice_aqc_sw_rules_elem *s_rule,
+			  const u8 *dummy_pkt, u16 pkt_len)
+{
+	u8 *pkt;
+	u16 i;
+
+	/* Start with a packet with a pre-defined/dummy content. Then, fill
+	 * in the header values to be looked up or matched.
+	 */
+	pkt = s_rule->pdata.lkup_tx_rx.hdr;
+
+	ice_memcpy(pkt, dummy_pkt, pkt_len, ICE_NONDMA_TO_NONDMA);
+
+	for (i = 0; i < lkups_cnt; i++) {
+		u32 len, pkt_off, hdr_size, field_off;
+
+		switch (lkups[i].type) {
+		case ICE_MAC_OFOS:
+		case ICE_MAC_IL:
+			pkt_off = offsetof(struct ice_ether_hdr, dst_addr) +
+				((lkups[i].type == ICE_MAC_IL) ?
+				 ICE_MAC_IL_HDR_OFFSET : 0);
+			len = sizeof(lkups[i].h_u.eth_hdr.dst_addr);
+			if ((tun_type == ICE_SW_TUN_VXLAN ||
+			     tun_type == ICE_SW_TUN_GENEVE ||
+			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
+			     lkups[i].type == ICE_MAC_IL) {
+				pkt_off += sizeof(struct ice_udp_tnl_hdr);
+			}
+
+			ice_memcpy(&pkt[pkt_off],
+				   &lkups[i].h_u.eth_hdr.dst_addr, len,
+				   ICE_NONDMA_TO_NONDMA);
+			pkt_off = offsetof(struct ice_ether_hdr, src_addr) +
+				((lkups[i].type == ICE_MAC_IL) ?
+				 ICE_MAC_IL_HDR_OFFSET : 0);
+			len = sizeof(lkups[i].h_u.eth_hdr.src_addr);
+			if ((tun_type == ICE_SW_TUN_VXLAN ||
+			     tun_type == ICE_SW_TUN_GENEVE ||
+			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
+			     lkups[i].type == ICE_MAC_IL) {
+				pkt_off += sizeof(struct ice_udp_tnl_hdr);
+			}
+			ice_memcpy(&pkt[pkt_off],
+				   &lkups[i].h_u.eth_hdr.src_addr, len,
+				   ICE_NONDMA_TO_NONDMA);
+			if (lkups[i].h_u.eth_hdr.ethtype_id) {
+				pkt_off = offsetof(struct ice_ether_hdr,
+						   ethtype_id) +
+					((lkups[i].type == ICE_MAC_IL) ?
+					 ICE_MAC_IL_HDR_OFFSET : 0);
+				len = sizeof(lkups[i].h_u.eth_hdr.ethtype_id);
+				if ((tun_type == ICE_SW_TUN_VXLAN ||
+				     tun_type == ICE_SW_TUN_GENEVE ||
+				     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
+				     lkups[i].type == ICE_MAC_IL) {
+					pkt_off +=
+						sizeof(struct ice_udp_tnl_hdr);
+				}
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.eth_hdr.ethtype_id,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			break;
+		case ICE_IPV4_OFOS:
+			hdr_size = sizeof(struct ice_ipv4_hdr);
+			if (lkups[i].h_u.ipv4_hdr.dst_addr) {
+				pkt_off = ICE_IP_HDR_OFFSET +
+					   offsetof(struct ice_ipv4_hdr,
+						    dst_addr);
+				field_off = offsetof(struct ice_ipv4_hdr,
+						     dst_addr);
+				len = hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.ipv4_hdr.dst_addr,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			if (lkups[i].h_u.ipv4_hdr.src_addr) {
+				pkt_off = ICE_IP_HDR_OFFSET +
+					   offsetof(struct ice_ipv4_hdr,
+						    src_addr);
+				field_off = offsetof(struct ice_ipv4_hdr,
+						     src_addr);
+				len = hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.ipv4_hdr.src_addr,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			break;
+		case ICE_IPV4_IL:
+			break;
+		case ICE_TCP_IL:
+		case ICE_UDP_ILOS:
+		case ICE_SCTP_IL:
+			hdr_size = sizeof(struct ice_udp_tnl_hdr);
+			if (lkups[i].h_u.l4_hdr.dst_port) {
+				pkt_off = ICE_L4_HDR_OFFSET +
+					   offsetof(struct ice_l4_hdr,
+						    dst_port);
+				field_off = offsetof(struct ice_l4_hdr,
+						     dst_port);
+				len = hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.l4_hdr.dst_port,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			if (lkups[i].h_u.l4_hdr.src_port) {
+				pkt_off = ICE_L4_HDR_OFFSET +
+					offsetof(struct ice_l4_hdr, src_port);
+				field_off = offsetof(struct ice_l4_hdr,
+						     src_port);
+				len = hdr_size - field_off;
+				ice_memcpy(&pkt[pkt_off],
+					   &lkups[i].h_u.l4_hdr.src_port,
+					   len, ICE_NONDMA_TO_NONDMA);
+			}
+			break;
+		case ICE_VXLAN:
+		case ICE_GENEVE:
+		case ICE_VXLAN_GPE:
+			pkt_off = ICE_UDP_TUN_HDR_OFFSET +
+				   offsetof(struct ice_udp_tnl_hdr, vni);
+			field_off = offsetof(struct ice_udp_tnl_hdr, vni);
+			len = sizeof(struct ice_udp_tnl_hdr) - field_off;
+			ice_memcpy(&pkt[pkt_off], &lkups[i].h_u.tnl_hdr.vni,
+				   len, ICE_NONDMA_TO_NONDMA);
+			break;
+		default:
+			break;
+		}
+	}
+	s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(pkt_len);
+}
+
+/**
+ * ice_find_adv_rule_entry - Search a rule entry
+ * @hw: pointer to the hardware structure
+ * @lkups: lookup elements or match criteria for the advanced recipe, one
+ *	   structure per protocol header
+ * @lkups_cnt: number of protocols
+ * @recp_id: recipe ID for which we are finding the rule
+ * @rinfo: other information regarding the rule e.g. priority and action info
+ *
+ * Helper function to search for a given advanced rule entry
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_adv_fltr_mgmt_list_entry *
+ice_find_adv_rule_entry(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+			u16 lkups_cnt, u8 recp_id,
+			struct ice_adv_rule_info *rinfo)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_itr;
+	struct ice_switch_info *sw = hw->switch_info;
+	int i;
+
+	LIST_FOR_EACH_ENTRY(list_itr, &sw->recp_list[recp_id].filt_rules,
+			    ice_adv_fltr_mgmt_list_entry, list_entry) {
+		bool lkups_matched = true;
+
+		if (lkups_cnt != list_itr->lkups_cnt)
+			continue;
+		for (i = 0; i < list_itr->lkups_cnt; i++)
+			if (memcmp(&list_itr->lkups[i], &lkups[i],
+				   sizeof(*lkups))) {
+				lkups_matched = false;
+				break;
+			}
+		if (rinfo->sw_act.flag == list_itr->rule_info.sw_act.flag &&
+		    rinfo->tun_type == list_itr->rule_info.tun_type &&
+		    lkups_matched)
+			return list_itr;
+	}
+	return NULL;
+}
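The duplicate check above compares lookup arrays element-wise with memcmp() and also requires the action flag and tunnel type to agree. A minimal standalone model of that matching rule (hypothetical struct; the real ice_adv_lkup_elem carries full header and mask unions, and memcmp() is only reliable because the entries are zero-initialized):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for the lookup element compared by
 * ice_find_adv_rule_entry() (hypothetical layout, no padding). */
struct lkup {
	int type;
	unsigned short hdr[4];
	unsigned short mask[4];
};

static int rules_match(const struct lkup *a, int a_cnt,
		       const struct lkup *b, int b_cnt,
		       int a_flag, int b_flag)
{
	int i;

	if (a_cnt != b_cnt || a_flag != b_flag)
		return 0;
	/* Byte-wise comparison, like the memcmp() in the driver: entries
	 * must be fully zero-initialized for this to be reliable. */
	for (i = 0; i < a_cnt; i++)
		if (memcmp(&a[i], &b[i], sizeof(*a)))
			return 0;
	return 1;
}
```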
+
+/**
+ * ice_adv_add_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @m_entry: pointer to current adv filter management list entry
+ * @cur_fltr: filter information from the book keeping entry
+ * @new_fltr: filter information with the new VSI to be added
+ *
+ * Call AQ command to add or update previously created VSI list with new VSI.
+ *
+ * Helper function to do book keeping associated with adding filter information
+ * The algorithm to do the bookkeeping is described below:
+ * When a VSI needs to subscribe to a given advanced filter
+ *	if only one VSI has been added till now
+ *		Allocate a new VSI list and add two VSIs
+ *		to this list using switch rule command
+ *		Update the previously created switch rule with the
+ *		newly created VSI list ID
+ *	if a VSI list was previously created
+ *		Add the new VSI to the previously created VSI list set
+ *		using the update switch rule command
+ */
+static enum ice_status
+ice_adv_add_update_vsi_list(struct ice_hw *hw,
+			    struct ice_adv_fltr_mgmt_list_entry *m_entry,
+			    struct ice_adv_rule_info *cur_fltr,
+			    struct ice_adv_rule_info *new_fltr)
+{
+	enum ice_status status;
+	u16 vsi_list_id = 0;
+
+	if (cur_fltr->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	    cur_fltr->sw_act.fltr_act == ICE_FWD_TO_QGRP)
+		return ICE_ERR_NOT_IMPL;
+
+	if (cur_fltr->sw_act.fltr_act == ICE_DROP_PACKET &&
+	    new_fltr->sw_act.fltr_act == ICE_DROP_PACKET)
+		return ICE_ERR_ALREADY_EXISTS;
+
+	if ((new_fltr->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	     new_fltr->sw_act.fltr_act == ICE_FWD_TO_QGRP) &&
+	    (cur_fltr->sw_act.fltr_act == ICE_FWD_TO_VSI ||
+	     cur_fltr->sw_act.fltr_act == ICE_FWD_TO_VSI_LIST))
+		return ICE_ERR_NOT_IMPL;
+
+	if (m_entry->vsi_count < 2 && !m_entry->vsi_list_info) {
+		 /* Only one entry existed in the mapping and it was not already
+		  * a part of a VSI list. So, create a VSI list with the old and
+		  * new VSIs.
+		  */
+		struct ice_fltr_info tmp_fltr;
+		u16 vsi_handle_arr[2];
+
+		/* A rule already exists with the new VSI being added */
+		if (cur_fltr->sw_act.fwd_id.hw_vsi_id ==
+		    new_fltr->sw_act.fwd_id.hw_vsi_id)
+			return ICE_ERR_ALREADY_EXISTS;
+
+		vsi_handle_arr[0] = cur_fltr->sw_act.vsi_handle;
+		vsi_handle_arr[1] = new_fltr->sw_act.vsi_handle;
+		status = ice_create_vsi_list_rule(hw, &vsi_handle_arr[0], 2,
+						  &vsi_list_id,
+						  ICE_SW_LKUP_LAST);
+		if (status)
+			return status;
+
+		tmp_fltr.fltr_rule_id = cur_fltr->fltr_rule_id;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
+		tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+		/* Update the previous switch rule of "forward to VSI" to
+		 * "fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			return status;
+
+		cur_fltr->sw_act.fwd_id.vsi_list_id = vsi_list_id;
+		cur_fltr->sw_act.fltr_act = ICE_FWD_TO_VSI_LIST;
+		m_entry->vsi_list_info =
+			ice_create_vsi_list_map(hw, &vsi_handle_arr[0], 2,
+						vsi_list_id);
+	} else {
+		u16 vsi_handle = new_fltr->sw_act.vsi_handle;
+
+		if (!m_entry->vsi_list_info)
+			return ICE_ERR_CFG;
+
+		/* A rule already exists with the new VSI being added */
+		if (ice_is_bit_set(m_entry->vsi_list_info->vsi_map, vsi_handle))
+			return ICE_SUCCESS;
+
+		/* Update the previously created VSI list set with
+		 * the new VSI ID passed in
+		 */
+		vsi_list_id = cur_fltr->sw_act.fwd_id.vsi_list_id;
+
+		status = ice_update_vsi_list_rule(hw, &vsi_handle, 1,
+						  vsi_list_id, false,
+						  ice_aqc_opc_update_sw_rules,
+						  ICE_SW_LKUP_LAST);
+		/* update VSI list mapping info with new VSI ID */
+		if (!status)
+			ice_set_bit(vsi_handle,
+				    m_entry->vsi_list_info->vsi_map);
+	}
+	if (!status)
+		m_entry->vsi_count++;
+	return status;
+}
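The subscription algorithm in the comment above has two states: a plain forward-to-VSI rule that gets promoted to a VSI list on the second subscriber, and an existing list that is simply extended (idempotently) afterwards. A tiny standalone model of those transitions (hypothetical names; a 32-bit mask stands in for the driver's vsi_map bitmap, and the AQ commands are elided):

```c
#include <assert.h>

/* Tiny model of the VSI-list bookkeeping (hypothetical types). */
struct entry {
	unsigned int vsi_map;	/* one bit per VSI handle (max 32 here) */
	int vsi_count;
	int has_list;		/* 0: fwd-to-VSI rule, 1: fwd-to-VSI-list */
};

/* Returns 0 on success, -1 if the VSI is already subscribed. */
static int subscribe_vsi(struct entry *e, int vsi)
{
	if (e->vsi_map & (1u << vsi))
		return -1;		/* rule already exists for this VSI */
	if (e->vsi_count == 1 && !e->has_list)
		e->has_list = 1;	/* promote single rule to a VSI list */
	e->vsi_map |= 1u << vsi;
	e->vsi_count++;
	return 0;
}

/* Scenario: rule created for VSI 3, then VSIs 5 and 7 subscribe; a
 * repeat subscribe of 5 is rejected. Returns the final vsi_count. */
static int demo_final_count(void)
{
	struct entry e = { 1u << 3, 1, 0 };

	subscribe_vsi(&e, 5);	/* creates the VSI list (3, 5) */
	subscribe_vsi(&e, 7);	/* extends the existing list */
	return subscribe_vsi(&e, 5) == -1 ? e.vsi_count : -1;
}
```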
+
+/**
+ * ice_add_adv_rule - create an advanced switch rule
+ * @hw: pointer to the hardware structure
+ * @lkups: information on the words that need to be looked up. All words
+ * together make one recipe
+ * @lkups_cnt: num of entries in the lkups array
+ * @rinfo: other information related to the rule that needs to be programmed
+ * @added_entry: this will return recipe_id, rule_id and vsi_handle; should be
+ *               ignored in case of error.
+ *
+ * This function can program only 1 rule at a time. The lkups is used to
+ * describe all the words that form the "lookup" portion of the recipe.
+ * These words can span multiple protocols. Callers to this function need to
+ * pass in a list of protocol headers with lookup information along with a
+ * mask that determines which words are valid from the given protocol header.
+ * rinfo describes other information related to this rule such as forwarding
+ * IDs, priority of this rule, etc.
+ */
+enum ice_status
+ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
+		 struct ice_rule_query_data *added_entry)
+{
+	struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL;
+	u16 rid = 0, i, pkt_len, rule_buf_sz, vsi_handle;
+	struct ice_aqc_sw_rules_elem *s_rule;
+	struct LIST_HEAD_TYPE *rule_head;
+	struct ice_switch_info *sw;
+	enum ice_status status;
+	const u8 *pkt = NULL;
+	u32 act = 0;
+
+	if (!lkups_cnt)
+		return ICE_ERR_PARAM;
+
+	for (i = 0; i < lkups_cnt; i++) {
+		u16 j, *ptr;
+
+		/* Validate match masks to make sure they match complete 16-bit
+		 * words.
+		 */
+		ptr = (u16 *)&lkups[i].m_u;
+		for (j = 0; j < sizeof(lkups->m_u) / sizeof(u16); j++)
+			if (ptr[j] != 0 && ptr[j] != 0xffff)
+				return ICE_ERR_PARAM;
+	}
+
+	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
+	      rinfo->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	      rinfo->sw_act.fltr_act == ICE_DROP_PACKET))
+		return ICE_ERR_CFG;
+
+	vsi_handle = rinfo->sw_act.vsi_handle;
+	if (!ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI)
+		rinfo->sw_act.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, vsi_handle);
+	if (rinfo->sw_act.flag & ICE_FLTR_TX)
+		rinfo->sw_act.src = ice_get_hw_vsi_num(hw, vsi_handle);
+
+	status = ice_add_adv_recipe(hw, lkups, lkups_cnt, rinfo, &rid);
+	if (status)
+		return status;
+	m_entry = ice_find_adv_rule_entry(hw, lkups, lkups_cnt, rid, rinfo);
+	if (m_entry) {
+		/* The rule already exists with this recipe. Add the VSI to
+		 * its VSI list and bump vsi_count: if the rule was found for
+		 * the same VSI, nothing changes; if it was found for a
+		 * different VSI, add the new VSI to the rule's VSI list,
+		 * creating the list first (with the existing VSI ID and the
+		 * new VSI ID) if the rule is not using one yet.
+		 */
+		status = ice_adv_add_update_vsi_list(hw, m_entry,
+						     &m_entry->rule_info,
+						     rinfo);
+		if (added_entry) {
+			added_entry->rid = rid;
+			added_entry->rule_id = m_entry->rule_info.fltr_rule_id;
+			added_entry->vsi_handle = rinfo->sw_act.vsi_handle;
+		}
+		return status;
+	}
+	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
+			      &pkt_len);
+	rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
+	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rule_buf_sz);
+	if (!s_rule)
+		return ICE_ERR_NO_MEMORY;
+	act |= ICE_SINGLE_ACT_LB_ENABLE | ICE_SINGLE_ACT_LAN_ENABLE;
+	switch (rinfo->sw_act.fltr_act) {
+	case ICE_FWD_TO_VSI:
+		act |= (rinfo->sw_act.fwd_id.hw_vsi_id <<
+			ICE_SINGLE_ACT_VSI_ID_S) & ICE_SINGLE_ACT_VSI_ID_M;
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	case ICE_FWD_TO_Q:
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+		       ICE_SINGLE_ACT_Q_INDEX_M;
+		break;
+	case ICE_DROP_PACKET:
+		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
+		       ICE_SINGLE_ACT_VALID_BIT;
+		break;
+	default:
+		status = ICE_ERR_CFG;
+		goto err_ice_add_adv_rule;
+	}
+
+	/* Set the rule LOOKUP type based on the caller-specified 'rx' flag
+	 * instead of hardcoding it to be either LOOKUP_TX/RX.
+	 *
+	 * For 'RX', set the source to be the port number.
+	 * For 'TX', set the source to be the source HW VSI number (determined
+	 * by the caller).
+	 */
+	if (rinfo->rx) {
+		s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_RX);
+		s_rule->pdata.lkup_tx_rx.src =
+			CPU_TO_LE16(hw->port_info->lport);
+	} else {
+		s_rule->type = CPU_TO_LE16(ICE_AQC_SW_RULES_T_LKUP_TX);
+		s_rule->pdata.lkup_tx_rx.src = CPU_TO_LE16(rinfo->sw_act.src);
+	}
+
+	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid);
+	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
+
+	ice_fill_adv_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, s_rule,
+				  pkt, pkt_len);
+
+	status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
+				 rule_buf_sz, 1, ice_aqc_opc_add_sw_rules,
+				 NULL);
+	if (status)
+		goto err_ice_add_adv_rule;
+	adv_fltr = (struct ice_adv_fltr_mgmt_list_entry *)
+		ice_malloc(hw, sizeof(struct ice_adv_fltr_mgmt_list_entry));
+	if (!adv_fltr) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_ice_add_adv_rule;
+	}
+
+	adv_fltr->lkups = (struct ice_adv_lkup_elem *)
+		ice_memdup(hw, lkups, lkups_cnt * sizeof(*lkups),
+			   ICE_NONDMA_TO_NONDMA);
+	if (!adv_fltr->lkups) {
+		status = ICE_ERR_NO_MEMORY;
+		goto err_ice_add_adv_rule;
+	}
+
+	adv_fltr->lkups_cnt = lkups_cnt;
+	adv_fltr->rule_info = *rinfo;
+	adv_fltr->rule_info.fltr_rule_id =
+		LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+	sw = hw->switch_info;
+	sw->recp_list[rid].adv_rule = true;
+	rule_head = &sw->recp_list[rid].filt_rules;
+
+	if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI) {
+		struct ice_fltr_info tmp_fltr;
+
+		tmp_fltr.fltr_rule_id =
+			LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, vsi_handle);
+		tmp_fltr.vsi_handle = vsi_handle;
+		/* Update the newly added switch rule to forward to the VSI */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status)
+			goto err_ice_add_adv_rule;
+		adv_fltr->vsi_count = 1;
+	}
+
+	/* Add rule entry to book keeping list */
+	LIST_ADD(&adv_fltr->list_entry, rule_head);
+	if (added_entry) {
+		added_entry->rid = rid;
+		added_entry->rule_id = adv_fltr->rule_info.fltr_rule_id;
+		added_entry->vsi_handle = rinfo->sw_act.vsi_handle;
+	}
+err_ice_add_adv_rule:
+	if (status && adv_fltr) {
+		ice_free(hw, adv_fltr->lkups);
+		ice_free(hw, adv_fltr);
+	}
+
+	ice_free(hw, s_rule);
+
+	return status;
+}
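The validation loop at the top of ice_add_adv_rule() rejects any match mask whose 16-bit words are neither fully set nor fully clear, because recipes extract whole 16-bit words from the packet. That check, isolated as a standalone sketch (hypothetical helper name):

```c
#include <assert.h>
#include <stddef.h>

/* Mirrors the mask check in ice_add_adv_rule(): every 16-bit word of a
 * match mask must be 0x0000 or 0xffff, since the hardware recipe
 * extracts and compares whole 16-bit words (simplified sketch). */
static int masks_valid(const unsigned short *mask, size_t n_words)
{
	size_t i;

	for (i = 0; i < n_words; i++)
		if (mask[i] != 0 && mask[i] != 0xffff)
			return 0;	/* partial word mask: reject */
	return 1;
}
```

In the driver an invalid mask yields ICE_ERR_PARAM before any recipe or rule is programmed.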
 /**
  * ice_replay_fltr - Replay all the filters stored by a specific list head
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index fd61c0eea..890df13dd 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -172,11 +172,21 @@ struct ice_sw_act_ctrl {
 	u8 qgrp_size;
 };
 
+struct ice_rule_query_data {
+	/* Recipe ID for which the requested rule was added */
+	u16 rid;
+	/* Rule ID that was added or is supposed to be removed */
+	u16 rule_id;
+	/* vsi_handle for which Rule was added or is supposed to be removed */
+	u16 vsi_handle;
+};
+
 struct ice_adv_rule_info {
 	enum ice_sw_tunnel_type tun_type;
 	struct ice_sw_act_ctrl sw_act;
 	u32 priority;
 	u8 rx; /* true means LOOKUP_RX otherwise LOOKUP_TX */
+	u16 fltr_rule_id;
 };
 
 /* A collection of one or more four word recipe */
@@ -222,6 +232,7 @@ struct ice_sw_recipe {
 	/* Profiles this recipe should be associated with */
 	struct LIST_HEAD_TYPE fv_list;
 
+#define ICE_MAX_NUM_PROFILES 256
 	/* Profiles this recipe is associated with */
 	u8 num_profs, *prof_ids;
 
@@ -281,6 +292,8 @@ struct ice_adv_fltr_mgmt_list_entry {
 	struct ice_adv_lkup_elem *lkups;
 	struct ice_adv_rule_info rule_info;
 	u16 lkups_cnt;
+	struct ice_vsi_list_map_info *vsi_list_info;
+	u16 vsi_count;
 };
 
 enum ice_promisc_flags {
@@ -421,7 +434,15 @@ enum ice_status
 ice_aq_map_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
 			     struct ice_sq_cd *cd);
 
+enum ice_status
+ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
+			     struct ice_sq_cd *cd);
+
 enum ice_status ice_alloc_recipe(struct ice_hw *hw, u16 *recipe_id);
+enum ice_status
+ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
+		 struct ice_rule_query_data *added_entry);
 enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
 
 enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 07/69] net/ice/base: replay advanced rule after reset
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (5 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 06/69] net/ice/base: programming a " Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 08/69] net/ice/base: code for removing advanced rule Leyi Rong
                       ` (62 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Victor Raj, Paul M Stillwell Jr

Add code to replay the advanced rules on a per-VSI basis and to remove
the advanced rule information from the shared code recipe list.

Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 81 ++++++++++++++++++++++++++-----
 1 file changed, 69 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 30a908bc8..30c189752 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -3034,6 +3034,27 @@ ice_rem_sw_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
 	}
 }
 
+/**
+ * ice_rem_adv_rule_info
+ * @hw: pointer to the hardware structure
+ * @rule_head: pointer to the switch list structure that we want to delete
+ */
+static void
+ice_rem_adv_rule_info(struct ice_hw *hw, struct LIST_HEAD_TYPE *rule_head)
+{
+	struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
+	struct ice_adv_fltr_mgmt_list_entry *lst_itr;
+
+	if (LIST_EMPTY(rule_head))
+		return;
+
+	LIST_FOR_EACH_ENTRY_SAFE(lst_itr, tmp_entry, rule_head,
+				 ice_adv_fltr_mgmt_list_entry, list_entry) {
+		LIST_DEL(&lst_itr->list_entry);
+		ice_free(hw, lst_itr->lkups);
+		ice_free(hw, lst_itr);
+	}
+}
 
 /**
  * ice_rem_all_sw_rules_info
@@ -3050,6 +3071,8 @@ void ice_rem_all_sw_rules_info(struct ice_hw *hw)
 		rule_head = &sw->recp_list[i].filt_rules;
 		if (!sw->recp_list[i].adv_rule)
 			ice_rem_sw_rule_info(hw, rule_head);
+		else
+			ice_rem_adv_rule_info(hw, rule_head);
 	}
 }
 
@@ -5688,6 +5711,38 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
 	return status;
 }
 
+/**
+ * ice_replay_vsi_adv_rule - Replay advanced rule for requested VSI
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: driver VSI handle
+ * @list_head: list for which filters need to be replayed
+ *
+ * Replay the advanced rule for the given VSI.
+ */
+static enum ice_status
+ice_replay_vsi_adv_rule(struct ice_hw *hw, u16 vsi_handle,
+			struct LIST_HEAD_TYPE *list_head)
+{
+	struct ice_rule_query_data added_entry = { 0 };
+	struct ice_adv_fltr_mgmt_list_entry *adv_fltr;
+	enum ice_status status = ICE_SUCCESS;
+
+	if (LIST_EMPTY(list_head))
+		return status;
+	LIST_FOR_EACH_ENTRY(adv_fltr, list_head, ice_adv_fltr_mgmt_list_entry,
+			    list_entry) {
+		struct ice_adv_rule_info *rinfo = &adv_fltr->rule_info;
+		u16 lk_cnt = adv_fltr->lkups_cnt;
+
+		if (vsi_handle != rinfo->sw_act.vsi_handle)
+			continue;
+		status = ice_add_adv_rule(hw, adv_fltr->lkups, lk_cnt, rinfo,
+					  &added_entry);
+		if (status)
+			break;
+	}
+	return status;
+}
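The replay walk above visits every stored rule, re-adds only those owned by the requested VSI, and stops at the first failure. A standalone model of that control flow (illustrative names; the driver calls ice_add_adv_rule() for each matching entry):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the walk in ice_replay_vsi_adv_rule() (hypothetical types). */
struct stored_rule {
	int vsi;	/* owning VSI handle */
	int add_rc;	/* simulated return code of re-adding the rule */
};

static int replay_for_vsi(const struct stored_rule *rules, size_t n,
			  int vsi, size_t *replayed)
{
	size_t i;

	*replayed = 0;
	for (i = 0; i < n; i++) {
		if (rules[i].vsi != vsi)
			continue;		/* belongs to another VSI */
		if (rules[i].add_rc)
			return rules[i].add_rc;	/* stop on first failure */
		(*replayed)++;
	}
	return 0;
}

/* Rules for VSIs 1, 2, 1, 1; replaying VSI 1 re-adds three rules. */
static int demo_replayed(void)
{
	struct stored_rule r[] = { {1, 0}, {2, 0}, {1, 0}, {1, 0} };
	size_t n = 0;

	return replay_for_vsi(r, 4, 1, &n) == 0 ? (int)n : -1;
}

/* A failing re-add aborts the walk and propagates its return code. */
static int demo_stops_on_error(void)
{
	struct stored_rule r[] = { {1, 0}, {1, -5}, {1, 0} };
	size_t n = 0;

	return replay_for_vsi(r, 3, 1, &n);
}
```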
 
 /**
  * ice_replay_vsi_all_fltr - replay all filters stored in bookkeeping lists
@@ -5699,23 +5754,23 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
 enum ice_status ice_replay_vsi_all_fltr(struct ice_hw *hw, u16 vsi_handle)
 {
 	struct ice_switch_info *sw = hw->switch_info;
-	enum ice_status status = ICE_SUCCESS;
+	enum ice_status status;
 	u8 i;
 
+	/* Update the recipes that were created */
 	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
-		/* Update the default recipe lines and ones that were created */
-		if (i < ICE_MAX_NUM_RECIPES || sw->recp_list[i].recp_created) {
-			struct LIST_HEAD_TYPE *head;
+		struct LIST_HEAD_TYPE *head;
 
-			head = &sw->recp_list[i].filt_replay_rules;
-			if (!sw->recp_list[i].adv_rule)
-				status = ice_replay_vsi_fltr(hw, vsi_handle, i,
-							     head);
-			if (status != ICE_SUCCESS)
-				return status;
-		}
+		head = &sw->recp_list[i].filt_replay_rules;
+		if (!sw->recp_list[i].adv_rule)
+			status = ice_replay_vsi_fltr(hw, vsi_handle, i, head);
+		else
+			status = ice_replay_vsi_adv_rule(hw, vsi_handle, head);
+		if (status != ICE_SUCCESS)
+			return status;
 	}
-	return status;
+
+	return ICE_SUCCESS;
 }
 
 /**
@@ -5739,6 +5794,8 @@ void ice_rm_all_sw_replay_rule_info(struct ice_hw *hw)
 			l_head = &sw->recp_list[i].filt_replay_rules;
 			if (!sw->recp_list[i].adv_rule)
 				ice_rem_sw_rule_info(hw, l_head);
+			else
+				ice_rem_adv_rule_info(hw, l_head);
 		}
 	}
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 08/69] net/ice/base: code for removing advanced rule
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (6 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 07/69] net/ice/base: replay advanced rule after reset Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 09/69] net/ice/base: save and post reset replay q bandwidth Leyi Rong
                       ` (61 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Shivanshu Shukla, Paul M Stillwell Jr

This patch contains the ice_remove_adv_rule function to remove existing
advanced rules. It also handles the case where multiple VSIs use the
same rule, via the following helper function:

ice_adv_rem_update_vsi_list - function to remove a VSI from the VSI list
for advanced rules.
Signed-off-by: Shivanshu Shukla <shivanshu.shukla@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 309 +++++++++++++++++++++++++++++-
 drivers/net/ice/base/ice_switch.h |   9 +
 2 files changed, 310 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 30c189752..5ee4c5f03 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2218,17 +2218,38 @@ ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
 {
 	struct ice_vsi_list_map_info *map_info = NULL;
 	struct ice_switch_info *sw = hw->switch_info;
-	struct ice_fltr_mgmt_list_entry *list_itr;
 	struct LIST_HEAD_TYPE *list_head;
 
 	list_head = &sw->recp_list[recp_id].filt_rules;
-	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
-			    list_entry) {
-		if (list_itr->vsi_count == 1 && list_itr->vsi_list_info) {
-			map_info = list_itr->vsi_list_info;
-			if (ice_is_bit_set(map_info->vsi_map, vsi_handle)) {
-				*vsi_list_id = map_info->vsi_list_id;
-				return map_info;
+	if (sw->recp_list[recp_id].adv_rule) {
+		struct ice_adv_fltr_mgmt_list_entry *list_itr;
+
+		LIST_FOR_EACH_ENTRY(list_itr, list_head,
+				    ice_adv_fltr_mgmt_list_entry,
+				    list_entry) {
+			if (list_itr->vsi_list_info) {
+				map_info = list_itr->vsi_list_info;
+				if (ice_is_bit_set(map_info->vsi_map,
+						   vsi_handle)) {
+					*vsi_list_id = map_info->vsi_list_id;
+					return map_info;
+				}
+			}
+		}
+	} else {
+		struct ice_fltr_mgmt_list_entry *list_itr;
+
+		LIST_FOR_EACH_ENTRY(list_itr, list_head,
+				    ice_fltr_mgmt_list_entry,
+				    list_entry) {
+			if (list_itr->vsi_count == 1 &&
+			    list_itr->vsi_list_info) {
+				map_info = list_itr->vsi_list_info;
+				if (ice_is_bit_set(map_info->vsi_map,
+						   vsi_handle)) {
+					*vsi_list_id = map_info->vsi_list_id;
+					return map_info;
+				}
 			}
 		}
 	}
@@ -5563,6 +5584,278 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 
 	return status;
 }
+
+/**
+ * ice_adv_rem_update_vsi_list
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle of the VSI to remove
+ * @fm_list: filter management entry for which the VSI list management needs to
+ *	     be done
+ */
+static enum ice_status
+ice_adv_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
+			    struct ice_adv_fltr_mgmt_list_entry *fm_list)
+{
+	struct ice_vsi_list_map_info *vsi_list_info;
+	enum ice_sw_lkup_type lkup_type;
+	enum ice_status status;
+	u16 vsi_list_id;
+
+	if (fm_list->rule_info.sw_act.fltr_act != ICE_FWD_TO_VSI_LIST ||
+	    fm_list->vsi_count == 0)
+		return ICE_ERR_PARAM;
+
+	/* A rule with the VSI being removed does not exist */
+	if (!ice_is_bit_set(fm_list->vsi_list_info->vsi_map, vsi_handle))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	lkup_type = ICE_SW_LKUP_LAST;
+	vsi_list_id = fm_list->rule_info.sw_act.fwd_id.vsi_list_id;
+	status = ice_update_vsi_list_rule(hw, &vsi_handle, 1, vsi_list_id, true,
+					  ice_aqc_opc_update_sw_rules,
+					  lkup_type);
+	if (status)
+		return status;
+
+	fm_list->vsi_count--;
+	ice_clear_bit(vsi_handle, fm_list->vsi_list_info->vsi_map);
+	vsi_list_info = fm_list->vsi_list_info;
+	if (fm_list->vsi_count == 1) {
+		struct ice_fltr_info tmp_fltr;
+		u16 rem_vsi_handle;
+
+		rem_vsi_handle = ice_find_first_bit(vsi_list_info->vsi_map,
+						    ICE_MAX_VSI);
+		if (!ice_is_vsi_valid(hw, rem_vsi_handle))
+			return ICE_ERR_OUT_OF_RANGE;
+
+		/* Make sure VSI list is empty before removing it below */
+		status = ice_update_vsi_list_rule(hw, &rem_vsi_handle, 1,
+						  vsi_list_id, true,
+						  ice_aqc_opc_update_sw_rules,
+						  lkup_type);
+		if (status)
+			return status;
+		tmp_fltr.fltr_rule_id = fm_list->rule_info.fltr_rule_id;
+		fm_list->rule_info.sw_act.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr.fltr_act = ICE_FWD_TO_VSI;
+		tmp_fltr.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+		fm_list->rule_info.sw_act.fwd_id.hw_vsi_id =
+			ice_get_hw_vsi_num(hw, rem_vsi_handle);
+
+		/* Update the previous switch rule of "MAC forward to VSI" to
+		 * "MAC fwd to VSI list"
+		 */
+		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
+				  tmp_fltr.fwd_id.hw_vsi_id, status);
+			return status;
+		}
+	}
+
+	if (fm_list->vsi_count == 1) {
+		/* Remove the VSI list since it is no longer used */
+		status = ice_remove_vsi_list_rule(hw, vsi_list_id, lkup_type);
+		if (status) {
+			ice_debug(hw, ICE_DBG_SW,
+				  "Failed to remove VSI list %d, error %d\n",
+				  vsi_list_id, status);
+			return status;
+		}
+
+		LIST_DEL(&vsi_list_info->list_entry);
+		ice_free(hw, vsi_list_info);
+		fm_list->vsi_list_info = NULL;
+	}
+
+	return status;
+}
+
+/**
+ * ice_rem_adv_rule - removes existing advanced switch rule
+ * @hw: pointer to the hardware structure
+ * @lkups: information on the words that needs to be looked up. All words
+ *         together makes one recipe
+ * @lkups_cnt: num of entries in the lkups array
+ * @rinfo: Its the pointer to the rule information for the rule
+ *
+ * This function can be used to remove 1 rule at a time. The lkups is
+ * used to describe all the words that forms the "lookup" portion of the
+ * rule. These words can span multiple protocols. Callers to this function
+ * need to pass in a list of protocol headers with lookup information along
+ * and mask that determines which words are valid from the given protocol
+ * header. rinfo describes other information related to this rule such as
+ * forwarding IDs, priority of this rule, etc.
+ */
+enum ice_status
+ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_elem;
+	struct ice_prot_lkup_ext lkup_exts;
+	u16 rule_buf_sz, pkt_len, i, rid;
+	enum ice_status status = ICE_SUCCESS;
+	bool remove_rule = false;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
+	const u8 *pkt = NULL;
+	u16 vsi_handle;
+
+	ice_memset(&lkup_exts, 0, sizeof(lkup_exts), ICE_NONDMA_MEM);
+	for (i = 0; i < lkups_cnt; i++) {
+		u16 count;
+
+		if (lkups[i].type >= ICE_PROTOCOL_LAST)
+			return ICE_ERR_CFG;
+
+		count = ice_fill_valid_words(&lkups[i], &lkup_exts);
+		if (!count)
+			return ICE_ERR_CFG;
+	}
+	rid = ice_find_recp(hw, &lkup_exts);
+	/* If did not find a recipe that match the existing criteria */
+	if (rid == ICE_MAX_NUM_RECIPES)
+		return ICE_ERR_PARAM;
+
+	rule_lock = &hw->switch_info->recp_list[rid].filt_rule_lock;
+	list_elem = ice_find_adv_rule_entry(hw, lkups, lkups_cnt, rid, rinfo);
+	/* the rule is already removed */
+	if (!list_elem)
+		return ICE_SUCCESS;
+	ice_acquire_lock(rule_lock);
+	if (list_elem->rule_info.sw_act.fltr_act != ICE_FWD_TO_VSI_LIST) {
+		remove_rule = true;
+	} else if (list_elem->vsi_count > 1) {
+		list_elem->vsi_list_info->ref_cnt--;
+		remove_rule = false;
+		vsi_handle = rinfo->sw_act.vsi_handle;
+		status = ice_adv_rem_update_vsi_list(hw, vsi_handle, list_elem);
+	} else {
+		vsi_handle = rinfo->sw_act.vsi_handle;
+		status = ice_adv_rem_update_vsi_list(hw, vsi_handle, list_elem);
+		if (status) {
+			ice_release_lock(rule_lock);
+			return status;
+		}
+		if (list_elem->vsi_count == 0)
+			remove_rule = true;
+	}
+	ice_release_lock(rule_lock);
+	if (remove_rule) {
+		struct ice_aqc_sw_rules_elem *s_rule;
+
+		ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
+				      &pkt_len);
+		rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
+		s_rule =
+			(struct ice_aqc_sw_rules_elem *)ice_malloc(hw,
+								   rule_buf_sz);
+		if (!s_rule)
+			return ICE_ERR_NO_MEMORY;
+		s_rule->pdata.lkup_tx_rx.act = 0;
+		s_rule->pdata.lkup_tx_rx.index =
+			CPU_TO_LE16(list_elem->rule_info.fltr_rule_id);
+		s_rule->pdata.lkup_tx_rx.hdr_len = 0;
+		status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
+					 rule_buf_sz, 1,
+					 ice_aqc_opc_remove_sw_rules, NULL);
+		if (status == ICE_SUCCESS) {
+			ice_acquire_lock(rule_lock);
+			LIST_DEL(&list_elem->list_entry);
+			ice_free(hw, list_elem->lkups);
+			ice_free(hw, list_elem);
+			ice_release_lock(rule_lock);
+		}
+		ice_free(hw, s_rule);
+	}
+	return status;
+}
+
+/**
+ * ice_rem_adv_rule_by_id - removes existing advanced switch rule by ID
+ * @hw: pointer to the hardware structure
+ * @remove_entry: data struct which holds rule_id, VSI handle and recipe ID
+ *
+ * This function is used to remove 1 rule at a time. The removal is based on
+ * the remove_entry parameter. This function will remove rule for a given
+ * vsi_handle with a given rule_id which is passed as parameter in remove_entry
+ */
+enum ice_status
+ice_rem_adv_rule_by_id(struct ice_hw *hw,
+		       struct ice_rule_query_data *remove_entry)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+	struct ice_adv_rule_info rinfo;
+	struct ice_switch_info *sw;
+
+	sw = hw->switch_info;
+	if (!sw->recp_list[remove_entry->rid].recp_created)
+		return ICE_ERR_PARAM;
+	list_head = &sw->recp_list[remove_entry->rid].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_adv_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (list_itr->rule_info.fltr_rule_id ==
+		    remove_entry->rule_id) {
+			rinfo = list_itr->rule_info;
+			rinfo.sw_act.vsi_handle = remove_entry->vsi_handle;
+			return ice_rem_adv_rule(hw, list_itr->lkups,
+						list_itr->lkups_cnt, &rinfo);
+		}
+	}
+	return ICE_ERR_PARAM;
+}
+
+/**
+ * ice_rem_adv_for_vsi - removes existing advanced switch rules for a
+ *                       given VSI handle
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: VSI handle for which we are supposed to remove all the rules.
+ *
+ * This function is used to remove all the rules for a given VSI and as soon
+ * as removing a rule fails, it will return immediately with the error code,
+ * else it will return ICE_SUCCESS
+ */
+enum ice_status
+ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_adv_fltr_mgmt_list_entry *list_itr;
+	struct ice_vsi_list_map_info *map_info;
+	struct LIST_HEAD_TYPE *list_head;
+	struct ice_adv_rule_info rinfo;
+	struct ice_switch_info *sw;
+	enum ice_status status;
+	u16 vsi_list_id = 0;
+	u8 rid;
+
+	sw = hw->switch_info;
+	for (rid = 0; rid < ICE_MAX_NUM_RECIPES; rid++) {
+		if (!sw->recp_list[rid].recp_created)
+			continue;
+		if (!sw->recp_list[rid].adv_rule)
+			continue;
+		list_head = &sw->recp_list[rid].filt_rules;
+		map_info = NULL;
+		LIST_FOR_EACH_ENTRY(list_itr, list_head,
+				    ice_adv_fltr_mgmt_list_entry, list_entry) {
+			map_info = ice_find_vsi_list_entry(hw, rid, vsi_handle,
+							   &vsi_list_id);
+			if (!map_info)
+				continue;
+			rinfo = list_itr->rule_info;
+			rinfo.sw_act.vsi_handle = vsi_handle;
+			status = ice_rem_adv_rule(hw, list_itr->lkups,
+						  list_itr->lkups_cnt, &rinfo);
+			if (status)
+				return status;
+			map_info = NULL;
+		}
+	}
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_replay_fltr - Replay all the filters stored by a specific list head
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 890df13dd..a6e17e861 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -443,6 +443,15 @@ enum ice_status
 ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo,
 		 struct ice_rule_query_data *added_entry);
+enum ice_status
+ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle);
+enum ice_status
+ice_rem_adv_rule_by_id(struct ice_hw *hw,
+		       struct ice_rule_query_data *remove_entry);
+enum ice_status
+ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
+		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo);
+
 enum ice_status ice_replay_all_fltr(struct ice_hw *hw);
 
 enum ice_status ice_init_def_sw_recp(struct ice_hw *hw);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 09/69] net/ice/base: save and post reset replay q bandwidth
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (7 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 08/69] net/ice/base: code for removing advanced rule Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 10/69] net/ice/base: rollback AVF RSS configurations Leyi Rong
                       ` (60 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Tarun Singh, Paul M Stillwell Jr

Save the queue bandwidth information when it is applied so that it can
be replayed when the queue is re-enabled; the previously saved value is
used for the replay.
Also add vsi_handle, tc, and q_handle arguments to ice_cfg_q_bw_lmt
and ice_cfg_q_bw_dflt_lmt.
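The save-then-replay pattern the patch introduces can be sketched generically: cache each configured limit by rate-limit type in the queue context, then re-apply every cached limit after the queue's scheduler node is re-created. All names and types below are illustrative, not the driver's.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal sketch of caching queue BW by rate-limit type (cf. the
 * ice_sched_save_q_bw() switch) and replaying it post-reset. */
enum demo_rl_type { DEMO_MIN_BW, DEMO_MAX_BW, DEMO_SHARED_BW, DEMO_RL_LAST };

struct demo_q_ctx {
	uint32_t bw[DEMO_RL_LAST]; /* saved limits in Kbps, 0 = not set */
};

static int demo_save_q_bw(struct demo_q_ctx *q, enum demo_rl_type t,
			  uint32_t kbps)
{
	if (t >= DEMO_RL_LAST)
		return -1; /* analogue of ICE_ERR_PARAM */
	q->bw[t] = kbps;
	return 0;
}

static uint32_t demo_applied[DEMO_RL_LAST]; /* records what replay set */
static int demo_apply(enum demo_rl_type t, uint32_t kbps)
{
	demo_applied[t] = kbps;
	return 0;
}

/* Replay every saved limit through a caller-supplied setter, as
 * ice_sched_replay_q_bw() does after the leaf node is re-added. */
static int demo_replay_q_bw(const struct demo_q_ctx *q,
			    int (*set_lmt)(enum demo_rl_type, uint32_t))
{
	for (int t = 0; t < DEMO_RL_LAST; t++)
		if (q->bw[t]) {
			int err = set_lmt((enum demo_rl_type)t, q->bw[t]);
			if (err)
				return err;
		}
	return 0;
}
```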

Signed-off-by: Tarun Singh <tarun.k.singh@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c |  7 ++-
 drivers/net/ice/base/ice_common.h |  4 ++
 drivers/net/ice/base/ice_sched.c  | 91 ++++++++++++++++++++++++++-----
 drivers/net/ice/base/ice_sched.h  |  8 +--
 drivers/net/ice/base/ice_switch.h |  5 --
 drivers/net/ice/base/ice_type.h   |  8 +++
 6 files changed, 98 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index c74e4e1d4..09296ead2 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -3606,7 +3606,7 @@ ice_get_ctx(u8 *src_ctx, u8 *dest_ctx, struct ice_ctx_ele *ce_info)
  * @tc: TC number
  * @q_handle: software queue handle
  */
-static struct ice_q_ctx *
+struct ice_q_ctx *
 ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle)
 {
 	struct ice_vsi_ctx *vsi;
@@ -3703,9 +3703,12 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle,
 	node.node_teid = buf->txqs[0].q_teid;
 	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
 	q_ctx->q_handle = q_handle;
+	q_ctx->q_teid = LE32_TO_CPU(node.node_teid);
 
-	/* add a leaf node into schduler tree queue layer */
+	/* add a leaf node into scheduler tree queue layer */
 	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
+	if (!status)
+		status = ice_sched_replay_q_bw(pi, q_ctx);
 
 ena_txq_exit:
 	ice_release_lock(&pi->sched_lock);
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 58c66fdc0..aee754b85 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -186,6 +186,10 @@ void ice_sched_replay_agg(struct ice_hw *hw);
 enum ice_status ice_sched_replay_tc_node_bw(struct ice_hw *hw);
 enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status
+ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
+struct ice_q_ctx *
+ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle);
+enum ice_status
 ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
 			 enum ice_rl_type rl_type, u8 bw_alloc);
 enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 8773e62a9..855e3848c 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -4326,27 +4326,61 @@ ice_sched_validate_srl_node(struct ice_sched_node *node, u8 sel_layer)
 	return ICE_ERR_CFG;
 }
 
+/**
+ * ice_sched_save_q_bw - save queue node's BW information
+ * @q_ctx: queue context structure
+ * @rl_type: rate limit type min, max, or shared
+ * @bw: bandwidth in Kbps - Kilo bits per sec
+ *
+ * Save BW information of queue type node for post replay use.
+ */
+static enum ice_status
+ice_sched_save_q_bw(struct ice_q_ctx *q_ctx, enum ice_rl_type rl_type, u32 bw)
+{
+	switch (rl_type) {
+	case ICE_MIN_BW:
+		ice_set_clear_cir_bw(&q_ctx->bw_t_info, bw);
+		break;
+	case ICE_MAX_BW:
+		ice_set_clear_eir_bw(&q_ctx->bw_t_info, bw);
+		break;
+	case ICE_SHARED_BW:
+		ice_set_clear_shared_bw(&q_ctx->bw_t_info, bw);
+		break;
+	default:
+		return ICE_ERR_PARAM;
+	}
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_sched_set_q_bw_lmt - sets queue BW limit
  * @pi: port information structure
- * @q_id: queue ID (leaf node TEID)
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
  * @rl_type: min, max, or shared
  * @bw: bandwidth in Kbps
  *
  * This function sets BW limit of queue scheduling node.
  */
 static enum ice_status
-ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
-		       enum ice_rl_type rl_type, u32 bw)
+ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		       u16 q_handle, enum ice_rl_type rl_type, u32 bw)
 {
 	enum ice_status status = ICE_ERR_PARAM;
 	struct ice_sched_node *node;
+	struct ice_q_ctx *q_ctx;
 
+	if (!ice_is_vsi_valid(pi->hw, vsi_handle))
+		return ICE_ERR_PARAM;
 	ice_acquire_lock(&pi->sched_lock);
-
-	node = ice_sched_find_node_by_teid(pi->root, q_id);
+	q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handle);
+	if (!q_ctx)
+		goto exit_q_bw_lmt;
+	node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
 	if (!node) {
-		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_id\n");
+		ice_debug(pi->hw, ICE_DBG_SCHED, "Wrong q_teid\n");
 		goto exit_q_bw_lmt;
 	}
 
@@ -4374,6 +4408,9 @@ ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
 	else
 		status = ice_sched_set_node_bw_lmt(pi, node, rl_type, bw);
 
+	if (!status)
+		status = ice_sched_save_q_bw(q_ctx, rl_type, bw);
+
 exit_q_bw_lmt:
 	ice_release_lock(&pi->sched_lock);
 	return status;
@@ -4382,32 +4419,38 @@ ice_sched_set_q_bw_lmt(struct ice_port_info *pi, u32 q_id,
 /**
  * ice_cfg_q_bw_lmt - configure queue BW limit
  * @pi: port information structure
- * @q_id: queue ID (leaf node TEID)
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
  * @rl_type: min, max, or shared
  * @bw: bandwidth in Kbps
  *
  * This function configures BW limit of queue scheduling node.
  */
 enum ice_status
-ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
-		 u32 bw)
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		 u16 q_handle, enum ice_rl_type rl_type, u32 bw)
 {
-	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, bw);
+	return ice_sched_set_q_bw_lmt(pi, vsi_handle, tc, q_handle, rl_type,
+				      bw);
 }
 
 /**
  * ice_cfg_q_bw_dflt_lmt - configure queue BW default limit
  * @pi: port information structure
- * @q_id: queue ID (leaf node TEID)
+ * @vsi_handle: sw VSI handle
+ * @tc: traffic class
+ * @q_handle: software queue handle
  * @rl_type: min, max, or shared
  *
  * This function configures BW default limit of queue scheduling node.
  */
 enum ice_status
-ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
-		      enum ice_rl_type rl_type)
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      u16 q_handle, enum ice_rl_type rl_type)
 {
-	return ice_sched_set_q_bw_lmt(pi, q_id, rl_type, ICE_SCHED_DFLT_BW);
+	return ice_sched_set_q_bw_lmt(pi, vsi_handle, tc, q_handle, rl_type,
+				      ICE_SCHED_DFLT_BW);
 }
 
 /**
@@ -5421,3 +5464,23 @@ ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
 	ice_release_lock(&pi->sched_lock);
 	return status;
 }
+
+/**
+ * ice_sched_replay_q_bw - replay queue type node BW
+ * @pi: port information structure
+ * @q_ctx: queue context structure
+ *
+ * This function replays queue type node bandwidth. This function needs to be
+ * called with scheduler lock held.
+ */
+enum ice_status
+ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx)
+{
+	struct ice_sched_node *q_node;
+
+	/* Following also checks the presence of node in tree */
+	q_node = ice_sched_find_node_by_teid(pi->root, q_ctx->q_teid);
+	if (!q_node)
+		return ICE_ERR_PARAM;
+	return ice_sched_replay_node_bw(pi->hw, q_node, &q_ctx->bw_t_info);
+}
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 92377a82e..56f9977ab 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -122,11 +122,11 @@ ice_move_vsi_to_agg(struct ice_port_info *pi, u32 agg_id, u16 vsi_handle,
 		    u8 tc_bitmap);
 enum ice_status ice_rm_agg_cfg(struct ice_port_info *pi, u32 agg_id);
 enum ice_status
-ice_cfg_q_bw_lmt(struct ice_port_info *pi, u32 q_id, enum ice_rl_type rl_type,
-		 u32 bw);
+ice_cfg_q_bw_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		 u16 q_handle, enum ice_rl_type rl_type, u32 bw);
 enum ice_status
-ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u32 q_id,
-		      enum ice_rl_type rl_type);
+ice_cfg_q_bw_dflt_lmt(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
+		      u16 q_handle, enum ice_rl_type rl_type);
 enum ice_status
 ice_cfg_tc_node_bw_lmt(struct ice_port_info *pi, u8 tc,
 		       enum ice_rl_type rl_type, u32 bw);
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index a6e17e861..e3fb0434d 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -21,11 +21,6 @@
 #define ICE_VSI_INVAL_ID 0xFFFF
 #define ICE_INVAL_Q_HANDLE 0xFFFF
 
-/* VSI queue context structure */
-struct ice_q_ctx {
-	u16  q_handle;
-};
-
 /* VSI context structure for add/get/update/free operations */
 struct ice_vsi_ctx {
 	u16 vsi_num;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index e4979b832..b1682c5bb 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -557,6 +557,14 @@ struct ice_bw_type_info {
 	u32 shared_bw;
 };
 
+/* VSI queue context structure for given TC */
+struct ice_q_ctx {
+	u16  q_handle;
+	u32  q_teid;
+	/* bw_t_info saves queue BW information */
+	struct ice_bw_type_info bw_t_info;
+};
+
 /* VSI type list entry to locate corresponding VSI/aggregator nodes */
 struct ice_sched_vsi_info {
 	struct ice_sched_node *vsi_node[ICE_MAX_TRAFFIC_CLASS];
-- 
2.17.1



* [dpdk-dev] [PATCH v3 10/69] net/ice/base: rollback AVF RSS configurations
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (8 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 09/69] net/ice/base: save and post reset replay q bandwidth Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 11/69] net/ice/base: move RSS replay list Leyi Rong
                       ` (59 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

Add support to remove RSS configurations that were added prior to a
failure case in AVF, rolling back the partially applied settings.
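The core of the new `ice_add_avf_rss_cfg()` is a bit-draining loop: each pass picks the highest-priority group of AVF hash bits still set, maps it to one combined L3/L4 hash, clears those bits, and calls into the flow layer once per group. A reduced sketch, with illustrative masks standing in for the `ICE_FLOW_AVF_RSS_*` definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the bit-draining loop in ice_add_avf_rss_cfg().
 * Masks are illustrative, not the driver's. */
#define DEMO_HASH_IPV4     0x1ULL
#define DEMO_HASH_TCP_IPV4 0x2ULL
#define DEMO_HASH_UDP_IPV4 0x4ULL
#define DEMO_KNOWN_MASK \
	(DEMO_HASH_IPV4 | DEMO_HASH_TCP_IPV4 | DEMO_HASH_UDP_IPV4)

/* Returns the number of RSS configs that would be added, or -1 when the
 * request is empty or contains unsupported bits. */
static int demo_add_avf_rss(uint64_t avf_hash)
{
	int cfgs = 0;

	if (!avf_hash || (avf_hash & ~DEMO_KNOWN_MASK))
		return -1;

	/* Any L4 request implies the matching L3 config as well. */
	if (avf_hash & (DEMO_HASH_TCP_IPV4 | DEMO_HASH_UDP_IPV4))
		avf_hash |= DEMO_HASH_IPV4;

	while (avf_hash) {
		if (avf_hash & DEMO_HASH_IPV4)
			avf_hash &= ~DEMO_HASH_IPV4;
		else if (avf_hash & DEMO_HASH_TCP_IPV4)
			avf_hash &= ~DEMO_HASH_TCP_IPV4;
		else
			avf_hash &= ~DEMO_HASH_UDP_IPV4;
		cfgs++; /* one ice_add_rss_cfg() call per drained group */
	}
	return cfgs;
}
```

In the real function, a bit that survives all the group checks leaves `rss_hash` at `ICE_HASH_INVALID` and aborts with `ICE_ERR_OUT_OF_RANGE`, which is what makes the rollback path in AVF necessary.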

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 128 ++++++++++++++++++++++++++++++++
 1 file changed, 128 insertions(+)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index f1bf5b5e7..d97fe1fc7 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1915,6 +1915,134 @@ ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	return status;
 }
 
+/* Mapping of AVF hash bit fields to an L3-L4 hash combination.
+ * As the ice_flow_avf_hdr_field represent individual bit shifts in a hash,
+ * convert its values to their appropriate flow L3, L4 values.
+ */
+#define ICE_FLOW_AVF_RSS_IPV4_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_OTHER) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_FRAG_IPV4))
+#define ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_TCP_SYN_NO_ACK) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_TCP))
+#define ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_UNICAST_IPV4_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_MULTICAST_IPV4_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_UDP))
+#define ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS \
+	(ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS | ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS | \
+	 ICE_FLOW_AVF_RSS_IPV4_MASKS | BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_SCTP))
+
+#define ICE_FLOW_AVF_RSS_IPV6_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_OTHER) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_FRAG_IPV6))
+#define ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_UNICAST_IPV6_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_MULTICAST_IPV6_UDP) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_UDP))
+#define ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS \
+	(BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_TCP_SYN_NO_ACK) | \
+	 BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_TCP))
+#define ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS \
+	(ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS | ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS | \
+	 ICE_FLOW_AVF_RSS_IPV6_MASKS | BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_SCTP))
+
+#define ICE_FLOW_MAX_CFG	10
+
+/**
+ * ice_add_avf_rss_cfg - add an RSS configuration for AVF driver
+ * @hw: pointer to the hardware structure
+ * @vsi_handle: software VSI handle
+ * @avf_hash: hash bit fields (ICE_AVF_FLOW_FIELD_*) to configure
+ *
+ * This function will take the hash bitmap provided by the AVF driver via a
+ * message, convert it to ICE-compatible values, and configure RSS flow
+ * profiles.
+ */
+enum ice_status
+ice_add_avf_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 avf_hash)
+{
+	enum ice_status status = ICE_SUCCESS;
+	u64 hash_flds;
+
+	if (avf_hash == ICE_AVF_FLOW_FIELD_INVALID ||
+	    !ice_is_vsi_valid(hw, vsi_handle))
+		return ICE_ERR_PARAM;
+
+	/* Make sure no unsupported bits are specified */
+	if (avf_hash & ~(ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS |
+			 ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS))
+		return ICE_ERR_CFG;
+
+	hash_flds = avf_hash;
+
+	/* Always create an L3 RSS configuration for any L4 RSS configuration */
+	if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS)
+		hash_flds |= ICE_FLOW_AVF_RSS_IPV4_MASKS;
+
+	if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS)
+		hash_flds |= ICE_FLOW_AVF_RSS_IPV6_MASKS;
+
+	/* Create the corresponding RSS configuration for each valid hash bit */
+	while (hash_flds) {
+		u64 rss_hash = ICE_HASH_INVALID;
+
+		if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV4_MASKS) {
+			if (hash_flds & ICE_FLOW_AVF_RSS_IPV4_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV4;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_IPV4_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV4 |
+					ICE_FLOW_HASH_TCP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_TCP_IPV4_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV4 |
+					ICE_FLOW_HASH_UDP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_UDP_IPV4_MASKS;
+			} else if (hash_flds &
+				   BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_SCTP)) {
+				rss_hash = ICE_FLOW_HASH_IPV4 |
+					ICE_FLOW_HASH_SCTP_PORT;
+				hash_flds &=
+					~BIT_ULL(ICE_AVF_FLOW_FIELD_IPV4_SCTP);
+			}
+		} else if (hash_flds & ICE_FLOW_AVF_RSS_ALL_IPV6_MASKS) {
+			if (hash_flds & ICE_FLOW_AVF_RSS_IPV6_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV6;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_IPV6_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV6 |
+					ICE_FLOW_HASH_TCP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_TCP_IPV6_MASKS;
+			} else if (hash_flds &
+				   ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS) {
+				rss_hash = ICE_FLOW_HASH_IPV6 |
+					ICE_FLOW_HASH_UDP_PORT;
+				hash_flds &= ~ICE_FLOW_AVF_RSS_UDP_IPV6_MASKS;
+			} else if (hash_flds &
+				   BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_SCTP)) {
+				rss_hash = ICE_FLOW_HASH_IPV6 |
+					ICE_FLOW_HASH_SCTP_PORT;
+				hash_flds &=
+					~BIT_ULL(ICE_AVF_FLOW_FIELD_IPV6_SCTP);
+			}
+		}
+
+		if (rss_hash == ICE_HASH_INVALID)
+			return ICE_ERR_OUT_OF_RANGE;
+
+		status = ice_add_rss_cfg(hw, vsi_handle, rss_hash,
+					 ICE_FLOW_SEG_HDR_NONE);
+		if (status)
+			break;
+	}
+
+	return status;
+}
+
 /**
  * ice_rem_rss_cfg - remove an existing RSS config with matching hashed fields
  * @hw: pointer to the hardware structure
-- 
2.17.1



* [dpdk-dev] [PATCH v3 11/69] net/ice/base: move RSS replay list
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (9 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 10/69] net/ice/base: rollback AVF RSS configurations Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 12/69] net/ice/base: cache the data of set PHY cfg AQ in SW Leyi Rong
                       ` (58 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang
  Cc: dev, Leyi Rong, Vignesh Sridhar, Henry Tieman, Paul M Stillwell Jr

1. Move the RSS list pointer and lock from the VSI context to the ice_hw
structure. This ensures that RSS configurations added to the list prior
to a reset are maintained until the PF is unloaded, keeping the
configuration list unaffected by VFRs that would destroy the VSI
context. This allows the replay of RSS entries for a VF VSI, instead of
the current method of re-adding default configurations, and also
eliminates the need to re-allocate the RSS list and lock post-VFR.
2. Align RSS flow functions to the new location of the RSS list and lock.
3. Add a bitmap for flow type status.
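The new ownership scheme amounts to reference counting by bitmap: each RSS config entry in the hw-level list records which VSIs use it, a VSI's removal only clears its bit, and the entry is freed when no bit remains set. A minimal sketch with hypothetical types (the driver's `ice_rss_cfg` uses an `ice_declare_bitmap` of `ICE_MAX_VSI` bits):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the per-entry VSI bitmap added by this patch. */
#define DEMO_MAX_VSI 64

struct demo_rss_cfg {
	uint64_t vsis;        /* bitmap of VSI handles using this config */
	uint64_t hashed_flds; /* the hash this entry describes */
};

static void demo_add_vsi_rss(struct demo_rss_cfg *r, uint16_t vsi_handle)
{
	r->vsis |= 1ULL << vsi_handle;
}

/* Returns 1 when the entry should be unlinked and freed (no VSI
 * references it any more), else 0 - the logic of ice_rem_vsi_rss_list(). */
static int demo_rem_vsi_rss(struct demo_rss_cfg *r, uint16_t vsi_handle)
{
	r->vsis &= ~(1ULL << vsi_handle);
	return r->vsis == 0;
}
```

Because the entry lives in `ice_hw` rather than the VSI context, a VF reset that tears down one VSI only clears that VSI's bit; the configuration survives for replay on the rebuilt VSI.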

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Henry Tieman <henry.w.tieman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c   | 100 +++++++++++++++++-------------
 drivers/net/ice/base/ice_flow.h   |   4 +-
 drivers/net/ice/base/ice_switch.c |   6 +-
 drivers/net/ice/base/ice_switch.h |   2 -
 drivers/net/ice/base/ice_type.h   |   3 +
 5 files changed, 63 insertions(+), 52 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index d97fe1fc7..dccd7d3c7 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1605,27 +1605,32 @@ ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u64 hash_fields,
 }
 
 /**
- * ice_rem_all_rss_vsi_ctx - remove all RSS configurations from VSI context
+ * ice_rem_vsi_rss_list - remove VSI from RSS list
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  *
+ * Remove the VSI from all RSS configurations in the list.
  */
-void ice_rem_all_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
+void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle)
 {
 	struct ice_rss_cfg *r, *tmp;
 
-	if (!ice_is_vsi_valid(hw, vsi_handle) ||
-	    LIST_EMPTY(&hw->vsi_ctx[vsi_handle]->rss_list_head))
+	if (LIST_EMPTY(&hw->rss_list_head))
 		return;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
-	LIST_FOR_EACH_ENTRY_SAFE(r, tmp,
-				 &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	ice_acquire_lock(&hw->rss_locks);
+	LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head,
 				 ice_rss_cfg, l_entry) {
-		LIST_DEL(&r->l_entry);
-		ice_free(hw, r);
+		if (ice_is_bit_set(r->vsis, vsi_handle)) {
+			ice_clear_bit(vsi_handle, r->vsis);
+
+			if (!ice_is_any_bit_set(r->vsis, ICE_MAX_VSI)) {
+				LIST_DEL(&r->l_entry);
+				ice_free(hw, r);
+			}
+		}
 	}
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 }
 
 /**
@@ -1667,7 +1672,7 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 }
 
 /**
- * ice_rem_rss_cfg_vsi_ctx - remove RSS configuration from VSI context
+ * ice_rem_rss_list - remove RSS configuration from list
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  * @prof: pointer to flow profile
@@ -1675,8 +1680,7 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
  * Assumption: lock has already been acquired for RSS list
  */
 static void
-ice_rem_rss_cfg_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
-			struct ice_flow_prof *prof)
+ice_rem_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 {
 	struct ice_rss_cfg *r, *tmp;
 
@@ -1684,20 +1688,22 @@ ice_rem_rss_cfg_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
 	 * hash configurations associated to the flow profile. If found
 	 * remove from the RSS entry list of the VSI context and delete entry.
 	 */
-	LIST_FOR_EACH_ENTRY_SAFE(r, tmp,
-				 &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head,
 				 ice_rss_cfg, l_entry) {
 		if (r->hashed_flds == prof->segs[prof->segs_cnt - 1].match &&
 		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs) {
-			LIST_DEL(&r->l_entry);
-			ice_free(hw, r);
+			ice_clear_bit(vsi_handle, r->vsis);
+			if (!ice_is_any_bit_set(r->vsis, ICE_MAX_VSI)) {
+				LIST_DEL(&r->l_entry);
+				ice_free(hw, r);
+			}
 			return;
 		}
 	}
 }
 
 /**
- * ice_add_rss_vsi_ctx - add RSS configuration to VSI context
+ * ice_add_rss_list - add RSS configuration to list
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  * @prof: pointer to flow profile
@@ -1705,16 +1711,17 @@ ice_rem_rss_cfg_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
  * Assumption: lock has already been acquired for RSS list
  */
 static enum ice_status
-ice_add_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
-		    struct ice_flow_prof *prof)
+ice_add_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 {
 	struct ice_rss_cfg *r, *rss_cfg;
 
-	LIST_FOR_EACH_ENTRY(r, &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry)
 		if (r->hashed_flds == prof->segs[prof->segs_cnt - 1].match &&
-		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs)
+		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs) {
+			ice_set_bit(vsi_handle, r->vsis);
 			return ICE_SUCCESS;
+		}
 
 	rss_cfg = (struct ice_rss_cfg *)ice_malloc(hw, sizeof(*rss_cfg));
 	if (!rss_cfg)
@@ -1722,8 +1729,9 @@ ice_add_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle,
 
 	rss_cfg->hashed_flds = prof->segs[prof->segs_cnt - 1].match;
 	rss_cfg->packet_hdr = prof->segs[prof->segs_cnt - 1].hdrs;
-	LIST_ADD_TAIL(&rss_cfg->l_entry,
-		      &hw->vsi_ctx[vsi_handle]->rss_list_head);
+	ice_set_bit(vsi_handle, rss_cfg->vsis);
+
+	LIST_ADD_TAIL(&rss_cfg->l_entry, &hw->rss_list_head);
 
 	return ICE_SUCCESS;
 }
@@ -1785,7 +1793,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	if (prof) {
 		status = ice_flow_disassoc_prof(hw, blk, prof, vsi_handle);
 		if (!status)
-			ice_rem_rss_cfg_vsi_ctx(hw, vsi_handle, prof);
+			ice_rem_rss_list(hw, vsi_handle, prof);
 		else
 			goto exit;
 
@@ -1806,7 +1814,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	if (prof) {
 		status = ice_flow_assoc_prof(hw, blk, prof, vsi_handle);
 		if (!status)
-			status = ice_add_rss_vsi_ctx(hw, vsi_handle, prof);
+			status = ice_add_rss_list(hw, vsi_handle, prof);
 		goto exit;
 	}
 
@@ -1828,7 +1836,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 		goto exit;
 	}
 
-	status = ice_add_rss_vsi_ctx(hw, vsi_handle, prof);
+	status = ice_add_rss_list(hw, vsi_handle, prof);
 
 exit:
 	ice_free(hw, segs);
@@ -1856,9 +1864,9 @@ ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	    !ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_acquire_lock(&hw->rss_locks);
 	status = ice_add_rss_cfg_sync(hw, vsi_handle, hashed_flds, addl_hdrs);
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return status;
 }
@@ -1905,7 +1913,7 @@ ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	/* Remove RSS configuration from VSI context before deleting
 	 * the flow profile.
 	 */
-	ice_rem_rss_cfg_vsi_ctx(hw, vsi_handle, prof);
+	ice_rem_rss_list(hw, vsi_handle, prof);
 
 	if (!ice_is_any_bit_set(prof->vsis, ICE_MAX_VSI))
 		status = ice_flow_rem_prof_sync(hw, blk, prof);
@@ -2066,15 +2074,15 @@ ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	    !ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_acquire_lock(&hw->rss_locks);
 	status = ice_rem_rss_cfg_sync(hw, vsi_handle, hashed_flds, addl_hdrs);
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return status;
 }
 
 /**
- * ice_replay_rss_cfg - remove RSS configurations associated with VSI
+ * ice_replay_rss_cfg - replay RSS configurations associated with VSI
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
  */
@@ -2086,15 +2094,18 @@ enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 	if (!ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
-	LIST_FOR_EACH_ENTRY(r, &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	ice_acquire_lock(&hw->rss_locks);
+	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry) {
-		status = ice_add_rss_cfg_sync(hw, vsi_handle, r->hashed_flds,
-					      r->packet_hdr);
-		if (status)
-			break;
+		if (ice_is_bit_set(r->vsis, vsi_handle)) {
+			status = ice_add_rss_cfg_sync(hw, vsi_handle,
+						      r->hashed_flds,
+						      r->packet_hdr);
+			if (status)
+				break;
+		}
 	}
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return status;
 }
@@ -2116,14 +2127,15 @@ u64 ice_get_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u32 hdrs)
 	if (hdrs == ICE_FLOW_SEG_HDR_NONE || !ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_HASH_INVALID;
 
-	ice_acquire_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
-	LIST_FOR_EACH_ENTRY(r, &hw->vsi_ctx[vsi_handle]->rss_list_head,
+	ice_acquire_lock(&hw->rss_locks);
+	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry)
-		if (r->packet_hdr == hdrs) {
+		if (ice_is_bit_set(r->vsis, vsi_handle) &&
+		    r->packet_hdr == hdrs) {
 			rss_cfg = r;
 			break;
 		}
-	ice_release_lock(&hw->vsi_ctx[vsi_handle]->rss_locks);
+	ice_release_lock(&hw->rss_locks);
 
 	return rss_cfg ? rss_cfg->hashed_flds : ICE_HASH_INVALID;
 }
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index f0c74a348..4fa13064e 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -270,6 +270,8 @@ struct ice_flow_prof {
 
 struct ice_rss_cfg {
 	struct LIST_ENTRY_TYPE l_entry;
+	/* bitmap of VSIs added to the RSS entry */
+	ice_declare_bitmap(vsis, ICE_MAX_VSI);
 	u64 hashed_flds;
 	u32 packet_hdr;
 };
@@ -338,7 +340,7 @@ ice_flow_set_fld_prefix(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
 void
 ice_flow_add_fld_raw(struct ice_flow_seg_info *seg, u16 off, u8 len,
 		     u16 val_loc, u16 mask_loc);
-void ice_rem_all_rss_vsi_ctx(struct ice_hw *hw, u16 vsi_handle);
+void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status
 ice_add_avf_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds);
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 5ee4c5f03..0ad29dace 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -687,10 +687,7 @@ static void ice_clear_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
 
 	vsi = ice_get_vsi_ctx(hw, vsi_handle);
 	if (vsi) {
-		if (!LIST_EMPTY(&vsi->rss_list_head))
-			ice_rem_all_rss_vsi_ctx(hw, vsi_handle);
 		ice_clear_vsi_q_ctx(hw, vsi_handle);
-		ice_destroy_lock(&vsi->rss_locks);
 		ice_free(hw, vsi);
 		hw->vsi_ctx[vsi_handle] = NULL;
 	}
@@ -741,8 +738,7 @@ ice_add_vsi(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi_ctx,
 			return ICE_ERR_NO_MEMORY;
 		}
 		*tmp_vsi_ctx = *vsi_ctx;
-		ice_init_lock(&tmp_vsi_ctx->rss_locks);
-		INIT_LIST_HEAD(&tmp_vsi_ctx->rss_list_head);
+
 		ice_save_vsi_ctx(hw, vsi_handle, tmp_vsi_ctx);
 	} else {
 		/* update with new HW VSI num */
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index e3fb0434d..2f140a86d 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -32,8 +32,6 @@ struct ice_vsi_ctx {
 	u8 alloc_from_pool;
 	u16 num_lan_q_entries[ICE_MAX_TRAFFIC_CLASS];
 	struct ice_q_ctx *lan_q_ctx[ICE_MAX_TRAFFIC_CLASS];
-	struct ice_lock rss_locks;	/* protect rss config in VSI ctx */
-	struct LIST_HEAD_TYPE rss_list_head;
 };
 
 /* This is to be used by add/update mirror rule Admin Queue command */
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index b1682c5bb..63ef5bb46 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -805,6 +805,9 @@ struct ice_hw {
 	u16 fdir_fltr_cnt[ICE_FLTR_PTYPE_MAX];
 
 	struct ice_fd_hw_prof **fdir_prof;
+	ice_declare_bitmap(fdir_perfect_fltr, ICE_FLTR_PTYPE_MAX);
+	struct ice_lock rss_locks;	/* protect RSS configuration */
+	struct LIST_HEAD_TYPE rss_list_head;
 };
 
 /* Statistics collected by each port, VSI, VEB, and S-channel */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 12/69] net/ice/base: cache the data of set PHY cfg AQ in SW
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (10 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 11/69] net/ice/base: move RSS replay list Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 13/69] net/ice/base: refactor HW table init function Leyi Rong
                       ` (57 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

After a cable-unplug to cable-plug transition, FW clears the
set-phy-cfg data previously sent by the user. Thus, we need to cache
this information:
1. The data submitted when set-phy-cfg is called. This will be used
later to check whether FW has cleared the PHY settings requested by the
user.
2. The FC, FEC and link speed requested by the user. The device driver
will use this later to construct new input data for the set-phy-cfg AQ
command.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 119 +++++++++++++++++++++++-------
 drivers/net/ice/base/ice_common.h |   2 +-
 drivers/net/ice/base/ice_type.h   |  31 ++++++--
 drivers/net/ice/ice_ethdev.c      |   2 +-
 4 files changed, 122 insertions(+), 32 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 09296ead2..a0ab25aef 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -270,21 +270,23 @@ enum ice_status
 ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
 		     struct ice_link_status *link, struct ice_sq_cd *cd)
 {
-	struct ice_link_status *hw_link_info_old, *hw_link_info;
 	struct ice_aqc_get_link_status_data link_data = { 0 };
 	struct ice_aqc_get_link_status *resp;
+	struct ice_link_status *li_old, *li;
 	enum ice_media_type *hw_media_type;
 	struct ice_fc_info *hw_fc_info;
 	bool tx_pause, rx_pause;
 	struct ice_aq_desc desc;
 	enum ice_status status;
+	struct ice_hw *hw;
 	u16 cmd_flags;
 
 	if (!pi)
 		return ICE_ERR_PARAM;
-	hw_link_info_old = &pi->phy.link_info_old;
+	hw = pi->hw;
+	li_old = &pi->phy.link_info_old;
 	hw_media_type = &pi->phy.media_type;
-	hw_link_info = &pi->phy.link_info;
+	li = &pi->phy.link_info;
 	hw_fc_info = &pi->fc;
 
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_status);
@@ -293,27 +295,27 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
 	resp->cmd_flags = CPU_TO_LE16(cmd_flags);
 	resp->lport_num = pi->lport;
 
-	status = ice_aq_send_cmd(pi->hw, &desc, &link_data, sizeof(link_data),
-				 cd);
+	status = ice_aq_send_cmd(hw, &desc, &link_data, sizeof(link_data), cd);
 
 	if (status != ICE_SUCCESS)
 		return status;
 
 	/* save off old link status information */
-	*hw_link_info_old = *hw_link_info;
+	*li_old = *li;
 
 	/* update current link status information */
-	hw_link_info->link_speed = LE16_TO_CPU(link_data.link_speed);
-	hw_link_info->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
-	hw_link_info->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
+	li->link_speed = LE16_TO_CPU(link_data.link_speed);
+	li->phy_type_low = LE64_TO_CPU(link_data.phy_type_low);
+	li->phy_type_high = LE64_TO_CPU(link_data.phy_type_high);
 	*hw_media_type = ice_get_media_type(pi);
-	hw_link_info->link_info = link_data.link_info;
-	hw_link_info->an_info = link_data.an_info;
-	hw_link_info->ext_info = link_data.ext_info;
-	hw_link_info->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
-	hw_link_info->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
-	hw_link_info->topo_media_conflict = link_data.topo_media_conflict;
-	hw_link_info->pacing = link_data.cfg & ICE_AQ_CFG_PACING_M;
+	li->link_info = link_data.link_info;
+	li->an_info = link_data.an_info;
+	li->ext_info = link_data.ext_info;
+	li->max_frame_size = LE16_TO_CPU(link_data.max_frame_size);
+	li->fec_info = link_data.cfg & ICE_AQ_FEC_MASK;
+	li->topo_media_conflict = link_data.topo_media_conflict;
+	li->pacing = link_data.cfg & (ICE_AQ_CFG_PACING_M |
+				      ICE_AQ_CFG_PACING_TYPE_M);
 
 	/* update fc info */
 	tx_pause = !!(link_data.an_info & ICE_AQ_LINK_PAUSE_TX);
@@ -327,13 +329,24 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
 	else
 		hw_fc_info->current_mode = ICE_FC_NONE;
 
-	hw_link_info->lse_ena =
-		!!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
-
+	li->lse_ena = !!(resp->cmd_flags & CPU_TO_LE16(ICE_AQ_LSE_IS_ENABLED));
+
+	ice_debug(hw, ICE_DBG_LINK, "link_speed = 0x%x\n", li->link_speed);
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_low = 0x%llx\n",
+		  (unsigned long long)li->phy_type_low);
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_high = 0x%llx\n",
+		  (unsigned long long)li->phy_type_high);
+	ice_debug(hw, ICE_DBG_LINK, "media_type = 0x%x\n", *hw_media_type);
+	ice_debug(hw, ICE_DBG_LINK, "link_info = 0x%x\n", li->link_info);
+	ice_debug(hw, ICE_DBG_LINK, "an_info = 0x%x\n", li->an_info);
+	ice_debug(hw, ICE_DBG_LINK, "ext_info = 0x%x\n", li->ext_info);
+	ice_debug(hw, ICE_DBG_LINK, "lse_ena = 0x%x\n", li->lse_ena);
+	ice_debug(hw, ICE_DBG_LINK, "max_frame = 0x%x\n", li->max_frame_size);
+	ice_debug(hw, ICE_DBG_LINK, "pacing = 0x%x\n", li->pacing);
 
 	/* save link status information */
 	if (link)
-		*link = *hw_link_info;
+		*link = *li;
 
 	/* flag cleared so calling functions don't call AQ again */
 	pi->phy.get_link_info = false;
@@ -2412,7 +2425,7 @@ ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
 /**
  * ice_aq_set_phy_cfg
  * @hw: pointer to the HW struct
- * @lport: logical port number
+ * @pi: port info structure of the interested logical port
  * @cfg: structure with PHY configuration data to be set
  * @cd: pointer to command details structure or NULL
  *
@@ -2422,10 +2435,11 @@ ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
  * parameters. This status will be indicated by the command response (0x0601).
  */
 enum ice_status
-ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
 		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd)
 {
 	struct ice_aq_desc desc;
+	enum ice_status status;
 
 	if (!cfg)
 		return ICE_ERR_PARAM;
@@ -2440,10 +2454,26 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
 	}
 
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_cfg);
-	desc.params.set_phy.lport_num = lport;
+	desc.params.set_phy.lport_num = pi->lport;
 	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
 
-	return ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_low = 0x%llx\n",
+		  (unsigned long long)LE64_TO_CPU(cfg->phy_type_low));
+	ice_debug(hw, ICE_DBG_LINK, "phy_type_high = 0x%llx\n",
+		  (unsigned long long)LE64_TO_CPU(cfg->phy_type_high));
+	ice_debug(hw, ICE_DBG_LINK, "caps = 0x%x\n", cfg->caps);
+	ice_debug(hw, ICE_DBG_LINK, "low_power_ctrl = 0x%x\n",
+		  cfg->low_power_ctrl);
+	ice_debug(hw, ICE_DBG_LINK, "eee_cap = 0x%x\n", cfg->eee_cap);
+	ice_debug(hw, ICE_DBG_LINK, "eeer_value = 0x%x\n", cfg->eeer_value);
+	ice_debug(hw, ICE_DBG_LINK, "link_fec_opt = 0x%x\n", cfg->link_fec_opt);
+
+	status = ice_aq_send_cmd(hw, &desc, cfg, sizeof(*cfg), cd);
+
+	if (!status)
+		pi->phy.curr_user_phy_cfg = *cfg;
+
+	return status;
 }
 
 /**
@@ -2487,6 +2517,38 @@ enum ice_status ice_update_link_info(struct ice_port_info *pi)
 	return status;
 }
 
+/**
+ * ice_cache_phy_user_req
+ * @pi: port information structure
+ * @cache_data: PHY logging data
+ * @cache_mode: PHY logging mode
+ *
+ * Log the user request on (FC, FEC, SPEED) for later use.
+ */
+static void
+ice_cache_phy_user_req(struct ice_port_info *pi,
+		       struct ice_phy_cache_mode_data cache_data,
+		       enum ice_phy_cache_mode cache_mode)
+{
+	if (!pi)
+		return;
+
+	switch (cache_mode) {
+	case ICE_FC_MODE:
+		pi->phy.curr_user_fc_req = cache_data.data.curr_user_fc_req;
+		break;
+	case ICE_SPEED_MODE:
+		pi->phy.curr_user_speed_req =
+			cache_data.data.curr_user_speed_req;
+		break;
+	case ICE_FEC_MODE:
+		pi->phy.curr_user_fec_req = cache_data.data.curr_user_fec_req;
+		break;
+	default:
+		break;
+	}
+}
+
 /**
  * ice_set_fc
  * @pi: port information structure
@@ -2499,6 +2561,7 @@ enum ice_status
 ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 {
 	struct ice_aqc_set_phy_cfg_data cfg = { 0 };
+	struct ice_phy_cache_mode_data cache_data;
 	struct ice_aqc_get_phy_caps_data *pcaps;
 	enum ice_status status;
 	u8 pause_mask = 0x0;
@@ -2509,6 +2572,10 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 	hw = pi->hw;
 	*aq_failures = ICE_SET_FC_AQ_FAIL_NONE;
 
+	/* Cache user FC request */
+	cache_data.data.curr_user_fc_req = pi->fc.req_mode;
+	ice_cache_phy_user_req(pi, cache_data, ICE_FC_MODE);
+
 	switch (pi->fc.req_mode) {
 	case ICE_FC_FULL:
 		pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE;
@@ -2540,8 +2607,10 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 	/* clear the old pause settings */
 	cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE |
 				   ICE_AQC_PHY_EN_RX_LINK_PAUSE);
+
 	/* set the new capabilities */
 	cfg.caps |= pause_mask;
+
 	/* If the capabilities have changed, then set the new config */
 	if (cfg.caps != pcaps->caps) {
 		int retry_count, retry_max = 10;
@@ -2557,7 +2626,7 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update)
 		cfg.eeer_value = pcaps->eeer_value;
 		cfg.link_fec_opt = pcaps->link_fec_options;
 
-		status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+		status = ice_aq_set_phy_cfg(hw, pi, &cfg, NULL);
 		if (status) {
 			*aq_failures = ICE_SET_FC_AQ_FAIL_SET;
 			goto out;
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index aee754b85..cccb5f009 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -134,7 +134,7 @@ ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags,
 
 enum ice_status ice_clear_pf_cfg(struct ice_hw *hw);
 enum ice_status
-ice_aq_set_phy_cfg(struct ice_hw *hw, u8 lport,
+ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
 		   struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd);
 enum ice_status
 ice_set_fc(struct ice_port_info *pi, u8 *aq_failures,
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 63ef5bb46..bc1ba60d1 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -136,6 +136,12 @@ enum ice_fc_mode {
 	ICE_FC_DFLT
 };
 
+enum ice_phy_cache_mode {
+	ICE_FC_MODE = 0,
+	ICE_SPEED_MODE,
+	ICE_FEC_MODE
+};
+
 enum ice_fec_mode {
 	ICE_FEC_NONE = 0,
 	ICE_FEC_RS,
@@ -143,6 +149,14 @@ enum ice_fec_mode {
 	ICE_FEC_AUTO
 };
 
+struct ice_phy_cache_mode_data {
+	union {
+		enum ice_fec_mode curr_user_fec_req;
+		enum ice_fc_mode curr_user_fc_req;
+		u16 curr_user_speed_req;
+	} data;
+};
+
 enum ice_set_fc_aq_failures {
 	ICE_SET_FC_AQ_FAIL_NONE = 0,
 	ICE_SET_FC_AQ_FAIL_GET,
@@ -220,6 +234,13 @@ struct ice_phy_info {
 	u64 phy_type_high;
 	enum ice_media_type media_type;
 	u8 get_link_info;
+	/* Please refer to struct ice_aqc_get_link_status_data to get
+	 * detail of enable bit in curr_user_speed_req
+	 */
+	u16 curr_user_speed_req;
+	enum ice_fec_mode curr_user_fec_req;
+	enum ice_fc_mode curr_user_fc_req;
+	struct ice_aqc_set_phy_cfg_data curr_user_phy_cfg;
 };
 
 #define ICE_MAX_NUM_MIRROR_RULES	64
@@ -636,6 +657,8 @@ struct ice_port_info {
 	u8 port_state;
 #define ICE_SCHED_PORT_STATE_INIT	0x0
 #define ICE_SCHED_PORT_STATE_READY	0x1
+	u8 lport;
+#define ICE_LPORT_MASK			0xff
 	u16 dflt_tx_vsi_rule_id;
 	u16 dflt_tx_vsi_num;
 	u16 dflt_rx_vsi_rule_id;
@@ -651,11 +674,9 @@ struct ice_port_info {
 	struct ice_dcbx_cfg remote_dcbx_cfg;	/* Peer Cfg */
 	struct ice_dcbx_cfg desired_dcbx_cfg;	/* CEE Desired Cfg */
 	/* LLDP/DCBX Status */
-	u8 dcbx_status;
-	u8 is_sw_lldp;
-	u8 lport;
-#define ICE_LPORT_MASK		0xff
-	u8 is_vf;
+	u8 dcbx_status:3;		/* see ICE_DCBX_STATUS_DIS */
+	u8 is_sw_lldp:1;
+	u8 is_vf:1;
 };
 
 struct ice_switch_info {
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 713161bf4..5968604b4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2296,7 +2296,7 @@ ice_force_phys_link_state(struct ice_hw *hw, bool link_up)
 	else
 		cfg.caps &= ~ICE_AQ_PHY_ENA_LINK;
 
-	status = ice_aq_set_phy_cfg(hw, pi->lport, &cfg, NULL);
+	status = ice_aq_set_phy_cfg(hw, pi, &cfg, NULL);
 
 out:
 	ice_free(hw, pcaps);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 13/69] net/ice/base: refactor HW table init function
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (11 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 12/69] net/ice/base: cache the data of set PHY cfg AQ in SW Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 14/69] net/ice/base: add lock around profile map list Leyi Rong
                       ` (56 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

1. Separate the calls that initialize and allocate the HW XLT tables
from the call that fills them. This allows ice_init_hw_tbls to be
called prior to package download so that all HW structures are
correctly initialized, avoiding invalid memory references on driver
unload if package download fails.
2. Fill HW tables with package content after a successful package
download.
3. Free HW table and flow profile allocations when unloading the driver.
4. Add a flag to the block structure indicating whether the lists in
the block are initialized. This avoids NULL references when releasing
flow profiles that may have been freed by previous calls to free the
tables.

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c    |   6 +-
 drivers/net/ice/base/ice_flex_pipe.c | 282 ++++++++++++++-------------
 drivers/net/ice/base/ice_flex_pipe.h |   1 +
 drivers/net/ice/base/ice_flex_type.h |   1 +
 4 files changed, 149 insertions(+), 141 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index a0ab25aef..62c7fad0d 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -916,12 +916,13 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 
 	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC);
 	ice_init_flex_flds(hw, ICE_RXDID_FLEX_NIC_2);
-
 	/* Obtain counter base index which would be used by flow director */
 	status = ice_alloc_fd_res_cntr(hw, &hw->fd_ctr_base);
 	if (status)
 		goto err_unroll_fltr_mgmt_struct;
-
+	status = ice_init_hw_tbls(hw);
+	if (status)
+		goto err_unroll_fltr_mgmt_struct;
 	return ICE_SUCCESS;
 
 err_unroll_fltr_mgmt_struct:
@@ -952,6 +953,7 @@ void ice_deinit_hw(struct ice_hw *hw)
 	ice_sched_cleanup_all(hw);
 	ice_sched_clear_agg(hw);
 	ice_free_seg(hw);
+	ice_free_hw_tbls(hw);
 
 	if (hw->port_info) {
 		ice_free(hw, hw->port_info);
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index babad94f8..dd9a6fcb4 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1375,10 +1375,12 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 
 	if (!status) {
 		hw->seg = seg;
-		/* on successful package download, update other required
-		 * registers to support the package
+		/* on successful package download update other required
+		 * registers to support the package and fill HW tables
+		 * with package content.
 		 */
 		ice_init_pkg_regs(hw);
+		ice_fill_blk_tbls(hw);
 	} else {
 		ice_debug(hw, ICE_DBG_INIT, "package load failed, %d\n",
 			  status);
@@ -2755,6 +2757,65 @@ static const u32 ice_blk_sids[ICE_BLK_COUNT][ICE_SID_OFF_COUNT] = {
 	}
 };
 
+/**
+ * ice_init_sw_xlt1_db - init software XLT1 database from HW tables
+ * @hw: pointer to the hardware structure
+ * @blk: the HW block to initialize
+ */
+static
+void ice_init_sw_xlt1_db(struct ice_hw *hw, enum ice_block blk)
+{
+	u16 pt;
+
+	for (pt = 0; pt < hw->blk[blk].xlt1.count; pt++) {
+		u8 ptg;
+
+		ptg = hw->blk[blk].xlt1.t[pt];
+		if (ptg != ICE_DEFAULT_PTG) {
+			ice_ptg_alloc_val(hw, blk, ptg);
+			ice_ptg_add_mv_ptype(hw, blk, pt, ptg);
+		}
+	}
+}
+
+/**
+ * ice_init_sw_xlt2_db - init software XLT2 database from HW tables
+ * @hw: pointer to the hardware structure
+ * @blk: the HW block to initialize
+ */
+static void ice_init_sw_xlt2_db(struct ice_hw *hw, enum ice_block blk)
+{
+	u16 vsi;
+
+	for (vsi = 0; vsi < hw->blk[blk].xlt2.count; vsi++) {
+		u16 vsig;
+
+		vsig = hw->blk[blk].xlt2.t[vsi];
+		if (vsig) {
+			ice_vsig_alloc_val(hw, blk, vsig);
+			ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
+			/* no changes at this time, since this has been
+			 * initialized from the original package
+			 */
+			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
+		}
+	}
+}
+
+/**
+ * ice_init_sw_db - init software database from HW tables
+ * @hw: pointer to the hardware structure
+ */
+static void ice_init_sw_db(struct ice_hw *hw)
+{
+	u16 i;
+
+	for (i = 0; i < ICE_BLK_COUNT; i++) {
+		ice_init_sw_xlt1_db(hw, (enum ice_block)i);
+		ice_init_sw_xlt2_db(hw, (enum ice_block)i);
+	}
+}
+
 /**
  * ice_fill_tbl - Reads content of a single table type into database
  * @hw: pointer to the hardware structure
@@ -2853,12 +2914,12 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
 		case ICE_SID_FLD_VEC_PE:
 			es = (struct ice_sw_fv_section *)sect;
 			src = (u8 *)es->fv;
-			sect_len = LE16_TO_CPU(es->count) *
-				hw->blk[block_id].es.fvw *
+			sect_len = (u32)(LE16_TO_CPU(es->count) *
+					 hw->blk[block_id].es.fvw) *
 				sizeof(*hw->blk[block_id].es.t);
 			dst = (u8 *)hw->blk[block_id].es.t;
-			dst_len = hw->blk[block_id].es.count *
-				hw->blk[block_id].es.fvw *
+			dst_len = (u32)(hw->blk[block_id].es.count *
+					hw->blk[block_id].es.fvw) *
 				sizeof(*hw->blk[block_id].es.t);
 			break;
 		default:
@@ -2886,75 +2947,61 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
 }
 
 /**
- * ice_fill_blk_tbls - Read package content for tables of a block
+ * ice_fill_blk_tbls - Read package content for tables
  * @hw: pointer to the hardware structure
- * @block_id: The block ID which contains the tables to be copied
  *
  * Reads the current package contents and populates the driver
- * database with the data it contains to allow for advanced driver
- * features.
- */
-static void ice_fill_blk_tbls(struct ice_hw *hw, enum ice_block block_id)
-{
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].xlt1.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].xlt2.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].prof.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].prof_redir.sid);
-	ice_fill_tbl(hw, block_id, hw->blk[block_id].es.sid);
-}
-
-/**
- * ice_free_flow_profs - free flow profile entries
- * @hw: pointer to the hardware structure
+ * database with the data iteratively for all advanced feature
+ * blocks. Assume that the HW tables have been allocated.
  */
-static void ice_free_flow_profs(struct ice_hw *hw)
+void ice_fill_blk_tbls(struct ice_hw *hw)
 {
 	u8 i;
 
 	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		struct ice_flow_prof *p, *tmp;
-
-		if (!&hw->fl_profs[i])
-			continue;
-
-		/* This call is being made as part of resource deallocation
-		 * during unload. Lock acquire and release will not be
-		 * necessary here.
-		 */
-		LIST_FOR_EACH_ENTRY_SAFE(p, tmp, &hw->fl_profs[i],
-					 ice_flow_prof, l_entry) {
-			struct ice_flow_entry *e, *t;
-
-			LIST_FOR_EACH_ENTRY_SAFE(e, t, &p->entries,
-						 ice_flow_entry, l_entry)
-				ice_flow_rem_entry(hw, ICE_FLOW_ENTRY_HNDL(e));
-
-			LIST_DEL(&p->l_entry);
-			if (p->acts)
-				ice_free(hw, p->acts);
-			ice_free(hw, p);
-		}
+		enum ice_block blk_id = (enum ice_block)i;
 
-		ice_destroy_lock(&hw->fl_profs_locks[i]);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].xlt1.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].xlt2.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].prof.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].prof_redir.sid);
+		ice_fill_tbl(hw, blk_id, hw->blk[blk_id].es.sid);
 	}
+
+	ice_init_sw_db(hw);
 }
 
 /**
- * ice_free_prof_map - frees the profile map
+ * ice_free_flow_profs - free flow profile entries
  * @hw: pointer to the hardware structure
- * @blk: the HW block which contains the profile map to be freed
+ * @blk_idx: HW block index
  */
-static void ice_free_prof_map(struct ice_hw *hw, enum ice_block blk)
+static void ice_free_flow_profs(struct ice_hw *hw, u8 blk_idx)
 {
-	struct ice_prof_map *del, *tmp;
+	struct ice_flow_prof *p, *tmp;
 
-	if (LIST_EMPTY(&hw->blk[blk].es.prof_map))
-		return;
+	/* This call is being made as part of resource deallocation
+	 * during unload. Lock acquire and release will not be
+	 * necessary here.
+	 */
+	LIST_FOR_EACH_ENTRY_SAFE(p, tmp, &hw->fl_profs[blk_idx],
+				 ice_flow_prof, l_entry) {
+		struct ice_flow_entry *e, *t;
+
+		LIST_FOR_EACH_ENTRY_SAFE(e, t, &p->entries,
+					 ice_flow_entry, l_entry)
+			ice_flow_rem_entry(hw, ICE_FLOW_ENTRY_HNDL(e));
 
-	LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &hw->blk[blk].es.prof_map,
-				 ice_prof_map, list) {
-		ice_rem_prof(hw, blk, del->profile_cookie);
+		LIST_DEL(&p->l_entry);
+		if (p->acts)
+			ice_free(hw, p->acts);
+		ice_free(hw, p);
 	}
+
+	/* if driver is in reset and tables are being cleared
+	 * re-initialize the flow profile list heads
+	 */
+	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
 /**
@@ -2980,10 +3027,24 @@ static void ice_free_vsig_tbl(struct ice_hw *hw, enum ice_block blk)
  */
 void ice_free_hw_tbls(struct ice_hw *hw)
 {
+	struct ice_rss_cfg *r, *rt;
 	u8 i;
 
 	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		ice_free_prof_map(hw, (enum ice_block)i);
+		if (hw->blk[i].is_list_init) {
+			struct ice_es *es = &hw->blk[i].es;
+			struct ice_prof_map *del, *tmp;
+
+			LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &es->prof_map,
+						 ice_prof_map, list) {
+				LIST_DEL(&del->list);
+				ice_free(hw, del);
+			}
+
+			ice_free_flow_profs(hw, i);
+			ice_destroy_lock(&hw->fl_profs_locks[i]);
+			hw->blk[i].is_list_init = false;
+		}
 		ice_free_vsig_tbl(hw, (enum ice_block)i);
 		ice_free(hw, hw->blk[i].xlt1.ptypes);
 		ice_free(hw, hw->blk[i].xlt1.ptg_tbl);
@@ -2998,84 +3059,24 @@ void ice_free_hw_tbls(struct ice_hw *hw)
 		ice_free(hw, hw->blk[i].es.written);
 	}
 
+	LIST_FOR_EACH_ENTRY_SAFE(r, rt, &hw->rss_list_head,
+				 ice_rss_cfg, l_entry) {
+		LIST_DEL(&r->l_entry);
+		ice_free(hw, r);
+	}
+	ice_destroy_lock(&hw->rss_locks);
 	ice_memset(hw->blk, 0, sizeof(hw->blk), ICE_NONDMA_MEM);
-
-	ice_free_flow_profs(hw);
 }
 
 /**
  * ice_init_flow_profs - init flow profile locks and list heads
  * @hw: pointer to the hardware structure
+ * @blk_idx: HW block index
  */
-static void ice_init_flow_profs(struct ice_hw *hw)
+static void ice_init_flow_profs(struct ice_hw *hw, u8 blk_idx)
 {
-	u8 i;
-
-	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		ice_init_lock(&hw->fl_profs_locks[i]);
-		INIT_LIST_HEAD(&hw->fl_profs[i]);
-	}
-}
-
-/**
- * ice_init_sw_xlt1_db - init software XLT1 database from HW tables
- * @hw: pointer to the hardware structure
- * @blk: the HW block to initialize
- */
-static
-void ice_init_sw_xlt1_db(struct ice_hw *hw, enum ice_block blk)
-{
-	u16 pt;
-
-	for (pt = 0; pt < hw->blk[blk].xlt1.count; pt++) {
-		u8 ptg;
-
-		ptg = hw->blk[blk].xlt1.t[pt];
-		if (ptg != ICE_DEFAULT_PTG) {
-			ice_ptg_alloc_val(hw, blk, ptg);
-			ice_ptg_add_mv_ptype(hw, blk, pt, ptg);
-		}
-	}
-}
-
-/**
- * ice_init_sw_xlt2_db - init software XLT2 database from HW tables
- * @hw: pointer to the hardware structure
- * @blk: the HW block to initialize
- */
-static
-void ice_init_sw_xlt2_db(struct ice_hw *hw, enum ice_block blk)
-{
-	u16 vsi;
-
-	for (vsi = 0; vsi < hw->blk[blk].xlt2.count; vsi++) {
-		u16 vsig;
-
-		vsig = hw->blk[blk].xlt2.t[vsi];
-		if (vsig) {
-			ice_vsig_alloc_val(hw, blk, vsig);
-			ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
-			/* no changes at this time, since this has been
-			 * initialized from the original package
-			 */
-			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
-		}
-	}
-}
-
-/**
- * ice_init_sw_db - init software database from HW tables
- * @hw: pointer to the hardware structure
- */
-static
-void ice_init_sw_db(struct ice_hw *hw)
-{
-	u16 i;
-
-	for (i = 0; i < ICE_BLK_COUNT; i++) {
-		ice_init_sw_xlt1_db(hw, (enum ice_block)i);
-		ice_init_sw_xlt2_db(hw, (enum ice_block)i);
-	}
+	ice_init_lock(&hw->fl_profs_locks[blk_idx]);
+	INIT_LIST_HEAD(&hw->fl_profs[blk_idx]);
 }
 
 /**
@@ -3086,14 +3087,22 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 {
 	u8 i;
 
-	ice_init_flow_profs(hw);
-
+	ice_init_lock(&hw->rss_locks);
+	INIT_LIST_HEAD(&hw->rss_list_head);
 	for (i = 0; i < ICE_BLK_COUNT; i++) {
 		struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir;
 		struct ice_prof_tcam *prof = &hw->blk[i].prof;
 		struct ice_xlt1 *xlt1 = &hw->blk[i].xlt1;
 		struct ice_xlt2 *xlt2 = &hw->blk[i].xlt2;
 		struct ice_es *es = &hw->blk[i].es;
+		u16 j;
+
+		if (hw->blk[i].is_list_init)
+			continue;
+
+		ice_init_flow_profs(hw, i);
+		INIT_LIST_HEAD(&es->prof_map);
+		hw->blk[i].is_list_init = true;
 
 		hw->blk[i].overwrite = blk_sizes[i].overwrite;
 		es->reverse = blk_sizes[i].reverse;
@@ -3131,6 +3140,9 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 		if (!xlt2->vsig_tbl)
 			goto err;
 
+		for (j = 0; j < xlt2->count; j++)
+			INIT_LIST_HEAD(&xlt2->vsig_tbl[j].prop_lst);
+
 		xlt2->t = (u16 *)ice_calloc(hw, xlt2->count, sizeof(*xlt2->t));
 		if (!xlt2->t)
 			goto err;
@@ -3157,8 +3169,8 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 		es->count = blk_sizes[i].es;
 		es->fvw = blk_sizes[i].fvw;
 		es->t = (struct ice_fv_word *)
-			ice_calloc(hw, es->count * es->fvw, sizeof(*es->t));
-
+			ice_calloc(hw, (u32)(es->count * es->fvw),
+				   sizeof(*es->t));
 		if (!es->t)
 			goto err;
 
@@ -3170,15 +3182,7 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 
 		if (!es->ref_count)
 			goto err;
-
-		INIT_LIST_HEAD(&es->prof_map);
-
-		/* Now that tables are allocated, read in package data */
-		ice_fill_blk_tbls(hw, (enum ice_block)i);
 	}
-
-	ice_init_sw_db(hw);
-
 	return ICE_SUCCESS;
 
 err:
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 2710dded6..e8cc9cef3 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -98,6 +98,7 @@ enum ice_status
 ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len);
 enum ice_status ice_init_hw_tbls(struct ice_hw *hw);
 void ice_free_seg(struct ice_hw *hw);
+void ice_fill_blk_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
 enum ice_status
 ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index f2a5f27e7..98c7637a5 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -675,6 +675,7 @@ struct ice_blk_info {
 	struct ice_prof_redir prof_redir;
 	struct ice_es es;
 	u8 overwrite; /* set to true to allow overwrite of table entries */
+	u8 is_list_init;
 };
 
 enum ice_chg_type {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 14/69] net/ice/base: add lock around profile map list
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (12 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 13/69] net/ice/base: refactor HW table init function Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 15/69] net/ice/base: add compatibility check for package version Leyi Rong
                       ` (55 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add a locking mechanism around the profile map list.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 62 +++++++++++++++++++++-------
 drivers/net/ice/base/ice_flex_type.h |  5 ++-
 2 files changed, 49 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index dd9a6fcb4..eccc9b26c 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -3041,6 +3041,7 @@ void ice_free_hw_tbls(struct ice_hw *hw)
 				ice_free(hw, del);
 			}
 
+			ice_destroy_lock(&es->prof_map_lock);
 			ice_free_flow_profs(hw, i);
 			ice_destroy_lock(&hw->fl_profs_locks[i]);
 			hw->blk[i].is_list_init = false;
@@ -3101,6 +3102,7 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw)
 			continue;
 
 		ice_init_flow_profs(hw, i);
+		ice_init_lock(&es->prof_map_lock);
 		INIT_LIST_HEAD(&es->prof_map);
 		hw->blk[i].is_list_init = true;
 
@@ -3843,6 +3845,8 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 	u32 byte = 0;
 	u8 prof_id;
 
+	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
+
 	/* search for existing profile */
 	status = ice_find_prof_id(hw, blk, es, &prof_id);
 	if (status) {
@@ -3914,24 +3918,26 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 		bytes--;
 		byte++;
 	}
-	LIST_ADD(&prof->list, &hw->blk[blk].es.prof_map);
 
-	return ICE_SUCCESS;
+	LIST_ADD(&prof->list, &hw->blk[blk].es.prof_map);
+	status = ICE_SUCCESS;
 
 err_ice_add_prof:
+	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
 	return status;
 }
 
 /**
- * ice_search_prof_id - Search for a profile tracking ID
+ * ice_search_prof_id_low - Search for a profile tracking ID low level
  * @hw: pointer to the HW struct
  * @blk: hardware block
  * @id: profile tracking ID
  *
- * This will search for a profile tracking ID which was previously added.
+ * This will search for a profile tracking ID which was previously added. This
+ * version assumes that the caller has already acquired the prof map lock.
  */
-struct ice_prof_map *
-ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
+static struct ice_prof_map *
+ice_search_prof_id_low(struct ice_hw *hw, enum ice_block blk, u64 id)
 {
 	struct ice_prof_map *entry = NULL;
 	struct ice_prof_map *map;
@@ -3947,6 +3953,26 @@ ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
 	return entry;
 }
 
+/**
+ * ice_search_prof_id - Search for a profile tracking ID
+ * @hw: pointer to the HW struct
+ * @blk: hardware block
+ * @id: profile tracking ID
+ *
+ * This will search for a profile tracking ID which was previously added.
+ */
+struct ice_prof_map *
+ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
+{
+	struct ice_prof_map *entry;
+
+	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
+	entry = ice_search_prof_id_low(hw, blk, id);
+	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
+
+	return entry;
+}
+
 /**
  * ice_set_prof_context - Set context for a given profile
  * @hw: pointer to the HW struct
@@ -4199,29 +4225,33 @@ ice_rem_flow_all(struct ice_hw *hw, enum ice_block blk, u64 id)
  */
 enum ice_status ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id)
 {
-	enum ice_status status;
 	struct ice_prof_map *pmap;
+	enum ice_status status;
 
-	pmap = ice_search_prof_id(hw, blk, id);
-	if (!pmap)
-		return ICE_ERR_DOES_NOT_EXIST;
+	ice_acquire_lock(&hw->blk[blk].es.prof_map_lock);
+
+	pmap = ice_search_prof_id_low(hw, blk, id);
+	if (!pmap) {
+		status = ICE_ERR_DOES_NOT_EXIST;
+		goto err_ice_rem_prof;
+	}
 
 	/* remove all flows with this profile */
 	status = ice_rem_flow_all(hw, blk, pmap->profile_cookie);
 	if (status)
-		return status;
+		goto err_ice_rem_prof;
 
-	/* remove profile */
-	status = ice_free_prof_id(hw, blk, pmap->prof_id);
-	if (status)
-		return status;
 	/* dereference profile, and possibly remove */
 	ice_prof_dec_ref(hw, blk, pmap->prof_id);
 
 	LIST_DEL(&pmap->list);
 	ice_free(hw, pmap);
 
-	return ICE_SUCCESS;
+	status = ICE_SUCCESS;
+
+err_ice_rem_prof:
+	ice_release_lock(&hw->blk[blk].es.prof_map_lock);
+	return status;
 }
 
 /**
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index 98c7637a5..7133983ff 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -503,10 +503,11 @@ struct ice_es {
 	u16 count;
 	u16 fvw;
 	u16 *ref_count;
-	u8 *written;
-	u8 reverse; /* set to true to reverse FV order */
 	struct LIST_HEAD_TYPE prof_map;
 	struct ice_fv_word *t;
+	struct ice_lock prof_map_lock;	/* protect access to profiles list */
+	u8 *written;
+	u8 reverse; /* set to true to reverse FV order */
 };
 
 /* PTYPE Group management */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 15/69] net/ice/base: add compatibility check for package version
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (13 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 14/69] net/ice/base: add lock around profile map list Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 16/69] net/ice/base: add API to init FW logging Leyi Rong
                       ` (54 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

1. Perform a check against the package version to make sure that
it will be compatible with the shared code implementation. There
will be points in time when the shared code and the package need
to change in lockstep; the mechanism added here handles those
situations.
2. Support package tunnel labels owned by the PF. The VXLAN and
GENEVE tunnel label names in the package are changing to
incorporate the PF that owns them.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 96 ++++++++++++++++++++++------
 drivers/net/ice/base/ice_flex_pipe.h |  8 +++
 drivers/net/ice/base/ice_flex_type.h | 10 ---
 3 files changed, 85 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index eccc9b26c..12e1eb366 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -7,19 +7,12 @@
 #include "ice_protocol_type.h"
 #include "ice_flow.h"
 
+/* To support tunneling entries by PF, the package will append the PF number to
+ * the label; for example TNL_VXLAN_PF0, TNL_VXLAN_PF1, TNL_VXLAN_PF2, etc.
+ */
 static const struct ice_tunnel_type_scan tnls[] = {
-	{ TNL_VXLAN,		"TNL_VXLAN" },
-	{ TNL_GTPC,		"TNL_GTPC" },
-	{ TNL_GTPC_TEID,	"TNL_GTPC_TEID" },
-	{ TNL_GTPU,		"TNL_GTPC" },
-	{ TNL_GTPU_TEID,	"TNL_GTPU_TEID" },
-	{ TNL_VXLAN_GPE,	"TNL_VXLAN_GPE" },
-	{ TNL_GENEVE,		"TNL_GENEVE" },
-	{ TNL_NAT,		"TNL_NAT" },
-	{ TNL_ROCE_V2,		"TNL_ROCE_V2" },
-	{ TNL_MPLSO_UDP,	"TNL_MPLSO_UDP" },
-	{ TNL_UDP2_END,		"TNL_UDP2_END" },
-	{ TNL_UPD_END,		"TNL_UPD_END" },
+	{ TNL_VXLAN,		"TNL_VXLAN_PF" },
+	{ TNL_GENEVE,		"TNL_GENEVE_PF" },
 	{ TNL_LAST,		"" }
 };
 
@@ -485,8 +478,17 @@ void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
 
 	while (label_name && hw->tnl.count < ICE_TUNNEL_MAX_ENTRIES) {
 		for (i = 0; tnls[i].type != TNL_LAST; i++) {
-			if (!strncmp(label_name, tnls[i].label_prefix,
-				     strlen(tnls[i].label_prefix))) {
+			size_t len = strlen(tnls[i].label_prefix);
+
+			/* Look for matching label start, before continuing */
+			if (strncmp(label_name, tnls[i].label_prefix, len))
+				continue;
+
+			/* Make sure this label matches our PF. Note that the PF
+			 * character ('0' - '7') will be located where our
+			 * prefix string's null terminator is located.
+			 */
+			if ((label_name[len] - '0') == hw->pf_id) {
 				hw->tnl.tbl[hw->tnl.count].type = tnls[i].type;
 				hw->tnl.tbl[hw->tnl.count].valid = false;
 				hw->tnl.tbl[hw->tnl.count].in_use = false;
@@ -1083,12 +1085,8 @@ enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 {
-	struct ice_aqc_get_pkg_info_resp *pkg_info;
 	struct ice_global_metadata_seg *meta_seg;
 	struct ice_generic_seg_hdr *seg_hdr;
-	enum ice_status status;
-	u16 size;
-	u32 i;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 	if (!pkg_hdr)
@@ -1127,7 +1125,25 @@ ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 		return ICE_ERR_CFG;
 	}
 
-#define ICE_PKG_CNT	4
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_get_pkg_info
+ * @hw: pointer to the hardware structure
+ *
+ * Store details of the package currently loaded in HW into the HW structure.
+ */
+enum ice_status
+ice_get_pkg_info(struct ice_hw *hw)
+{
+	struct ice_aqc_get_pkg_info_resp *pkg_info;
+	enum ice_status status;
+	u16 size;
+	u32 i;
+
+	ice_debug(hw, ICE_DBG_TRACE, "ice_init_pkg_info\n");
+
 	size = sizeof(*pkg_info) + (sizeof(pkg_info->pkg_info[0]) *
 				    (ICE_PKG_CNT - 1));
 	pkg_info = (struct ice_aqc_get_pkg_info_resp *)ice_malloc(hw, size);
@@ -1310,6 +1326,32 @@ static void ice_init_pkg_regs(struct ice_hw *hw)
 	ice_init_fd_mask_regs(hw);
 }
 
+/**
+ * ice_chk_pkg_version - check package version for compatibility with driver
+ * @hw: pointer to the hardware structure
+ * @pkg_ver: pointer to a version structure to check
+ *
+ * Check to make sure that the package about to be downloaded is compatible with
+ * the driver. To be compatible, the major and minor components of the package
+ * version must match our ICE_PKG_SUPP_VER_MAJ and ICE_PKG_SUPP_VER_MNR
+ * definitions.
+ */
+static enum ice_status
+ice_chk_pkg_version(struct ice_hw *hw, struct ice_pkg_ver *pkg_ver)
+{
+	if (pkg_ver->major != ICE_PKG_SUPP_VER_MAJ ||
+	    pkg_ver->minor != ICE_PKG_SUPP_VER_MNR) {
+		ice_info(hw, "ERROR: Incompatible package: %d.%d.%d.%d - requires package version: %d.%d.*.*\n",
+			 pkg_ver->major, pkg_ver->minor, pkg_ver->update,
+			 pkg_ver->draft, ICE_PKG_SUPP_VER_MAJ,
+			 ICE_PKG_SUPP_VER_MNR);
+
+		return ICE_ERR_NOT_SUPPORTED;
+	}
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_init_pkg - initialize/download package
  * @hw: pointer to the hardware structure
@@ -1357,6 +1399,13 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 	if (status)
 		return status;
 
+	/* before downloading the package, check package version for
+	 * compatibility with driver
+	 */
+	status = ice_chk_pkg_version(hw, &hw->pkg_ver);
+	if (status)
+		return status;
+
 	/* find segment in given package */
 	seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg);
 	if (!seg) {
@@ -1373,6 +1422,15 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 		status = ICE_SUCCESS;
 	}
 
+	/* Get information on the package currently loaded in HW, then make sure
+	 * the driver is compatible with this version.
+	 */
+	if (!status) {
+		status = ice_get_pkg_info(hw);
+		if (!status)
+			status = ice_chk_pkg_version(hw, &hw->active_pkg_ver);
+	}
+
 	if (!status) {
 		hw->seg = seg;
 		/* on successful package download update other required
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index e8cc9cef3..375758c8d 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -7,12 +7,18 @@
 
 #include "ice_type.h"
 
+/* Package minimal version supported */
+#define ICE_PKG_SUPP_VER_MAJ	1
+#define ICE_PKG_SUPP_VER_MNR	2
+
 /* Package format version */
 #define ICE_PKG_FMT_VER_MAJ	1
 #define ICE_PKG_FMT_VER_MNR	0
 #define ICE_PKG_FMT_VER_UPD	0
 #define ICE_PKG_FMT_VER_DFT	0
 
+#define ICE_PKG_CNT 4
+
 enum ice_status
 ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count);
 enum ice_status
@@ -28,6 +34,8 @@ enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg);
 
 enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_header);
+enum ice_status
+ice_get_pkg_info(struct ice_hw *hw);
 
 void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg);
 
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index 7133983ff..d23b2ae82 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -455,17 +455,7 @@ struct ice_pkg_enum {
 
 enum ice_tunnel_type {
 	TNL_VXLAN = 0,
-	TNL_GTPC,
-	TNL_GTPC_TEID,
-	TNL_GTPU,
-	TNL_GTPU_TEID,
-	TNL_VXLAN_GPE,
 	TNL_GENEVE,
-	TNL_NAT,
-	TNL_ROCE_V2,
-	TNL_MPLSO_UDP,
-	TNL_UDP2_END,
-	TNL_UPD_END,
 	TNL_LAST = 0xFF,
 	TNL_ALL = 0xFF,
 };
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 16/69] net/ice/base: add API to init FW logging
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (14 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 15/69] net/ice/base: add compatibility check for package version Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 17/69] net/ice/base: use macro instead of magic 8 Leyi Rong
                       ` (53 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

In order to initialize the current status of FW logging, add
the API ice_get_fw_log_cfg. The function retrieves the current
FW logging settings from the HW and updates the ice_hw structure
accordingly.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h |  1 +
 drivers/net/ice/base/ice_common.c     | 48 +++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 7b0aa8aaa..739f79e88 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -2196,6 +2196,7 @@ enum ice_aqc_fw_logging_mod {
 	ICE_AQC_FW_LOG_ID_WATCHDOG,
 	ICE_AQC_FW_LOG_ID_TASK_DISPATCH,
 	ICE_AQC_FW_LOG_ID_MNG,
+	ICE_AQC_FW_LOG_ID_SYNCE,
 	ICE_AQC_FW_LOG_ID_MAX,
 };
 
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 62c7fad0d..7093ee4f4 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -582,6 +582,49 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
 #define ICE_FW_LOG_DESC_SIZE_MAX	\
 	ICE_FW_LOG_DESC_SIZE(ICE_AQC_FW_LOG_ID_MAX)
 
+/**
+ * ice_get_fw_log_cfg - get FW logging configuration
+ * @hw: pointer to the HW struct
+ */
+static enum ice_status ice_get_fw_log_cfg(struct ice_hw *hw)
+{
+	struct ice_aqc_fw_logging_data *config;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+	u16 size;
+
+	size = ICE_FW_LOG_DESC_SIZE_MAX;
+	config = (struct ice_aqc_fw_logging_data *)ice_malloc(hw, size);
+	if (!config)
+		return ICE_ERR_NO_MEMORY;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging_info);
+
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
+	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
+
+	status = ice_aq_send_cmd(hw, &desc, config, size, NULL);
+	if (!status) {
+		u16 i;
+
+		/* Save fw logging information into the HW structure */
+		for (i = 0; i < ICE_AQC_FW_LOG_ID_MAX; i++) {
+			u16 v, m, flgs;
+
+			v = LE16_TO_CPU(config->entry[i]);
+			m = (v & ICE_AQC_FW_LOG_ID_M) >> ICE_AQC_FW_LOG_ID_S;
+			flgs = (v & ICE_AQC_FW_LOG_EN_M) >> ICE_AQC_FW_LOG_EN_S;
+
+			if (m < ICE_AQC_FW_LOG_ID_MAX)
+				hw->fw_log.evnts[m].cur = flgs;
+		}
+	}
+
+	ice_free(hw, config);
+
+	return status;
+}
+
 /**
  * ice_cfg_fw_log - configure FW logging
  * @hw: pointer to the HW struct
@@ -636,6 +679,11 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
 	    (!hw->fw_log.actv_evnts || !ice_check_sq_alive(hw, &hw->adminq)))
 		return ICE_SUCCESS;
 
+	/* Get current FW log settings */
+	status = ice_get_fw_log_cfg(hw);
+	if (status)
+		return status;
+
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_fw_logging);
 	cmd = &desc.params.fw_logging;
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 17/69] net/ice/base: use macro instead of magic 8
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (15 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 16/69] net/ice/base: add API to init FW logging Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 18/69] net/ice/base: move and redefine ice debug cq API Leyi Rong
                       ` (52 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Paul M Stillwell Jr

Replace the use of the magic number 8 with BITS_PER_BYTE when
calculating the number of bits from the number of bytes.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
Acked-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/ice/base/ice_flex_pipe.c |  4 +-
 drivers/net/ice/base/ice_flow.c      | 74 +++++++++++++++-------------
 2 files changed, 43 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 12e1eb366..fb20493ca 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -3862,7 +3862,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 
 			idx = (j * 4) + k;
 			if (used[idx])
-				raw_entry |= used[idx] << (k * 8);
+				raw_entry |= used[idx] << (k * BITS_PER_BYTE);
 		}
 
 		/* write the appropriate register set, based on HW block */
@@ -3957,7 +3957,7 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 				u16 ptype;
 				u8 m;
 
-				ptype = byte * 8 + bit;
+				ptype = byte * BITS_PER_BYTE + bit;
 				if (ptype < ICE_FLOW_PTYPE_MAX) {
 					prof->ptype[prof->ptype_count] = ptype;
 
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index dccd7d3c7..9f2a794bc 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -26,8 +26,8 @@
  * protocol headers. Displacement values are expressed in number of bits.
  */
 #define ICE_FLOW_FLD_IPV6_TTL_DSCP_DISP	(-4)
-#define ICE_FLOW_FLD_IPV6_TTL_PROT_DISP	((-2) * 8)
-#define ICE_FLOW_FLD_IPV6_TTL_TTL_DISP	((-1) * 8)
+#define ICE_FLOW_FLD_IPV6_TTL_PROT_DISP	((-2) * BITS_PER_BYTE)
+#define ICE_FLOW_FLD_IPV6_TTL_TTL_DISP	((-1) * BITS_PER_BYTE)
 
 /* Describe properties of a protocol header field */
 struct ice_flow_field_info {
@@ -36,70 +36,76 @@ struct ice_flow_field_info {
 	u16 size;	/* Size of fields in bits */
 };
 
+#define ICE_FLOW_FLD_INFO(_hdr, _offset_bytes, _size_bytes) { \
+	.hdr = _hdr, \
+	.off = _offset_bytes * BITS_PER_BYTE, \
+	.size = _size_bytes * BITS_PER_BYTE, \
+}
+
 /* Table containing properties of supported protocol header fields */
 static const
 struct ice_flow_field_info ice_flds_info[ICE_FLOW_FIELD_IDX_MAX] = {
 	/* Ether */
 	/* ICE_FLOW_FIELD_IDX_ETH_DA */
-	{ ICE_FLOW_SEG_HDR_ETH, 0, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ETH, 0, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_ETH_SA */
-	{ ICE_FLOW_SEG_HDR_ETH, ETH_ALEN * 8, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ETH, ETH_ALEN, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_S_VLAN */
-	{ ICE_FLOW_SEG_HDR_VLAN, 12 * 8, ICE_FLOW_FLD_SZ_VLAN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_VLAN, 12, ICE_FLOW_FLD_SZ_VLAN),
 	/* ICE_FLOW_FIELD_IDX_C_VLAN */
-	{ ICE_FLOW_SEG_HDR_VLAN, 14 * 8, ICE_FLOW_FLD_SZ_VLAN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_VLAN, 14, ICE_FLOW_FLD_SZ_VLAN),
 	/* ICE_FLOW_FIELD_IDX_ETH_TYPE */
-	{ ICE_FLOW_SEG_HDR_ETH, 12 * 8, ICE_FLOW_FLD_SZ_ETH_TYPE * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ETH, 12, ICE_FLOW_FLD_SZ_ETH_TYPE),
 	/* IPv4 */
 	/* ICE_FLOW_FIELD_IDX_IP_DSCP */
-	{ ICE_FLOW_SEG_HDR_IPV4, 1 * 8, 1 * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV4, 1, 1),
 	/* ICE_FLOW_FIELD_IDX_IP_TTL */
-	{ ICE_FLOW_SEG_HDR_NONE, 8 * 8, 1 * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_NONE, 8, 1),
 	/* ICE_FLOW_FIELD_IDX_IP_PROT */
-	{ ICE_FLOW_SEG_HDR_NONE, 9 * 8, ICE_FLOW_FLD_SZ_IP_PROT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_NONE, 9, ICE_FLOW_FLD_SZ_IP_PROT),
 	/* ICE_FLOW_FIELD_IDX_IPV4_SA */
-	{ ICE_FLOW_SEG_HDR_IPV4, 12 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV4, 12, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* ICE_FLOW_FIELD_IDX_IPV4_DA */
-	{ ICE_FLOW_SEG_HDR_IPV4, 16 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV4, 16, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* IPv6 */
 	/* ICE_FLOW_FIELD_IDX_IPV6_SA */
-	{ ICE_FLOW_SEG_HDR_IPV6, 8 * 8, ICE_FLOW_FLD_SZ_IPV6_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV6, 8, ICE_FLOW_FLD_SZ_IPV6_ADDR),
 	/* ICE_FLOW_FIELD_IDX_IPV6_DA */
-	{ ICE_FLOW_SEG_HDR_IPV6, 24 * 8, ICE_FLOW_FLD_SZ_IPV6_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_IPV6, 24, ICE_FLOW_FLD_SZ_IPV6_ADDR),
 	/* Transport */
 	/* ICE_FLOW_FIELD_IDX_TCP_SRC_PORT */
-	{ ICE_FLOW_SEG_HDR_TCP, 0 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_TCP, 0, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_TCP_DST_PORT */
-	{ ICE_FLOW_SEG_HDR_TCP, 2 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_TCP, 2, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_UDP_SRC_PORT */
-	{ ICE_FLOW_SEG_HDR_UDP, 0 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_UDP, 0, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_UDP_DST_PORT */
-	{ ICE_FLOW_SEG_HDR_UDP, 2 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_UDP, 2, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT */
-	{ ICE_FLOW_SEG_HDR_SCTP, 0 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_SCTP, 0, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_SCTP_DST_PORT */
-	{ ICE_FLOW_SEG_HDR_SCTP, 2 * 8, ICE_FLOW_FLD_SZ_PORT * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_SCTP, 2, ICE_FLOW_FLD_SZ_PORT),
 	/* ICE_FLOW_FIELD_IDX_TCP_FLAGS */
-	{ ICE_FLOW_SEG_HDR_TCP, 13 * 8, ICE_FLOW_FLD_SZ_TCP_FLAGS * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_TCP, 13, ICE_FLOW_FLD_SZ_TCP_FLAGS),
 	/* ARP */
 	/* ICE_FLOW_FIELD_IDX_ARP_SIP */
-	{ ICE_FLOW_SEG_HDR_ARP, 14 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 14, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* ICE_FLOW_FIELD_IDX_ARP_DIP */
-	{ ICE_FLOW_SEG_HDR_ARP, 24 * 8, ICE_FLOW_FLD_SZ_IPV4_ADDR * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 24, ICE_FLOW_FLD_SZ_IPV4_ADDR),
 	/* ICE_FLOW_FIELD_IDX_ARP_SHA */
-	{ ICE_FLOW_SEG_HDR_ARP, 8 * 8, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 8, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_ARP_DHA */
-	{ ICE_FLOW_SEG_HDR_ARP, 18 * 8, ETH_ALEN * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 18, ETH_ALEN),
 	/* ICE_FLOW_FIELD_IDX_ARP_OP */
-	{ ICE_FLOW_SEG_HDR_ARP, 6 * 8, ICE_FLOW_FLD_SZ_ARP_OPER * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ARP, 6, ICE_FLOW_FLD_SZ_ARP_OPER),
 	/* ICMP */
 	/* ICE_FLOW_FIELD_IDX_ICMP_TYPE */
-	{ ICE_FLOW_SEG_HDR_ICMP, 0 * 8, ICE_FLOW_FLD_SZ_ICMP_TYPE * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ICMP, 0, ICE_FLOW_FLD_SZ_ICMP_TYPE),
 	/* ICE_FLOW_FIELD_IDX_ICMP_CODE */
-	{ ICE_FLOW_SEG_HDR_ICMP, 1 * 8, ICE_FLOW_FLD_SZ_ICMP_CODE * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_ICMP, 1, ICE_FLOW_FLD_SZ_ICMP_CODE),
 	/* GRE */
 	/* ICE_FLOW_FIELD_IDX_GRE_KEYID */
-	{ ICE_FLOW_SEG_HDR_GRE, 12 * 8, ICE_FLOW_FLD_SZ_GRE_KEYID * 8 },
+	ICE_FLOW_FLD_INFO(ICE_FLOW_SEG_HDR_GRE, 12, ICE_FLOW_FLD_SZ_GRE_KEYID),
 };
 
 /* Bitmaps indicating relevant packet types for a particular protocol header
@@ -644,7 +650,7 @@ ice_flow_xtract_fld(struct ice_hw *hw, struct ice_flow_prof_params *params,
 	/* Each extraction sequence entry is a word in size, and extracts a
 	 * word-aligned offset from a protocol header.
 	 */
-	ese_bits = ICE_FLOW_FV_EXTRACT_SZ * 8;
+	ese_bits = ICE_FLOW_FV_EXTRACT_SZ * BITS_PER_BYTE;
 
 	flds[fld].xtrct.prot_id = prot_id;
 	flds[fld].xtrct.off = (ice_flds_info[fld].off / ese_bits) *
@@ -737,15 +743,17 @@ ice_flow_xtract_raws(struct ice_hw *hw, struct ice_flow_prof_params *params,
 		raw->info.xtrct.prot_id = ICE_PROT_PAY;
 		raw->info.xtrct.off = (off / ICE_FLOW_FV_EXTRACT_SZ) *
 			ICE_FLOW_FV_EXTRACT_SZ;
-		raw->info.xtrct.disp = (off % ICE_FLOW_FV_EXTRACT_SZ) * 8;
+		raw->info.xtrct.disp = (off % ICE_FLOW_FV_EXTRACT_SZ) *
+			BITS_PER_BYTE;
 		raw->info.xtrct.idx = params->es_cnt;
 
 		/* Determine the number of field vector entries this raw field
 		 * consumes.
 		 */
 		cnt = DIVIDE_AND_ROUND_UP(raw->info.xtrct.disp +
-					  (raw->info.src.last * 8),
-					  ICE_FLOW_FV_EXTRACT_SZ * 8);
+					  (raw->info.src.last * BITS_PER_BYTE),
+					  (ICE_FLOW_FV_EXTRACT_SZ *
+					   BITS_PER_BYTE));
 		off = raw->info.xtrct.off;
 		for (j = 0; j < cnt; j++) {
 			/* Make sure the number of extraction sequence required
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 18/69] net/ice/base: move and redefine ice debug cq API
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (16 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 17/69] net/ice/base: use macro instead of magic 8 Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 19/69] net/ice/base: separate out control queue lock creation Leyi Rong
                       ` (51 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

The ice_debug_cq function is only called from ice_controlq.c and has
no callers outside of that file. Move it there and mark it static to
avoid namespace pollution.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c   | 47 -------------------------
 drivers/net/ice/base/ice_common.h   |  2 --
 drivers/net/ice/base/ice_controlq.c | 54 +++++++++++++++++++++++++++--
 3 files changed, 51 insertions(+), 52 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 7093ee4f4..c1af24322 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1474,53 +1474,6 @@ ice_clear_tx_drbell_q_ctx(struct ice_hw *hw, u32 tx_drbell_q_index)
 }
 #endif /* !NO_UNUSED_CTX_CODE || AE_DRIVER */
 
-/**
- * ice_debug_cq
- * @hw: pointer to the hardware structure
- * @mask: debug mask
- * @desc: pointer to control queue descriptor
- * @buf: pointer to command buffer
- * @buf_len: max length of buf
- *
- * Dumps debug log about control command with descriptor contents.
- */
-void
-ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len)
-{
-	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
-	u16 len;
-
-	if (!(mask & hw->debug_mask))
-		return;
-
-	if (!desc)
-		return;
-
-	len = LE16_TO_CPU(cq_desc->datalen);
-
-	ice_debug(hw, mask,
-		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
-		  LE16_TO_CPU(cq_desc->opcode),
-		  LE16_TO_CPU(cq_desc->flags),
-		  LE16_TO_CPU(cq_desc->datalen), LE16_TO_CPU(cq_desc->retval));
-	ice_debug(hw, mask, "\tcookie (h,l) 0x%08X 0x%08X\n",
-		  LE32_TO_CPU(cq_desc->cookie_high),
-		  LE32_TO_CPU(cq_desc->cookie_low));
-	ice_debug(hw, mask, "\tparam (0,1)  0x%08X 0x%08X\n",
-		  LE32_TO_CPU(cq_desc->params.generic.param0),
-		  LE32_TO_CPU(cq_desc->params.generic.param1));
-	ice_debug(hw, mask, "\taddr (h,l)   0x%08X 0x%08X\n",
-		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
-		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
-	if (buf && cq_desc->datalen != 0) {
-		ice_debug(hw, mask, "Buffer:\n");
-		if (buf_len < len)
-			len = buf_len;
-
-		ice_debug_array(hw, mask, 16, 1, (u8 *)buf, len);
-	}
-}
-
 
 /* FW Admin Queue command wrappers */
 
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index cccb5f009..58f22b0d3 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -20,8 +20,6 @@ enum ice_fw_modes {
 
 enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw);
 
-void
-ice_debug_cq(struct ice_hw *hw, u32 mask, void *desc, void *buf, u16 buf_len);
 enum ice_status ice_init_hw(struct ice_hw *hw);
 void ice_deinit_hw(struct ice_hw *hw);
 enum ice_status ice_check_reset(struct ice_hw *hw);
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index f3404023a..90dec0156 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -727,6 +727,54 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 	return ICE_CTL_Q_DESC_UNUSED(sq);
 }
 
+/**
+ * ice_debug_cq
+ * @hw: pointer to the hardware structure
+ * @desc: pointer to control queue descriptor
+ * @buf: pointer to command buffer
+ * @buf_len: max length of buf
+ *
+ * Dumps debug log about control command with descriptor contents.
+ */
+static void ice_debug_cq(struct ice_hw *hw, void *desc, void *buf, u16 buf_len)
+{
+	struct ice_aq_desc *cq_desc = (struct ice_aq_desc *)desc;
+	u16 datalen, flags;
+
+	if (!((ICE_DBG_AQ_DESC | ICE_DBG_AQ_DESC_BUF) & hw->debug_mask))
+		return;
+
+	if (!desc)
+		return;
+
+	datalen = LE16_TO_CPU(cq_desc->datalen);
+	flags = LE16_TO_CPU(cq_desc->flags);
+
+	ice_debug(hw, ICE_DBG_AQ_DESC,
+		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+		  LE16_TO_CPU(cq_desc->opcode), flags, datalen,
+		  LE16_TO_CPU(cq_desc->retval));
+	ice_debug(hw, ICE_DBG_AQ_DESC, "\tcookie (h,l) 0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->cookie_high),
+		  LE32_TO_CPU(cq_desc->cookie_low));
+	ice_debug(hw, ICE_DBG_AQ_DESC, "\tparam (0,1)  0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.param0),
+		  LE32_TO_CPU(cq_desc->params.generic.param1));
+	ice_debug(hw, ICE_DBG_AQ_DESC, "\taddr (h,l)   0x%08X 0x%08X\n",
+		  LE32_TO_CPU(cq_desc->params.generic.addr_high),
+		  LE32_TO_CPU(cq_desc->params.generic.addr_low));
+	/* Dump buffer iff 1) one exists and 2) is either a response indicated
+	 * by the DD and/or CMP flag set or a command with the RD flag set.
+	 */
+	if (buf && cq_desc->datalen != 0 &&
+	    (flags & (ICE_AQ_FLAG_DD | ICE_AQ_FLAG_CMP) ||
+	     flags & ICE_AQ_FLAG_RD)) {
+		ice_debug(hw, ICE_DBG_AQ_DESC_BUF, "Buffer:\n");
+		ice_debug_array(hw, ICE_DBG_AQ_DESC_BUF, 16, 1, (u8 *)buf,
+				min(buf_len, datalen));
+	}
+}
+
 /**
  * ice_sq_done - check if FW has processed the Admin Send Queue (ATQ)
  * @hw: pointer to the HW struct
@@ -886,7 +934,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	ice_debug(hw, ICE_DBG_AQ_MSG,
 		  "ATQ: Control Send queue desc and buffer:\n");
 
-	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc_on_ring, buf, buf_size);
+	ice_debug_cq(hw, (void *)desc_on_ring, buf, buf_size);
 
 
 	(cq->sq.next_to_use)++;
@@ -950,7 +998,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	ice_debug(hw, ICE_DBG_AQ_MSG,
 		  "ATQ: desc and buffer writeback:\n");
 
-	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, buf, buf_size);
+	ice_debug_cq(hw, (void *)desc, buf, buf_size);
 
 
 	/* save writeback AQ if requested */
@@ -1055,7 +1103,7 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 
 	ice_debug(hw, ICE_DBG_AQ_MSG, "ARQ: desc and buffer:\n");
 
-	ice_debug_cq(hw, ICE_DBG_AQ_CMD, (void *)desc, e->msg_buf,
+	ice_debug_cq(hw, (void *)desc, e->msg_buf,
 		     cq->rq_buf_size);
 
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 19/69] net/ice/base: separate out control queue lock creation
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (17 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 18/69] net/ice/base: move and redefine ice debug cq API Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 20/69] net/ice/base: added sibling head to parse nodes Leyi Rong
                       ` (50 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

The ice_init_all_ctrlq and ice_shutdown_all_ctrlq functions currently create
and destroy the locks used to protect the send and receive process of each
control queue. Separate the lock creation and destruction into new
ice_create_all_ctrlq and ice_destroy_all_ctrlq functions, so that the control
queues can be shut down and re-initialized at runtime (such as across a
reset) without recreating the locks.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c   |   6 +-
 drivers/net/ice/base/ice_common.h   |   2 +
 drivers/net/ice/base/ice_controlq.c | 112 +++++++++++++++++++++-------
 3 files changed, 91 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index c1af24322..5b4a13a41 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -853,7 +853,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	ice_get_itr_intrl_gran(hw);
 
 
-	status = ice_init_all_ctrlq(hw);
+	status = ice_create_all_ctrlq(hw);
 	if (status)
 		goto err_unroll_cqinit;
 
@@ -981,7 +981,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	ice_free(hw, hw->port_info);
 	hw->port_info = NULL;
 err_unroll_cqinit:
-	ice_shutdown_all_ctrlq(hw);
+	ice_destroy_all_ctrlq(hw);
 	return status;
 }
 
@@ -1010,7 +1010,7 @@ void ice_deinit_hw(struct ice_hw *hw)
 
 	/* Attempt to disable FW logging before shutting down control queues */
 	ice_cfg_fw_log(hw, false);
-	ice_shutdown_all_ctrlq(hw);
+	ice_destroy_all_ctrlq(hw);
 
 	/* Clear VSI contexts if not already cleared */
 	ice_clear_all_vsi_ctx(hw);
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 58f22b0d3..4cd87fc1e 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -25,8 +25,10 @@ void ice_deinit_hw(struct ice_hw *hw);
 enum ice_status ice_check_reset(struct ice_hw *hw);
 enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req);
 
+enum ice_status ice_create_all_ctrlq(struct ice_hw *hw);
 enum ice_status ice_init_all_ctrlq(struct ice_hw *hw);
 void ice_shutdown_all_ctrlq(struct ice_hw *hw);
+void ice_destroy_all_ctrlq(struct ice_hw *hw);
 enum ice_status
 ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		  struct ice_rq_event_info *e, u16 *pending);
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index 90dec0156..6d893e2f2 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -283,7 +283,7 @@ ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
  * @cq: pointer to the specific Control queue
  *
  * This is the main initialization routine for the Control Send Queue
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure:
  *     - cq->num_sq_entries
  *     - cq->sq_buf_size
@@ -342,7 +342,7 @@ static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
  * @cq: pointer to the specific Control queue
  *
  * The main initialization routine for the Admin Receive (Event) Queue.
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure:
  *     - cq->num_rq_entries
  *     - cq->rq_buf_size
@@ -535,14 +535,8 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
 	return ICE_SUCCESS;
 
 init_ctrlq_free_rq:
-	if (cq->rq.count) {
-		ice_shutdown_rq(hw, cq);
-		ice_destroy_lock(&cq->rq_lock);
-	}
-	if (cq->sq.count) {
-		ice_shutdown_sq(hw, cq);
-		ice_destroy_lock(&cq->sq_lock);
-	}
+	ice_shutdown_rq(hw, cq);
+	ice_shutdown_sq(hw, cq);
 	return status;
 }
 
@@ -551,12 +545,14 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
  * @hw: pointer to the hardware structure
  * @q_type: specific Control queue type
  *
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure:
  *     - cq->num_sq_entries
  *     - cq->num_rq_entries
  *     - cq->rq_buf_size
  *     - cq->sq_buf_size
+ *
+ * NOTE: this function does not initialize the controlq locks
  */
 static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 {
@@ -582,8 +578,6 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 	    !cq->rq_buf_size || !cq->sq_buf_size) {
 		return ICE_ERR_CFG;
 	}
-	ice_init_lock(&cq->sq_lock);
-	ice_init_lock(&cq->rq_lock);
 
 	/* setup SQ command write back timeout */
 	cq->sq_cmd_timeout = ICE_CTL_Q_SQ_CMD_TIMEOUT;
@@ -591,7 +585,7 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 	/* allocate the ATQ */
 	ret_code = ice_init_sq(hw, cq);
 	if (ret_code)
-		goto init_ctrlq_destroy_locks;
+		return ret_code;
 
 	/* allocate the ARQ */
 	ret_code = ice_init_rq(hw, cq);
@@ -603,9 +597,6 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 
 init_ctrlq_free_sq:
 	ice_shutdown_sq(hw, cq);
-init_ctrlq_destroy_locks:
-	ice_destroy_lock(&cq->sq_lock);
-	ice_destroy_lock(&cq->rq_lock);
 	return ret_code;
 }
 
@@ -613,12 +604,14 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
  * ice_init_all_ctrlq - main initialization routine for all control queues
  * @hw: pointer to the hardware structure
  *
- * Prior to calling this function, drivers *MUST* set the following fields
+ * Prior to calling this function, the driver *MUST* set the following fields
  * in the cq->structure for all control queues:
  *     - cq->num_sq_entries
  *     - cq->num_rq_entries
  *     - cq->rq_buf_size
  *     - cq->sq_buf_size
+ *
+ * NOTE: this function does not initialize the controlq locks.
  */
 enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
 {
@@ -637,10 +630,48 @@ enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
 	return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX);
 }
 
+/**
+ * ice_init_ctrlq_locks - Initialize locks for a control queue
+ * @cq: pointer to the control queue
+ *
+ * Initializes the send and receive queue locks for a given control queue.
+ */
+static void ice_init_ctrlq_locks(struct ice_ctl_q_info *cq)
+{
+	ice_init_lock(&cq->sq_lock);
+	ice_init_lock(&cq->rq_lock);
+}
+
+/**
+ * ice_create_all_ctrlq - main initialization routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * Prior to calling this function, the driver *MUST* set the following fields
+ * in the cq->structure for all control queues:
+ *     - cq->num_sq_entries
+ *     - cq->num_rq_entries
+ *     - cq->rq_buf_size
+ *     - cq->sq_buf_size
+ *
+ * This function creates all the control queue locks and then calls
+ * ice_init_all_ctrlq. It should be called once during driver load. If the
+ * driver needs to re-initialize control queues at run time it should call
+ * ice_init_all_ctrlq instead.
+ */
+enum ice_status ice_create_all_ctrlq(struct ice_hw *hw)
+{
+	ice_init_ctrlq_locks(&hw->adminq);
+	ice_init_ctrlq_locks(&hw->mailboxq);
+
+	return ice_init_all_ctrlq(hw);
+}
+
 /**
  * ice_shutdown_ctrlq - shutdown routine for any control queue
  * @hw: pointer to the hardware structure
  * @q_type: specific Control queue type
+ *
+ * NOTE: this function does not destroy the control queue locks.
  */
 static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 {
@@ -659,19 +690,17 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 		return;
 	}
 
-	if (cq->sq.count) {
-		ice_shutdown_sq(hw, cq);
-		ice_destroy_lock(&cq->sq_lock);
-	}
-	if (cq->rq.count) {
-		ice_shutdown_rq(hw, cq);
-		ice_destroy_lock(&cq->rq_lock);
-	}
+	ice_shutdown_sq(hw, cq);
+	ice_shutdown_rq(hw, cq);
 }
 
 /**
  * ice_shutdown_all_ctrlq - shutdown routine for all control queues
  * @hw: pointer to the hardware structure
+ *
+ * NOTE: this function does not destroy the control queue locks. The driver
+ * may call this at runtime to shutdown and later restart control queues, such
+ * as in response to a reset event.
  */
 void ice_shutdown_all_ctrlq(struct ice_hw *hw)
 {
@@ -681,6 +710,37 @@ void ice_shutdown_all_ctrlq(struct ice_hw *hw)
 	ice_shutdown_ctrlq(hw, ICE_CTL_Q_MAILBOX);
 }
 
+/**
+ * ice_destroy_ctrlq_locks - Destroy locks for a control queue
+ * @cq: pointer to the control queue
+ *
+ * Destroys the send and receive queue locks for a given control queue.
+ */
+static void
+ice_destroy_ctrlq_locks(struct ice_ctl_q_info *cq)
+{
+	ice_destroy_lock(&cq->sq_lock);
+	ice_destroy_lock(&cq->rq_lock);
+}
+
+/**
+ * ice_destroy_all_ctrlq - exit routine for all control queues
+ * @hw: pointer to the hardware structure
+ *
+ * This function shuts down all the control queues and then destroys the
+ * control queue locks. It should be called once during driver unload. The
+ * driver should call ice_shutdown_all_ctrlq if it needs to shut down and
+ * reinitialize control queues, such as in response to a reset event.
+ */
+void ice_destroy_all_ctrlq(struct ice_hw *hw)
+{
+	/* shut down all the control queues first */
+	ice_shutdown_all_ctrlq(hw);
+
+	ice_destroy_ctrlq_locks(&hw->adminq);
+	ice_destroy_ctrlq_locks(&hw->mailboxq);
+}
+
 /**
  * ice_clean_sq - cleans Admin send queue (ATQ)
  * @hw: pointer to the hardware structure
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 20/69] net/ice/base: added sibling head to parse nodes
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (18 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 19/69] net/ice/base: separate out control queue lock creation Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 21/69] net/ice/base: add and fix debuglogs Leyi Rong
                       ` (49 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Victor Raj, Paul M Stillwell Jr

There was a bug in the previous code: it never traversed all the children
to get the first node of the requested layer.

Add a sibling head pointer that points to the first node of each layer per
TC. This makes the traversal simpler and quicker, and also removes the
recursion from the code.

Signed-off-by: Victor Raj <victor.raj@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 61 ++++++++++++--------------------
 drivers/net/ice/base/ice_type.h  |  2 ++
 2 files changed, 25 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 855e3848c..0c1c18ba1 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -260,33 +260,17 @@ ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
 
 /**
  * ice_sched_get_first_node - get the first node of the given layer
- * @hw: pointer to the HW struct
+ * @pi: port information structure
  * @parent: pointer the base node of the subtree
  * @layer: layer number
  *
  * This function retrieves the first node of the given layer from the subtree
  */
 static struct ice_sched_node *
-ice_sched_get_first_node(struct ice_hw *hw, struct ice_sched_node *parent,
-			 u8 layer)
+ice_sched_get_first_node(struct ice_port_info *pi,
+			 struct ice_sched_node *parent, u8 layer)
 {
-	u8 i;
-
-	if (layer < hw->sw_entry_point_layer)
-		return NULL;
-	for (i = 0; i < parent->num_children; i++) {
-		struct ice_sched_node *node = parent->children[i];
-
-		if (node) {
-			if (node->tx_sched_layer == layer)
-				return node;
-			/* this recursion is intentional, and wouldn't
-			 * go more than 9 calls
-			 */
-			return ice_sched_get_first_node(hw, node, layer);
-		}
-	}
-	return NULL;
+	return pi->sib_head[parent->tc_num][layer];
 }
 
 /**
@@ -342,7 +326,7 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 	parent = node->parent;
 	/* root has no parent */
 	if (parent) {
-		struct ice_sched_node *p, *tc_node;
+		struct ice_sched_node *p;
 
 		/* update the parent */
 		for (i = 0; i < parent->num_children; i++)
@@ -354,16 +338,7 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 				break;
 			}
 
-		/* search for previous sibling that points to this node and
-		 * remove the reference
-		 */
-		tc_node = ice_sched_get_tc_node(pi, node->tc_num);
-		if (!tc_node) {
-			ice_debug(hw, ICE_DBG_SCHED,
-				  "Invalid TC number %d\n", node->tc_num);
-			goto err_exit;
-		}
-		p = ice_sched_get_first_node(hw, tc_node, node->tx_sched_layer);
+		p = ice_sched_get_first_node(pi, node, node->tx_sched_layer);
 		while (p) {
 			if (p->sibling == node) {
 				p->sibling = node->sibling;
@@ -371,8 +346,13 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
 			}
 			p = p->sibling;
 		}
+
+		/* update the sibling head if head is getting removed */
+		if (pi->sib_head[node->tc_num][node->tx_sched_layer] == node)
+			pi->sib_head[node->tc_num][node->tx_sched_layer] =
+				node->sibling;
 	}
-err_exit:
+
 	/* leaf nodes have no children */
 	if (node->children)
 		ice_free(hw, node->children);
@@ -979,13 +959,17 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 
 		/* add it to previous node sibling pointer */
 		/* Note: siblings are not linked across branches */
-		prev = ice_sched_get_first_node(hw, tc_node, layer);
+		prev = ice_sched_get_first_node(pi, tc_node, layer);
 		if (prev && prev != new_node) {
 			while (prev->sibling)
 				prev = prev->sibling;
 			prev->sibling = new_node;
 		}
 
+		/* initialize the sibling head */
+		if (!pi->sib_head[tc_node->tc_num][layer])
+			pi->sib_head[tc_node->tc_num][layer] = new_node;
+
 		if (i == 0)
 			*first_node_teid = teid;
 	}
@@ -1451,7 +1435,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 		goto lan_q_exit;
 
 	/* get the first queue group node from VSI sub-tree */
-	qgrp_node = ice_sched_get_first_node(pi->hw, vsi_node, qgrp_layer);
+	qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
 	while (qgrp_node) {
 		/* make sure the qgroup node is part of the VSI subtree */
 		if (ice_sched_find_node_in_subtree(pi->hw, vsi_node, qgrp_node))
@@ -1482,7 +1466,7 @@ ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
 	u8 vsi_layer;
 
 	vsi_layer = ice_sched_get_vsi_layer(hw);
-	node = ice_sched_get_first_node(hw, tc_node, vsi_layer);
+	node = ice_sched_get_first_node(hw->port_info, tc_node, vsi_layer);
 
 	/* Check whether it already exists */
 	while (node) {
@@ -1511,7 +1495,7 @@ ice_sched_get_agg_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
 	u8 agg_layer;
 
 	agg_layer = ice_sched_get_agg_layer(hw);
-	node = ice_sched_get_first_node(hw, tc_node, agg_layer);
+	node = ice_sched_get_first_node(hw->port_info, tc_node, agg_layer);
 
 	/* Check whether it already exists */
 	while (node) {
@@ -1663,7 +1647,8 @@ ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
 			/* If intermediate nodes are reached max children
 			 * then add a new one.
 			 */
-			node = ice_sched_get_first_node(hw, tc_node, (u8)i);
+			node = ice_sched_get_first_node(hw->port_info, tc_node,
+							(u8)i);
 			/* scan all the siblings */
 			while (node) {
 				if (node->num_children < hw->max_children[i])
@@ -2528,7 +2513,7 @@ ice_sched_add_agg_cfg(struct ice_port_info *pi, u32 agg_id, u8 tc)
 	 * intermediate node on those layers
 	 */
 	for (i = hw->sw_entry_point_layer; i < aggl; i++) {
-		parent = ice_sched_get_first_node(hw, tc_node, i);
+		parent = ice_sched_get_first_node(pi, tc_node, i);
 
 		/* scan all the siblings */
 		while (parent) {
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index bc1ba60d1..0f033bbf1 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -667,6 +667,8 @@ struct ice_port_info {
 	struct ice_mac_info mac;
 	struct ice_phy_info phy;
 	struct ice_lock sched_lock;	/* protect access to TXSched tree */
+	struct ice_sched_node *
+		sib_head[ICE_MAX_TRAFFIC_CLASS][ICE_AQC_TOPO_MAX_LEVEL_NUM];
 	/* List contain profile ID(s) and other params per layer */
 	struct LIST_HEAD_TYPE rl_prof_list[ICE_AQC_TOPO_MAX_LEVEL_NUM];
 	struct ice_dcbx_cfg local_dcbx_cfg;	/* Oper/Local Cfg */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 21/69] net/ice/base: add and fix debuglogs
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (19 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 20/69] net/ice/base: added sibling head to parse nodes Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 22/69] net/ice/base: forbid VSI to remove unassociated ucast filter Leyi Rong
                       ` (48 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Marta Plantykow, Paul M Stillwell Jr

Add missing debug logs and fix existing ones.

Signed-off-by: Marta Plantykow <marta.a.plantykow@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c    | 16 ++++++++--------
 drivers/net/ice/base/ice_controlq.c  | 19 +++++++++++++++++++
 drivers/net/ice/base/ice_flex_pipe.c |  7 ++++++-
 drivers/net/ice/base/ice_nvm.c       | 14 +++++++-------
 4 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 5b4a13a41..0083970a5 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -833,7 +833,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	u16 mac_buf_len;
 	void *mac_buf;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_init_hw");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 
 	/* Set MAC type based on DeviceID */
@@ -1623,7 +1623,7 @@ ice_aq_req_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 	struct ice_aq_desc desc;
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_req_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd_resp = &desc.params.res_owner;
 
@@ -1692,7 +1692,7 @@ ice_aq_release_res(struct ice_hw *hw, enum ice_aq_res_ids res, u8 sdp_number,
 	struct ice_aqc_req_res *cmd;
 	struct ice_aq_desc desc;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_release_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.res_owner;
 
@@ -1722,7 +1722,7 @@ ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 	u32 time_left = timeout;
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_aq_req_res(hw, res, access, 0, &time_left, NULL);
 
@@ -1780,7 +1780,7 @@ void ice_release_res(struct ice_hw *hw, enum ice_aq_res_ids res)
 	enum ice_status status;
 	u32 total_delay = 0;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_release_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_aq_release_res(hw, res, 0, NULL);
 
@@ -1814,7 +1814,7 @@ ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
 	struct ice_aqc_alloc_free_res_cmd *cmd;
 	struct ice_aq_desc desc;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_alloc_free_res");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.sw_res_ctrl;
 
@@ -3106,7 +3106,7 @@ ice_aq_add_lan_txq(struct ice_hw *hw, u8 num_qgrps,
 	struct ice_aqc_add_txqs *cmd;
 	struct ice_aq_desc desc;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_add_lan_txq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.add_txqs;
 
@@ -3162,7 +3162,7 @@ ice_aq_dis_lan_txq(struct ice_hw *hw, u8 num_qgrps,
 	enum ice_status status;
 	u16 i, sz = 0;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_dis_lan_txq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 	cmd = &desc.params.dis_txqs;
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dis_txqs);
 
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index 6d893e2f2..4cb6df113 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -35,6 +35,8 @@ static void ice_adminq_init_regs(struct ice_hw *hw)
 {
 	struct ice_ctl_q_info *cq = &hw->adminq;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	ICE_CQ_INIT_REGS(cq, PF_FW);
 }
 
@@ -295,6 +297,8 @@ static enum ice_status ice_init_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	if (cq->sq.count > 0) {
 		/* queue already initialized */
 		ret_code = ICE_ERR_NOT_READY;
@@ -354,6 +358,8 @@ static enum ice_status ice_init_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	if (cq->rq.count > 0) {
 		/* queue already initialized */
 		ret_code = ICE_ERR_NOT_READY;
@@ -422,6 +428,8 @@ ice_shutdown_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code = ICE_SUCCESS;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	ice_acquire_lock(&cq->sq_lock);
 
 	if (!cq->sq.count) {
@@ -485,6 +493,8 @@ ice_shutdown_rq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 {
 	enum ice_status ret_code = ICE_SUCCESS;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	ice_acquire_lock(&cq->rq_lock);
 
 	if (!cq->rq.count) {
@@ -521,6 +531,8 @@ static enum ice_status ice_init_check_adminq(struct ice_hw *hw)
 	struct ice_ctl_q_info *cq = &hw->adminq;
 	enum ice_status status;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 
 	status = ice_aq_get_fw_ver(hw, NULL);
 	if (status)
@@ -559,6 +571,8 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 	struct ice_ctl_q_info *cq;
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	switch (q_type) {
 	case ICE_CTL_Q_ADMIN:
 		ice_adminq_init_regs(hw);
@@ -617,6 +631,8 @@ enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
 {
 	enum ice_status ret_code;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 
 	/* Init FW admin queue */
 	ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN);
@@ -677,6 +693,8 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
 {
 	struct ice_ctl_q_info *cq;
 
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
 	switch (q_type) {
 	case ICE_CTL_Q_ADMIN:
 		cq = &hw->adminq;
@@ -704,6 +722,7 @@ static void ice_shutdown_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type)
  */
 void ice_shutdown_all_ctrlq(struct ice_hw *hw)
 {
+	ice_debug(hw, ICE_DBG_TRACE, "ice_shutdown_all_ctrlq\n");
 	/* Shutdown FW admin queue */
 	ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
 	/* Shutdown PF-VF Mailbox */
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index fb20493ca..e3de71363 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1142,7 +1142,7 @@ ice_get_pkg_info(struct ice_hw *hw)
 	u16 size;
 	u32 i;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_init_pkg_info\n");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	size = sizeof(*pkg_info) + (sizeof(pkg_info->pkg_info[0]) *
 				    (ICE_PKG_CNT - 1));
@@ -2417,6 +2417,11 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
 		ice_free(hw, del);
 	}
 
+	/* if VSIG characteristic list was cleared for reset
+	 * re-initialize the list head
+	 */
+	INIT_LIST_HEAD(&hw->blk[blk].xlt2.vsig_tbl[idx].prop_lst);
+
 	return ICE_SUCCESS;
 }
 
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index b770abfd0..fa9c348ce 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -24,7 +24,7 @@ ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
 	struct ice_aq_desc desc;
 	struct ice_aqc_nvm *cmd;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_read_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	cmd = &desc.params.nvm;
 
@@ -95,7 +95,7 @@ ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data,
 {
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_aq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_check_sr_access_params(hw, offset, words);
 
@@ -123,7 +123,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 {
 	enum ice_status status;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_word_aq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	status = ice_read_sr_aq(hw, offset, 1, data, true);
 	if (!status)
@@ -152,7 +152,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 	u16 words_read = 0;
 	u16 i = 0;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_read_sr_buf_aq");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	do {
 		u16 read_size, off_w;
@@ -202,7 +202,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 static enum ice_status
 ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
 {
-	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "ice_acquire_nvm\n");
 
 	if (hw->nvm.blank_nvm_mode)
 		return ICE_SUCCESS;
@@ -218,7 +218,7 @@ ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access)
  */
 static void ice_release_nvm(struct ice_hw *hw)
 {
-	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "ice_release_nvm\n");
 
 	if (hw->nvm.blank_nvm_mode)
 		return;
@@ -263,7 +263,7 @@ enum ice_status ice_init_nvm(struct ice_hw *hw)
 	u32 fla, gens_stat;
 	u8 sr_size;
 
-	ice_debug(hw, ICE_DBG_TRACE, "ice_init_nvm");
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 
 	/* The SR size is stored regardless of the NVM programming mode
 	 * as the blank mode may be used in the factory line.
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 22/69] net/ice/base: forbid VSI to remove unassociated ucast filter
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (20 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 21/69] net/ice/base: add and fix debuglogs Leyi Rong
@ 2019-06-19 15:17     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 23/69] net/ice/base: update some defines Leyi Rong
                       ` (47 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:17 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Akeem G Abodunrin, Paul M Stillwell Jr

If a VSI is not using a particular unicast filter, or did not configure
it, the driver should not allow a rogue VSI to remove that filter.

Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 57 +++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 0ad29dace..4e0558939 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -3181,6 +3181,39 @@ ice_cfg_dflt_vsi(struct ice_port_info *pi, u16 vsi_handle, bool set,
 	return status;
 }
 
+/**
+ * ice_find_ucast_rule_entry - Search for a unicast MAC filter rule entry
+ * @hw: pointer to the hardware structure
+ * @recp_id: lookup type for which the specified rule needs to be searched
+ * @f_info: rule information
+ *
+ * Helper function to search for a unicast rule entry - this is to be used
+ * to remove unicast MAC filter that is not shared with other VSIs on the
+ * PF switch.
+ *
+ * Returns pointer to entry storing the rule if found
+ */
+static struct ice_fltr_mgmt_list_entry *
+ice_find_ucast_rule_entry(struct ice_hw *hw, u8 recp_id,
+			  struct ice_fltr_info *f_info)
+{
+	struct ice_switch_info *sw = hw->switch_info;
+	struct ice_fltr_mgmt_list_entry *list_itr;
+	struct LIST_HEAD_TYPE *list_head;
+
+	list_head = &sw->recp_list[recp_id].filt_rules;
+	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_fltr_mgmt_list_entry,
+			    list_entry) {
+		if (!memcmp(&f_info->l_data, &list_itr->fltr_info.l_data,
+			    sizeof(f_info->l_data)) &&
+		    f_info->fwd_id.hw_vsi_id ==
+		    list_itr->fltr_info.fwd_id.hw_vsi_id &&
+		    f_info->flag == list_itr->fltr_info.flag)
+			return list_itr;
+	}
+	return NULL;
+}
+
 /**
  * ice_remove_mac - remove a MAC address based filter rule
  * @hw: pointer to the hardware structure
@@ -3198,16 +3231,40 @@ enum ice_status
 ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
 {
 	struct ice_fltr_list_entry *list_itr, *tmp;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
 
 	if (!m_list)
 		return ICE_ERR_PARAM;
 
+	rule_lock = &hw->switch_info->recp_list[ICE_SW_LKUP_MAC].filt_rule_lock;
 	LIST_FOR_EACH_ENTRY_SAFE(list_itr, tmp, m_list, ice_fltr_list_entry,
 				 list_entry) {
 		enum ice_sw_lkup_type l_type = list_itr->fltr_info.lkup_type;
+		u8 *add = &list_itr->fltr_info.l_data.mac.mac_addr[0];
+		u16 vsi_handle;
 
 		if (l_type != ICE_SW_LKUP_MAC)
 			return ICE_ERR_PARAM;
+
+		vsi_handle = list_itr->fltr_info.vsi_handle;
+		if (!ice_is_vsi_valid(hw, vsi_handle))
+			return ICE_ERR_PARAM;
+
+		list_itr->fltr_info.fwd_id.hw_vsi_id =
+					ice_get_hw_vsi_num(hw, vsi_handle);
+		if (IS_UNICAST_ETHER_ADDR(add) && !hw->ucast_shared) {
+			/* Don't remove the unicast address that belongs to
+			 * another VSI on the switch, since it is not being
+			 * shared...
+			 */
+			ice_acquire_lock(rule_lock);
+			if (!ice_find_ucast_rule_entry(hw, ICE_SW_LKUP_MAC,
+						       &list_itr->fltr_info)) {
+				ice_release_lock(rule_lock);
+				return ICE_ERR_DOES_NOT_EXIST;
+			}
+			ice_release_lock(rule_lock);
+		}
 		list_itr->status = ice_remove_rule_internal(hw,
 							    ICE_SW_LKUP_MAC,
 							    list_itr);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 23/69] net/ice/base: update some defines
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (21 preceding siblings ...)
  2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 22/69] net/ice/base: forbid VSI to remove unassociated ucast filter Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 24/69] net/ice/base: add hweight32 support Leyi Rong
                       ` (46 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

Update the defines for the ice_aqc_manage_mac_read,
ice_aqc_manage_mac_write, and ice_aqc_get_phy_caps_data structures.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 739f79e88..249e48b82 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -110,6 +110,7 @@ struct ice_aqc_list_caps {
 struct ice_aqc_list_caps_elem {
 	__le16 cap;
 #define ICE_AQC_CAPS_VALID_FUNCTIONS			0x0005
+#define ICE_AQC_MAX_VALID_FUNCTIONS			0x8
 #define ICE_AQC_CAPS_VSI				0x0017
 #define ICE_AQC_CAPS_DCB				0x0018
 #define ICE_AQC_CAPS_RSS				0x0040
@@ -143,11 +144,9 @@ struct ice_aqc_manage_mac_read {
 #define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
 #define ICE_AQC_MAN_MAC_READ_S			4
 #define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
-	u8 lport_num;
-	u8 lport_num_valid;
-#define ICE_AQC_MAN_MAC_PORT_NUM_IS_VALID	BIT(0)
+	u8 rsvd[2];
 	u8 num_addr; /* Used in response */
-	u8 reserved[3];
+	u8 rsvd1[3];
 	__le32 addr_high;
 	__le32 addr_low;
 };
@@ -165,7 +164,7 @@ struct ice_aqc_manage_mac_read_resp {
 
 /* Manage MAC address, write command - direct (0x0108) */
 struct ice_aqc_manage_mac_write {
-	u8 port_num;
+	u8 rsvd;
 	u8 flags;
 #define ICE_AQC_MAN_MAC_WR_MC_MAG_EN		BIT(0)
 #define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP	BIT(1)
@@ -481,8 +480,8 @@ struct ice_aqc_vsi_props {
 #define ICE_AQ_VSI_VLAN_MODE_TAGGED	0x2
 #define ICE_AQ_VSI_VLAN_MODE_ALL	0x3
 #define ICE_AQ_VSI_PVLAN_INSERT_PVID	BIT(2)
-#define ICE_AQ_VSI_VLAN_EMOD_S	3
-#define ICE_AQ_VSI_VLAN_EMOD_M	(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
+#define ICE_AQ_VSI_VLAN_EMOD_S		3
+#define ICE_AQ_VSI_VLAN_EMOD_M		(0x3 << ICE_AQ_VSI_VLAN_EMOD_S)
 #define ICE_AQ_VSI_VLAN_EMOD_STR_BOTH	(0x0 << ICE_AQ_VSI_VLAN_EMOD_S)
 #define ICE_AQ_VSI_VLAN_EMOD_STR_UP	(0x1 << ICE_AQ_VSI_VLAN_EMOD_S)
 #define ICE_AQ_VSI_VLAN_EMOD_STR	(0x2 << ICE_AQ_VSI_VLAN_EMOD_S)
@@ -1425,6 +1424,7 @@ struct ice_aqc_get_phy_caps_data {
 #define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN		BIT(6)
 #define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN		BIT(7)
 #define ICE_AQC_PHY_FEC_MASK				MAKEMASK(0xdf, 0)
+	u8 rsvd1;	/* Byte 35 reserved */
 	u8 extended_compliance_code;
 #define ICE_MODULE_TYPE_TOTAL_BYTE			3
 	u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE];
@@ -1439,13 +1439,14 @@ struct ice_aqc_get_phy_caps_data {
 #define ICE_AQC_MOD_TYPE_BYTE2_SFP_PLUS			0xA0
 #define ICE_AQC_MOD_TYPE_BYTE2_QSFP_PLUS		0x86
 	u8 qualified_module_count;
+	u8 rsvd2[7];	/* Bytes 47:41 reserved */
 #define ICE_AQC_QUAL_MOD_COUNT_MAX			16
 	struct {
 		u8 v_oui[3];
 		u8 rsvd3;
 		u8 v_part[16];
 		__le32 v_rev;
-		__le64 rsvd8;
+		__le64 rsvd4;
 	} qual_modules[ICE_AQC_QUAL_MOD_COUNT_MAX];
 };
 
@@ -1831,7 +1832,7 @@ struct ice_aqc_get_cee_dcb_cfg_resp {
 };
 
 /* Set Local LLDP MIB (indirect 0x0A08)
- * Used to replace the local MIB of a given LLDP agent. e.g. DCBx
+ * Used to replace the local MIB of a given LLDP agent. e.g. DCBX
  */
 struct ice_aqc_lldp_set_local_mib {
 	u8 type;
@@ -1856,7 +1857,7 @@ struct ice_aqc_lldp_set_local_mib_resp {
 };
 
 /* Stop/Start LLDP Agent (direct 0x0A09)
- * Used for stopping/starting specific LLDP agent. e.g. DCBx.
+ * Used for stopping/starting specific LLDP agent. e.g. DCBX.
  * The same structure is used for the response, with the command field
  * being used as the status field.
  */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 24/69] net/ice/base: add hweight32 support
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (22 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 23/69] net/ice/base: update some defines Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 25/69] net/ice/base: call out dev/func caps when printing Leyi Rong
                       ` (45 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong

Add API support for hweight32.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_osdep.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
index d2d9238c7..ede893fc9 100644
--- a/drivers/net/ice/base/ice_osdep.h
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -267,6 +267,20 @@ ice_hweight8(u32 num)
 	return bits;
 }
 
+static inline u8
+ice_hweight32(u32 num)
+{
+	u8 bits = 0;
+	u32 i;
+
+	for (i = 0; i < 32; i++) {
+		bits += (u8)(num & 0x1);
+		num >>= 1;
+	}
+
+	return bits;
+}
+
 #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
 #define DELAY(x) rte_delay_us(x)
 #define ice_usec_delay(x) rte_delay_us(x)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 25/69] net/ice/base: call out dev/func caps when printing
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (23 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 24/69] net/ice/base: add hweight32 support Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 26/69] net/ice/base: set the max number of TCs per port to 4 Leyi Rong
                       ` (44 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Anirudh Venkataramanan, Paul M Stillwell Jr

Add a "func cap" prefix when printing function capabilities, and a
"dev cap" prefix when printing device capabilities.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 63 ++++++++++++++++---------------
 1 file changed, 33 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 0083970a5..461856827 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1948,6 +1948,7 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 	struct ice_hw_func_caps *func_p = NULL;
 	struct ice_hw_dev_caps *dev_p = NULL;
 	struct ice_hw_common_caps *caps;
+	char const *prefix;
 	u32 i;
 
 	if (!buf)
@@ -1958,9 +1959,11 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 	if (opc == ice_aqc_opc_list_dev_caps) {
 		dev_p = &hw->dev_caps;
 		caps = &dev_p->common_cap;
+		prefix = "dev cap";
 	} else if (opc == ice_aqc_opc_list_func_caps) {
 		func_p = &hw->func_caps;
 		caps = &func_p->common_cap;
+		prefix = "func cap";
 	} else {
 		ice_debug(hw, ICE_DBG_INIT, "wrong opcode\n");
 		return;
@@ -1976,21 +1979,25 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 		case ICE_AQC_CAPS_VALID_FUNCTIONS:
 			caps->valid_functions = number;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Valid Functions = %d\n",
+				  "%s: valid functions = %d\n", prefix,
 				  caps->valid_functions);
 			break;
 		case ICE_AQC_CAPS_VSI:
 			if (dev_p) {
 				dev_p->num_vsi_allocd_to_host = number;
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Dev.VSI cnt = %d\n",
+					  "%s: num VSI alloc to host = %d\n",
+					  prefix,
 					  dev_p->num_vsi_allocd_to_host);
 			} else if (func_p) {
 				func_p->guar_num_vsi =
 					ice_get_num_per_func(hw, ICE_MAX_VSI);
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Func.VSI cnt = %d\n",
-					  number);
+					  "%s: num guaranteed VSI (fw) = %d\n",
+					  prefix, number);
+				ice_debug(hw, ICE_DBG_INIT,
+					  "%s: num guaranteed VSI = %d\n",
+					  prefix, func_p->guar_num_vsi);
 			}
 			break;
 		case ICE_AQC_CAPS_DCB:
@@ -1998,49 +2005,51 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 			caps->active_tc_bitmap = logical_id;
 			caps->maxtc = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: DCB = %d\n", caps->dcb);
+				  "%s: DCB = %d\n", prefix, caps->dcb);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Active TC bitmap = %d\n",
+				  "%s: active TC bitmap = %d\n", prefix,
 				  caps->active_tc_bitmap);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: TC Max = %d\n", caps->maxtc);
+				  "%s: TC max = %d\n", prefix, caps->maxtc);
 			break;
 		case ICE_AQC_CAPS_RSS:
 			caps->rss_table_size = number;
 			caps->rss_table_entry_width = logical_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: RSS table size = %d\n",
+				  "%s: RSS table size = %d\n", prefix,
 				  caps->rss_table_size);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: RSS table width = %d\n",
+				  "%s: RSS table width = %d\n", prefix,
 				  caps->rss_table_entry_width);
 			break;
 		case ICE_AQC_CAPS_RXQS:
 			caps->num_rxq = number;
 			caps->rxq_first_id = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Num Rx Qs = %d\n", caps->num_rxq);
+				  "%s: num Rx queues = %d\n", prefix,
+				  caps->num_rxq);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Rx first queue ID = %d\n",
+				  "%s: Rx first queue ID = %d\n", prefix,
 				  caps->rxq_first_id);
 			break;
 		case ICE_AQC_CAPS_TXQS:
 			caps->num_txq = number;
 			caps->txq_first_id = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Num Tx Qs = %d\n", caps->num_txq);
+				  "%s: num Tx queues = %d\n", prefix,
+				  caps->num_txq);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Tx first queue ID = %d\n",
+				  "%s: Tx first queue ID = %d\n", prefix,
 				  caps->txq_first_id);
 			break;
 		case ICE_AQC_CAPS_MSIX:
 			caps->num_msix_vectors = number;
 			caps->msix_vector_first_id = phys_id;
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: MSIX vector count = %d\n",
+				  "%s: MSIX vector count = %d\n", prefix,
 				  caps->num_msix_vectors);
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: MSIX first vector index = %d\n",
+				  "%s: MSIX first vector index = %d\n", prefix,
 				  caps->msix_vector_first_id);
 			break;
 		case ICE_AQC_CAPS_FD:
@@ -2050,7 +2059,7 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 			if (dev_p) {
 				dev_p->num_flow_director_fltr = number;
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Dev.fd_fltr =%d\n",
+					  "%s: num FD filters = %d\n", prefix,
 					  dev_p->num_flow_director_fltr);
 			}
 			if (func_p) {
@@ -2063,29 +2072,23 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 				      GLQF_FD_SIZE_FD_BSIZE_S;
 				func_p->fd_fltr_best_effort = val;
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW:func.fd_fltr guar= %d\n",
-					  func_p->fd_fltr_guar);
+					  "%s: num guaranteed FD filters = %d\n",
+					  prefix, func_p->fd_fltr_guar);
 				ice_debug(hw, ICE_DBG_INIT,
-					  "HW:func.fd_fltr best effort=%d\n",
-					  func_p->fd_fltr_best_effort);
+					  "%s: num best effort FD filters = %d\n",
+					  prefix, func_p->fd_fltr_best_effort);
 			}
 			break;
 		}
 		case ICE_AQC_CAPS_MAX_MTU:
 			caps->max_mtu = number;
-			if (dev_p)
-				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: Dev.MaxMTU = %d\n",
-					  caps->max_mtu);
-			else if (func_p)
-				ice_debug(hw, ICE_DBG_INIT,
-					  "HW caps: func.MaxMTU = %d\n",
-					  caps->max_mtu);
+			ice_debug(hw, ICE_DBG_INIT, "%s: max MTU = %d\n",
+				  prefix, caps->max_mtu);
 			break;
 		default:
 			ice_debug(hw, ICE_DBG_INIT,
-				  "HW caps: Unknown capability[%d]: 0x%x\n", i,
-				  cap);
+				  "%s: unknown capability[%d]: 0x%x\n", prefix,
+				  i, cap);
 			break;
 		}
 	}
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 26/69] net/ice/base: set the max number of TCs per port to 4
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (24 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 25/69] net/ice/base: call out dev/func caps when printing Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 27/69] net/ice/base: make FDID available for FlexDescriptor Leyi Rong
                       ` (43 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Paul M Stillwell Jr

On devices with more than 4 ports, the maximum number of TCs per port is
limited to 4.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 461856827..81eab7ecc 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -2092,6 +2092,18 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count,
 			break;
 		}
 	}
+
+	/* Re-calculate capabilities that are dependent on the number of
+	 * physical ports; i.e. some features are not supported or function
+	 * differently on devices with more than 4 ports.
+	 */
+	if (caps && (ice_hweight32(caps->valid_functions) > 4)) {
+		/* Max 4 TCs per port */
+		caps->maxtc = 4;
+		ice_debug(hw, ICE_DBG_INIT,
+			  "%s: TC max = %d (based on #ports)\n", prefix,
+			  caps->maxtc);
+	}
 }
 
 /**
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 27/69] net/ice/base: make FDID available for FlexDescriptor
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (25 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 26/69] net/ice/base: set the max number of TCs per port to 4 Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 28/69] net/ice/base: use a different debug bit for FW log Leyi Rong
                       ` (42 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Henry Tieman, Paul M Stillwell Jr

The FDID (flow director filter ID) was not inserted into Flex Descriptor
writebacks. The data for the FDID is always 0xffffffff when the
FDID-priority is 0 in the flow director programming descriptor.

This patch changes the FDID-priority to 1 so that the FDID is placed
into the Flex Descriptor writeback.

Signed-off-by: Henry Tieman <henry.w.tieman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_fdir.c      | 2 +-
 drivers/net/ice/base/ice_lan_tx_rx.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index 4bc8e6dcb..bde676a8f 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -186,7 +186,7 @@ ice_set_dflt_val_fd_desc(struct ice_fd_fltr_desc_ctx *fd_fltr_ctx)
 	fd_fltr_ctx->desc_prof_prio = ICE_FXD_FLTR_QW1_PROF_PRIO_ZERO;
 	fd_fltr_ctx->desc_prof = ICE_FXD_FLTR_QW1_PROF_ZERO;
 	fd_fltr_ctx->swap = ICE_FXD_FLTR_QW1_SWAP_SET;
-	fd_fltr_ctx->fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ZERO;
+	fd_fltr_ctx->fdid_prio = ICE_FXD_FLTR_QW1_FDID_PRI_ONE;
 	fd_fltr_ctx->fdid_mdid = ICE_FXD_FLTR_QW1_FDID_MDID_FD;
 	fd_fltr_ctx->fdid = ICE_FXD_FLTR_QW1_FDID_ZERO;
 }
diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
index 8c9902994..ef12b9f7c 100644
--- a/drivers/net/ice/base/ice_lan_tx_rx.h
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -162,7 +162,7 @@ struct ice_fltr_desc {
 
 #define ICE_FXD_FLTR_QW1_FDID_PRI_S	25
 #define ICE_FXD_FLTR_QW1_FDID_PRI_M	(0x7ULL << ICE_FXD_FLTR_QW1_FDID_PRI_S)
-#define ICE_FXD_FLTR_QW1_FDID_PRI_ZERO	0x0ULL
+#define ICE_FXD_FLTR_QW1_FDID_PRI_ONE	0x1ULL
 
 #define ICE_FXD_FLTR_QW1_FDID_MDID_S	28
 #define ICE_FXD_FLTR_QW1_FDID_MDID_M	(0xFULL << ICE_FXD_FLTR_QW1_FDID_MDID_S)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 28/69] net/ice/base: use a different debug bit for FW log
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (26 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 27/69] net/ice/base: make FDID available for FlexDescriptor Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 29/69] net/ice/base: always set prefena when configuring a Rx queue Leyi Rong
                       ` (41 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Replace the use of the ICE_DBG_AQ_MSG bit when dumping firmware logging
messages with a separate distinct type ICE_DBG_FW_LOG. This is useful
so that developers may enable ICE_DBG_FW_LOG and get firmware logging
messages, without also dumping AdminQ messages at the same time.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 6 +++---
 drivers/net/ice/base/ice_type.h   | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 81eab7ecc..7cd0832bc 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -789,10 +789,10 @@ static enum ice_status ice_cfg_fw_log(struct ice_hw *hw, bool enable)
  */
 void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
 {
-	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg Start ]\n");
-	ice_debug_array(hw, ICE_DBG_AQ_MSG, 16, 1, (u8 *)buf,
+	ice_debug(hw, ICE_DBG_FW_LOG, "[ FW Log Msg Start ]\n");
+	ice_debug_array(hw, ICE_DBG_FW_LOG, 16, 1, (u8 *)buf,
 			LE16_TO_CPU(desc->datalen));
-	ice_debug(hw, ICE_DBG_AQ_MSG, "[ FW Log Msg End ]\n");
+	ice_debug(hw, ICE_DBG_FW_LOG, "[ FW Log Msg End ]\n");
 }
 
 /**
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 0f033bbf1..543988898 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -70,7 +70,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 /* debug masks - set these bits in hw->debug_mask to control output */
 #define ICE_DBG_INIT		BIT_ULL(1)
 #define ICE_DBG_RELEASE		BIT_ULL(2)
-
+#define ICE_DBG_FW_LOG		BIT_ULL(3)
 #define ICE_DBG_LINK		BIT_ULL(4)
 #define ICE_DBG_PHY		BIT_ULL(5)
 #define ICE_DBG_QCTX		BIT_ULL(6)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 29/69] net/ice/base: always set prefena when configuring a Rx queue
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (27 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 28/69] net/ice/base: use a different debug bit for FW log Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 30/69] net/ice/base: disable Tx pacing option Leyi Rong
                       ` (40 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Brett Creeley, Paul M Stillwell Jr

Currently we always set prefena to 0. This causes the hardware to fetch
descriptors only when none are free in its cache for a received packet,
instead of prefetching once it has used the last descriptor, regardless
of incoming packets. Fix this by allowing the hardware to prefetch Rx
descriptors.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c    | 9 ++++++++-
 drivers/net/ice/base/ice_lan_tx_rx.h | 1 +
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 7cd0832bc..5490c1dfd 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1213,6 +1213,7 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
 	ICE_CTX_STORE(ice_rlan_ctx, tphdata_ena,	1,	195),
 	ICE_CTX_STORE(ice_rlan_ctx, tphhead_ena,	1,	196),
 	ICE_CTX_STORE(ice_rlan_ctx, lrxqthresh,		3,	198),
+	ICE_CTX_STORE(ice_rlan_ctx, prefena,		1,	201),
 	{ 0 }
 };
 
@@ -1223,7 +1224,8 @@ static const struct ice_ctx_ele ice_rlan_ctx_info[] = {
  * @rxq_index: the index of the Rx queue
  *
  * Converts rxq context from sparse to dense structure and then writes
- * it to HW register space
+ * it to HW register space and enables the hardware to prefetch descriptors
+ * instead of only fetching them on demand
  */
 enum ice_status
 ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
@@ -1231,6 +1233,11 @@ ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
 {
 	u8 ctx_buf[ICE_RXQ_CTX_SZ] = { 0 };
 
+	if (!rlan_ctx)
+		return ICE_ERR_BAD_PTR;
+
+	rlan_ctx->prefena = 1;
+
 	ice_set_ctx((u8 *)rlan_ctx, ctx_buf, ice_rlan_ctx_info);
 	return ice_copy_rxq_ctx_to_hw(hw, ctx_buf, rxq_index);
 }
diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
index ef12b9f7c..fa2309bf1 100644
--- a/drivers/net/ice/base/ice_lan_tx_rx.h
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -807,6 +807,7 @@ struct ice_rlan_ctx {
 	u8 tphdata_ena;
 	u8 tphhead_ena;
 	u16 lrxqthresh; /* bigger than needed, see above for reason */
+	u8 prefena;	/* NOTE: normally must be set to 1 at init */
 };
 
 struct ice_ctx_ele {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 30/69] net/ice/base: disable Tx pacing option
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (28 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 29/69] net/ice/base: always set prefena when configuring a Rx queue Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 31/69] net/ice/base: delete the index for chaining other recipe Leyi Rong
                       ` (39 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

With the current NVM, after a GLOBR and before the first link-up event,
the FW returns a pacing value of 20 percent in the get-link-status AQ
command. The driver then uses this value as the pacing parameter in the
set-mac-cfg AQ command. As a result, we are limited to 20 percent of the
available bandwidth until the first set-mac-cfg AQ call after the
link-up event.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 19 -------------------
 1 file changed, 19 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 5490c1dfd..8af1be9a3 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -449,11 +449,7 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd)
 {
 	u16 fc_threshold_val, tx_timer_val;
 	struct ice_aqc_set_mac_cfg *cmd;
-	struct ice_port_info *pi;
 	struct ice_aq_desc desc;
-	enum ice_status status;
-	u8 port_num = 0;
-	bool link_up;
 	u32 reg_val;
 
 	cmd = &desc.params.set_mac_cfg;
@@ -465,21 +461,6 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd)
 
 	cmd->max_frame_size = CPU_TO_LE16(max_frame_size);
 
-	/* Retrieve the current data_pacing value in FW*/
-	pi = &hw->port_info[port_num];
-
-	/* We turn on the get_link_info so that ice_update_link_info(...)
-	 * can be called.
-	 */
-	pi->phy.get_link_info = 1;
-
-	status = ice_get_link_status(pi, &link_up);
-
-	if (status)
-		return status;
-
-	cmd->params = pi->phy.link_info.pacing;
-
 	/* We read back the transmit timer and fc threshold value of
 	 * LFC. Thus, we will use index =
 	 * PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX.
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 31/69] net/ice/base: delete the index for chaining other recipe
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (29 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 30/69] net/ice/base: disable Tx pacing option Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 32/69] net/ice/base: cleanup update link info Leyi Rong
                       ` (38 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

Make sure that we don't reuse a result index that is already in use for
chaining some other recipe.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 8af1be9a3..a69ae47f0 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -525,7 +525,15 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
 	}
 	recps = hw->switch_info->recp_list;
 	for (i = 0; i < ICE_MAX_NUM_RECIPES; i++) {
+		struct ice_recp_grp_entry *rg_entry, *tmprg_entry;
+
 		recps[i].root_rid = i;
+		LIST_FOR_EACH_ENTRY_SAFE(rg_entry, tmprg_entry,
+					 &recps[i].rg_list, ice_recp_grp_entry,
+					 l_entry) {
+			LIST_DEL(&rg_entry->l_entry);
+			ice_free(hw, rg_entry);
+		}
 
 		if (recps[i].adv_rule) {
 			struct ice_adv_fltr_mgmt_list_entry *tmp_entry;
@@ -552,6 +560,8 @@ static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw)
 				ice_free(hw, lst_itr);
 			}
 		}
+		if (recps[i].root_buf)
+			ice_free(hw, recps[i].root_buf);
 	}
 	ice_rm_all_sw_replay_rule_info(hw);
 	ice_free(hw, sw->recp_list);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 32/69] net/ice/base: cleanup update link info
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (30 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 31/69] net/ice/base: delete the index for chaining other recipe Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 33/69] net/ice/base: add rd64 support Leyi Rong
                       ` (37 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Chinh T Cao, Paul M Stillwell Jr

1. Do not unnecessarily initialize the local variable.
2. Clean up ice_update_link_info.
3. Don't clear the auto_fec bit in ice_cfg_phy_fec.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 52 ++++++++++++++-----------------
 1 file changed, 24 insertions(+), 28 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index a69ae47f0..1ed151050 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -2414,10 +2414,10 @@ void
 ice_update_phy_type(u64 *phy_type_low, u64 *phy_type_high,
 		    u16 link_speeds_bitmap)
 {
-	u16 speed = ICE_AQ_LINK_SPEED_UNKNOWN;
 	u64 pt_high;
 	u64 pt_low;
 	int index;
+	u16 speed;
 
 	/* We first check with low part of phy_type */
 	for (index = 0; index <= ICE_PHY_TYPE_LOW_MAX_INDEX; index++) {
@@ -2498,38 +2498,38 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
  */
 enum ice_status ice_update_link_info(struct ice_port_info *pi)
 {
-	struct ice_aqc_get_phy_caps_data *pcaps;
-	struct ice_phy_info *phy_info;
+	struct ice_link_status *li;
 	enum ice_status status;
-	struct ice_hw *hw;
 
 	if (!pi)
 		return ICE_ERR_PARAM;
 
-	hw = pi->hw;
-
-	pcaps = (struct ice_aqc_get_phy_caps_data *)
-		ice_malloc(hw, sizeof(*pcaps));
-	if (!pcaps)
-		return ICE_ERR_NO_MEMORY;
+	li = &pi->phy.link_info;
 
-	phy_info = &pi->phy;
 	status = ice_aq_get_link_info(pi, true, NULL, NULL);
 	if (status)
-		goto out;
+		return status;
+
+	if (li->link_info & ICE_AQ_MEDIA_AVAILABLE) {
+		struct ice_aqc_get_phy_caps_data *pcaps;
+		struct ice_hw *hw;
 
-	if (phy_info->link_info.link_info & ICE_AQ_MEDIA_AVAILABLE) {
-		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG,
+		hw = pi->hw;
+		pcaps = (struct ice_aqc_get_phy_caps_data *)
+			ice_malloc(hw, sizeof(*pcaps));
+		if (!pcaps)
+			return ICE_ERR_NO_MEMORY;
+
+		status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP,
 					     pcaps, NULL);
-		if (status)
-			goto out;
+		if (status == ICE_SUCCESS)
+			ice_memcpy(li->module_type, &pcaps->module_type,
+				   sizeof(li->module_type),
+				   ICE_NONDMA_TO_NONDMA);
 
-		ice_memcpy(phy_info->link_info.module_type, &pcaps->module_type,
-			   sizeof(phy_info->link_info.module_type),
-			   ICE_NONDMA_TO_NONDMA);
+		ice_free(hw, pcaps);
 	}
-out:
-	ice_free(hw, pcaps);
+
 	return status;
 }
 
@@ -2709,27 +2709,24 @@ ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec)
 {
 	switch (fec) {
 	case ICE_FEC_BASER:
-		/* Clear auto FEC and RS bits, and AND BASE-R ability
+		/* Clear RS bits, and AND BASE-R ability
 		 * bits and OR request bits.
 		 */
-		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
 		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_EN |
 				     ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN;
 		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_10G_KR_40G_KR4_REQ |
 				     ICE_AQC_PHY_FEC_25G_KR_REQ;
 		break;
 	case ICE_FEC_RS:
-		/* Clear auto FEC and BASE-R bits, and AND RS ability
+		/* Clear BASE-R bits, and AND RS ability
 		 * bits and OR request bits.
 		 */
-		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
 		cfg->link_fec_opt &= ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN;
 		cfg->link_fec_opt |= ICE_AQC_PHY_FEC_25G_RS_528_REQ |
 				     ICE_AQC_PHY_FEC_25G_RS_544_REQ;
 		break;
 	case ICE_FEC_NONE:
-		/* Clear auto FEC and all FEC option bits. */
-		cfg->caps &= ~ICE_AQC_PHY_EN_AUTO_FEC;
+		/* Clear all FEC option bits. */
 		cfg->link_fec_opt &= ~ICE_AQC_PHY_FEC_MASK;
 		break;
 	case ICE_FEC_AUTO:
@@ -3829,7 +3826,6 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
 	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
 		return ICE_ERR_CFG;
 
-
 	if (!num_queues) {
 		/* if queue is disabled already yet the disable queue command
 		 * has to be sent to complete the VF reset, then call
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 33/69] net/ice/base: add rd64 support
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (31 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 32/69] net/ice/base: cleanup update link info Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 34/69] net/ice/base: track HW stat registers past rollover Leyi Rong
                       ` (36 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong

Add API support for rd64.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_osdep.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/ice/base/ice_osdep.h b/drivers/net/ice/base/ice_osdep.h
index ede893fc9..35a17b941 100644
--- a/drivers/net/ice/base/ice_osdep.h
+++ b/drivers/net/ice/base/ice_osdep.h
@@ -126,11 +126,19 @@ do {									\
 #define ICE_PCI_REG(reg)     rte_read32(reg)
 #define ICE_PCI_REG_ADDR(a, reg) \
 	((volatile uint32_t *)((char *)(a)->hw_addr + (reg)))
+#define ICE_PCI_REG64(reg)     rte_read64(reg)
+#define ICE_PCI_REG_ADDR64(a, reg) \
+	((volatile uint64_t *)((char *)(a)->hw_addr + (reg)))
 static inline uint32_t ice_read_addr(volatile void *addr)
 {
 	return rte_le_to_cpu_32(ICE_PCI_REG(addr));
 }
 
+static inline uint64_t ice_read_addr64(volatile void *addr)
+{
+	return rte_le_to_cpu_64(ICE_PCI_REG64(addr));
+}
+
 #define ICE_PCI_REG_WRITE(reg, value) \
 	rte_write32((rte_cpu_to_le_32(value)), reg)
 
@@ -145,6 +153,7 @@ static inline uint32_t ice_read_addr(volatile void *addr)
 	ICE_PCI_REG_WRITE(ICE_PCI_REG_ADDR((a), (reg)), (value))
 #define flush(a) ice_read_addr(ICE_PCI_REG_ADDR((a), (GLGEN_STAT)))
 #define div64_long(n, d) ((n) / (d))
+#define rd64(a, reg) ice_read_addr64(ICE_PCI_REG_ADDR64((a), (reg)))
 
 #define BITS_PER_BYTE       8
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 34/69] net/ice/base: track HW stat registers past rollover
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (32 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 33/69] net/ice/base: add rd64 support Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 35/69] net/ice/base: implement LLDP persistent settings Leyi Rong
                       ` (35 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Modify ice_stat_update40 to use rd64 instead of two calls to rd32.
Additionally, drop the now unnecessary hireg function parameter.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c | 57 +++++++++++++++++++------------
 drivers/net/ice/base/ice_common.h |  8 ++---
 2 files changed, 38 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 1ed151050..199430e28 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -4004,40 +4004,44 @@ void ice_replay_post(struct ice_hw *hw)
 /**
  * ice_stat_update40 - read 40 bit stat from the chip and update stat values
  * @hw: ptr to the hardware info
- * @hireg: high 32 bit HW register to read from
- * @loreg: low 32 bit HW register to read from
+ * @reg: offset of 64 bit HW register to read from
  * @prev_stat_loaded: bool to specify if previous stats are loaded
  * @prev_stat: ptr to previous loaded stat value
  * @cur_stat: ptr to current stat value
  */
 void
-ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
-		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat)
+ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat)
 {
-	u64 new_data;
-
-	new_data = rd32(hw, loreg);
-	new_data |= ((u64)(rd32(hw, hireg) & 0xFFFF)) << 32;
+	u64 new_data = rd64(hw, reg) & (BIT_ULL(40) - 1);
 
 	/* device stats are not reset at PFR, they likely will not be zeroed
-	 * when the driver starts. So save the first values read and use them as
-	 * offsets to be subtracted from the raw values in order to report stats
-	 * that count from zero.
+	 * when the driver starts. Thus, save the value from the first read
+	 * without adding to the statistic value so that we report stats which
+	 * count up from zero.
 	 */
-	if (!prev_stat_loaded)
+	if (!prev_stat_loaded) {
 		*prev_stat = new_data;
+		return;
+	}
+
+	/* Calculate the difference between the new and old values, and then
+	 * add it to the software stat value.
+	 */
 	if (new_data >= *prev_stat)
-		*cur_stat = new_data - *prev_stat;
+		*cur_stat += new_data - *prev_stat;
 	else
 		/* to manage the potential roll-over */
-		*cur_stat = (new_data + BIT_ULL(40)) - *prev_stat;
-	*cur_stat &= 0xFFFFFFFFFFULL;
+		*cur_stat += (new_data + BIT_ULL(40)) - *prev_stat;
+
+	/* Update the previously stored value to prepare for next read */
+	*prev_stat = new_data;
 }
 
 /**
  * ice_stat_update32 - read 32 bit stat from the chip and update stat values
  * @hw: ptr to the hardware info
- * @reg: HW register to read from
+ * @reg: offset of HW register to read from
  * @prev_stat_loaded: bool to specify if previous stats are loaded
  * @prev_stat: ptr to previous loaded stat value
  * @cur_stat: ptr to current stat value
@@ -4051,17 +4055,26 @@ ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
 	new_data = rd32(hw, reg);
 
 	/* device stats are not reset at PFR, they likely will not be zeroed
-	 * when the driver starts. So save the first values read and use them as
-	 * offsets to be subtracted from the raw values in order to report stats
-	 * that count from zero.
+	 * when the driver starts. Thus, save the value from the first read
+	 * without adding to the statistic value so that we report stats which
+	 * count up from zero.
 	 */
-	if (!prev_stat_loaded)
+	if (!prev_stat_loaded) {
 		*prev_stat = new_data;
+		return;
+	}
+
+	/* Calculate the difference between the new and old values, and then
+	 * add it to the software stat value.
+	 */
 	if (new_data >= *prev_stat)
-		*cur_stat = new_data - *prev_stat;
+		*cur_stat += new_data - *prev_stat;
 	else
 		/* to manage the potential roll-over */
-		*cur_stat = (new_data + BIT_ULL(32)) - *prev_stat;
+		*cur_stat += (new_data + BIT_ULL(32)) - *prev_stat;
+
+	/* Update the previously stored value to prepare for next read */
+	*prev_stat = new_data;
 }
 
 
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 4cd87fc1e..2063295ce 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -6,7 +6,6 @@
 #define _ICE_COMMON_H_
 
 #include "ice_type.h"
-
 #include "ice_flex_pipe.h"
 #include "ice_switch.h"
 #include "ice_fdir.h"
@@ -34,8 +33,7 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		  struct ice_rq_event_info *e, u16 *pending);
 enum ice_status
 ice_get_link_status(struct ice_port_info *pi, bool *link_up);
-enum ice_status
-ice_update_link_info(struct ice_port_info *pi);
+enum ice_status ice_update_link_info(struct ice_port_info *pi);
 enum ice_status
 ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 		enum ice_aq_res_access_type access, u32 timeout);
@@ -195,8 +193,8 @@ ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
 enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
 void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
 void
-ice_stat_update40(struct ice_hw *hw, u32 hireg, u32 loreg,
-		  bool prev_stat_loaded, u64 *prev_stat, u64 *cur_stat);
+ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
+		  u64 *prev_stat, u64 *cur_stat);
 void
 ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
 		  u64 *prev_stat, u64 *cur_stat);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 35/69] net/ice/base: implement LLDP persistent settings
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (33 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 34/69] net/ice/base: track HW stat registers past rollover Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 36/69] net/ice/base: check new FD filter duplicate location Leyi Rong
                       ` (34 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jaroslaw Ilgiewicz, Paul M Stillwell Jr

This patch implements persistent (across reboots) start and stop of
the LLDP agent. An additional function parameter is added to
ice_aq_start_lldp and ice_aq_stop_lldp.

Signed-off-by: Jaroslaw Ilgiewicz <jaroslaw.ilgiewicz@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 33 ++++++++++++++++++++++-----------
 drivers/net/ice/base/ice_dcb.h |  9 ++++-----
 drivers/net/ice/ice_ethdev.c   |  2 +-
 3 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index 100c4bb0f..008c7a110 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -83,12 +83,14 @@ ice_aq_cfg_lldp_mib_change(struct ice_hw *hw, bool ena_update,
  * @hw: pointer to the HW struct
  * @shutdown_lldp_agent: True if LLDP Agent needs to be Shutdown
  *			 False if LLDP Agent needs to be Stopped
+ * @persist: True if Stop/Shutdown of LLDP Agent needs to be persistent across
+ *	     reboots
  * @cd: pointer to command details structure or NULL
  *
  * Stop or Shutdown the embedded LLDP Agent (0x0A05)
  */
 enum ice_status
-ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
+ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
 		 struct ice_sq_cd *cd)
 {
 	struct ice_aqc_lldp_stop *cmd;
@@ -101,17 +103,22 @@ ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
 	if (shutdown_lldp_agent)
 		cmd->command |= ICE_AQ_LLDP_AGENT_SHUTDOWN;
 
+	if (persist)
+		cmd->command |= ICE_AQ_LLDP_AGENT_PERSIST_DIS;
+
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
 /**
  * ice_aq_start_lldp
  * @hw: pointer to the HW struct
+ * @persist: True if Start of LLDP Agent needs to be persistent across reboots
  * @cd: pointer to command details structure or NULL
  *
  * Start the embedded LLDP Agent on all ports. (0x0A06)
  */
-enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd)
+enum ice_status
+ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd)
 {
 	struct ice_aqc_lldp_start *cmd;
 	struct ice_aq_desc desc;
@@ -122,6 +129,9 @@ enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd)
 
 	cmd->command = ICE_AQ_LLDP_AGENT_START;
 
+	if (persist)
+		cmd->command |= ICE_AQ_LLDP_AGENT_PERSIST_ENA;
+
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
@@ -615,7 +625,8 @@ ice_parse_org_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg)
  *
  * Parse DCB configuration from the LLDPDU
  */
-enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
+enum ice_status
+ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
 {
 	struct ice_lldp_org_tlv *tlv;
 	enum ice_status ret = ICE_SUCCESS;
@@ -659,7 +670,7 @@ enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg)
 /**
  * ice_aq_get_dcb_cfg
  * @hw: pointer to the HW struct
- * @mib_type: mib type for the query
+ * @mib_type: MIB type for the query
  * @bridgetype: bridge type for the query (remote)
  * @dcbcfg: store for LLDPDU data
  *
@@ -690,13 +701,13 @@ ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 }
 
 /**
- * ice_aq_start_stop_dcbx - Start/Stop DCBx service in FW
+ * ice_aq_start_stop_dcbx - Start/Stop DCBX service in FW
  * @hw: pointer to the HW struct
- * @start_dcbx_agent: True if DCBx Agent needs to be started
- *		      False if DCBx Agent needs to be stopped
- * @dcbx_agent_status: FW indicates back the DCBx agent status
- *		       True if DCBx Agent is active
- *		       False if DCBx Agent is stopped
+ * @start_dcbx_agent: True if DCBX Agent needs to be started
+ *		      False if DCBX Agent needs to be stopped
+ * @dcbx_agent_status: FW indicates back the DCBX agent status
+ *		       True if DCBX Agent is active
+ *		       False if DCBX Agent is stopped
  * @cd: pointer to command details structure or NULL
  *
  * Start/Stop the embedded dcbx Agent. In case that this wrapper function
@@ -1236,7 +1247,7 @@ ice_add_dcb_tlv(struct ice_lldp_org_tlv *tlv, struct ice_dcbx_cfg *dcbcfg,
 /**
  * ice_dcb_cfg_to_lldp - Convert DCB configuration to MIB format
  * @lldpmib: pointer to the HW struct
- * @miblen: length of LLDP mib
+ * @miblen: length of LLDP MIB
  * @dcbcfg: Local store which holds the DCB Config
  *
  * Convert the DCB configuration to MIB format
diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h
index 65d2bafef..47127096b 100644
--- a/drivers/net/ice/base/ice_dcb.h
+++ b/drivers/net/ice/base/ice_dcb.h
@@ -114,7 +114,6 @@ struct ice_lldp_org_tlv {
 	__be32 ouisubtype;
 	u8 tlvinfo[1];
 };
-
 #pragma pack()
 
 struct ice_cee_tlv_hdr {
@@ -147,7 +146,6 @@ struct ice_cee_app_prio {
 	__be16 lower_oui;
 	u8 prio_map;
 };
-
 #pragma pack()
 
 /* TODO: The below structures related LLDP/DCBX variables
@@ -190,8 +188,8 @@ enum ice_status
 ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
 		       struct ice_aqc_get_cee_dcb_cfg_resp *buff,
 		       struct ice_sq_cd *cd);
-u8 ice_get_dcbx_status(struct ice_hw *hw);
 enum ice_status ice_lldp_to_dcb_cfg(u8 *lldpmib, struct ice_dcbx_cfg *dcbcfg);
+u8 ice_get_dcbx_status(struct ice_hw *hw);
 enum ice_status
 ice_aq_get_dcb_cfg(struct ice_hw *hw, u8 mib_type, u8 bridgetype,
 		   struct ice_dcbx_cfg *dcbcfg);
@@ -211,9 +209,10 @@ enum ice_status
 ice_update_port_tc_tree_cfg(struct ice_port_info *pi,
 			    struct ice_aqc_port_ets_elem *buf);
 enum ice_status
-ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent,
+ice_aq_stop_lldp(struct ice_hw *hw, bool shutdown_lldp_agent, bool persist,
 		 struct ice_sq_cd *cd);
-enum ice_status ice_aq_start_lldp(struct ice_hw *hw, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_start_lldp(struct ice_hw *hw, bool persist, struct ice_sq_cd *cd);
 enum ice_status
 ice_aq_start_stop_dcbx(struct ice_hw *hw, bool start_dcbx_agent,
 		       bool *dcbx_agent_status, struct ice_sq_cd *cd);
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 5968604b4..203d0a9f9 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1447,7 +1447,7 @@ ice_dev_init(struct rte_eth_dev *dev)
 	/* Disable double vlan by default */
 	ice_vsi_config_double_vlan(vsi, FALSE);
 
-	ret = ice_aq_stop_lldp(hw, TRUE, NULL);
+	ret = ice_aq_stop_lldp(hw, TRUE, FALSE, NULL);
 	if (ret != ICE_SUCCESS)
 		PMD_INIT_LOG(DEBUG, "lldp has already stopped\n");
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 36/69] net/ice/base: check new FD filter duplicate location
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (34 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 35/69] net/ice/base: implement LLDP persistent settings Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 37/69] net/ice/base: correct UDP/TCP PTYPE assignments Leyi Rong
                       ` (33 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Karol Kolacinski, Paul M Stillwell Jr

The function ice_fdir_is_dup_fltr tests whether a new Flow Director
rule is a duplicate of an already existing rule.

Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_fdir.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index bde676a8f..9ef91b3b8 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -692,8 +692,13 @@ bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input)
 				ret = ice_fdir_comp_rules(rule, input, false);
 			else
 				ret = ice_fdir_comp_rules(rule, input, true);
-			if (ret)
-				break;
+			if (ret) {
+				if (rule->fltr_id == input->fltr_id &&
+				    rule->q_index != input->q_index)
+					ret = false;
+				else
+					break;
+			}
 		}
 	}
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 37/69] net/ice/base: correct UDP/TCP PTYPE assignments
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (35 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 36/69] net/ice/base: check new FD filter duplicate location Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 38/69] net/ice/base: calculate rate limit burst size correctly Leyi Rong
                       ` (32 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

1. Use the UDP-IL PTYPEs when processing packet segments, as that list
contains all PTYPEs with UDP and allows packets to be forwarded to
associated VSIs, since switch rules are based on outer IPs.
2. Add PTYPE 0x088 to the TCP PTYPE bitmap list.

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 9f2a794bc..825c53b51 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -195,21 +195,11 @@ static const u32 ice_ptypes_arp_of[] = {
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 };
 
-/* Packet types for packets with an Outermost/First UDP header */
-static const u32 ice_ptypes_udp_of[] = {
-	0x81000000, 0x00000000, 0x04000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-};
-
-/* Packet types for packets with an Innermost/Last UDP header */
+/* UDP Packet types for non-tunneled packets or tunneled
+ * packets with inner UDP.
+ */
 static const u32 ice_ptypes_udp_il[] = {
-	0x80000000, 0x20204040, 0x00081010, 0x80810102,
+	0x81000000, 0x20204040, 0x04081010, 0x80810102,
 	0x00204040, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -222,7 +212,7 @@ static const u32 ice_ptypes_udp_il[] = {
 /* Packet types for packets with an Innermost/Last TCP header */
 static const u32 ice_ptypes_tcp_il[] = {
 	0x04000000, 0x80810102, 0x10204040, 0x42040408,
-	0x00810002, 0x00000000, 0x00000000, 0x00000000,
+	0x00810102, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -473,8 +463,7 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 				       ICE_FLOW_PTYPE_MAX);
 			hdrs &= ~ICE_FLOW_SEG_HDR_ICMP;
 		} else if (hdrs & ICE_FLOW_SEG_HDR_UDP) {
-			src = !i ? (const ice_bitmap_t *)ice_ptypes_udp_of :
-				(const ice_bitmap_t *)ice_ptypes_udp_il;
+			src = (const ice_bitmap_t *)ice_ptypes_udp_il;
 			ice_and_bitmap(params->ptypes, params->ptypes, src,
 				       ICE_FLOW_PTYPE_MAX);
 			hdrs &= ~ICE_FLOW_SEG_HDR_UDP;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 38/69] net/ice/base: calculate rate limit burst size correctly
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (36 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 37/69] net/ice/base: correct UDP/TCP PTYPE assignments Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 39/69] net/ice/base: fix Flow Director VSI count Leyi Rong
                       ` (31 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Ben Shelton, Paul M Stillwell Jr

When the MSB is not set, the lower 11 bits do not represent bytes, but
chunks of 64 bytes. Adjust the rate limit burst size calculation
accordingly, and update the comments to indicate the way the hardware
actually works.

Signed-off-by: Ben Shelton <benjamin.h.shelton@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 17 ++++++++---------
 drivers/net/ice/base/ice_sched.h | 14 ++++++++------
 2 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 0c1c18ba1..a72e72982 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -5060,16 +5060,15 @@ enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes)
 	if (bytes < ICE_MIN_BURST_SIZE_ALLOWED ||
 	    bytes > ICE_MAX_BURST_SIZE_ALLOWED)
 		return ICE_ERR_PARAM;
-	if (bytes <= ICE_MAX_BURST_SIZE_BYTE_GRANULARITY) {
-		/* byte granularity case */
+	if (ice_round_to_num(bytes, 64) <=
+	    ICE_MAX_BURST_SIZE_64_BYTE_GRANULARITY) {
+		/* 64 byte granularity case */
 		/* Disable MSB granularity bit */
-		burst_size_to_prog = ICE_BYTE_GRANULARITY;
-		/* round number to nearest 256 granularity */
-		bytes = ice_round_to_num(bytes, 256);
-		/* check rounding doesn't go beyond allowed */
-		if (bytes > ICE_MAX_BURST_SIZE_BYTE_GRANULARITY)
-			bytes = ICE_MAX_BURST_SIZE_BYTE_GRANULARITY;
-		burst_size_to_prog |= (u16)bytes;
+		burst_size_to_prog = ICE_64_BYTE_GRANULARITY;
+		/* round number to nearest 64 byte granularity */
+		bytes = ice_round_to_num(bytes, 64);
+		/* The value is in 64 byte chunks */
+		burst_size_to_prog |= (u16)(bytes / 64);
 	} else {
 		/* k bytes granularity case */
 		/* Enable MSB granularity bit */
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 56f9977ab..e444dc880 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -13,14 +13,16 @@
 #define ICE_SCHED_INVAL_LAYER_NUM	0xFF
 /* Burst size is a 12 bits register that is configured while creating the RL
  * profile(s). MSB is a granularity bit and tells the granularity type
- * 0 - LSB bits are in bytes granularity
+ * 0 - LSB bits are in 64 bytes granularity
  * 1 - LSB bits are in 1K bytes granularity
  */
-#define ICE_BYTE_GRANULARITY			0
-#define ICE_KBYTE_GRANULARITY			0x800
-#define ICE_MIN_BURST_SIZE_ALLOWED		1 /* In Bytes */
-#define ICE_MAX_BURST_SIZE_ALLOWED		(2047 * 1024) /* In Bytes */
-#define ICE_MAX_BURST_SIZE_BYTE_GRANULARITY	2047 /* In Bytes */
+#define ICE_64_BYTE_GRANULARITY			0
+#define ICE_KBYTE_GRANULARITY			BIT(11)
+#define ICE_MIN_BURST_SIZE_ALLOWED		64 /* In Bytes */
+#define ICE_MAX_BURST_SIZE_ALLOWED \
+	((BIT(11) - 1) * 1024) /* In Bytes */
+#define ICE_MAX_BURST_SIZE_64_BYTE_GRANULARITY \
+	((BIT(11) - 1) * 64) /* In Bytes */
 #define ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY	ICE_MAX_BURST_SIZE_ALLOWED
 
 #define ICE_RL_PROF_FREQUENCY 446000000
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 39/69] net/ice/base: fix Flow Director VSI count
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (37 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 38/69] net/ice/base: calculate rate limit burst size correctly Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 40/69] net/ice/base: use more efficient structures Leyi Rong
                       ` (30 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Henry Tieman, Paul M Stillwell Jr

Flow Director keeps a list of VSIs for each flow type (TCP4, UDP6,
etc.). This list varies in length depending on the number of traffic
classes (ADQ). This patch uses the define of max TCs to calculate the
size of the VSI array.

Fixes: bd984f155f49 ("net/ice/base: support FDIR")

Signed-off-by: Henry Tieman <henry.w.tieman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_type.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 543988898..df3c64c79 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -261,8 +261,8 @@ enum ice_fltr_ptype {
 	ICE_FLTR_PTYPE_MAX,
 };
 
-/* 6 VSI = 1 ICE_VSI_PF + 1 ICE_VSI_CTRL + 4 ICE_VSI_CHNL */
-#define ICE_MAX_FDIR_VSI_PER_FILTER	6
+/* 2 VSI = 1 ICE_VSI_PF + 1 ICE_VSI_CTRL */
+#define ICE_MAX_FDIR_VSI_PER_FILTER	2
 
 struct ice_fd_hw_prof {
 	struct ice_flow_seg_info *fdir_seg;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 40/69] net/ice/base: use more efficient structures
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (38 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 39/69] net/ice/base: fix Flow Director VSI count Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 41/69] net/ice/base: silent semantic parser warnings Leyi Rong
                       ` (29 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jesse Brandeburg, Paul M Stillwell Jr

Move a bunch of members around to make more efficient use of
memory, eliminating holes where possible. None of these members
are hot path so cache line alignment is not very important here.

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_controlq.h  |  4 +--
 drivers/net/ice/base/ice_flex_type.h | 38 +++++++++++++---------------
 drivers/net/ice/base/ice_flow.c      |  4 +--
 drivers/net/ice/base/ice_flow.h      | 18 ++++++-------
 4 files changed, 29 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h
index 182db6754..21c8722e5 100644
--- a/drivers/net/ice/base/ice_controlq.h
+++ b/drivers/net/ice/base/ice_controlq.h
@@ -81,6 +81,7 @@ struct ice_rq_event_info {
 /* Control Queue information */
 struct ice_ctl_q_info {
 	enum ice_ctl_q qtype;
+	enum ice_aq_err rq_last_status;	/* last status on receive queue */
 	struct ice_ctl_q_ring rq;	/* receive queue */
 	struct ice_ctl_q_ring sq;	/* send queue */
 	u32 sq_cmd_timeout;		/* send queue cmd write back timeout */
@@ -88,10 +89,9 @@ struct ice_ctl_q_info {
 	u16 num_sq_entries;		/* send queue depth */
 	u16 rq_buf_size;		/* receive queue buffer size */
 	u16 sq_buf_size;		/* send queue buffer size */
+	enum ice_aq_err sq_last_status;	/* last status on send queue */
 	struct ice_lock sq_lock;		/* Send queue lock */
 	struct ice_lock rq_lock;		/* Receive queue lock */
-	enum ice_aq_err sq_last_status;	/* last status on send queue */
-	enum ice_aq_err rq_last_status;	/* last status on receive queue */
 };
 
 #endif /* _ICE_CONTROLQ_H_ */
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index d23b2ae82..dca5cf285 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -5,7 +5,7 @@
 #ifndef _ICE_FLEX_TYPE_H_
 #define _ICE_FLEX_TYPE_H_
 
-#define ICE_FV_OFFSET_INVAL    0x1FF
+#define ICE_FV_OFFSET_INVAL	0x1FF
 
 #pragma pack(1)
 /* Extraction Sequence (Field Vector) Table */
@@ -14,7 +14,6 @@ struct ice_fv_word {
 	u16 off;		/* Offset within the protocol header */
 	u8 resvrd;
 };
-
 #pragma pack()
 
 #define ICE_MAX_FV_WORDS 48
@@ -367,7 +366,6 @@ struct ice_boost_key_value {
 	__le16 hv_src_port_key;
 	u8 tcam_search_key;
 };
-
 #pragma pack()
 
 struct ice_boost_key {
@@ -406,7 +404,6 @@ struct ice_xlt1_section {
 	__le16 offset;
 	u8 value[1];
 };
-
 #pragma pack()
 
 #define ICE_XLT1_SIZE(n)	(sizeof(struct ice_xlt1_section) + \
@@ -467,19 +464,19 @@ struct ice_tunnel_type_scan {
 
 struct ice_tunnel_entry {
 	enum ice_tunnel_type type;
-	u8 valid;
-	u8 in_use;
-	u8 marked;
 	u16 boost_addr;
 	u16 port;
 	struct ice_boost_tcam_entry *boost_entry;
+	u8 valid;
+	u8 in_use;
+	u8 marked;
 };
 
 #define ICE_TUNNEL_MAX_ENTRIES	16
 
 struct ice_tunnel_table {
-	u16 count;
 	struct ice_tunnel_entry tbl[ICE_TUNNEL_MAX_ENTRIES];
+	u16 count;
 };
 
 struct ice_pkg_es {
@@ -511,13 +508,13 @@ struct ice_es {
 #define ICE_DEFAULT_PTG	0
 
 struct ice_ptg_entry {
-	u8 in_use;
 	struct ice_ptg_ptype *first_ptype;
+	u8 in_use;
 };
 
 struct ice_ptg_ptype {
-	u8 ptg;
 	struct ice_ptg_ptype *next_ptype;
+	u8 ptg;
 };
 
 #define ICE_MAX_TCAM_PER_PROFILE	8
@@ -535,9 +532,9 @@ struct ice_prof_map {
 #define ICE_INVALID_TCAM	0xFFFF
 
 struct ice_tcam_inf {
+	u16 tcam_idx;
 	u8 ptg;
 	u8 prof_id;
-	u16 tcam_idx;
 	u8 in_use;
 };
 
@@ -550,16 +547,16 @@ struct ice_vsig_prof {
 };
 
 struct ice_vsig_entry {
-	u8 in_use;
 	struct LIST_HEAD_TYPE prop_lst;
 	struct ice_vsig_vsi *first_vsi;
+	u8 in_use;
 };
 
 struct ice_vsig_vsi {
+	struct ice_vsig_vsi *next_vsi;
+	u32 prop_mask;
 	u16 changed;
 	u16 vsig;
-	u32 prop_mask;
-	struct ice_vsig_vsi *next_vsi;
 };
 
 #define ICE_XLT1_CNT	1024
@@ -567,11 +564,11 @@ struct ice_vsig_vsi {
 
 /* XLT1 Table */
 struct ice_xlt1 {
-	u32 sid;
-	u16 count;
 	struct ice_ptg_entry *ptg_tbl;
 	struct ice_ptg_ptype *ptypes;
 	u8 *t;
+	u32 sid;
+	u16 count;
 };
 
 #define ICE_XLT2_CNT	768
@@ -591,11 +588,11 @@ struct ice_xlt1 {
 
 /* XLT2 Table */
 struct ice_xlt2 {
-	u32 sid;
-	u16 count;
 	struct ice_vsig_entry *vsig_tbl;
 	struct ice_vsig_vsi *vsis;
 	u16 *t;
+	u32 sid;
+	u16 count;
 };
 
 /* Extraction sequence - list of match fields:
@@ -641,21 +638,20 @@ struct ice_prof_id_section {
 	__le16 count;
 	struct ice_prof_tcam_entry entry[1];
 };
-
 #pragma pack()
 
 struct ice_prof_tcam {
 	u32 sid;
 	u16 count;
 	u16 max_prof_id;
-	u8 cdid_bits; /* # cdid bits to use in key, 0, 2, 4, or 8 */
 	struct ice_prof_tcam_entry *t;
+	u8 cdid_bits; /* # cdid bits to use in key, 0, 2, 4, or 8 */
 };
 
 struct ice_prof_redir {
+	u8 *t;
 	u32 sid;
 	u16 count;
-	u8 *t;
 };
 
 /* Tables per block */
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 825c53b51..9f3f33ca3 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -284,10 +284,10 @@ static const u32 ice_ptypes_mac_il[] = {
 /* Manage parameters and info. used during the creation of a flow profile */
 struct ice_flow_prof_params {
 	enum ice_block blk;
-	struct ice_flow_prof *prof;
-
 	u16 entry_length; /* # of bytes formatted entry will require */
 	u8 es_cnt;
+	struct ice_flow_prof *prof;
+
 	/* For ACL, the es[0] will have the data of ICE_RX_MDID_PKT_FLAGS_15_0
 	 * This will give us the direction flags.
 	 */
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index 4fa13064e..33c2f129c 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -225,20 +225,18 @@ struct ice_flow_entry {
 	struct LIST_ENTRY_TYPE l_entry;
 
 	u64 id;
-	u16 vsi_handle;
-	enum ice_flow_priority priority;
 	struct ice_flow_prof *prof;
-
+	/* Action list */
+	struct ice_flow_action *acts;
 	/* Flow entry's content */
-	u16 entry_sz;
 	void *entry;
-
-	/* Action list */
+	enum ice_flow_priority priority;
+	u16 vsi_handle;
+	u16 entry_sz;
 	u8 acts_cnt;
-	struct ice_flow_action *acts;
 };
 
-#define ICE_FLOW_ENTRY_HNDL(e)	((unsigned long)e)
+#define ICE_FLOW_ENTRY_HNDL(e)	((u64)e)
 #define ICE_FLOW_ENTRY_PTR(h)	((struct ice_flow_entry *)(h))
 
 struct ice_flow_prof {
@@ -246,12 +244,13 @@ struct ice_flow_prof {
 
 	u64 id;
 	enum ice_flow_dir dir;
+	u8 segs_cnt;
+	u8 acts_cnt;
 
 	/* Keep track of flow entries associated with this flow profile */
 	struct ice_lock entries_lock;
 	struct LIST_HEAD_TYPE entries;
 
-	u8 segs_cnt;
 	struct ice_flow_seg_info segs[ICE_FLOW_SEG_MAX];
 
 	/* software VSI handles referenced by this flow profile */
@@ -264,7 +263,6 @@ struct ice_flow_prof {
 	} cfg;
 
 	/* Default actions */
-	u8 acts_cnt;
 	struct ice_flow_action *acts;
 };
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 41/69] net/ice/base: silent semantic parser warnings
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (39 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 40/69] net/ice/base: use more efficient structures Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 42/69] net/ice/base: fix for signed package download Leyi Rong
                       ` (28 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Kevin Scott, Paul M Stillwell Jr

Eliminate some semantic and static analysis warnings.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Kevin Scott <kevin.c.scott@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c     | 8 ++++----
 drivers/net/ice/base/ice_flow.c          | 5 +----
 drivers/net/ice/base/ice_nvm.c           | 4 ++--
 drivers/net/ice/base/ice_protocol_type.h | 1 +
 drivers/net/ice/base/ice_switch.c        | 8 +++-----
 drivers/net/ice/base/ice_type.h          | 7 ++++++-
 6 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index e3de71363..20edc502f 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -134,7 +134,7 @@ static struct ice_buf_table *ice_find_buf_table(struct ice_seg *ice_seg)
 	nvms = (struct ice_nvm_table *)(ice_seg->device_table +
 		LE32_TO_CPU(ice_seg->device_table_count));
 
-	return (struct ice_buf_table *)
+	return (_FORCE_ struct ice_buf_table *)
 		(nvms->vers + LE32_TO_CPU(nvms->table_count));
 }
 
@@ -2937,7 +2937,7 @@ static void ice_fill_tbl(struct ice_hw *hw, enum ice_block block_id, u32 sid)
 		case ICE_SID_XLT2_ACL:
 		case ICE_SID_XLT2_PE:
 			xlt2 = (struct ice_xlt2_section *)sect;
-			src = (u8 *)xlt2->value;
+			src = (_FORCE_ u8 *)xlt2->value;
 			sect_len = LE16_TO_CPU(xlt2->count) *
 				sizeof(*hw->blk[block_id].xlt2.t);
 			dst = (u8 *)hw->blk[block_id].xlt2.t;
@@ -3824,7 +3824,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 
 	/* fill in the swap array */
 	si = hw->blk[ICE_BLK_FD].es.fvw - 1;
-	do {
+	while (si >= 0) {
 		u8 indexes_used = 1;
 
 		/* assume flat at this index */
@@ -3856,7 +3856,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 		}
 
 		si -= indexes_used;
-	} while (si >= 0);
+	}
 
 	/* for each set of 4 swap indexes, write the appropriate register */
 	for (j = 0; j < hw->blk[ICE_BLK_FD].es.fvw / 4; j++) {
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 9f3f33ca3..126f49881 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -415,9 +415,6 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 		const ice_bitmap_t *src;
 		u32 hdrs;
 
-		if (i > 0 && (i + 1) < prof->segs_cnt)
-			continue;
-
 		hdrs = prof->segs[i].hdrs;
 
 		if (hdrs & ICE_FLOW_SEG_HDR_ETH) {
@@ -1397,7 +1394,7 @@ enum ice_status ice_flow_rem_entry(struct ice_hw *hw, u64 entry_h)
 	if (entry_h == ICE_FLOW_ENTRY_HANDLE_INVAL)
 		return ICE_ERR_PARAM;
 
-	entry = ICE_FLOW_ENTRY_PTR((unsigned long)entry_h);
+	entry = ICE_FLOW_ENTRY_PTR(entry_h);
 
 	/* Retain the pointer to the flow profile as the entry will be freed */
 	prof = entry->prof;
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index fa9c348ce..76cfedb29 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -127,7 +127,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 
 	status = ice_read_sr_aq(hw, offset, 1, data, true);
 	if (!status)
-		*data = LE16_TO_CPU(*(__le16 *)data);
+		*data = LE16_TO_CPU(*(_FORCE_ __le16 *)data);
 
 	return status;
 }
@@ -185,7 +185,7 @@ ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data)
 	} while (words_read < *words);
 
 	for (i = 0; i < *words; i++)
-		data[i] = LE16_TO_CPU(((__le16 *)data)[i]);
+		data[i] = LE16_TO_CPU(((_FORCE_ __le16 *)data)[i]);
 
 read_nvm_buf_aq_exit:
 	*words = words_read;
diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index e572dd320..82822fb74 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -189,6 +189,7 @@ struct ice_udp_tnl_hdr {
 	u16 field;
 	u16 proto_type;
 	u16 vni;
+	u16 reserved;
 };
 
 struct ice_nvgre {
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 4e0558939..13d0dad58 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -860,7 +860,7 @@ ice_aq_add_update_mir_rule(struct ice_hw *hw, u16 rule_type, u16 dest_vsi,
 			return ICE_ERR_PARAM;
 
 		buf_size = count * sizeof(__le16);
-		mr_list = (__le16 *)ice_malloc(hw, buf_size);
+		mr_list = (_FORCE_ __le16 *)ice_malloc(hw, buf_size);
 		if (!mr_list)
 			return ICE_ERR_NO_MEMORY;
 		break;
@@ -1460,7 +1460,6 @@ static int ice_ilog2(u64 n)
 	return -1;
 }
 
-
 /**
  * ice_fill_sw_rule - Helper function to fill switch rule structure
  * @hw: pointer to the hardware structure
@@ -1480,7 +1479,6 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 	__be16 *off;
 	u8 q_rgn;
 
-
 	if (opc == ice_aqc_opc_remove_sw_rules) {
 		s_rule->pdata.lkup_tx_rx.act = 0;
 		s_rule->pdata.lkup_tx_rx.index =
@@ -1556,7 +1554,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 		daddr = f_info->l_data.ethertype_mac.mac_addr;
 		/* fall-through */
 	case ICE_SW_LKUP_ETHERTYPE:
-		off = (__be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
+		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_ETHTYPE_OFFSET);
 		*off = CPU_TO_BE16(f_info->l_data.ethertype_mac.ethertype);
 		break;
 	case ICE_SW_LKUP_MAC_VLAN:
@@ -1587,7 +1585,7 @@ ice_fill_sw_rule(struct ice_hw *hw, struct ice_fltr_info *f_info,
 			   ICE_NONDMA_TO_NONDMA);
 
 	if (!(vlan_id > ICE_MAX_VLAN_ID)) {
-		off = (__be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
+		off = (_FORCE_ __be16 *)(eth_hdr + ICE_ETH_VLAN_TCI_OFFSET);
 		*off = CPU_TO_BE16(vlan_id);
 	}
 
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index df3c64c79..ca8893111 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -14,6 +14,10 @@
 
 #define BITS_PER_BYTE	8
 
+#ifndef _FORCE_
+#define _FORCE_
+#endif
+
 #define ICE_BYTES_PER_WORD	2
 #define ICE_BYTES_PER_DWORD	4
 #define ICE_MAX_TRAFFIC_CLASS	8
@@ -23,7 +27,7 @@
 #endif
 
 #ifndef IS_ASCII
-#define IS_ASCII(_ch)  ((_ch) < 0x80)
+#define IS_ASCII(_ch)	((_ch) < 0x80)
 #endif
 
 #include "ice_status.h"
@@ -728,6 +732,7 @@ struct ice_hw {
 	u8 pf_id;		/* device profile info */
 
 	u16 max_burst_size;	/* driver sets this value */
+
 	/* Tx Scheduler values */
 	u16 num_tx_sched_layers;
 	u16 num_tx_sched_phys_layers;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 42/69] net/ice/base: fix for signed package download
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (40 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 41/69] net/ice/base: silent semantic parser warnings Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 43/69] net/ice/base: add new API to dealloc flow entry Leyi Rong
                       ` (27 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

In order to properly support signed packages, we always have
to send the complete buffer to firmware, regardless of any
unused space at the end. This is because the SHA hash value
is computed over the entire buffer.
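
A toy sketch of why a truncated download fails verification (the checksum
below is an illustrative stand-in for the real SHA, and PKG_BUF_SIZE here is
shrunk from the real ICE_PKG_BUF_SIZE of 4096): because the digest folds the
buffer length in, hashing only the used bytes can never match a signature
computed over the whole buffer, even when the unused tail is all zeros.

```c
#include <stddef.h>
#include <stdint.h>

#define PKG_BUF_SIZE 16		/* stand-in for ICE_PKG_BUF_SIZE (4096) */

/* Toy length-sensitive digest standing in for the SHA over the package
 * buffer: the length seeds the state, so truncation changes the result. */
uint32_t toy_digest(const uint8_t *buf, size_t len)
{
	uint32_t d = (uint32_t)len;
	size_t i;

	for (i = 0; i < len; i++)
		d = d * 31u + buf[i];
	return d;
}

/* Returns nonzero only if hashing just the used bytes (data_end == 3)
 * matched hashing the complete, zero-padded buffer. */
int digests_match_when_truncated(void)
{
	uint8_t buf[PKG_BUF_SIZE] = { 1, 2, 3 };	/* tail is zeroed */

	return toy_digest(buf, 3) == toy_digest(buf, sizeof(buf));
}
```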

Fixes: 51d04e4933e3 ("net/ice/base: add flexible pipeline module")

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 20edc502f..92d3d29ad 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1005,9 +1005,8 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 
 		bh = (struct ice_buf_hdr *)(bufs + i);
 
-		status = ice_aq_download_pkg(hw, bh, LE16_TO_CPU(bh->data_end),
-					     last, &offset, &info, NULL);
-
+		status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last,
+					     &offset, &info, NULL);
 		if (status) {
 			ice_debug(hw, ICE_DBG_PKG,
 				  "Pkg download failed: err %d off %d inf %d\n",
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 43/69] net/ice/base: add new API to dealloc flow entry
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (41 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 42/69] net/ice/base: fix for signed package download Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 44/69] net/ice/base: check RSS flow profile list Leyi Rong
                       ` (26 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

Decouple ice_dealloc_flow_entry from ice_flow_rem_entry_sync.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 126f49881..2943b0901 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -932,17 +932,15 @@ ice_flow_find_prof_id(struct ice_hw *hw, enum ice_block blk, u64 prof_id)
 }
 
 /**
- * ice_flow_rem_entry_sync - Remove a flow entry
+ * ice_dealloc_flow_entry - Deallocate flow entry memory
  * @hw: pointer to the HW struct
  * @entry: flow entry to be removed
  */
-static enum ice_status
-ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
+static void
+ice_dealloc_flow_entry(struct ice_hw *hw, struct ice_flow_entry *entry)
 {
 	if (!entry)
-		return ICE_ERR_BAD_PTR;
-
-	LIST_DEL(&entry->l_entry);
+		return;
 
 	if (entry->entry)
 		ice_free(hw, entry->entry);
@@ -954,6 +952,22 @@ ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
 	}
 
 	ice_free(hw, entry);
+}
+
+/**
+ * ice_flow_rem_entry_sync - Remove a flow entry
+ * @hw: pointer to the HW struct
+ * @entry: flow entry to be removed
+ */
+static enum ice_status
+ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry)
+{
+	if (!entry)
+		return ICE_ERR_BAD_PTR;
+
+	LIST_DEL(&entry->l_entry);
+
+	ice_dealloc_flow_entry(hw, entry);
 
 	return ICE_SUCCESS;
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 44/69] net/ice/base: check RSS flow profile list
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (42 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 43/69] net/ice/base: add new API to dealloc flow entry Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 45/69] net/ice/base: protect list add with lock Leyi Rong
                       ` (25 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

Minor change to check if there are any RSS flow profiles to remove.
This will avoid flow profile lock acquisition and release
if the list is empty.
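
The pattern can be sketched as follows (names and the pthread lock are
illustrative assumptions, not the driver's actual API): test for an empty
list before taking the lock, so the common "nothing to remove" case pays no
locking cost, and re-check under the lock before touching entries.

```c
#include <pthread.h>
#include <stddef.h>

struct node {
	struct node *next;
};

struct prof_list {
	pthread_mutex_t lock;
	struct node *head;	/* NULL when the list is empty */
};

/* Early-exit removal: skip lock acquire/release when there is nothing to
 * do. The unlocked emptiness check is only an optimization; the list is
 * still walked and modified exclusively under the lock. */
int remove_all(struct prof_list *l)
{
	int removed = 0;

	if (!l->head)		/* fast path: avoid the lock entirely */
		return 0;

	pthread_mutex_lock(&l->lock);
	while (l->head) {	/* re-check under the lock each iteration */
		l->head = l->head->next;
		removed++;
	}
	pthread_mutex_unlock(&l->lock);
	return removed;
}

/* Tiny single-threaded self-check: builds a three-node list and drains it. */
int demo_remove_three(void)
{
	struct node a, b, c;
	struct prof_list l = { PTHREAD_MUTEX_INITIALIZER, &a };

	a.next = &b;
	b.next = &c;
	c.next = NULL;
	return remove_all(&l);
}
```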

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 2943b0901..8bf424bf2 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1659,6 +1659,9 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 	if (!ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 
+	if (LIST_EMPTY(&hw->fl_profs[blk]))
+		return ICE_SUCCESS;
+
 	ice_acquire_lock(&hw->fl_profs_locks[blk]);
 	LIST_FOR_EACH_ENTRY_SAFE(p, t, &hw->fl_profs[blk], ice_flow_prof,
 				 l_entry) {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 45/69] net/ice/base: protect list add with lock
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (43 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 44/69] net/ice/base: check RSS flow profile list Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 46/69] net/ice/base: fix Rx functionality for ethertype filters Leyi Rong
                       ` (24 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Tarun Singh, Paul M Stillwell Jr

Function ice_add_rule_internal needs to call ice_create_pkt_fwd_rule
with the lock held, because that function uses LIST_ADD to modify the
filter rule list, which must be protected against concurrent
modification.
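
The shape of the fix can be sketched like this (function names, the pthread
lock, and the counter are hypothetical stand-ins for the driver's own
primitives): every path that may modify the shared rule list now runs with
the lock held, and a single exit label performs the one unlock.

```c
#include <pthread.h>

static pthread_mutex_t rule_lock = PTHREAD_MUTEX_INITIALIZER;
static int rule_count;

/* Adds an entry to the shared list; caller must hold rule_lock. */
static int create_fwd_rule(void)
{
	rule_count++;
	return 0;
}

/* Updates an existing entry; caller must hold rule_lock. */
static int update_existing(void)
{
	return 0;
}

int add_rule_internal(int already_exists)
{
	int status;

	pthread_mutex_lock(&rule_lock);
	if (!already_exists) {
		/* Before the fix this path unlocked first, then called
		 * create_fwd_rule(), leaving the list add unprotected. */
		status = create_fwd_rule();
		goto exit_add_rule;
	}
	status = update_existing();

exit_add_rule:
	pthread_mutex_unlock(&rule_lock);	/* single unlock site */
	return status;
}
```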

Signed-off-by: Tarun Singh <tarun.k.singh@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 13d0dad58..d6890c049 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2288,14 +2288,15 @@ ice_add_rule_internal(struct ice_hw *hw, u8 recp_id,
 
 	m_entry = ice_find_rule_entry(hw, recp_id, new_fltr);
 	if (!m_entry) {
-		ice_release_lock(rule_lock);
-		return ice_create_pkt_fwd_rule(hw, f_entry);
+		status = ice_create_pkt_fwd_rule(hw, f_entry);
+		goto exit_add_rule_internal;
 	}
 
 	cur_fltr = &m_entry->fltr_info;
 	status = ice_add_update_vsi_list(hw, m_entry, cur_fltr, new_fltr);
-	ice_release_lock(rule_lock);
 
+exit_add_rule_internal:
+	ice_release_lock(rule_lock);
 	return status;
 }
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 46/69] net/ice/base: fix Rx functionality for ethertype filters
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (44 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 45/69] net/ice/base: protect list add with lock Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 47/69] net/ice/base: introduce some new macros Leyi Rong
                       ` (23 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dave Ertman, Paul M Stillwell Jr

In ice_add_eth_mac(), the filter info flag is hard-coded to Tx.
This is both redundant and incorrect: the flag is already set by
the calling function that built the list of filters to add, and
hard-coding it eliminates the Rx functionality of this code. The
paired function ice_remove_eth_mac() does not do this, making the
two a mismatched pair.

Fixes: 157d00901f97 ("net/ice/base: add functions for ethertype filter")

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index d6890c049..373acb7a6 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2975,12 +2975,19 @@ ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
  * ice_add_eth_mac - Add ethertype and MAC based filter rule
  * @hw: pointer to the hardware structure
  * @em_list: list of ether type MAC filter, MAC is optional
+ *
+ * This function requires the caller to populate the entries in
+ * the filter list with the necessary fields (including flags to
+ * indicate Tx or Rx rules).
  */
 enum ice_status
 ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
 {
 	struct ice_fltr_list_entry *em_list_itr;
 
+	if (!em_list || !hw)
+		return ICE_ERR_PARAM;
+
 	LIST_FOR_EACH_ENTRY(em_list_itr, em_list, ice_fltr_list_entry,
 			    list_entry) {
 		enum ice_sw_lkup_type l_type =
@@ -2990,7 +2997,6 @@ ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
 		    l_type != ICE_SW_LKUP_ETHERTYPE)
 			return ICE_ERR_PARAM;
 
-		em_list_itr->fltr_info.flag = ICE_FLTR_TX;
 		em_list_itr->status = ice_add_rule_internal(hw, l_type,
 							    em_list_itr);
 		if (em_list_itr->status)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 47/69] net/ice/base: introduce some new macros
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (45 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 46/69] net/ice/base: fix Rx functionality for ethertype filters Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 48/69] net/ice/base: new marker to mark func parameters unused Leyi Rong
                       ` (22 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

Introduce some new macros, such as ICE_VSI_LB.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flow.c   |  4 +++-
 drivers/net/ice/base/ice_switch.h | 14 +++++---------
 drivers/net/ice/base/ice_type.h   |  6 +++++-
 3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 8bf424bf2..70152a364 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -844,6 +844,7 @@ ice_flow_proc_segs(struct ice_hw *hw, struct ice_flow_prof_params *params)
 
 #define ICE_FLOW_FIND_PROF_CHK_FLDS	0x00000001
 #define ICE_FLOW_FIND_PROF_CHK_VSI	0x00000002
+#define ICE_FLOW_FIND_PROF_NOT_CHK_DIR	0x00000004
 
 /**
  * ice_flow_find_prof_conds - Find a profile matching headers and conditions
@@ -863,7 +864,8 @@ ice_flow_find_prof_conds(struct ice_hw *hw, enum ice_block blk,
 	struct ice_flow_prof *p;
 
 	LIST_FOR_EACH_ENTRY(p, &hw->fl_profs[blk], ice_flow_prof, l_entry) {
-		if (p->dir == dir && segs_cnt && segs_cnt == p->segs_cnt) {
+		if ((p->dir == dir || conds & ICE_FLOW_FIND_PROF_NOT_CHK_DIR) &&
+		    segs_cnt && segs_cnt == p->segs_cnt) {
 			u8 i;
 
 			/* Check for profile-VSI association if specified */
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 2f140a86d..05b1170c9 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -11,6 +11,9 @@
 #define ICE_SW_CFG_MAX_BUF_LEN 2048
 #define ICE_MAX_SW 256
 #define ICE_DFLT_VSI_INVAL 0xff
+#define ICE_FLTR_RX BIT(0)
+#define ICE_FLTR_TX BIT(1)
+#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
 
 
 /* Worst case buffer length for ice_aqc_opc_get_res_alloc */
@@ -77,9 +80,6 @@ struct ice_fltr_info {
 	/* rule ID returned by firmware once filter rule is created */
 	u16 fltr_rule_id;
 	u16 flag;
-#define ICE_FLTR_RX		BIT(0)
-#define ICE_FLTR_TX		BIT(1)
-#define ICE_FLTR_TX_RX		(ICE_FLTR_RX | ICE_FLTR_TX)
 
 	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
 	u16 src;
@@ -145,10 +145,6 @@ struct ice_sw_act_ctrl {
 	/* Source VSI for LOOKUP_TX or source port for LOOKUP_RX */
 	u16 src;
 	u16 flag;
-#define ICE_FLTR_RX             BIT(0)
-#define ICE_FLTR_TX             BIT(1)
-#define ICE_FLTR_TX_RX (ICE_FLTR_RX | ICE_FLTR_TX)
-
 	enum ice_sw_fwd_act_type fltr_act;
 	/* Depending on filter action */
 	union {
@@ -368,6 +364,8 @@ ice_aq_get_res_descs(struct ice_hw *hw, u16 num_entries,
 		     struct ice_sq_cd *cd);
 enum ice_status
 ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
 void ice_rem_all_sw_rules_info(struct ice_hw *hw);
 enum ice_status ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
 enum ice_status ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_lst);
@@ -375,8 +373,6 @@ enum ice_status
 ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
 enum ice_status
 ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
-enum ice_status
-ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
 #ifndef NO_MACVLAN_SUPPORT
 enum ice_status
 ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index ca8893111..2354d4f0b 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -72,6 +72,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 #define ICE_HI_WORD(x)		((u16)(((x) >> 16) & 0xFFFF))
 
 /* debug masks - set these bits in hw->debug_mask to control output */
+#define ICE_DBG_TRACE		BIT_ULL(0) /* for function-trace only */
 #define ICE_DBG_INIT		BIT_ULL(1)
 #define ICE_DBG_RELEASE		BIT_ULL(2)
 #define ICE_DBG_FW_LOG		BIT_ULL(3)
@@ -191,6 +192,7 @@ enum ice_vsi_type {
 #ifdef ADQ_SUPPORT
 	ICE_VSI_CHNL = 4,
 #endif /* ADQ_SUPPORT */
+	ICE_VSI_LB = 6,
 };
 
 struct ice_link_status {
@@ -705,6 +707,8 @@ struct ice_fw_log_cfg {
 #define ICE_FW_LOG_EVNT_INIT	(ICE_AQC_FW_LOG_INIT_EN >> ICE_AQC_FW_LOG_EN_S)
 #define ICE_FW_LOG_EVNT_FLOW	(ICE_AQC_FW_LOG_FLOW_EN >> ICE_AQC_FW_LOG_EN_S)
 #define ICE_FW_LOG_EVNT_ERR	(ICE_AQC_FW_LOG_ERR_EN >> ICE_AQC_FW_LOG_EN_S)
+#define ICE_FW_LOG_EVNT_ALL	(ICE_FW_LOG_EVNT_INFO | ICE_FW_LOG_EVNT_INIT | \
+				 ICE_FW_LOG_EVNT_FLOW | ICE_FW_LOG_EVNT_ERR)
 	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
 };
 
@@ -934,7 +938,6 @@ enum ice_sw_fwd_act_type {
 #define ICE_SR_CSR_PROTECTED_LIST_PTR		0x0D
 #define ICE_SR_MNG_CFG_PTR			0x0E
 #define ICE_SR_EMP_MODULE_PTR			0x0F
-#define ICE_SR_PBA_FLAGS			0x15
 #define ICE_SR_PBA_BLOCK_PTR			0x16
 #define ICE_SR_BOOT_CFG_PTR			0x17
 #define ICE_SR_NVM_WOL_CFG			0x19
@@ -980,6 +983,7 @@ enum ice_sw_fwd_act_type {
 #define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
 #define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
 #define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
+#define ICE_SR_POR_REGISTERS_AUTOLOAD_PTR	0x118
 
 /* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
 #define ICE_SR_VPD_SIZE_WORDS		512
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 48/69] net/ice/base: new marker to mark func parameters unused
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (46 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 47/69] net/ice/base: introduce some new macros Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 49/69] net/ice/base: code clean up Leyi Rong
                       ` (21 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Use __ALWAYS_UNUSED instead of __always_unused to mark unused
function parameters.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 2 +-
 drivers/net/ice/base/ice_type.h      | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 92d3d29ad..c1cd68c53 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -398,7 +398,7 @@ ice_find_boost_entry(struct ice_seg *ice_seg, u16 addr,
  * Handles enumeration of individual label entries.
  */
 static void *
-ice_label_enum_handler(u32 __always_unused sect_type, void *section, u32 index,
+ice_label_enum_handler(u32 __ALWAYS_UNUSED sect_type, void *section, u32 index,
 		       u32 *offset)
 {
 	struct ice_label_section *labels;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 2354d4f0b..8f7a2c1f9 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -102,6 +102,9 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 #define ICE_DBG_USER		BIT_ULL(31)
 #define ICE_DBG_ALL		0xFFFFFFFFFFFFFFFFULL
 
+#ifndef __ALWAYS_UNUSED
+#define __ALWAYS_UNUSED
+#endif
 
 
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 49/69] net/ice/base: code clean up
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (47 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 48/69] net/ice/base: new marker to mark func parameters unused Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 50/69] net/ice/base: cleanup ice flex pipe files Leyi Rong
                       ` (20 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

Clean up unused code.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_controlq.c  | 62 +---------------------------
 drivers/net/ice/base/ice_fdir.h      |  4 --
 drivers/net/ice/base/ice_flex_pipe.c |  5 ++-
 drivers/net/ice/base/ice_sched.c     |  4 +-
 drivers/net/ice/base/ice_switch.c    |  8 ----
 drivers/net/ice/base/ice_switch.h    |  2 -
 drivers/net/ice/base/ice_type.h      | 12 ------
 7 files changed, 7 insertions(+), 90 deletions(-)

diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index 4cb6df113..3ef07e094 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -262,7 +262,7 @@ ice_cfg_sq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
  * @hw: pointer to the hardware structure
  * @cq: pointer to the specific Control queue
  *
- * Configure base address and length registers for the receive (event q)
+ * Configure base address and length registers for the receive (event queue)
  */
 static enum ice_status
 ice_cfg_rq_regs(struct ice_hw *hw, struct ice_ctl_q_info *cq)
@@ -772,9 +772,6 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 	struct ice_ctl_q_ring *sq = &cq->sq;
 	u16 ntc = sq->next_to_clean;
 	struct ice_sq_cd *details;
-#if 0
-	struct ice_aq_desc desc_cb;
-#endif
 	struct ice_aq_desc *desc;
 
 	desc = ICE_CTL_Q_DESC(*sq, ntc);
@@ -783,15 +780,6 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 	while (rd32(hw, cq->sq.head) != ntc) {
 		ice_debug(hw, ICE_DBG_AQ_MSG,
 			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
-#if 0
-		if (details->callback) {
-			ICE_CTL_Q_CALLBACK cb_func =
-				(ICE_CTL_Q_CALLBACK)details->callback;
-			ice_memcpy(&desc_cb, desc, sizeof(desc_cb),
-				   ICE_DMA_TO_DMA);
-			cb_func(hw, &desc_cb);
-		}
-#endif
 		ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
 		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
 		ntc++;
@@ -941,38 +929,8 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	details = ICE_CTL_Q_DETAILS(cq->sq, cq->sq.next_to_use);
 	if (cd)
 		*details = *cd;
-#if 0
-		/* FIXME: if/when this block gets enabled (when the #if 0
-		 * is removed), add braces to both branches of the surrounding
-		 * conditional expression. The braces have been removed to
-		 * prevent checkpatch complaining.
-		 */
-
-		/* If the command details are defined copy the cookie. The
-		 * CPU_TO_LE32 is not needed here because the data is ignored
-		 * by the FW, only used by the driver
-		 */
-		if (details->cookie) {
-			desc->cookie_high =
-				CPU_TO_LE32(ICE_HI_DWORD(details->cookie));
-			desc->cookie_low =
-				CPU_TO_LE32(ICE_LO_DWORD(details->cookie));
-		}
-#endif
 	else
 		ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
-#if 0
-	/* clear requested flags and then set additional flags if defined */
-	desc->flags &= ~CPU_TO_LE16(details->flags_dis);
-	desc->flags |= CPU_TO_LE16(details->flags_ena);
-
-	if (details->postpone && !details->async) {
-		ice_debug(hw, ICE_DBG_AQ_MSG,
-			  "Async flag not set along with postpone flag\n");
-		status = ICE_ERR_PARAM;
-		goto sq_send_command_error;
-	}
-#endif
 
 	/* Call clean and check queue available function to reclaim the
 	 * descriptors that were processed by FW/MBX; the function returns the
@@ -1019,20 +977,8 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	(cq->sq.next_to_use)++;
 	if (cq->sq.next_to_use == cq->sq.count)
 		cq->sq.next_to_use = 0;
-#if 0
-	/* FIXME - handle this case? */
-	if (!details->postpone)
-#endif
 	wr32(hw, cq->sq.tail, cq->sq.next_to_use);
 
-#if 0
-	/* if command details are not defined or async flag is not set,
-	 * we need to wait for desc write back
-	 */
-	if (!details->async && !details->postpone) {
-		/* FIXME - handle this case? */
-	}
-#endif
 	do {
 		if (ice_sq_done(hw, cq))
 			break;
@@ -1087,9 +1033,6 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 
 	/* update the error if time out occurred */
 	if (!cmd_completed) {
-#if 0
-	    (!details->async && !details->postpone)) {
-#endif
 		ice_debug(hw, ICE_DBG_AQ_MSG,
 			  "Control Send Queue Writeback timeout.\n");
 		status = ICE_ERR_AQ_TIMEOUT;
@@ -1208,9 +1151,6 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	cq->rq.next_to_clean = ntc;
 	cq->rq.next_to_use = ntu;
 
-#if 0
-	ice_nvmupd_check_wait_event(hw, LE16_TO_CPU(e->desc.opcode));
-#endif
 clean_rq_elem_out:
 	/* Set pending if needed, unlock and return */
 	if (pending) {
diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h
index 2ecb147f1..8490fac61 100644
--- a/drivers/net/ice/base/ice_fdir.h
+++ b/drivers/net/ice/base/ice_fdir.h
@@ -163,9 +163,6 @@ struct ice_fdir_fltr {
 
 	/* filter control */
 	u16 q_index;
-#ifdef ADQ_SUPPORT
-	u16 orig_q_index;
-#endif /* ADQ_SUPPORT */
 	u16 dest_vsi;
 	u8 dest_ctl;
 	u8 fltr_status;
@@ -173,7 +170,6 @@ struct ice_fdir_fltr {
 	u32 fltr_id;
 };
 
-
 /* Dummy packet filter definition structure. */
 struct ice_fdir_base_pkt {
 	enum ice_fltr_ptype flow;
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index c1cd68c53..14d7bbc7e 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -640,7 +640,7 @@ static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max)
  * @size: the size of the complete key in bytes (must be even)
  * @val: array of 8-bit values that makes up the value portion of the key
  * @upd: array of 8-bit masks that determine what key portion to update
- * @dc: array of 8-bit masks that make up the dont' care mask
+ * @dc: array of 8-bit masks that make up the don't care mask
  * @nm: array of 8-bit masks that make up the never match mask
  * @off: the offset of the first byte in the key to update
  * @len: the number of bytes in the key update
@@ -897,7 +897,7 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
 	u32 i;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
-	ice_debug(hw, ICE_DBG_PKG, "Package version: %d.%d.%d.%d\n",
+	ice_debug(hw, ICE_DBG_PKG, "Package format version: %d.%d.%d.%d\n",
 		  pkg_hdr->format_ver.major, pkg_hdr->format_ver.minor,
 		  pkg_hdr->format_ver.update, pkg_hdr->format_ver.draft);
 
@@ -4479,6 +4479,7 @@ ice_move_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig,
 	status = ice_vsig_find_vsi(hw, blk, vsi, &orig_vsig);
 	if (!status)
 		status = ice_vsig_add_mv_vsi(hw, blk, vsi, vsig);
+
 	if (status) {
 		ice_free(hw, p);
 		return status;
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index a72e72982..fa3158a7b 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -1233,7 +1233,7 @@ enum ice_status ice_sched_init_port(struct ice_port_info *pi)
 		goto err_init_port;
 	}
 
-	/* If the last node is a leaf node then the index of the Q group
+	/* If the last node is a leaf node then the index of the queue group
 	 * layer is two less than the number of elements.
 	 */
 	if (num_elems > 2 && buf[0].generic[num_elems - 1].data.elem_type ==
@@ -3529,9 +3529,11 @@ ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
 		LIST_FOR_EACH_ENTRY(agg_vsi_info, &agg_info->agg_vsi_list,
 				    ice_sched_agg_vsi_info, list_entry)
 			if (agg_vsi_info->vsi_handle == vsi_handle) {
+				/* cppcheck-suppress unreadVariable */
 				vsi_handle_valid = true;
 				break;
 			}
+
 		if (!vsi_handle_valid)
 			goto exit_agg_priority_per_tc;
 
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 373acb7a6..c7fcd71a7 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2934,7 +2934,6 @@ ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
 	return ICE_SUCCESS;
 }
 
-#ifndef NO_MACVLAN_SUPPORT
 /**
  * ice_add_mac_vlan - Add MAC and VLAN pair based filter rule
  * @hw: pointer to the hardware structure
@@ -2969,7 +2968,6 @@ ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
 	}
 	return ICE_SUCCESS;
 }
-#endif
 
 /**
  * ice_add_eth_mac - Add ethertype and MAC based filter rule
@@ -3307,7 +3305,6 @@ ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
 	return ICE_SUCCESS;
 }
 
-#ifndef NO_MACVLAN_SUPPORT
 /**
  * ice_remove_mac_vlan - Remove MAC VLAN based filter rule
  * @hw: pointer to the hardware structure
@@ -3335,7 +3332,6 @@ ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
 	}
 	return ICE_SUCCESS;
 }
-#endif /* !NO_MACVLAN_SUPPORT */
 
 /**
  * ice_vsi_uses_fltr - Determine if given VSI uses specified filter
@@ -3850,11 +3846,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
 		ice_remove_promisc(hw, lkup, &remove_list_head);
 		break;
 	case ICE_SW_LKUP_MAC_VLAN:
-#ifndef NO_MACVLAN_SUPPORT
 		ice_remove_mac_vlan(hw, &remove_list_head);
-#else
-		ice_debug(hw, ICE_DBG_SW, "MAC VLAN look up is not supported yet\n");
-#endif /* !NO_MACVLAN_SUPPORT */
 		break;
 	case ICE_SW_LKUP_ETHERTYPE:
 	case ICE_SW_LKUP_ETHERTYPE_MAC:
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index 05b1170c9..b788aa7ec 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -373,12 +373,10 @@ enum ice_status
 ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
 enum ice_status
 ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list);
-#ifndef NO_MACVLAN_SUPPORT
 enum ice_status
 ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list);
 enum ice_status
 ice_remove_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list);
-#endif /* !NO_MACVLAN_SUPPORT */
 
 enum ice_status
 ice_add_mac_with_sw_marker(struct ice_hw *hw, struct ice_fltr_info *f_info,
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 8f7a2c1f9..ea587a0f0 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -22,13 +22,9 @@
 #define ICE_BYTES_PER_DWORD	4
 #define ICE_MAX_TRAFFIC_CLASS	8
 
-#ifndef MIN_T
 #define MIN_T(_t, _a, _b)	min((_t)(_a), (_t)(_b))
-#endif
 
-#ifndef IS_ASCII
 #define IS_ASCII(_ch)	((_ch) < 0x80)
-#endif
 
 #include "ice_status.h"
 #include "ice_hw_autogen.h"
@@ -45,9 +41,7 @@ static inline bool ice_is_tc_ena(ice_bitmap_t bitmap, u8 tc)
 	return ice_is_bit_set(&bitmap, tc);
 }
 
-#ifndef DIV_64BIT
 #define DIV_64BIT(n, d) ((n) / (d))
-#endif /* DIV_64BIT */
 
 static inline u64 round_up_64bit(u64 a, u32 b)
 {
@@ -192,9 +186,6 @@ enum ice_media_type {
 enum ice_vsi_type {
 	ICE_VSI_PF = 0,
 	ICE_VSI_CTRL = 3,	/* equates to ICE_VSI_PF with 1 queue pair */
-#ifdef ADQ_SUPPORT
-	ICE_VSI_CHNL = 4,
-#endif /* ADQ_SUPPORT */
 	ICE_VSI_LB = 6,
 };
 
@@ -914,9 +905,6 @@ struct ice_hw_port_stats {
 	/* flow director stats */
 	u32 fd_sb_status;
 	u64 fd_sb_match;
-#ifdef ADQ_SUPPORT
-	u64 ch_atr_match;
-#endif /* ADQ_SUPPORT */
 };
 
 enum ice_sw_fwd_act_type {
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 50/69] net/ice/base: cleanup ice flex pipe files
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (48 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 49/69] net/ice/base: code clean up Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 51/69] net/ice/base: refactor VSI node sched code Leyi Rong
                       ` (19 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Paul M Stillwell Jr

Make functions static where possible. Remove some code that is not
currently called.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 523 ++++-----------------------
 drivers/net/ice/base/ice_flex_pipe.h |  59 ---
 2 files changed, 78 insertions(+), 504 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 14d7bbc7e..ecc2f5738 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -461,7 +461,7 @@ ice_enum_labels(struct ice_seg *ice_seg, u32 type, struct ice_pkg_enum *state,
  * since the first call to ice_enum_labels requires a pointer to an actual
  * ice_seg structure.
  */
-void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
+static void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg)
 {
 	struct ice_pkg_enum state;
 	char *label_name;
@@ -808,27 +808,6 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
 	return status;
 }
 
-/**
- * ice_aq_upload_section
- * @hw: pointer to the hardware structure
- * @pkg_buf: the package buffer which will receive the section
- * @buf_size: the size of the package buffer
- * @cd: pointer to command details structure or NULL
- *
- * Upload Section (0x0C41)
- */
-enum ice_status
-ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
-		      u16 buf_size, struct ice_sq_cd *cd)
-{
-	struct ice_aq_desc desc;
-
-	ice_debug(hw, ICE_DBG_TRACE, "ice_aq_upload_section");
-	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_upload_section);
-	desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD);
-
-	return ice_aq_send_cmd(hw, &desc, pkg_buf, buf_size, cd);
-}
 
 /**
  * ice_aq_update_pkg
@@ -890,7 +869,7 @@ ice_aq_update_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf, u16 buf_size,
  * success it returns a pointer to the segment header, otherwise it will
  * return NULL.
  */
-struct ice_generic_seg_hdr *
+static struct ice_generic_seg_hdr *
 ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
 		    struct ice_pkg_hdr *pkg_hdr)
 {
@@ -1052,7 +1031,8 @@ ice_aq_get_pkg_info_list(struct ice_hw *hw,
  *
  * Handles the download of a complete package.
  */
-enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
+static enum ice_status
+ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 {
 	struct ice_buf_table *ice_buf_tbl;
 
@@ -1081,7 +1061,7 @@ enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
  *
  * Saves off the package details into the HW structure.
  */
-enum ice_status
+static enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 {
 	struct ice_global_metadata_seg *meta_seg;
@@ -1133,8 +1113,7 @@ ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
  *
  * Store details of the package currently loaded in HW into the HW structure.
  */
-enum ice_status
-ice_get_pkg_info(struct ice_hw *hw)
+static enum ice_status ice_get_pkg_info(struct ice_hw *hw)
 {
 	struct ice_aqc_get_pkg_info_resp *pkg_info;
 	enum ice_status status;
@@ -1187,40 +1166,6 @@ ice_get_pkg_info(struct ice_hw *hw)
 	return status;
 }
 
-/**
- * ice_find_label_value
- * @ice_seg: pointer to the ice segment (non-NULL)
- * @name: name of the label to search for
- * @type: the section type that will contain the label
- * @value: pointer to a value that will return the label's value if found
- *
- * Finds a label's value given the label name and the section type to search.
- * The ice_seg parameter must not be NULL since the first call to
- * ice_enum_labels requires a pointer to an actual ice_seg structure.
- */
-enum ice_status
-ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type,
-		     u16 *value)
-{
-	struct ice_pkg_enum state;
-	char *label_name;
-	u16 val;
-
-	if (!ice_seg)
-		return ICE_ERR_PARAM;
-
-	do {
-		label_name = ice_enum_labels(ice_seg, type, &state, &val);
-		if (label_name && !strcmp(label_name, name)) {
-			*value = val;
-			return ICE_SUCCESS;
-		}
-
-		ice_seg = NULL;
-	} while (label_name);
-
-	return ICE_ERR_CFG;
-}
 
 /**
  * ice_verify_pkg - verify package
@@ -1499,7 +1444,7 @@ enum ice_status ice_copy_and_init_pkg(struct ice_hw *hw, const u8 *buf, u32 len)
  * Allocates a package buffer and returns a pointer to the buffer header.
  * Note: all package contents must be in Little Endian form.
  */
-struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw)
+static struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw)
 {
 	struct ice_buf_build *bld;
 	struct ice_buf_hdr *buf;
@@ -1623,40 +1568,15 @@ ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt,
 }
 
 /**
- * ice_pkg_buf_alloc_single_section
+ * ice_pkg_buf_free
  * @hw: pointer to the HW structure
- * @type: the section type value
- * @size: the size of the section to reserve (in bytes)
- * @section: returns pointer to the section
+ * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
  *
- * Allocates a package buffer with a single section.
- * Note: all package contents must be in Little Endian form.
+ * Frees a package buffer
  */
-static struct ice_buf_build *
-ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size,
-				 void **section)
+static void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
 {
-	struct ice_buf_build *buf;
-
-	if (!section)
-		return NULL;
-
-	buf = ice_pkg_buf_alloc(hw);
-	if (!buf)
-		return NULL;
-
-	if (ice_pkg_buf_reserve_section(buf, 1))
-		goto ice_pkg_buf_alloc_single_section_err;
-
-	*section = ice_pkg_buf_alloc_section(buf, type, size);
-	if (!*section)
-		goto ice_pkg_buf_alloc_single_section_err;
-
-	return buf;
-
-ice_pkg_buf_alloc_single_section_err:
-	ice_pkg_buf_free(hw, buf);
-	return NULL;
+	ice_free(hw, bld);
 }
 
 /**
@@ -1672,7 +1592,7 @@ ice_pkg_buf_alloc_single_section(struct ice_hw *hw, u32 type, u16 size,
  * result in some wasted space in the buffer.
  * Note: all package contents must be in Little Endian form.
  */
-enum ice_status
+static enum ice_status
 ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count)
 {
 	struct ice_buf_hdr *buf;
@@ -1700,48 +1620,6 @@ ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count)
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_pkg_buf_unreserve_section
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- * @count: the number of sections to unreserve
- *
- * Unreserves one or more section table entries in a package buffer, releasing
- * space that can be used for section data. This routine can be called
- * multiple times as long as they are made before calling
- * ice_pkg_buf_alloc_section(). Once ice_pkg_buf_alloc_section()
- * is called once, the number of sections that can be allocated will not be able
- * to be increased; not using all reserved sections is fine, but this will
- * result in some wasted space in the buffer.
- * Note: all package contents must be in Little Endian form.
- */
-enum ice_status
-ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count)
-{
-	struct ice_buf_hdr *buf;
-	u16 section_count;
-	u16 data_end;
-
-	if (!bld)
-		return ICE_ERR_PARAM;
-
-	buf = (struct ice_buf_hdr *)&bld->buf;
-
-	/* already an active section, can't decrease table size */
-	section_count = LE16_TO_CPU(buf->section_count);
-	if (section_count > 0)
-		return ICE_ERR_CFG;
-
-	if (count > bld->reserved_section_table_entries)
-		return ICE_ERR_CFG;
-	bld->reserved_section_table_entries -= count;
-
-	data_end = LE16_TO_CPU(buf->data_end) -
-		   (count * sizeof(buf->section_entry[0]));
-	buf->data_end = CPU_TO_LE16(data_end);
-
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_pkg_buf_alloc_section
  * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
@@ -1754,7 +1632,7 @@ ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count)
  * section contents.
  * Note: all package contents must be in Little Endian form.
  */
-void *
+static void *
 ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size)
 {
 	struct ice_buf_hdr *buf;
@@ -1795,24 +1673,6 @@ ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size)
 	return NULL;
 }
 
-/**
- * ice_pkg_buf_get_free_space
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- *
- * Returns the number of free bytes remaining in the buffer.
- * Note: all package contents must be in Little Endian form.
- */
-u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld)
-{
-	struct ice_buf_hdr *buf;
-
-	if (!bld)
-		return 0;
-
-	buf = (struct ice_buf_hdr *)&bld->buf;
-	return ICE_MAX_S_DATA_END - LE16_TO_CPU(buf->data_end);
-}
-
 /**
  * ice_pkg_buf_get_active_sections
  * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
@@ -1823,7 +1683,7 @@ u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld)
  * not be used.
  * Note: all package contents must be in Little Endian form.
  */
-u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
+static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
 {
 	struct ice_buf_hdr *buf;
 
@@ -1840,7 +1700,7 @@ u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld)
  *
  * Return a pointer to the buffer's header
  */
-struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
+static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
 {
 	if (!bld)
 		return NULL;
@@ -1848,18 +1708,6 @@ struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
 	return &bld->buf;
 }
 
-/**
- * ice_pkg_buf_free
- * @hw: pointer to the HW structure
- * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc())
- *
- * Frees a package buffer
- */
-void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld)
-{
-	ice_free(hw, bld);
-}
-
 /**
  * ice_find_prot_off - find prot ID and offset pair, based on prof and FV index
  * @hw: pointer to the hardware structure
@@ -1891,38 +1739,6 @@ ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
 
 /* PTG Management */
 
-/**
- * ice_ptg_update_xlt1 - Updates packet type groups in HW via XLT1 table
- * @hw: pointer to the hardware structure
- * @blk: HW block
- *
- * This function will update the XLT1 hardware table to reflect the new
- * packet type group configuration.
- */
-enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk)
-{
-	struct ice_xlt1_section *sect;
-	struct ice_buf_build *bld;
-	enum ice_status status;
-	u16 index;
-
-	bld = ice_pkg_buf_alloc_single_section(hw, ice_sect_id(blk, ICE_XLT1),
-					       ICE_XLT1_SIZE(ICE_XLT1_CNT),
-					       (void **)&sect);
-	if (!bld)
-		return ICE_ERR_NO_MEMORY;
-
-	sect->count = CPU_TO_LE16(ICE_XLT1_CNT);
-	sect->offset = CPU_TO_LE16(0);
-	for (index = 0; index < ICE_XLT1_CNT; index++)
-		sect->value[index] = hw->blk[blk].xlt1.ptypes[index].ptg;
-
-	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
-
-	ice_pkg_buf_free(hw, bld);
-
-	return status;
-}
 
 /**
  * ice_ptg_find_ptype - Search for packet type group using packet type (ptype)
@@ -1935,7 +1751,7 @@ enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk)
  * PTG ID that contains it through the ptg parameter, with the value of
  * ICE_DEFAULT_PTG (0) meaning it is part the default PTG.
  */
-enum ice_status
+static enum ice_status
 ice_ptg_find_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 *ptg)
 {
 	if (ptype >= ICE_XLT1_CNT || !ptg)
@@ -1969,7 +1785,7 @@ void ice_ptg_alloc_val(struct ice_hw *hw, enum ice_block blk, u8 ptg)
  * that 0 is the default packet type group, so successfully created PTGs will
  * have a non-zero ID value; which means a 0 return value indicates an error.
  */
-u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk)
+static u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk)
 {
 	u16 i;
 
@@ -1984,31 +1800,6 @@ u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk)
 	return 0;
 }
 
-/**
- * ice_ptg_free - Frees a packet type group
- * @hw: pointer to the hardware structure
- * @blk: HW block
- * @ptg: the ptg ID to free
- *
- * This function frees a packet type group, and returns all the current ptypes
- * within it to the default PTG.
- */
-void ice_ptg_free(struct ice_hw *hw, enum ice_block blk, u8 ptg)
-{
-	struct ice_ptg_ptype *p, *temp;
-
-	hw->blk[blk].xlt1.ptg_tbl[ptg].in_use = false;
-	p = hw->blk[blk].xlt1.ptg_tbl[ptg].first_ptype;
-	while (p) {
-		p->ptg = ICE_DEFAULT_PTG;
-		temp = p->next_ptype;
-		p->next_ptype = NULL;
-		p = temp;
-	}
-
-	hw->blk[blk].xlt1.ptg_tbl[ptg].first_ptype = NULL;
-}
-
 /**
  * ice_ptg_remove_ptype - Removes ptype from a particular packet type group
  * @hw: pointer to the hardware structure
@@ -2066,7 +1857,7 @@ ice_ptg_remove_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg)
  * a destination PTG ID of ICE_DEFAULT_PTG (0) will move the ptype to the
  * default PTG.
  */
-enum ice_status
+static enum ice_status
 ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg)
 {
 	enum ice_status status;
@@ -2202,70 +1993,6 @@ ice_match_prop_lst(struct LIST_HEAD_TYPE *list1, struct LIST_HEAD_TYPE *list2)
 
 /* VSIG Management */
 
-/**
- * ice_vsig_update_xlt2_sect - update one section of XLT2 table
- * @hw: pointer to the hardware structure
- * @blk: HW block
- * @vsi: HW VSI number to program
- * @vsig: vsig for the VSI
- *
- * This function will update the XLT2 hardware table with the input VSI
- * group configuration.
- */
-static enum ice_status
-ice_vsig_update_xlt2_sect(struct ice_hw *hw, enum ice_block blk, u16 vsi,
-			  u16 vsig)
-{
-	struct ice_xlt2_section *sect;
-	struct ice_buf_build *bld;
-	enum ice_status status;
-
-	bld = ice_pkg_buf_alloc_single_section(hw, ice_sect_id(blk, ICE_XLT2),
-					       sizeof(struct ice_xlt2_section),
-					       (void **)&sect);
-	if (!bld)
-		return ICE_ERR_NO_MEMORY;
-
-	sect->count = CPU_TO_LE16(1);
-	sect->offset = CPU_TO_LE16(vsi);
-	sect->value[0] = CPU_TO_LE16(vsig);
-
-	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
-
-	ice_pkg_buf_free(hw, bld);
-
-	return status;
-}
-
-/**
- * ice_vsig_update_xlt2 - update XLT2 table with VSIG configuration
- * @hw: pointer to the hardware structure
- * @blk: HW block
- *
- * This function will update the XLT2 hardware table with the input VSI
- * group configuration of used vsis.
- */
-enum ice_status ice_vsig_update_xlt2(struct ice_hw *hw, enum ice_block blk)
-{
-	u16 vsi;
-
-	for (vsi = 0; vsi < ICE_MAX_VSI; vsi++) {
-		/* update only vsis that have been changed */
-		if (hw->blk[blk].xlt2.vsis[vsi].changed) {
-			enum ice_status status;
-			u16 vsig;
-
-			vsig = hw->blk[blk].xlt2.vsis[vsi].vsig;
-			status = ice_vsig_update_xlt2_sect(hw, blk, vsi, vsig);
-			if (status)
-				return status;
-
-			hw->blk[blk].xlt2.vsis[vsi].changed = 0;
-		}
-	}
-
-	return ICE_SUCCESS;
-}
 
 /**
  * ice_vsig_find_vsi - find a VSIG that contains a specified VSI
@@ -2346,7 +2073,7 @@ static u16 ice_vsig_alloc(struct ice_hw *hw, enum ice_block blk)
  * for, the list must match exactly, including the order in which the
  * characteristics are listed.
  */
-enum ice_status
+static enum ice_status
 ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
 			struct LIST_HEAD_TYPE *chs, u16 *vsig)
 {
@@ -2373,7 +2100,7 @@ ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
  * The function will remove all VSIs associated with the input VSIG and move
  * them to the DEFAULT_VSIG and mark the VSIG available.
  */
-enum ice_status
+static enum ice_status
 ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
 {
 	struct ice_vsig_prof *dtmp, *del;
@@ -2424,6 +2151,62 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
 	return ICE_SUCCESS;
 }
 
+/**
+ * ice_vsig_remove_vsi - remove VSI from VSIG
+ * @hw: pointer to the hardware structure
+ * @blk: HW block
+ * @vsi: VSI to remove
+ * @vsig: VSI group to remove from
+ *
+ * The function will remove the input VSI from its VSI group and move it
+ * to the DEFAULT_VSIG.
+ */
+static enum ice_status
+ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
+{
+	struct ice_vsig_vsi **vsi_head, *vsi_cur, *vsi_tgt;
+	u16 idx;
+
+	idx = vsig & ICE_VSIG_IDX_M;
+
+	if (vsi >= ICE_MAX_VSI || idx >= ICE_MAX_VSIGS)
+		return ICE_ERR_PARAM;
+
+	if (!hw->blk[blk].xlt2.vsig_tbl[idx].in_use)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* entry already in default VSIG, don't have to remove */
+	if (idx == ICE_DEFAULT_VSIG)
+		return ICE_SUCCESS;
+
+	vsi_head = &hw->blk[blk].xlt2.vsig_tbl[idx].first_vsi;
+	if (!(*vsi_head))
+		return ICE_ERR_CFG;
+
+	vsi_tgt = &hw->blk[blk].xlt2.vsis[vsi];
+	vsi_cur = (*vsi_head);
+
+	/* iterate the VSI list, skip over the entry to be removed */
+	while (vsi_cur) {
+		if (vsi_tgt == vsi_cur) {
+			(*vsi_head) = vsi_cur->next_vsi;
+			break;
+		}
+		vsi_head = &vsi_cur->next_vsi;
+		vsi_cur = vsi_cur->next_vsi;
+	}
+
+	/* verify if VSI was removed from group list */
+	if (!vsi_cur)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	vsi_cur->vsig = ICE_DEFAULT_VSIG;
+	vsi_cur->changed = 1;
+	vsi_cur->next_vsi = NULL;
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_vsig_add_mv_vsi - add or move a VSI to a VSI group
  * @hw: pointer to the hardware structure
@@ -2436,7 +2219,7 @@ ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig)
  * move the entry to the DEFAULT_VSIG, update the original VSIG and
  * then move entry to the new VSIG.
  */
-enum ice_status
+static enum ice_status
 ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
 {
 	struct ice_vsig_vsi *tmp;
@@ -2487,62 +2270,6 @@ ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
 	return ICE_SUCCESS;
 }
 
-/**
- * ice_vsig_remove_vsi - remove VSI from VSIG
- * @hw: pointer to the hardware structure
- * @blk: HW block
- * @vsi: VSI to remove
- * @vsig: VSI group to remove from
- *
- * The function will remove the input VSI from its VSI group and move it
- * to the DEFAULT_VSIG.
- */
-enum ice_status
-ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig)
-{
-	struct ice_vsig_vsi **vsi_head, *vsi_cur, *vsi_tgt;
-	u16 idx;
-
-	idx = vsig & ICE_VSIG_IDX_M;
-
-	if (vsi >= ICE_MAX_VSI || idx >= ICE_MAX_VSIGS)
-		return ICE_ERR_PARAM;
-
-	if (!hw->blk[blk].xlt2.vsig_tbl[idx].in_use)
-		return ICE_ERR_DOES_NOT_EXIST;
-
-	/* entry already in default VSIG, don't have to remove */
-	if (idx == ICE_DEFAULT_VSIG)
-		return ICE_SUCCESS;
-
-	vsi_head = &hw->blk[blk].xlt2.vsig_tbl[idx].first_vsi;
-	if (!(*vsi_head))
-		return ICE_ERR_CFG;
-
-	vsi_tgt = &hw->blk[blk].xlt2.vsis[vsi];
-	vsi_cur = (*vsi_head);
-
-	/* iterate the VSI list, skip over the entry to be removed */
-	while (vsi_cur) {
-		if (vsi_tgt == vsi_cur) {
-			(*vsi_head) = vsi_cur->next_vsi;
-			break;
-		}
-		vsi_head = &vsi_cur->next_vsi;
-		vsi_cur = vsi_cur->next_vsi;
-	}
-
-	/* verify if VSI was removed from group list */
-	if (!vsi_cur)
-		return ICE_ERR_DOES_NOT_EXIST;
-
-	vsi_cur->vsig = ICE_DEFAULT_VSIG;
-	vsi_cur->changed = 1;
-	vsi_cur->next_vsi = NULL;
-
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_find_prof_id - find profile ID for a given field vector
  * @hw: pointer to the hardware structure
@@ -4035,44 +3762,6 @@ ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
 	return entry;
 }
 
-/**
- * ice_set_prof_context - Set context for a given profile
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @id: profile tracking ID
- * @cntxt: context
- */
-struct ice_prof_map *
-ice_set_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 cntxt)
-{
-	struct ice_prof_map *entry;
-
-	entry = ice_search_prof_id(hw, blk, id);
-	if (entry)
-		entry->context = cntxt;
-
-	return entry;
-}
-
-/**
- * ice_get_prof_context - Get context for a given profile
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @id: profile tracking ID
- * @cntxt: pointer to variable to receive the context
- */
-struct ice_prof_map *
-ice_get_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 *cntxt)
-{
-	struct ice_prof_map *entry;
-
-	entry = ice_search_prof_id(hw, blk, id);
-	if (entry)
-		*cntxt = entry->context;
-
-	return entry;
-}
-
 /**
  * ice_vsig_prof_id_count - count profiles in a VSIG
  * @hw: pointer to the HW struct
@@ -4988,34 +4677,6 @@ ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl)
 	return status;
 }
 
-/**
- * ice_add_flow - add flow
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @vsi: array of VSIs to enable with the profile specified by ID
- * @count: number of elements in the VSI array
- * @id: profile tracking ID
- *
- * Calling this function will update the hardware tables to enable the
- * profile indicated by the ID parameter for the VSIs specified in the VSI
- * array. Once successfully called, the flow will be enabled.
- */
-enum ice_status
-ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id)
-{
-	enum ice_status status;
-	u16 i;
-
-	for (i = 0; i < count; i++) {
-		status = ice_add_prof_id_flow(hw, blk, vsi[i], id);
-		if (status)
-			return status;
-	}
-
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_rem_prof_from_list - remove a profile from list
  * @hw: pointer to the HW struct
@@ -5169,31 +4830,3 @@ ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl)
 
 	return status;
 }
-
-/**
- * ice_rem_flow - remove flow
- * @hw: pointer to the HW struct
- * @blk: hardware block
- * @vsi: array of VSIs from which to remove the profile specified by ID
- * @count: number of elements in the VSI array
- * @id: profile tracking ID
- *
- * The function will remove flows from the specified VSIs that were enabled
- * using ice_add_flow. The ID value will indicated which profile will be
- * removed. Once successfully called, the flow will be disabled.
- */
-enum ice_status
-ice_rem_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id)
-{
-	enum ice_status status;
-	u16 i;
-
-	for (i = 0; i < count; i++) {
-		status = ice_rem_prof_id_flow(hw, blk, vsi[i], id);
-		if (status)
-			return status;
-	}
-
-	return ICE_SUCCESS;
-}
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index 375758c8d..f5fa685df 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -27,67 +27,18 @@ void ice_release_change_lock(struct ice_hw *hw);
 enum ice_status
 ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx,
 		  u8 *prot, u16 *off);
-struct ice_generic_seg_hdr *
-ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
-		    struct ice_pkg_hdr *pkg_hdr);
-enum ice_status ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg);
-
-enum ice_status
-ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_header);
-enum ice_status
-ice_get_pkg_info(struct ice_hw *hw);
-
-void ice_init_pkg_hints(struct ice_hw *hw, struct ice_seg *ice_seg);
-
 enum ice_status
 ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type,
 		     u16 *value);
 enum ice_status
 ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt,
 		   struct LIST_HEAD_TYPE *fv_list);
-enum ice_status
-ice_aq_upload_section(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
-		      u16 buf_size, struct ice_sq_cd *cd);
-
-enum ice_status
-ice_pkg_buf_unreserve_section(struct ice_buf_build *bld, u16 count);
-u16 ice_pkg_buf_get_free_space(struct ice_buf_build *bld);
-u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld);
-
-/* package buffer building routines */
-
-struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw);
-enum ice_status
-ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count);
-void *ice_pkg_buf_alloc_section(struct ice_buf_build *bld, u32 type, u16 size);
-struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld);
-void ice_pkg_buf_free(struct ice_hw *hw, struct ice_buf_build *bld);
 
-/* XLT1/PType group functions */
-enum ice_status ice_ptg_update_xlt1(struct ice_hw *hw, enum ice_block blk);
-enum ice_status
-ice_ptg_find_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 *ptg);
-u8 ice_ptg_alloc(struct ice_hw *hw, enum ice_block blk);
-void ice_ptg_free(struct ice_hw *hw, enum ice_block blk, u8 ptg);
-enum ice_status
-ice_ptg_add_mv_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg);
 
 /* XLT2/VSI group functions */
-enum ice_status ice_vsig_update_xlt2(struct ice_hw *hw, enum ice_block blk);
 enum ice_status
 ice_vsig_find_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 *vsig);
 enum ice_status
-ice_find_dup_props_vsig(struct ice_hw *hw, enum ice_block blk,
-			struct LIST_HEAD_TYPE *chs, u16 *vsig);
-
-enum ice_status
-ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
-enum ice_status ice_vsig_free(struct ice_hw *hw, enum ice_block blk, u16 vsig);
-enum ice_status
-ice_vsig_add_mv_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
-enum ice_status
-ice_vsig_remove_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig);
-enum ice_status
 ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 	     struct ice_fv_word *es);
 struct ice_prof_map *
@@ -96,10 +47,6 @@ enum ice_status
 ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
 enum ice_status
 ice_rem_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl);
-struct ice_prof_map *
-ice_set_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 cntxt);
-struct ice_prof_map *
-ice_get_prof_context(struct ice_hw *hw, enum ice_block blk, u64 id, u64 *cntxt);
 enum ice_status
 ice_init_pkg(struct ice_hw *hw, u8 *buff, u32 len);
 enum ice_status
@@ -109,12 +56,6 @@ void ice_free_seg(struct ice_hw *hw);
 void ice_fill_blk_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
 enum ice_status
-ice_add_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id);
-enum ice_status
-ice_rem_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi[], u8 count,
-	     u64 id);
-enum ice_status
 ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id);
 
 enum ice_status
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 51/69] net/ice/base: refactor VSI node sched code
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (49 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 50/69] net/ice/base: cleanup ice flex pipe files Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 52/69] net/ice/base: add some minor new defines Leyi Rong
                       ` (18 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Grzegorz Nitka, Paul M Stillwell Jr

Refactor the VSI node sched code to take a port_info pointer as the call
argument.

The declaration of the VSI node getter function has been changed to take a
pointer to the ice_port_info structure instead of a pointer to the hw
structure, so that the suitable port_info structure is used when looking
up a VSI node.

Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 47 ++++++++++++++++----------------
 drivers/net/ice/base/ice_sched.h |  2 +-
 2 files changed, 24 insertions(+), 25 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index fa3158a7b..0f4153146 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -1451,7 +1451,7 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 
 /**
  * ice_sched_get_vsi_node - Get a VSI node based on VSI ID
- * @hw: pointer to the HW struct
+ * @pi: pointer to the port information structure
  * @tc_node: pointer to the TC node
  * @vsi_handle: software VSI handle
  *
@@ -1459,14 +1459,14 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
  * TC branch
  */
 struct ice_sched_node *
-ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+ice_sched_get_vsi_node(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		       u16 vsi_handle)
 {
 	struct ice_sched_node *node;
 	u8 vsi_layer;
 
-	vsi_layer = ice_sched_get_vsi_layer(hw);
-	node = ice_sched_get_first_node(hw->port_info, tc_node, vsi_layer);
+	vsi_layer = ice_sched_get_vsi_layer(pi->hw);
+	node = ice_sched_get_first_node(pi, tc_node, vsi_layer);
 
 	/* Check whether it already exists */
 	while (node) {
@@ -1587,7 +1587,7 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 
 	qgl = ice_sched_get_qgrp_layer(hw);
 	vsil = ice_sched_get_vsi_layer(hw);
-	parent = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	parent = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 	for (i = vsil + 1; i <= qgl; i++) {
 		if (!parent)
 			return ICE_ERR_CFG;
@@ -1620,7 +1620,7 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 
 /**
  * ice_sched_calc_vsi_support_nodes - calculate number of VSI support nodes
- * @hw: pointer to the HW struct
+ * @pi: pointer to the port info structure
  * @tc_node: pointer to TC node
  * @num_nodes: pointer to num nodes array
  *
@@ -1629,15 +1629,15 @@ ice_sched_add_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
  * layers
  */
 static void
-ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
+ice_sched_calc_vsi_support_nodes(struct ice_port_info *pi,
 				 struct ice_sched_node *tc_node, u16 *num_nodes)
 {
 	struct ice_sched_node *node;
 	u8 vsil;
 	int i;
 
-	vsil = ice_sched_get_vsi_layer(hw);
-	for (i = vsil; i >= hw->sw_entry_point_layer; i--)
+	vsil = ice_sched_get_vsi_layer(pi->hw);
+	for (i = vsil; i >= pi->hw->sw_entry_point_layer; i--)
 		/* Add intermediate nodes if TC has no children and
 		 * need at least one node for VSI
 		 */
@@ -1647,11 +1647,11 @@ ice_sched_calc_vsi_support_nodes(struct ice_hw *hw,
 			/* If intermediate nodes are reached max children
 			 * then add a new one.
 			 */
-			node = ice_sched_get_first_node(hw->port_info, tc_node,
-							(u8)i);
+			node = ice_sched_get_first_node(pi, tc_node, (u8)i);
 			/* scan all the siblings */
 			while (node) {
-				if (node->num_children < hw->max_children[i])
+				if (node->num_children <
+				    pi->hw->max_children[i])
 					break;
 				node = node->sibling;
 			}
@@ -1731,14 +1731,13 @@ ice_sched_add_vsi_to_topo(struct ice_port_info *pi, u16 vsi_handle, u8 tc)
 {
 	u16 num_nodes[ICE_AQC_TOPO_MAX_LEVEL_NUM] = { 0 };
 	struct ice_sched_node *tc_node;
-	struct ice_hw *hw = pi->hw;
 
 	tc_node = ice_sched_get_tc_node(pi, tc);
 	if (!tc_node)
 		return ICE_ERR_PARAM;
 
 	/* calculate number of supported nodes needed for this VSI */
-	ice_sched_calc_vsi_support_nodes(hw, tc_node, num_nodes);
+	ice_sched_calc_vsi_support_nodes(pi, tc_node, num_nodes);
 
 	/* add VSI supported nodes to TC subtree */
 	return ice_sched_add_vsi_support_nodes(pi, vsi_handle, tc_node,
@@ -1771,7 +1770,7 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 	if (!tc_node)
 		return ICE_ERR_CFG;
 
-	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 	if (!vsi_node)
 		return ICE_ERR_CFG;
 
@@ -1834,7 +1833,7 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
 	if (!vsi_ctx)
 		return ICE_ERR_PARAM;
-	vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+	vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 
 	/* suspend the VSI if TC is not enabled */
 	if (!enable) {
@@ -1855,7 +1854,7 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 		if (status)
 			return status;
 
-		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			return ICE_ERR_CFG;
 
@@ -1966,7 +1965,7 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -2256,7 +2255,7 @@ ice_sched_move_vsi_to_agg(struct ice_port_info *pi, u16 vsi_handle, u32 agg_id,
 	if (!agg_node)
 		return ICE_ERR_DOES_NOT_EXIST;
 
-	vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+	vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 	if (!vsi_node)
 		return ICE_ERR_DOES_NOT_EXIST;
 
@@ -3537,7 +3536,7 @@ ice_cfg_agg_vsi_priority_per_tc(struct ice_port_info *pi, u32 agg_id,
 		if (!vsi_handle_valid)
 			goto exit_agg_priority_per_tc;
 
-		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			goto exit_agg_priority_per_tc;
 
@@ -3593,7 +3592,7 @@ ice_cfg_vsi_bw_alloc(struct ice_port_info *pi, u16 vsi_handle, u8 ena_tcmap,
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -4805,7 +4804,7 @@ ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle)
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -4864,7 +4863,7 @@ ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
 		if (!tc_node)
 			continue;
 
-		vsi_node = ice_sched_get_vsi_node(pi->hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 
@@ -5368,7 +5367,7 @@ ice_sched_replay_vsi_bw(struct ice_hw *hw, u16 vsi_handle,
 		tc_node = ice_sched_get_tc_node(pi, tc);
 		if (!tc_node)
 			continue;
-		vsi_node = ice_sched_get_vsi_node(hw, tc_node, vsi_handle);
+		vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
 		if (!vsi_node)
 			continue;
 		bw_t_info = &vsi_ctx->sched.bw_t_info[tc];
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index e444dc880..38f8f93d2 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -107,7 +107,7 @@ ice_sched_cfg_vsi(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 maxqs,
 		  u8 owner, bool enable);
 enum ice_status ice_rm_vsi_lan_cfg(struct ice_port_info *pi, u16 vsi_handle);
 struct ice_sched_node *
-ice_sched_get_vsi_node(struct ice_hw *hw, struct ice_sched_node *tc_node,
+ice_sched_get_vsi_node(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		       u16 vsi_handle);
 bool ice_sched_is_tree_balanced(struct ice_hw *hw, struct ice_sched_node *node);
 enum ice_status
-- 
2.17.1



* [dpdk-dev] [PATCH v3 52/69] net/ice/base: add some minor new defines
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (50 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 51/69] net/ice/base: refactor VSI node sched code Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 53/69] net/ice/base: add vxlan/generic tunnel management Leyi Rong
                       ` (17 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacek Naczyk, Faerman Lev, Paul M Stillwell Jr

1. Add defines for Link Topology Netlist Section.
2. Add missing Read MAC command response bits.
3. Add AQ error 29 (BMC update in progress).

Signed-off-by: Jacek Naczyk <jacek.naczyk@intel.com>
Signed-off-by: Faerman Lev <lev.faerman@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 3 +++
 drivers/net/ice/base/ice_type.h       | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 249e48b82..1fdd612a1 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -142,6 +142,8 @@ struct ice_aqc_manage_mac_read {
 #define ICE_AQC_MAN_MAC_SAN_ADDR_VALID		BIT(5)
 #define ICE_AQC_MAN_MAC_PORT_ADDR_VALID		BIT(6)
 #define ICE_AQC_MAN_MAC_WOL_ADDR_VALID		BIT(7)
+#define ICE_AQC_MAN_MAC_MC_MAG_EN		BIT(8)
+#define ICE_AQC_MAN_MAC_WOL_PRESERVE_ON_PFR	BIT(9)
 #define ICE_AQC_MAN_MAC_READ_S			4
 #define ICE_AQC_MAN_MAC_READ_M			(0xF << ICE_AQC_MAN_MAC_READ_S)
 	u8 rsvd[2];
@@ -2389,6 +2391,7 @@ enum ice_aq_err {
 	ICE_AQ_RC_ESVN		= 26, /* SVN number prohibits this package */
 	ICE_AQ_RC_EBADMAN	= 27, /* Manifest hash mismatch */
 	ICE_AQ_RC_EBADBUF	= 28, /* Buffer hash mismatches manifest */
+	ICE_AQ_RC_EACCES_BMCU	= 29, /* BMC Update in progress */
 };
 
 /* Admin Queue command opcodes */
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index ea587a0f0..b03f18d16 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -971,6 +971,8 @@ enum ice_sw_fwd_act_type {
 #define ICE_SR_NVM_BANK_SIZE			0x43
 #define ICE_SR_1ND_OROM_BANK_PTR		0x44
 #define ICE_SR_OROM_BANK_SIZE			0x45
+#define ICE_SR_NETLIST_BANK_PTR			0x46
+#define ICE_SR_NETLIST_BANK_SIZE		0x47
 #define ICE_SR_EMP_SR_SETTINGS_PTR		0x48
 #define ICE_SR_CONFIGURATION_METADATA_PTR	0x4D
 #define ICE_SR_IMMEDIATE_VALUES_PTR		0x4E
-- 
2.17.1



* [dpdk-dev] [PATCH v3 53/69] net/ice/base: add vxlan/generic tunnel management
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (51 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 52/69] net/ice/base: add some minor new defines Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 54/69] net/ice/base: enable additional switch rules Leyi Rong
                       ` (16 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add routines for tunnel management:
	- ice_tunnel_port_in_use()
	- ice_tunnel_get_type()
	- ice_find_free_tunnel_entry()
	- ice_create_tunnel()
	- ice_destroy_tunnel()

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 228 +++++++++++++++++++++++++++
 drivers/net/ice/base/ice_flex_pipe.h |   6 +
 2 files changed, 234 insertions(+)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index ecc2f5738..0582c0ecf 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1708,6 +1708,234 @@ static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld)
 	return &bld->buf;
 }
 
+/**
+ * ice_tunnel_port_in_use
+ * @hw: pointer to the HW structure
+ * @port: port to search for
+ * @index: optionally returns index
+ *
+ * Returns whether a port is already in use as a tunnel, and optionally its
+ * index
+ */
+bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index)
+{
+	u16 i;
+
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].in_use && hw->tnl.tbl[i].port == port) {
+			if (index)
+				*index = i;
+			return true;
+		}
+
+	return false;
+}
+
+/**
+ * ice_tunnel_get_type
+ * @hw: pointer to the HW structure
+ * @port: port to search for
+ * @type: returns tunnel type
+ *
+ * For a given port number, will return the type of tunnel.
+ */
+bool
+ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type)
+{
+	u16 i;
+
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].in_use && hw->tnl.tbl[i].port == port) {
+			*type = hw->tnl.tbl[i].type;
+			return true;
+		}
+
+	return false;
+}
+
+/**
+ * ice_find_free_tunnel_entry
+ * @hw: pointer to the HW structure
+ * @type: tunnel type
+ * @index: optionally returns index
+ *
+ * Returns whether there is a free tunnel entry, and optionally its index
+ */
+static bool
+ice_find_free_tunnel_entry(struct ice_hw *hw, enum ice_tunnel_type type,
+			   u16 *index)
+{
+	u16 i;
+
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].valid && !hw->tnl.tbl[i].in_use &&
+		    hw->tnl.tbl[i].type == type) {
+			if (index)
+				*index = i;
+			return true;
+		}
+
+	return false;
+}
+
+/**
+ * ice_create_tunnel
+ * @hw: pointer to the HW structure
+ * @type: type of tunnel
+ * @port: port to use for vxlan tunnel
+ *
+ * Creates a tunnel
+ */
+enum ice_status
+ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port)
+{
+	struct ice_boost_tcam_section *sect_rx, *sect_tx;
+	enum ice_status status = ICE_ERR_MAX_LIMIT;
+	struct ice_buf_build *bld;
+	u16 index;
+
+	if (ice_tunnel_port_in_use(hw, port, NULL))
+		return ICE_ERR_ALREADY_EXISTS;
+
+	if (!ice_find_free_tunnel_entry(hw, type, &index))
+		return ICE_ERR_OUT_OF_RANGE;
+
+	bld = ice_pkg_buf_alloc(hw);
+	if (!bld)
+		return ICE_ERR_NO_MEMORY;
+
+	/* allocate 2 sections, one for RX parser, one for TX parser */
+	if (ice_pkg_buf_reserve_section(bld, 2))
+		goto ice_create_tunnel_err;
+
+	sect_rx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM,
+					  sizeof(*sect_rx));
+	if (!sect_rx)
+		goto ice_create_tunnel_err;
+	sect_rx->count = CPU_TO_LE16(1);
+
+	sect_tx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM,
+					  sizeof(*sect_tx));
+	if (!sect_tx)
+		goto ice_create_tunnel_err;
+	sect_tx->count = CPU_TO_LE16(1);
+
+	/* copy original boost entry to update package buffer */
+	ice_memcpy(sect_rx->tcam, hw->tnl.tbl[index].boost_entry,
+		   sizeof(*sect_rx->tcam), ICE_NONDMA_TO_NONDMA);
+
+	/* over-write the never-match dest port key bits with the encoded port
+	 * bits
+	 */
+	ice_set_key((u8 *)&sect_rx->tcam[0].key, sizeof(sect_rx->tcam[0].key),
+		    (u8 *)&port, NULL, NULL, NULL,
+		    offsetof(struct ice_boost_key_value, hv_dst_port_key),
+		    sizeof(sect_rx->tcam[0].key.key.hv_dst_port_key));
+
+	/* exact copy of entry to TX section entry */
+	ice_memcpy(sect_tx->tcam, sect_rx->tcam, sizeof(*sect_tx->tcam),
+		   ICE_NONDMA_TO_NONDMA);
+
+	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
+	if (!status) {
+		hw->tnl.tbl[index].port = port;
+		hw->tnl.tbl[index].in_use = true;
+	}
+
+ice_create_tunnel_err:
+	ice_pkg_buf_free(hw, bld);
+
+	return status;
+}
+
+/**
+ * ice_destroy_tunnel
+ * @hw: pointer to the HW structure
+ * @port: port of tunnel to destroy (ignored if the all parameter is true)
+ * @all: flag that states to destroy all tunnels
+ *
+ * Destroys a tunnel or all tunnels by creating an update package buffer
+ * targeting the specific updates requested and then performing an update
+ * package.
+ */
+enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all)
+{
+	struct ice_boost_tcam_section *sect_rx, *sect_tx;
+	enum ice_status status = ICE_ERR_MAX_LIMIT;
+	struct ice_buf_build *bld;
+	u16 count = 0;
+	u16 size;
+	u16 i;
+
+	/* determine count */
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].in_use &&
+		    (all || hw->tnl.tbl[i].port == port))
+			count++;
+
+	if (!count)
+		return ICE_ERR_PARAM;
+
+	/* size of section - there is at least one entry */
+	size = (count - 1) * sizeof(*sect_rx->tcam) + sizeof(*sect_rx);
+
+	bld = ice_pkg_buf_alloc(hw);
+	if (!bld)
+		return ICE_ERR_NO_MEMORY;
+
+	/* allocate 2 sections, one for RX parser, one for TX parser */
+	if (ice_pkg_buf_reserve_section(bld, 2))
+		goto ice_destroy_tunnel_err;
+
+	sect_rx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_RXPARSER_BOOST_TCAM,
+					  size);
+	if (!sect_rx)
+		goto ice_destroy_tunnel_err;
+	sect_rx->count = CPU_TO_LE16(1);
+
+	sect_tx = (struct ice_boost_tcam_section *)
+		ice_pkg_buf_alloc_section(bld, ICE_SID_TXPARSER_BOOST_TCAM,
+					  size);
+	if (!sect_tx)
+		goto ice_destroy_tunnel_err;
+	sect_tx->count = CPU_TO_LE16(1);
+
+	/* copy original boost entry to update package buffer, one copy to RX
+	 * section, another copy to the TX section
+	 */
+	for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++)
+		if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].in_use &&
+		    (all || hw->tnl.tbl[i].port == port)) {
+			ice_memcpy(sect_rx->tcam + i,
+				   hw->tnl.tbl[i].boost_entry,
+				   sizeof(*sect_rx->tcam),
+				   ICE_NONDMA_TO_NONDMA);
+			ice_memcpy(sect_tx->tcam + i,
+				   hw->tnl.tbl[i].boost_entry,
+				   sizeof(*sect_tx->tcam),
+				   ICE_NONDMA_TO_NONDMA);
+			hw->tnl.tbl[i].marked = true;
+		}
+
+	status = ice_update_pkg(hw, ice_pkg_buf(bld), 1);
+	if (!status)
+		for (i = 0; i < hw->tnl.count &&
+		     i < ICE_TUNNEL_MAX_ENTRIES; i++)
+			if (hw->tnl.tbl[i].marked) {
+				hw->tnl.tbl[i].port = 0;
+				hw->tnl.tbl[i].in_use = false;
+				hw->tnl.tbl[i].marked = false;
+			}
+
+ice_destroy_tunnel_err:
+	ice_pkg_buf_free(hw, bld);
+
+	return status;
+}
+
 /**
  * ice_find_prot_off - find prot ID and offset pair, based on prof and FV index
  * @hw: pointer to the hardware structure
diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h
index f5fa685df..2801e1b50 100644
--- a/drivers/net/ice/base/ice_flex_pipe.h
+++ b/drivers/net/ice/base/ice_flex_pipe.h
@@ -33,6 +33,12 @@ ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type,
 enum ice_status
 ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt,
 		   struct LIST_HEAD_TYPE *fv_list);
+enum ice_status
+ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port);
+enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all);
+bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index);
+bool
+ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type);
 
 
 /* XLT2/VSI group functions */
-- 
2.17.1



* [dpdk-dev] [PATCH v3 54/69] net/ice/base: enable additional switch rules
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (52 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 53/69] net/ice/base: add vxlan/generic tunnel management Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 55/69] net/ice/base: allow forward to Q groups in switch rule Leyi Rong
                       ` (15 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Add capability to create inner IP and inner TCP switch recipes and
rules. Change UDP tunnel dummy packet to accommodate the training of
these new rules.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_protocol_type.h |   8 +-
 drivers/net/ice/base/ice_switch.c        | 361 ++++++++++++-----------
 drivers/net/ice/base/ice_switch.h        |   1 +
 3 files changed, 203 insertions(+), 167 deletions(-)

diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index 82822fb74..38bed7a79 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -35,6 +35,7 @@ enum ice_protocol_type {
 	ICE_IPV6_IL,
 	ICE_IPV6_OFOS,
 	ICE_TCP_IL,
+	ICE_UDP_OF,
 	ICE_UDP_ILOS,
 	ICE_SCTP_IL,
 	ICE_VXLAN,
@@ -112,6 +113,7 @@ enum ice_prot_id {
 #define ICE_IPV6_OFOS_HW	40
 #define ICE_IPV6_IL_HW		41
 #define ICE_TCP_IL_HW		49
+#define ICE_UDP_OF_HW		52
 #define ICE_UDP_ILOS_HW		53
 #define ICE_SCTP_IL_HW		96
 
@@ -188,8 +190,7 @@ struct ice_l4_hdr {
 struct ice_udp_tnl_hdr {
 	u16 field;
 	u16 proto_type;
-	u16 vni;
-	u16 reserved;
+	u32 vni;	/* only use lower 24-bits */
 };
 
 struct ice_nvgre {
@@ -225,6 +226,7 @@ struct ice_prot_lkup_ext {
 	u8 n_val_words;
 	/* create a buffer to hold max words per recipe */
 	u16 field_off[ICE_MAX_CHAIN_WORDS];
+	u16 field_mask[ICE_MAX_CHAIN_WORDS];
 
 	struct ice_fv_word fv_words[ICE_MAX_CHAIN_WORDS];
 
@@ -235,6 +237,7 @@ struct ice_prot_lkup_ext {
 struct ice_pref_recipe_group {
 	u8 n_val_pairs;		/* Number of valid pairs */
 	struct ice_fv_word pairs[ICE_NUM_WORDS_RECIPE];
+	u16 mask[ICE_NUM_WORDS_RECIPE];
 };
 
 struct ice_recp_grp_entry {
@@ -244,6 +247,7 @@ struct ice_recp_grp_entry {
 	u16 rid;
 	u8 chain_idx;
 	u16 fv_idx[ICE_NUM_WORDS_RECIPE];
+	u16 fv_mask[ICE_NUM_WORDS_RECIPE];
 	struct ice_pref_recipe_group r_group;
 };
 #endif /* _ICE_PROTOCOL_TYPE_H_ */
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index c7fcd71a7..0dae1b609 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -53,60 +53,109 @@ static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
 	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
 	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
 
+static const struct ice_dummy_pkt_offsets {
+	enum ice_protocol_type type;
+	u16 offset; /* ICE_PROTOCOL_LAST indicates end of list */
+} dummy_gre_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_VXLAN,		34 },
+	{ ICE_MAC_IL,		42 },
+	{ ICE_IPV4_IL,		54 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
 static const
-u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* Ether starts */
+u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
-			  0x08, 0,		/* Ether ends */
-			  0x45, 0, 0, 0x3E,	/* IP starts */
+			  0x08, 0,
+			  0x45, 0, 0, 0x3E,	/* ICE_IPV4_OFOS 14 */
 			  0, 0, 0, 0,
 			  0, 0x2F, 0, 0,
 			  0, 0, 0, 0,
-			  0, 0, 0, 0,		/* IP ends */
-			  0x80, 0, 0x65, 0x58,	/* GRE starts */
-			  0, 0, 0, 0,		/* GRE ends */
-			  0, 0, 0, 0,		/* Ether starts */
-			  0, 0, 0, 0,
-			  0, 0, 0, 0,
-			  0x08, 0,		/* Ether ends */
-			  0x45, 0, 0, 0x14,	/* IP starts */
 			  0, 0, 0, 0,
+			  0x80, 0, 0x65, 0x58,	/* ICE_VXLAN_GRE 34 */
 			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* ICE_MAC_IL 42 */
 			  0, 0, 0, 0,
-			  0, 0, 0, 0		/* IP ends */
-			};
-
-static const u8
-dummy_udp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
-			  0, 0, 0, 0,
-			  0, 0, 0, 0,
-			  0x08, 0,		/* Ether ends */
-			  0x45, 0, 0, 0x32,	/* IP starts */
 			  0, 0, 0, 0,
-			  0, 0x11, 0, 0,
+			  0x08, 0,
+			  0x45, 0, 0, 0x14,	/* ICE_IPV4_IL 54 */
 			  0, 0, 0, 0,
-			  0, 0, 0, 0,		/* IP ends */
-			  0, 0, 0x12, 0xB5,	/* UDP start*/
-			  0, 0x1E, 0, 0,	/* UDP end*/
-			  0, 0, 0, 0,		/* VXLAN start */
-			  0, 0, 0, 0,		/* VXLAN end*/
-			  0, 0, 0, 0,		/* Ether starts */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
-			  0, 0			/* Ether ends */
+			  0, 0, 0, 0
 			};
 
+static const
+struct ice_dummy_pkt_offsets dummy_udp_tun_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_UDP_OF,		34 },
+	{ ICE_VXLAN,		42 },
+	{ ICE_MAC_IL,		50 },
+	{ ICE_IPV4_IL,		64 },
+	{ ICE_TCP_IL,		84 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
+static const
+u8 dummy_udp_tun_packet[] = {
+	0x00, 0x00, 0x00, 0x00,  /* ICE_MAC_OFOS 0 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x08, 0x00,
+
+	0x45, 0x00, 0x00, 0x5a, /* ICE_IPV4_OFOS 14 */
+	0x00, 0x01, 0x00, 0x00,
+	0x40, 0x11, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x12, 0xb5, /* ICE_UDP_OF 34 */
+	0x00, 0x46, 0x00, 0x00,
+
+	0x04, 0x00, 0x00, 0x03, /* ICE_VXLAN 42 */
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_IL 50 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x08, 0x00,
+
+	0x45, 0x00, 0x00, 0x28, /* ICE_IPV4_IL 64 */
+	0x00, 0x01, 0x00, 0x00,
+	0x40, 0x06, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 84 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x50, 0x02, 0x20, 0x00,
+	0x00, 0x00, 0x00, 0x00
+};
+
+static const
+struct ice_dummy_pkt_offsets dummy_tcp_tun_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_TCP_IL,		34 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
 static const u8
-dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* Ether starts */
+dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
-			  0x08, 0,              /* Ether ends */
-			  0x45, 0, 0, 0x28,     /* IP starts */
+			  0x08, 0,
+			  0x45, 0, 0, 0x28,     /* ICE_IPV4_OFOS 14 */
 			  0, 0x01, 0, 0,
 			  0x40, 0x06, 0xF5, 0x69,
 			  0, 0, 0, 0,
-			  0, 0, 0, 0,   /* IP ends */
 			  0, 0, 0, 0,
+			  0, 0, 0, 0,		/* ICE_TCP_IL 34 */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
 			  0x50, 0x02, 0x20,
@@ -184,6 +233,9 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
 			u8 lkup_indx = root_bufs.content.lkup_indx[i + 1];
 
 			rg_entry->fv_idx[i] = lkup_indx;
+			rg_entry->fv_mask[i] =
+				LE16_TO_CPU(root_bufs.content.mask[i + 1]);
+
 			/* If the recipe is a chained recipe then all its
 			 * child recipe's result will have a result index.
 			 * To fill fv_words we should not use those result
@@ -4246,10 +4298,11 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
 	{ ICE_IPV6_OFOS,	{ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24,
 				 26, 28, 30, 32, 34, 36, 38 } },
 	{ ICE_TCP_IL,		{ 0, 2 } },
+	{ ICE_UDP_OF,		{ 0, 2 } },
 	{ ICE_UDP_ILOS,		{ 0, 2 } },
 	{ ICE_SCTP_IL,		{ 0, 2 } },
-	{ ICE_VXLAN,		{ 8, 10, 12 } },
-	{ ICE_GENEVE,		{ 8, 10, 12 } },
+	{ ICE_VXLAN,		{ 8, 10, 12, 14 } },
+	{ ICE_GENEVE,		{ 8, 10, 12, 14 } },
 	{ ICE_VXLAN_GPE,	{ 0, 2, 4 } },
 	{ ICE_NVGRE,		{ 0, 2 } },
 	{ ICE_PROTOCOL_LAST,	{ 0 } }
@@ -4262,11 +4315,14 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
  */
 static const struct ice_pref_recipe_group ice_recipe_pack[] = {
 	{3, { { ICE_MAC_OFOS_HW, 0, 0 }, { ICE_MAC_OFOS_HW, 2, 0 },
-	      { ICE_MAC_OFOS_HW, 4, 0 } } },
+	      { ICE_MAC_OFOS_HW, 4, 0 } }, { 0xffff, 0xffff, 0xffff, 0xffff } },
 	{4, { { ICE_MAC_IL_HW, 0, 0 }, { ICE_MAC_IL_HW, 2, 0 },
-	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } } },
-	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } } },
-	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } } },
+	      { ICE_MAC_IL_HW, 4, 0 }, { ICE_META_DATA_ID_HW, 44, 0 } },
+		{ 0xffff, 0xffff, 0xffff, 0xffff } },
+	{2, { { ICE_IPV4_IL_HW, 0, 0 }, { ICE_IPV4_IL_HW, 2, 0 } },
+		{ 0xffff, 0xffff, 0xffff, 0xffff } },
+	{2, { { ICE_IPV4_IL_HW, 12, 0 }, { ICE_IPV4_IL_HW, 14, 0 } },
+		{ 0xffff, 0xffff, 0xffff, 0xffff } },
 };
 
 static const struct ice_protocol_entry ice_prot_id_tbl[] = {
@@ -4277,6 +4333,7 @@ static const struct ice_protocol_entry ice_prot_id_tbl[] = {
 	{ ICE_IPV6_OFOS,	ICE_IPV6_OFOS_HW },
 	{ ICE_IPV6_IL,		ICE_IPV6_IL_HW },
 	{ ICE_TCP_IL,		ICE_TCP_IL_HW },
+	{ ICE_UDP_OF,		ICE_UDP_OF_HW },
 	{ ICE_UDP_ILOS,		ICE_UDP_ILOS_HW },
 	{ ICE_SCTP_IL,		ICE_SCTP_IL_HW },
 	{ ICE_VXLAN,		ICE_UDP_OF_HW },
@@ -4395,7 +4452,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
 	word = lkup_exts->n_val_words;
 
 	for (j = 0; j < sizeof(rule->m_u) / sizeof(u16); j++)
-		if (((u16 *)&rule->m_u)[j] == 0xffff &&
+		if (((u16 *)&rule->m_u)[j] &&
 		    rule->type < ARRAY_SIZE(ice_prot_ext)) {
 			/* No more space to accommodate */
 			if (word >= ICE_MAX_CHAIN_WORDS)
@@ -4404,6 +4461,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule,
 				ice_prot_ext[rule->type].offs[j];
 			lkup_exts->fv_words[word].prot_id =
 				ice_prot_id_tbl[rule->type].protocol_id;
+			lkup_exts->field_mask[word] = ((u16 *)&rule->m_u)[j];
 			word++;
 		}
 
@@ -4527,6 +4585,7 @@ ice_create_first_fit_recp_def(struct ice_hw *hw,
 				lkup_exts->fv_words[j].prot_id;
 			grp->pairs[grp->n_val_pairs].off =
 				lkup_exts->fv_words[j].off;
+			grp->mask[grp->n_val_pairs] = lkup_exts->field_mask[j];
 			grp->n_val_pairs++;
 		}
 
@@ -4561,14 +4620,22 @@ ice_fill_fv_word_index(struct ice_hw *hw, struct LIST_HEAD_TYPE *fv_list,
 
 		for (i = 0; i < rg->r_group.n_val_pairs; i++) {
 			struct ice_fv_word *pr;
+			u16 mask;
 			u8 j;
 
 			pr = &rg->r_group.pairs[i];
+			mask = rg->r_group.mask[i];
+
 			for (j = 0; j < hw->blk[ICE_BLK_SW].es.fvw; j++)
 				if (fv_ext[j].prot_id == pr->prot_id &&
 				    fv_ext[j].off == pr->off) {
 					/* Store index of field vector */
 					rg->fv_idx[i] = j;
+					/* Mask is given by caller as big
+					 * endian, but sent to FW as little
+					 * endian
+					 */
+					rg->fv_mask[i] = mask << 8 | mask >> 8;
 					break;
 				}
 		}
@@ -4666,7 +4733,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
 
 		for (i = 0; i < entry->r_group.n_val_pairs; i++) {
 			buf[recps].content.lkup_indx[i + 1] = entry->fv_idx[i];
-			buf[recps].content.mask[i + 1] = CPU_TO_LE16(0xFFFF);
+			buf[recps].content.mask[i + 1] =
+				CPU_TO_LE16(entry->fv_mask[i]);
 		}
 
 		if (rm->n_grp_count > 1) {
@@ -4888,6 +4956,8 @@ ice_create_recipe_group(struct ice_hw *hw, struct ice_sw_recipe *rm,
 		rm->n_ext_words = lkup_exts->n_val_words;
 		ice_memcpy(&rm->ext_words, lkup_exts->fv_words,
 			   sizeof(rm->ext_words), ICE_NONDMA_TO_NONDMA);
+		ice_memcpy(rm->word_masks, lkup_exts->field_mask,
+			   sizeof(rm->word_masks), ICE_NONDMA_TO_NONDMA);
 		goto out;
 	}
 
@@ -5089,16 +5159,8 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	return status;
 }
 
-#define ICE_MAC_HDR_OFFSET	0
-#define ICE_IP_HDR_OFFSET	14
-#define ICE_GRE_HDR_OFFSET	34
-#define ICE_MAC_IL_HDR_OFFSET	42
-#define ICE_IP_IL_HDR_OFFSET	56
-#define ICE_L4_HDR_OFFSET	34
-#define ICE_UDP_TUN_HDR_OFFSET	42
-
 /**
- * ice_find_dummy_packet - find dummy packet with given match criteria
+ * ice_find_dummy_packet - find dummy packet by tunnel type
  *
  * @lkups: lookup elements or match criteria for the advanced recipe, one
  *	   structure per protocol header
@@ -5106,17 +5168,20 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
  * @tun_type: tunnel type from the match criteria
  * @pkt: dummy packet to fill according to filter match criteria
  * @pkt_len: packet length of dummy packet
+ * @offsets: pointer to receive the offset info for the dummy packet
  */
 static void
 ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 		      enum ice_sw_tunnel_type tun_type, const u8 **pkt,
-		      u16 *pkt_len)
+		      u16 *pkt_len,
+		      const struct ice_dummy_pkt_offsets **offsets)
 {
 	u16 i;
 
 	if (tun_type == ICE_SW_TUN_NVGRE || tun_type == ICE_ALL_TUNNELS) {
 		*pkt = dummy_gre_packet;
 		*pkt_len = sizeof(dummy_gre_packet);
+		*offsets = dummy_gre_packet_offsets;
 		return;
 	}
 
@@ -5124,6 +5189,7 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 	    tun_type == ICE_SW_TUN_VXLAN_GPE) {
 		*pkt = dummy_udp_tun_packet;
 		*pkt_len = sizeof(dummy_udp_tun_packet);
+		*offsets = dummy_udp_tun_packet_offsets;
 		return;
 	}
 
@@ -5131,12 +5197,14 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 		if (lkups[i].type == ICE_UDP_ILOS) {
 			*pkt = dummy_udp_tun_packet;
 			*pkt_len = sizeof(dummy_udp_tun_packet);
+			*offsets = dummy_udp_tun_packet_offsets;
 			return;
 		}
 	}
 
 	*pkt = dummy_tcp_tun_packet;
 	*pkt_len = sizeof(dummy_tcp_tun_packet);
+	*offsets = dummy_tcp_tun_packet_offsets;
 }
 
 /**
@@ -5145,16 +5213,16 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
  * @lkups: lookup elements or match criteria for the advanced recipe, one
  *	   structure per protocol header
  * @lkups_cnt: number of protocols
- * @tun_type: to know if the dummy packet is supposed to be tunnel packet
  * @s_rule: stores rule information from the match criteria
  * @dummy_pkt: dummy packet to fill according to filter match criteria
  * @pkt_len: packet length of dummy packet
+ * @offsets: offset info for the dummy packet
  */
-static void
+static enum ice_status
 ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
-			  enum ice_sw_tunnel_type tun_type,
 			  struct ice_aqc_sw_rules_elem *s_rule,
-			  const u8 *dummy_pkt, u16 pkt_len)
+			  const u8 *dummy_pkt, u16 pkt_len,
+			  const struct ice_dummy_pkt_offsets *offsets)
 {
 	u8 *pkt;
 	u16 i;
@@ -5167,124 +5235,74 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 	ice_memcpy(pkt, dummy_pkt, pkt_len, ICE_NONDMA_TO_NONDMA);
 
 	for (i = 0; i < lkups_cnt; i++) {
-		u32 len, pkt_off, hdr_size, field_off;
+		enum ice_protocol_type type;
+		u16 offset = 0, len = 0, j;
+		bool found = false;
+
+		/* find the start of this layer; it should be found since this
+		 * was already checked when searching for the dummy packet
+		 */
+		type = lkups[i].type;
+		for (j = 0; offsets[j].type != ICE_PROTOCOL_LAST; j++) {
+			if (type == offsets[j].type) {
+				offset = offsets[j].offset;
+				found = true;
+				break;
+			}
+		}
+		/* this should never happen in a correct calling sequence */
+		if (!found)
+			return ICE_ERR_PARAM;
 
 		switch (lkups[i].type) {
 		case ICE_MAC_OFOS:
 		case ICE_MAC_IL:
-			pkt_off = offsetof(struct ice_ether_hdr, dst_addr) +
-				((lkups[i].type == ICE_MAC_IL) ?
-				 ICE_MAC_IL_HDR_OFFSET : 0);
-			len = sizeof(lkups[i].h_u.eth_hdr.dst_addr);
-			if ((tun_type == ICE_SW_TUN_VXLAN ||
-			     tun_type == ICE_SW_TUN_GENEVE ||
-			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
-			     lkups[i].type == ICE_MAC_IL) {
-				pkt_off += sizeof(struct ice_udp_tnl_hdr);
-			}
-
-			ice_memcpy(&pkt[pkt_off],
-				   &lkups[i].h_u.eth_hdr.dst_addr, len,
-				   ICE_NONDMA_TO_NONDMA);
-			pkt_off = offsetof(struct ice_ether_hdr, src_addr) +
-				((lkups[i].type == ICE_MAC_IL) ?
-				 ICE_MAC_IL_HDR_OFFSET : 0);
-			len = sizeof(lkups[i].h_u.eth_hdr.src_addr);
-			if ((tun_type == ICE_SW_TUN_VXLAN ||
-			     tun_type == ICE_SW_TUN_GENEVE ||
-			     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
-			     lkups[i].type == ICE_MAC_IL) {
-				pkt_off += sizeof(struct ice_udp_tnl_hdr);
-			}
-			ice_memcpy(&pkt[pkt_off],
-				   &lkups[i].h_u.eth_hdr.src_addr, len,
-				   ICE_NONDMA_TO_NONDMA);
-			if (lkups[i].h_u.eth_hdr.ethtype_id) {
-				pkt_off = offsetof(struct ice_ether_hdr,
-						   ethtype_id) +
-					((lkups[i].type == ICE_MAC_IL) ?
-					 ICE_MAC_IL_HDR_OFFSET : 0);
-				len = sizeof(lkups[i].h_u.eth_hdr.ethtype_id);
-				if ((tun_type == ICE_SW_TUN_VXLAN ||
-				     tun_type == ICE_SW_TUN_GENEVE ||
-				     tun_type == ICE_SW_TUN_VXLAN_GPE) &&
-				     lkups[i].type == ICE_MAC_IL) {
-					pkt_off +=
-						sizeof(struct ice_udp_tnl_hdr);
-				}
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.eth_hdr.ethtype_id,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
+			len = sizeof(struct ice_ether_hdr);
 			break;
 		case ICE_IPV4_OFOS:
-			hdr_size = sizeof(struct ice_ipv4_hdr);
-			if (lkups[i].h_u.ipv4_hdr.dst_addr) {
-				pkt_off = ICE_IP_HDR_OFFSET +
-					   offsetof(struct ice_ipv4_hdr,
-						    dst_addr);
-				field_off = offsetof(struct ice_ipv4_hdr,
-						     dst_addr);
-				len = hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.ipv4_hdr.dst_addr,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
-			if (lkups[i].h_u.ipv4_hdr.src_addr) {
-				pkt_off = ICE_IP_HDR_OFFSET +
-					   offsetof(struct ice_ipv4_hdr,
-						    src_addr);
-				field_off = offsetof(struct ice_ipv4_hdr,
-						     src_addr);
-				len = hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.ipv4_hdr.src_addr,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
-			break;
 		case ICE_IPV4_IL:
+			len = sizeof(struct ice_ipv4_hdr);
 			break;
 		case ICE_TCP_IL:
+		case ICE_UDP_OF:
 		case ICE_UDP_ILOS:
+			len = sizeof(struct ice_l4_hdr);
+			break;
 		case ICE_SCTP_IL:
-			hdr_size = sizeof(struct ice_udp_tnl_hdr);
-			if (lkups[i].h_u.l4_hdr.dst_port) {
-				pkt_off = ICE_L4_HDR_OFFSET +
-					   offsetof(struct ice_l4_hdr,
-						    dst_port);
-				field_off = offsetof(struct ice_l4_hdr,
-						     dst_port);
-				len =  hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.l4_hdr.dst_port,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
-			if (lkups[i].h_u.l4_hdr.src_port) {
-				pkt_off = ICE_L4_HDR_OFFSET +
-					offsetof(struct ice_l4_hdr, src_port);
-				field_off = offsetof(struct ice_l4_hdr,
-						     src_port);
-				len =  hdr_size - field_off;
-				ice_memcpy(&pkt[pkt_off],
-					   &lkups[i].h_u.l4_hdr.src_port,
-					   len, ICE_NONDMA_TO_NONDMA);
-			}
+			len = sizeof(struct ice_sctp_hdr);
 			break;
 		case ICE_VXLAN:
 		case ICE_GENEVE:
 		case ICE_VXLAN_GPE:
-			pkt_off = ICE_UDP_TUN_HDR_OFFSET +
-				   offsetof(struct ice_udp_tnl_hdr, vni);
-			field_off = offsetof(struct ice_udp_tnl_hdr, vni);
-			len =  sizeof(struct ice_udp_tnl_hdr) - field_off;
-			ice_memcpy(&pkt[pkt_off], &lkups[i].h_u.tnl_hdr.vni,
-				   len, ICE_NONDMA_TO_NONDMA);
+			len = sizeof(struct ice_udp_tnl_hdr);
 			break;
 		default:
-			break;
+			return ICE_ERR_PARAM;
 		}
+
+		/* the length should be a word multiple */
+		if (len % ICE_BYTES_PER_WORD)
+			return ICE_ERR_CFG;
+
+		/* We have the offset to the header start, the length, the
+		 * caller's header values and mask. Use this information to
+		 * copy the data into the dummy packet appropriately based on
+		 * the mask. Note that we need to only write the bits as
+		 * indicated by the mask to make sure we don't improperly write
+		 * over any significant packet data.
+		 */
+		for (j = 0; j < len / sizeof(u16); j++)
+			if (((u16 *)&lkups[i].m_u)[j])
+				((u16 *)(pkt + offset))[j] =
+					(((u16 *)(pkt + offset))[j] &
+					 ~((u16 *)&lkups[i].m_u)[j]) |
+					(((u16 *)&lkups[i].h_u)[j] &
+					 ((u16 *)&lkups[i].m_u)[j]);
 	}
+
 	s_rule->pdata.lkup_tx_rx.hdr_len = CPU_TO_LE16(pkt_len);
+
+	return ICE_SUCCESS;
 }
 
 /**
@@ -5438,7 +5456,7 @@ ice_adv_add_update_vsi_list(struct ice_hw *hw,
 }
 
 /**
- * ice_add_adv_rule - create an advanced switch rule
+ * ice_add_adv_rule - helper function to create an advanced switch rule
  * @hw: pointer to the hardware structure
  * @lkups: information on the words that needs to be looked up. All words
  * together makes one recipe
@@ -5462,11 +5480,13 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 {
 	struct ice_adv_fltr_mgmt_list_entry *m_entry, *adv_fltr = NULL;
 	u16 rid = 0, i, pkt_len, rule_buf_sz, vsi_handle;
-	struct ice_aqc_sw_rules_elem *s_rule;
+	const struct ice_dummy_pkt_offsets *pkt_offsets;
+	struct ice_aqc_sw_rules_elem *s_rule = NULL;
 	struct LIST_HEAD_TYPE *rule_head;
 	struct ice_switch_info *sw;
 	enum ice_status status;
 	const u8 *pkt = NULL;
+	bool found = false;
 	u32 act = 0;
 
 	if (!lkups_cnt)
@@ -5475,13 +5495,25 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	for (i = 0; i < lkups_cnt; i++) {
 		u16 j, *ptr;
 
-		/* Validate match masks to make sure they match complete 16-bit
-		 * words.
+		/* Validate match masks to make sure that there is something
+		 * to match.
 		 */
-		ptr = (u16 *)&lkups->m_u;
+		ptr = (u16 *)&lkups[i].m_u;
 		for (j = 0; j < sizeof(lkups->m_u) / sizeof(u16); j++)
-			if (ptr[j] != 0 && ptr[j] != 0xffff)
-				return ICE_ERR_PARAM;
+			if (ptr[j] != 0) {
+				found = true;
+				break;
+			}
+	}
+	if (!found)
+		return ICE_ERR_PARAM;
+
+	/* make sure that we can locate a dummy packet */
+	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt, &pkt_len,
+			      &pkt_offsets);
+	if (!pkt) {
+		status = ICE_ERR_PARAM;
+		goto err_ice_add_adv_rule;
 	}
 
 	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
@@ -5522,8 +5554,6 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		}
 		return status;
 	}
-	ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
-			      &pkt_len);
 	rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
 	s_rule = (struct ice_aqc_sw_rules_elem *)ice_malloc(hw, rule_buf_sz);
 	if (!s_rule)
@@ -5568,8 +5598,8 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	s_rule->pdata.lkup_tx_rx.recipe_id = CPU_TO_LE16(rid);
 	s_rule->pdata.lkup_tx_rx.act = CPU_TO_LE32(act);
 
-	ice_fill_adv_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, s_rule,
-				  pkt, pkt_len);
+	ice_fill_adv_dummy_packet(lkups, lkups_cnt, s_rule, pkt, pkt_len,
+				  pkt_offsets);
 
 	status = ice_aq_sw_rules(hw, (struct ice_aqc_sw_rules *)s_rule,
 				 rule_buf_sz, 1, ice_aqc_opc_add_sw_rules,
@@ -5745,11 +5775,12 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		 u16 lkups_cnt, struct ice_adv_rule_info *rinfo)
 {
 	struct ice_adv_fltr_mgmt_list_entry *list_elem;
+	const struct ice_dummy_pkt_offsets *offsets;
 	struct ice_prot_lkup_ext lkup_exts;
 	u16 rule_buf_sz, pkt_len, i, rid;
+	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
 	enum ice_status status = ICE_SUCCESS;
 	bool remove_rule = false;
-	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
 	const u8 *pkt = NULL;
 	u16 vsi_handle;
 
@@ -5797,7 +5828,7 @@ ice_rem_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		struct ice_aqc_sw_rules_elem *s_rule;
 
 		ice_find_dummy_packet(lkups, lkups_cnt, rinfo->tun_type, &pkt,
-				      &pkt_len);
+				      &pkt_len, &offsets);
 		rule_buf_sz = ICE_SW_RULE_RX_TX_NO_HDR_SIZE + pkt_len;
 		s_rule =
 			(struct ice_aqc_sw_rules_elem *)ice_malloc(hw,
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index b788aa7ec..4c34bc2ea 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -192,6 +192,7 @@ struct ice_sw_recipe {
 	 * recipe
 	 */
 	struct ice_fv_word ext_words[ICE_MAX_CHAIN_WORDS];
+	u16 word_masks[ICE_MAX_CHAIN_WORDS];
 
 	/* if this recipe is a collection of other recipe */
 	u8 big_recp;
-- 
2.17.1


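The masked-write loop this patch adds to ice_fill_adv_dummy_packet merges the caller's header values into the dummy packet one 16-bit word at a time, touching only the bits the mask selects. A standalone sketch of that bit-merge idiom (the function name here is illustrative, not part of the driver):

```c
#include <stdint.h>

/* Merge into dst only the bits of val selected by mask; bits outside the
 * mask keep dst's original contents. This is the per-word operation the
 * dummy-packet fill loop performs so it never overwrites significant
 * packet data. */
static inline uint16_t masked_merge16(uint16_t dst, uint16_t val,
				      uint16_t mask)
{
	return (uint16_t)((dst & ~mask) | (val & mask));
}
```

With a half-word mask, only the low byte of the destination word is replaced, which is exactly why partial masks (not just 0xffff) can now be honored.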
^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 55/69] net/ice/base: allow forward to Q groups in switch rule
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (53 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 54/69] net/ice/base: enable additional switch rules Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 56/69] net/ice/base: changes for reducing ice add adv rule time Leyi Rong
                       ` (14 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Enable forward to Q group action in ice_add_adv_rule.

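The ICE_FWD_TO_QGRP case packs log2 of the queue-group size next to the base queue index in the 32-bit action word. A minimal sketch of that encoding, using placeholder shift/mask values rather than the real ICE_SINGLE_ACT_* definitions:

```c
#include <stdint.h>

/* Illustrative stand-ins for the ICE_SINGLE_ACT_* fields; the real
 * bit positions live in the driver headers. */
#define ACT_TO_Q	0x1u
#define ACT_Q_INDEX_S	4
#define ACT_Q_INDEX_M	(0x7FFu << ACT_Q_INDEX_S)
#define ACT_Q_REGION_S	15
#define ACT_Q_REGION_M	(0x7u << ACT_Q_REGION_S)

/* Integer log2, the role ice_ilog2 plays in the patch. */
static inline uint8_t ilog2_u16(uint16_t v)
{
	uint8_t r = 0;

	while (v >>= 1)
		r++;
	return r;
}

/* Build a forward-to-queue-group action: base queue index plus the
 * region size expressed as log2 of the (power-of-two) group size. */
static inline uint32_t fwd_to_qgrp_act(uint16_t q_id, uint16_t qgrp_size)
{
	uint8_t q_rgn = qgrp_size > 0 ? ilog2_u16(qgrp_size) : 0;
	uint32_t act = ACT_TO_Q;

	act |= ((uint32_t)q_id << ACT_Q_INDEX_S) & ACT_Q_INDEX_M;
	act |= ((uint32_t)q_rgn << ACT_Q_REGION_S) & ACT_Q_REGION_M;
	return act;
}
```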
Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 0dae1b609..9f47ae96b 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -5488,6 +5488,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	const u8 *pkt = NULL;
 	bool found = false;
 	u32 act = 0;
+	u8 q_rgn;
 
 	if (!lkups_cnt)
 		return ICE_ERR_PARAM;
@@ -5518,6 +5519,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 
 	if (!(rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI ||
 	      rinfo->sw_act.fltr_act == ICE_FWD_TO_Q ||
+	      rinfo->sw_act.fltr_act == ICE_FWD_TO_QGRP ||
 	      rinfo->sw_act.fltr_act == ICE_DROP_PACKET))
 		return ICE_ERR_CFG;
 
@@ -5570,6 +5572,15 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 		act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
 		       ICE_SINGLE_ACT_Q_INDEX_M;
 		break;
+	case ICE_FWD_TO_QGRP:
+		q_rgn = rinfo->sw_act.qgrp_size > 0 ?
+			(u8)ice_ilog2(rinfo->sw_act.qgrp_size) : 0;
+		act |= ICE_SINGLE_ACT_TO_Q;
+		act |= (rinfo->sw_act.fwd_id.q_id << ICE_SINGLE_ACT_Q_INDEX_S) &
+		       ICE_SINGLE_ACT_Q_INDEX_M;
+		act |= (q_rgn << ICE_SINGLE_ACT_Q_REGION_S) &
+		       ICE_SINGLE_ACT_Q_REGION_M;
+		break;
 	case ICE_DROP_PACKET:
 		act |= ICE_SINGLE_ACT_VSI_FORWARDING | ICE_SINGLE_ACT_DROP |
 		       ICE_SINGLE_ACT_VALID_BIT;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 56/69] net/ice/base: changes for reducing ice add adv rule time
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (54 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 55/69] net/ice/base: allow forward to Q groups in switch rule Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 57/69] net/ice/base: deduce TSA value in the CEE mode Leyi Rong
                       ` (13 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Shivanshu Shukla, Paul M Stillwell Jr

ice_find_recp called ice_get_recp_to_prof_map on every invocation.
ice_get_recp_to_prof_map is a very expensive operation, so reduce the
number of times it is called by moving it into ice_get_recp_frm_fw: a
fresh recipe-to-profile mapping is only needed when checking FW to see
whether the recipe being added already exists there.

Signed-off-by: Shivanshu Shukla <shivanshu.shukla@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 9f47ae96b..fe4d344a4 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -168,6 +168,8 @@ static ice_declare_bitmap(recipe_to_profile[ICE_MAX_NUM_RECIPES],
 			  ICE_MAX_NUM_PROFILES);
 static ice_declare_bitmap(available_result_ids, ICE_CHAIN_FV_INDEX_START + 1);
 
+static void ice_get_recp_to_prof_map(struct ice_hw *hw);
+
 /**
  * ice_get_recp_frm_fw - update SW bookkeeping from FW recipe entries
  * @hw: pointer to hardware structure
@@ -189,6 +191,10 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
 	struct ice_prot_lkup_ext *lkup_exts;
 	enum ice_status status;
 
+	/* Get recipe to profile map so that we can get the fv from
+	 * lkups that we read for a recipe from FW.
+	 */
+	ice_get_recp_to_prof_map(hw);
 	/* we need a buffer big enough to accommodate all the recipes */
 	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
 		ICE_MAX_NUM_RECIPES, sizeof(*tmp));
@@ -4355,7 +4361,6 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
 	struct ice_sw_recipe *recp;
 	u16 i;
 
-	ice_get_recp_to_prof_map(hw);
 	/* Initialize available_result_ids which tracks available result idx */
 	for (i = 0; i <= ICE_CHAIN_FV_INDEX_START; i++)
 		ice_set_bit(ICE_CHAIN_FV_INDEX_START - i,
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 57/69] net/ice/base: deduce TSA value in the CEE mode
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (55 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 56/69] net/ice/base: changes for reducing ice add adv rule time Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 58/69] net/ice/base: rework API for ice zero bitmap Leyi Rong
                       ` (12 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Chinh T Cao, Paul M Stillwell Jr

In CEE mode, the TSA information can be derived from the reported
priority value.

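The deduction added in the hunk below maps a CEE strict priority group ID to the IEEE strict-priority TSA and everything else to ETS. A minimal sketch, with illustrative stand-ins for ICE_CEE_PGID_STRICT and the ICE_IEEE_TSA_* codes:

```c
#include <stdint.h>

/* Illustrative values; the real constants are defined in the driver's
 * DCB headers (IEEE 802.1Qaz uses 0 for strict priority, 2 for ETS). */
#define CEE_PGID_STRICT	15
#define TSA_STRICT	0
#define TSA_ETS		2

/* Derive the transmission selection algorithm from a CEE priority
 * group ID, as the per-TC loop in the patch does. */
static inline uint8_t tsa_from_cee_pgid(uint8_t pgid)
{
	return pgid == CEE_PGID_STRICT ? TSA_STRICT : TSA_ETS;
}
```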
Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_dcb.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index 008c7a110..a6fbedd18 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -445,9 +445,15 @@ ice_parse_cee_pgcfg_tlv(struct ice_cee_feat_tlv *tlv,
 	 *        |pg0|pg1|pg2|pg3|pg4|pg5|pg6|pg7|
 	 *        ---------------------------------
 	 */
-	ice_for_each_traffic_class(i)
+	ice_for_each_traffic_class(i) {
 		etscfg->tcbwtable[i] = buf[offset++];
 
+		if (etscfg->prio_table[i] == ICE_CEE_PGID_STRICT)
+			dcbcfg->etscfg.tsatable[i] = ICE_IEEE_TSA_STRICT;
+		else
+			dcbcfg->etscfg.tsatable[i] = ICE_IEEE_TSA_ETS;
+	}
+
 	/* Number of TCs supported (1 octet) */
 	etscfg->maxtcs = buf[offset];
 }
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 58/69] net/ice/base: rework API for ice zero bitmap
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (56 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 57/69] net/ice/base: deduce TSA value in the CEE mode Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 59/69] net/ice/base: rework API for ice cp bitmap Leyi Rong
                       ` (11 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Fix ice_zero_bitmap to zero the entire storage.

Fixes: c9e37832c95f ("net/ice/base: rework on bit ops")

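The fix below replaces the chunk-by-chunk loop with a single memset over the whole declared storage. A minimal sketch of the reworked behavior, assuming bitmaps are always allocated in whole chunks as ice_declare_bitmap does:

```c
#include <stdint.h>
#include <string.h>

typedef uint32_t bitmap_t;
#define BITS_PER_CHUNK		(sizeof(bitmap_t) * 8)
#define BITS_TO_CHUNKS(n)	(((n) + BITS_PER_CHUNK - 1) / BITS_PER_CHUNK)

/* Zero the entire storage, including bits in the last chunk beyond
 * `size`. Safe because the bitmap is declared in whole chunks, and it
 * avoids the old code's bug of leaving parts of the storage untouched. */
static void zero_bitmap(bitmap_t *bmp, uint16_t size)
{
	memset(bmp, 0, BITS_TO_CHUNKS(size) * sizeof(bitmap_t));
}
```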
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_bitops.h | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ice/base/ice_bitops.h b/drivers/net/ice/base/ice_bitops.h
index f52713021..aca5529a6 100644
--- a/drivers/net/ice/base/ice_bitops.h
+++ b/drivers/net/ice/base/ice_bitops.h
@@ -147,22 +147,15 @@ ice_test_and_set_bit(u16 nr, ice_bitmap_t *bitmap)
  * @bmp: bitmap to set zeros
  * @size: Size of the bitmaps in bits
  *
- * This function sets bits of a bitmap to zero.
+ * Set all of the bits in a bitmap to zero. Note that this function assumes it
+ * operates on an ice_bitmap_t which was declared using ice_declare_bitmap. It
+ * will zero every bit in the last chunk, even if those bits are beyond the
+ * size.
  */
 static inline void ice_zero_bitmap(ice_bitmap_t *bmp, u16 size)
 {
-	ice_bitmap_t mask;
-	u16 i;
-
-	/* Handle all but last chunk*/
-	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
-		bmp[i] = 0;
-	/* For the last chunk, we want to take care of not to modify bits
-	 * outside the size boundary. ~mask take care of all the bits outside
-	 * the boundary.
-	 */
-	mask = LAST_CHUNK_MASK(size);
-	bmp[i] &= ~mask;
+	ice_memset(bmp, 0, BITS_TO_CHUNKS(size) * sizeof(ice_bitmap_t),
+		   ICE_NONDMA_MEM);
 }
 
 /**
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 59/69] net/ice/base: rework API for ice cp bitmap
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (57 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 58/69] net/ice/base: rework API for ice zero bitmap Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 60/69] net/ice/base: use ice zero bitmap instead of ice memset Leyi Rong
                       ` (10 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

Fix ice_cp_bitmap to copy the entire storage.

Fixes: c9e37832c95f ("net/ice/base: rework on bit ops")

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_bitops.h | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/base/ice_bitops.h b/drivers/net/ice/base/ice_bitops.h
index aca5529a6..c74407d9d 100644
--- a/drivers/net/ice/base/ice_bitops.h
+++ b/drivers/net/ice/base/ice_bitops.h
@@ -306,21 +306,14 @@ static inline bool ice_is_any_bit_set(ice_bitmap_t *bitmap, u16 size)
  * @src: bitmap to copy from
  * @size: Size of the bitmaps in bits
  *
- * This function copy bitmap from src to dst.
+ * This function copies the bitmap from src to dst. Note that this function
+ * assumes it is operating on a bitmap declared using ice_declare_bitmap. It
+ * will copy the entire last chunk even if it contains bits beyond the size.
  */
 static inline void ice_cp_bitmap(ice_bitmap_t *dst, ice_bitmap_t *src, u16 size)
 {
-	ice_bitmap_t mask;
-	u16 i;
-
-	/* Handle all but last chunk*/
-	for (i = 0; i < BITS_TO_CHUNKS(size) - 1; i++)
-		dst[i] = src[i];
-
-	/* We want to only copy bits within the size.*/
-	mask = LAST_CHUNK_MASK(size);
-	dst[i] &= ~mask;
-	dst[i] |= src[i] & mask;
+	ice_memcpy(dst, src, BITS_TO_CHUNKS(size) * sizeof(ice_bitmap_t),
+		   ICE_NONDMA_TO_NONDMA);
 }
 
 /**
-- 
2.17.1


^ permalink raw reply	[flat|nested] 225+ messages in thread

* [dpdk-dev] [PATCH v3 60/69] net/ice/base: use ice zero bitmap instead of ice memset
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (58 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 59/69] net/ice/base: rework API for ice cp bitmap Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 61/69] net/ice/base: use the specified size for ice zero bitmap Leyi Rong
                       ` (9 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

A few places in the code used ice_memset to initialize a bitmap to
zeros. Use ice_zero_bitmap instead.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 0582c0ecf..5f14d0488 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -3712,7 +3712,7 @@ ice_update_fd_swap(struct ice_hw *hw, u16 prof_id, struct ice_fv_word *es)
 	u32 mask_sel = 0;
 	u8 i, j, k;
 
-	ice_memset(pair_list, 0, sizeof(pair_list), ICE_NONDMA_MEM);
+	ice_zero_bitmap(pair_list, ICE_FD_SRC_DST_PAIR_COUNT);
 
 	ice_init_fd_mask_regs(hw);
 
@@ -4488,7 +4488,7 @@ ice_adj_prof_priorities(struct ice_hw *hw, enum ice_block blk, u16 vsig,
 	enum ice_status status;
 	u16 idx;
 
-	ice_memset(ptgs_used, 0, sizeof(ptgs_used), ICE_NONDMA_MEM);
+	ice_zero_bitmap(ptgs_used, ICE_XLT1_CNT);
 	idx = vsig & ICE_VSIG_IDX_M;
 
 	/* Priority is based on the order in which the profiles are added. The
-- 
2.17.1



* [dpdk-dev] [PATCH v3 61/69] net/ice/base: use the specified size for ice zero bitmap
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (59 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 60/69] net/ice/base: use ice zero bitmap instead of ice memset Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 62/69] net/ice/base: correct NVGRE header structure Leyi Rong
                       ` (8 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Jacob Keller, Paul M Stillwell Jr

A couple of places in the code use a 'sizeof(bitmap) * BITS_PER_BYTE'
construction to calculate the size of the bitmap when calling
ice_zero_bitmap. Instead of doing this, just use the same value as in
the ice_declare_bitmap declaration.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 0f4153146..2d80af731 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -5259,8 +5259,7 @@ void ice_sched_replay_agg(struct ice_hw *hw)
 					   ICE_MAX_TRAFFIC_CLASS);
 			enum ice_status status;
 
-			ice_zero_bitmap(replay_bitmap,
-					sizeof(replay_bitmap) * BITS_PER_BYTE);
+			ice_zero_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
 			ice_sched_get_ena_tc_bitmap(pi,
 						    agg_info->replay_tc_bitmap,
 						    replay_bitmap);
@@ -5396,7 +5395,7 @@ ice_sched_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle)
 	struct ice_sched_agg_info *agg_info;
 	enum ice_status status;
 
-	ice_zero_bitmap(replay_bitmap, sizeof(replay_bitmap) * BITS_PER_BYTE);
+	ice_zero_bitmap(replay_bitmap, ICE_MAX_TRAFFIC_CLASS);
 	if (!ice_is_vsi_valid(hw, vsi_handle))
 		return ICE_ERR_PARAM;
 	agg_info = ice_get_vsi_agg_info(hw, vsi_handle);
-- 
2.17.1



* [dpdk-dev] [PATCH v3 62/69] net/ice/base: correct NVGRE header structure
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (60 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 61/69] net/ice/base: use the specified size for ice zero bitmap Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 63/69] net/ice/base: reduce calls to get profile associations Leyi Rong
                       ` (7 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

Correct NVGRE header structure and its field offsets.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_protocol_type.h | 5 +++--
 drivers/net/ice/base/ice_switch.c        | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h
index 38bed7a79..033efdb5a 100644
--- a/drivers/net/ice/base/ice_protocol_type.h
+++ b/drivers/net/ice/base/ice_protocol_type.h
@@ -194,8 +194,9 @@ struct ice_udp_tnl_hdr {
 };
 
 struct ice_nvgre {
-	u16 tni;
-	u16 flow_id;
+	u16 flags;
+	u16 protocol;
+	u32 tni_flow;
 };
 
 union ice_prot_hdr {
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index fe4d344a4..636b43d69 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -4310,7 +4310,7 @@ static const struct ice_prot_ext_tbl_entry ice_prot_ext[] = {
 	{ ICE_VXLAN,		{ 8, 10, 12, 14 } },
 	{ ICE_GENEVE,		{ 8, 10, 12, 14 } },
 	{ ICE_VXLAN_GPE,	{ 0, 2, 4 } },
-	{ ICE_NVGRE,		{ 0, 2 } },
+	{ ICE_NVGRE,		{ 0, 2, 4, 6 } },
 	{ ICE_PROTOCOL_LAST,	{ 0 } }
 };
 
-- 
2.17.1



* [dpdk-dev] [PATCH v3 63/69] net/ice/base: reduce calls to get profile associations
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (61 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 62/69] net/ice/base: correct NVGRE header structure Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 64/69] net/ice/base: fix for chained recipe switch ID index Leyi Rong
                       ` (6 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Shivanshu Shukla, Paul M Stillwell Jr

Add a refresh_required flag to determine whether the recipe-to-profile
mapping cache needs to be updated. This reduces the number of calls
made to refresh the profile map.

Signed-off-by: Shivanshu Shukla <shivanshu.shukla@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 636b43d69..7f4edd274 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -175,13 +175,15 @@ static void ice_get_recp_to_prof_map(struct ice_hw *hw);
  * @hw: pointer to hardware structure
  * @recps: struct that we need to populate
  * @rid: recipe ID that we are populating
+ * @refresh_required: true if we should get recipe to profile mapping from FW
  *
  * This function is used to populate all the necessary entries into our
  * bookkeeping so that we have a current list of all the recipes that are
  * programmed in the firmware.
  */
 static enum ice_status
-ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
+ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid,
+		    bool *refresh_required)
 {
 	u16 i, sub_recps, fv_word_idx = 0, result_idx = 0;
 	ice_declare_bitmap(r_bitmap, ICE_MAX_NUM_PROFILES);
@@ -191,10 +193,6 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
 	struct ice_prot_lkup_ext *lkup_exts;
 	enum ice_status status;
 
-	/* Get recipe to profile map so that we can get the fv from
-	 * lkups that we read for a recipe from FW.
-	 */
-	ice_get_recp_to_prof_map(hw);
 	/* we need a buffer big enough to accommodate all the recipes */
 	tmp = (struct ice_aqc_recipe_data_elem *)ice_calloc(hw,
 		ICE_MAX_NUM_RECIPES, sizeof(*tmp));
@@ -206,6 +204,19 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid)
 	/* non-zero status meaning recipe doesn't exist */
 	if (status)
 		goto err_unroll;
+
+	/* Get recipe to profile map so that we can get the fv from lkups that
+	 * we read for a recipe from FW. Since we want to minimize the number of
+	 * times we make this FW call, just make one call and cache the copy
+	 * until a new recipe is added. This operation is only required the
+	 * first time to get the changes from FW. Then to search existing
+	 * entries we don't need to update the cache again until another recipe
+	 * gets added.
+	 */
+	if (*refresh_required) {
+		ice_get_recp_to_prof_map(hw);
+		*refresh_required = false;
+	}
 	lkup_exts = &recps[rid].lkup_exts;
 	/* start populating all the entries for recps[rid] based on lkups from
 	 * firmware
@@ -4358,6 +4369,7 @@ static const struct ice_protocol_entry ice_prot_id_tbl[] = {
  */
 static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
 {
+	bool refresh_required = true;
 	struct ice_sw_recipe *recp;
 	u16 i;
 
@@ -4376,7 +4388,8 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts)
 		 */
 		if (!recp[i].recp_created)
 			if (ice_get_recp_frm_fw(hw,
-						hw->switch_info->recp_list, i))
+						hw->switch_info->recp_list, i,
+						&refresh_required))
 				continue;
 
 		/* if number of words we are looking for match */
-- 
2.17.1



* [dpdk-dev] [PATCH v3 64/69] net/ice/base: fix for chained recipe switch ID index
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (62 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 63/69] net/ice/base: reduce calls to get profile associations Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 65/69] net/ice/base: update driver unloading field Leyi Rong
                       ` (5 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

For a chained recipe entry, the field vector index used to match the
switch ID should always be zero.

Add the ICE_AQ_SW_ID_LKUP_IDX define to indicate the FV index used
to extract the switch ID.

Fixes: dca90ed479cf ("net/ice/base: programming a new switch recipe")

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h |  1 +
 drivers/net/ice/base/ice_switch.c     | 11 ++++++-----
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 1fdd612a1..8b8ed7a73 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -711,6 +711,7 @@ struct ice_aqc_recipe_content {
 #define ICE_AQ_RECIPE_ID_S		0
 #define ICE_AQ_RECIPE_ID_M		(0x3F << ICE_AQ_RECIPE_ID_S)
 #define ICE_AQ_RECIPE_ID_IS_ROOT	BIT(7)
+#define ICE_AQ_SW_ID_LKUP_IDX		0
 	u8 lkup_indx[5];
 #define ICE_AQ_RECIPE_LKUP_DATA_S	0
 #define ICE_AQ_RECIPE_LKUP_DATA_M	(0x3F << ICE_AQ_RECIPE_LKUP_DATA_S)
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 7f4edd274..660d491ed 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -4737,8 +4737,8 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
 			   sizeof(buf[recps].content.lkup_indx),
 			   ICE_NONDMA_MEM);
 
-		/* All recipes use look-up field index 0 to match switch ID. */
-		buf[recps].content.lkup_indx[0] = 0;
+		/* All recipes use look-up index 0 to match switch ID. */
+		buf[recps].content.lkup_indx[0] = ICE_AQ_SW_ID_LKUP_IDX;
 		buf[recps].content.mask[0] =
 			CPU_TO_LE16(ICE_AQ_SW_ID_LKUP_MASK);
 		/* Setup lkup_indx 1..4 to INVALID/ignore and set the mask
@@ -4804,7 +4804,7 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
 		buf[0].content.act_ctrl_fwd_priority = rm->priority;
 	} else {
 		struct ice_recp_grp_entry *last_chain_entry;
-		u16 rid, i = 0;
+		u16 rid, i;
 
 		/* Allocate the last recipe that will chain the outcomes of the
 		 * other recipes together
@@ -4829,8 +4829,9 @@ ice_add_sw_recipe(struct ice_hw *hw, struct ice_sw_recipe *rm,
 		ice_memset(&buf[recps].content.lkup_indx, 0,
 			   sizeof(buf[recps].content.lkup_indx),
 			   ICE_NONDMA_MEM);
-		buf[recps].content.lkup_indx[i] = hw->port_info->sw_id;
-		buf[recps].content.mask[i] =
+		/* All recipes use look-up index 0 to match switch ID. */
+		buf[recps].content.lkup_indx[0] = ICE_AQ_SW_ID_LKUP_IDX;
+		buf[recps].content.mask[0] =
 			CPU_TO_LE16(ICE_AQ_SW_ID_LKUP_MASK);
 		for (i = 1; i <= ICE_NUM_WORDS_RECIPE; i++) {
 			buf[recps].content.lkup_indx[i] =
-- 
2.17.1



* [dpdk-dev] [PATCH v3 65/69] net/ice/base: update driver unloading field
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (63 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 64/69] net/ice/base: fix for chained recipe switch ID index Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 66/69] net/ice/base: fix for UDP and TCP related switch rules Leyi Rong
                       ` (4 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Bruce Allan, Paul M Stillwell Jr

According to recent specification versions, the field in the Queue
Shutdown AdminQ command carrying the "driver unloading" indication is
not a 4-byte field (it is byte.bit 16.0). Change it to a byte and
remove the now-unnecessary endian conversion.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 4 ++--
 drivers/net/ice/base/ice_common.c     | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 8b8ed7a73..7afdb6578 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -52,9 +52,9 @@ struct ice_aqc_driver_ver {
 
 /* Queue Shutdown (direct 0x0003) */
 struct ice_aqc_q_shutdown {
-	__le32 driver_unloading;
+	u8 driver_unloading;
 #define ICE_AQC_DRIVER_UNLOADING	BIT(0)
-	u8 reserved[12];
+	u8 reserved[15];
 };
 
 
diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 199430e28..58c108b68 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1581,7 +1581,7 @@ enum ice_status ice_aq_q_shutdown(struct ice_hw *hw, bool unloading)
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_q_shutdown);
 
 	if (unloading)
-		cmd->driver_unloading = CPU_TO_LE32(ICE_AQC_DRIVER_UNLOADING);
+		cmd->driver_unloading = ICE_AQC_DRIVER_UNLOADING;
 
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
 }
-- 
2.17.1



* [dpdk-dev] [PATCH v3 66/69] net/ice/base: fix for UDP and TCP related switch rules
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (64 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 65/69] net/ice/base: update driver unloading field Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 67/69] net/ice/base: changes in flow and profile removal Leyi Rong
                       ` (3 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Dan Nowlin, Paul M Stillwell Jr

This patch corrects some errors in UDP and TCP switch rule
programming by adding additional dummy packets.

Fixes: 5e81d85ff066 ("net/ice/base: enable additional switch rules")

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 156 +++++++++++++++++++++++-------
 1 file changed, 123 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 660d491ed..2e1dbfe42 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -89,7 +89,7 @@ u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
 			};
 
 static const
-struct ice_dummy_pkt_offsets dummy_udp_tun_packet_offsets[] = {
+struct ice_dummy_pkt_offsets dummy_udp_tun_tcp_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
 	{ ICE_IPV4_OFOS,	14 },
 	{ ICE_UDP_OF,		34 },
@@ -101,7 +101,7 @@ struct ice_dummy_pkt_offsets dummy_udp_tun_packet_offsets[] = {
 };
 
 static const
-u8 dummy_udp_tun_packet[] = {
+u8 dummy_udp_tun_tcp_packet[] = {
 	0x00, 0x00, 0x00, 0x00,  /* ICE_MAC_OFOS 0 */
 	0x00, 0x00, 0x00, 0x00,
 	0x00, 0x00, 0x00, 0x00,
@@ -138,7 +138,80 @@ u8 dummy_udp_tun_packet[] = {
 };
 
 static const
-struct ice_dummy_pkt_offsets dummy_tcp_tun_packet_offsets[] = {
+struct ice_dummy_pkt_offsets dummy_udp_tun_udp_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_UDP_OF,		34 },
+	{ ICE_VXLAN,		42 },
+	{ ICE_MAC_IL,		50 },
+	{ ICE_IPV4_IL,		64 },
+	{ ICE_UDP_ILOS,		84 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
+static const
+u8 dummy_udp_tun_udp_packet[] = {
+	0x00, 0x00, 0x00, 0x00,  /* ICE_MAC_OFOS 0 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x08, 0x00,
+
+	0x45, 0x00, 0x00, 0x4e, /* ICE_IPV4_OFOS 14 */
+	0x00, 0x01, 0x00, 0x00,
+	0x00, 0x11, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x12, 0xb5, /* ICE_UDP_OF 34 */
+	0x00, 0x3a, 0x00, 0x00,
+
+	0x0c, 0x00, 0x00, 0x03, /* ICE_VXLAN 42 */
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_IL 50 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x08, 0x00,
+
+	0x45, 0x00, 0x00, 0x1c, /* ICE_IPV4_IL 64 */
+	0x00, 0x01, 0x00, 0x00,
+	0x00, 0x11, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x00, 0x00, /* ICE_UDP_ILOS 84 */
+	0x00, 0x08, 0x00, 0x00,
+};
+
+static const
+struct ice_dummy_pkt_offsets dummy_udp_packet_offsets[] = {
+	{ ICE_MAC_OFOS,		0 },
+	{ ICE_IPV4_OFOS,	14 },
+	{ ICE_UDP_ILOS,		34 },
+	{ ICE_PROTOCOL_LAST,	0 },
+};
+
+static const u8
+dummy_udp_packet[] = {
+	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x08, 0x00,
+
+	0x45, 0x00, 0x00, 0x1c, /* ICE_IPV4_OFOS 14 */
+	0x00, 0x01, 0x00, 0x00,
+	0x00, 0x11, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x00, 0x00, /* ICE_UDP_ILOS 34 */
+	0x00, 0x08, 0x00, 0x00,
+
+	0x00, 0x00,	/* 2 bytes for 4 byte alignment */
+};
+
+static const
+struct ice_dummy_pkt_offsets dummy_tcp_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
 	{ ICE_IPV4_OFOS,	14 },
 	{ ICE_TCP_IL,		34 },
@@ -146,22 +219,26 @@ struct ice_dummy_pkt_offsets dummy_tcp_tun_packet_offsets[] = {
 };
 
 static const u8
-dummy_tcp_tun_packet[] = {0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
-			  0, 0, 0, 0,
-			  0, 0, 0, 0,
-			  0x08, 0,
-			  0x45, 0, 0, 0x28,     /* ICE_IPV4_OFOS 14 */
-			  0, 0x01, 0, 0,
-			  0x40, 0x06, 0xF5, 0x69,
-			  0, 0, 0, 0,
-			  0, 0, 0, 0,
-			  0, 0, 0, 0,		/* ICE_TCP_IL 34 */
-			  0, 0, 0, 0,
-			  0, 0, 0, 0,
-			  0x50, 0x02, 0x20,
-			  0, 0x9, 0x79, 0, 0,
-			  0, 0 /* 2 bytes padding for 4 byte alignment*/
-			};
+dummy_tcp_packet[] = {
+	0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x08, 0x00,
+
+	0x45, 0x00, 0x00, 0x28, /* ICE_IPV4_OFOS 14 */
+	0x00, 0x01, 0x00, 0x00,
+	0x00, 0x06, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00, 0x00, 0x00, /* ICE_TCP_IL 34 */
+	0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+	0x50, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00,
+
+	0x00, 0x00,	/* 2 bytes for 4 byte alignment */
+};
 
 /* this is a recipe to profile bitmap association */
 static ice_declare_bitmap(recipe_to_profile[ICE_MAX_NUM_RECIPES],
@@ -5195,8 +5272,16 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 		      u16 *pkt_len,
 		      const struct ice_dummy_pkt_offsets **offsets)
 {
+	bool tcp = false, udp = false;
 	u16 i;
 
+	for (i = 0; i < lkups_cnt; i++) {
+		if (lkups[i].type == ICE_UDP_ILOS)
+			udp = true;
+		else if (lkups[i].type == ICE_TCP_IL)
+			tcp = true;
+	}
+
 	if (tun_type == ICE_SW_TUN_NVGRE || tun_type == ICE_ALL_TUNNELS) {
 		*pkt = dummy_gre_packet;
 		*pkt_len = sizeof(dummy_gre_packet);
@@ -5205,25 +5290,30 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 	}
 
 	if (tun_type == ICE_SW_TUN_VXLAN || tun_type == ICE_SW_TUN_GENEVE ||
-	    tun_type == ICE_SW_TUN_VXLAN_GPE) {
-		*pkt = dummy_udp_tun_packet;
-		*pkt_len = sizeof(dummy_udp_tun_packet);
-		*offsets = dummy_udp_tun_packet_offsets;
+	    tun_type == ICE_SW_TUN_VXLAN_GPE || tun_type == ICE_SW_TUN_UDP) {
+		if (tcp) {
+			*pkt = dummy_udp_tun_tcp_packet;
+			*pkt_len = sizeof(dummy_udp_tun_tcp_packet);
+			*offsets = dummy_udp_tun_tcp_packet_offsets;
+			return;
+		}
+
+		*pkt = dummy_udp_tun_udp_packet;
+		*pkt_len = sizeof(dummy_udp_tun_udp_packet);
+		*offsets = dummy_udp_tun_udp_packet_offsets;
 		return;
 	}
 
-	for (i = 0; i < lkups_cnt; i++) {
-		if (lkups[i].type == ICE_UDP_ILOS) {
-			*pkt = dummy_udp_tun_packet;
-			*pkt_len = sizeof(dummy_udp_tun_packet);
-			*offsets = dummy_udp_tun_packet_offsets;
-			return;
-		}
+	if (udp) {
+		*pkt = dummy_udp_packet;
+		*pkt_len = sizeof(dummy_udp_packet);
+		*offsets = dummy_udp_packet_offsets;
+		return;
 	}
 
-	*pkt = dummy_tcp_tun_packet;
-	*pkt_len = sizeof(dummy_tcp_tun_packet);
-	*offsets = dummy_tcp_tun_packet_offsets;
+	*pkt = dummy_tcp_packet;
+	*pkt_len = sizeof(dummy_tcp_packet);
+	*offsets = dummy_tcp_packet_offsets;
 }
 
 /**
-- 
2.17.1



* [dpdk-dev] [PATCH v3 67/69] net/ice/base: changes in flow and profile removal
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (65 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 66/69] net/ice/base: fix for UDP and TCP related switch rules Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 68/69] net/ice/base: update Tx context struct Leyi Rong
                       ` (2 subsequent siblings)
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Vignesh Sridhar, Paul M Stillwell Jr

- Add a function to remove the ES profile map, as it is now used when
clearing and freeing HW tables.
- Locks were initially not taken when releasing ES profile maps and
flow profiles because the sequence is part of driver unload. Add
calls to acquire and release the locks to ensure that any calls made
by the VF VSI during VFR or unload do not result in memory
access violations.

Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_flex_pipe.c | 35 +++++++++++++++++++---------
 1 file changed, 24 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 5f14d0488..c1f23ec02 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -2988,6 +2988,26 @@ void ice_fill_blk_tbls(struct ice_hw *hw)
 	ice_init_sw_db(hw);
 }
 
+/**
+ * ice_free_prof_map - free profile map
+ * @hw: pointer to the hardware structure
+ * @blk_idx: HW block index
+ */
+static void ice_free_prof_map(struct ice_hw *hw, u8 blk_idx)
+{
+	struct ice_es *es = &hw->blk[blk_idx].es;
+	struct ice_prof_map *del, *tmp;
+
+	ice_acquire_lock(&es->prof_map_lock);
+	LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &es->prof_map,
+				 ice_prof_map, list) {
+		LIST_DEL(&del->list);
+		ice_free(hw, del);
+	}
+	INIT_LIST_HEAD(&es->prof_map);
+	ice_release_lock(&es->prof_map_lock);
+}
+
 /**
  * ice_free_flow_profs - free flow profile entries
  * @hw: pointer to the hardware structure
@@ -2997,10 +3017,7 @@ static void ice_free_flow_profs(struct ice_hw *hw, u8 blk_idx)
 {
 	struct ice_flow_prof *p, *tmp;
 
-	/* This call is being made as part of resource deallocation
-	 * during unload. Lock acquire and release will not be
-	 * necessary here.
-	 */
+	ice_acquire_lock(&hw->fl_profs_locks[blk_idx]);
 	LIST_FOR_EACH_ENTRY_SAFE(p, tmp, &hw->fl_profs[blk_idx],
 				 ice_flow_prof, l_entry) {
 		struct ice_flow_entry *e, *t;
@@ -3014,6 +3031,7 @@ static void ice_free_flow_profs(struct ice_hw *hw, u8 blk_idx)
 			ice_free(hw, p->acts);
 		ice_free(hw, p);
 	}
+	ice_release_lock(&hw->fl_profs_locks[blk_idx]);
 
 	/* if driver is in reset and tables are being cleared
 	 * re-initialize the flow profile list heads
@@ -3050,17 +3068,12 @@ void ice_free_hw_tbls(struct ice_hw *hw)
 	for (i = 0; i < ICE_BLK_COUNT; i++) {
 		if (hw->blk[i].is_list_init) {
 			struct ice_es *es = &hw->blk[i].es;
-			struct ice_prof_map *del, *tmp;
-
-			LIST_FOR_EACH_ENTRY_SAFE(del, tmp, &es->prof_map,
-						 ice_prof_map, list) {
-				LIST_DEL(&del->list);
-				ice_free(hw, del);
-			}
 
+			ice_free_prof_map(hw, i);
 			ice_destroy_lock(&es->prof_map_lock);
 			ice_free_flow_profs(hw, i);
 			ice_destroy_lock(&hw->fl_profs_locks[i]);
+
 			hw->blk[i].is_list_init = false;
 		}
 		ice_free_vsig_tbl(hw, (enum ice_block)i);
-- 
2.17.1



* [dpdk-dev] [PATCH v3 68/69] net/ice/base: update Tx context struct
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (66 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 67/69] net/ice/base: changes in flow and profile removal Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 69/69] net/ice/base: fixes for GRE Leyi Rong
  2019-06-20  1:55     ` [dpdk-dev] [PATCH v3 00/69] shared code update Zhang, Qi Z
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Ashish Shah, Paul M Stillwell Jr

Add the internal usage flag, bit 91, as described in the spec.
Also update the width of the internal queue state field to 122 bits,
as described in the spec.

Signed-off-by: Ashish Shah <ashish.n.shah@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_common.c    | 3 ++-
 drivers/net/ice/base/ice_lan_tx_rx.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 58c108b68..328ff3c31 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1267,6 +1267,7 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = {
 	ICE_CTX_STORE(ice_tlan_ctx, vmvf_type,			2,	78),
 	ICE_CTX_STORE(ice_tlan_ctx, src_vsi,			10,	80),
 	ICE_CTX_STORE(ice_tlan_ctx, tsyn_ena,			1,	90),
+	ICE_CTX_STORE(ice_tlan_ctx, internal_usage_flag,	1,	91),
 	ICE_CTX_STORE(ice_tlan_ctx, alt_vlan,			1,	92),
 	ICE_CTX_STORE(ice_tlan_ctx, cpuid,			8,	93),
 	ICE_CTX_STORE(ice_tlan_ctx, wb_mode,			1,	101),
@@ -1285,7 +1286,7 @@ const struct ice_ctx_ele ice_tlan_ctx_info[] = {
 	ICE_CTX_STORE(ice_tlan_ctx, drop_ena,			1,	165),
 	ICE_CTX_STORE(ice_tlan_ctx, cache_prof_idx,		2,	166),
 	ICE_CTX_STORE(ice_tlan_ctx, pkt_shaper_prof_idx,	3,	168),
-	ICE_CTX_STORE(ice_tlan_ctx, int_q_state,		110,	171),
+	ICE_CTX_STORE(ice_tlan_ctx, int_q_state,		122,	171),
 	{ 0 }
 };
 
diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h
index fa2309bf1..02c54e818 100644
--- a/drivers/net/ice/base/ice_lan_tx_rx.h
+++ b/drivers/net/ice/base/ice_lan_tx_rx.h
@@ -1009,6 +1009,7 @@ struct ice_tlan_ctx {
 #define ICE_TLAN_CTX_VMVF_TYPE_PF	2
 	u16 src_vsi;
 	u8 tsyn_ena;
+	u8 internal_usage_flag;
 	u8 alt_vlan;
 	u16 cpuid;		/* bigger than needed, see above for reason */
 	u8 wb_mode;
-- 
2.17.1



* [dpdk-dev] [PATCH v3 69/69] net/ice/base: fixes for GRE
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (67 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 68/69] net/ice/base: update Tx context struct Leyi Rong
@ 2019-06-19 15:18     ` Leyi Rong
  2019-06-20  1:55     ` [dpdk-dev] [PATCH v3 00/69] shared code update Zhang, Qi Z
  69 siblings, 0 replies; 225+ messages in thread
From: Leyi Rong @ 2019-06-19 15:18 UTC (permalink / raw)
  To: qi.z.zhang; +Cc: dev, Leyi Rong, Paul M Stillwell Jr

The dummy packet offsets for GRE listed ICE_VXLAN instead of
ICE_NVGRE, so change that to be correct. The dummy packet's comment
also referred to VXLAN, so change it to plain GRE since the packet
is not actually VXLAN.

Fixes: 5e81d85ff066 ("net/ice/base: enable additional switch rules")

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Leyi Rong <leyi.rong@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 2e1dbfe42..dbf4c5fb0 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -53,13 +53,16 @@ static const u8 dummy_eth_header[DUMMY_ETH_HDR_LEN] = { 0x2, 0, 0, 0, 0, 0,
 	 sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi) + \
 	 ((n) * sizeof(((struct ice_sw_rule_vsi_list *)0)->vsi)))
 
-static const struct ice_dummy_pkt_offsets {
+struct ice_dummy_pkt_offsets {
 	enum ice_protocol_type type;
 	u16 offset; /* ICE_PROTOCOL_LAST indicates end of list */
-} dummy_gre_packet_offsets[] = {
+};
+
+static const
+struct ice_dummy_pkt_offsets dummy_gre_packet_offsets[] = {
 	{ ICE_MAC_OFOS,		0 },
 	{ ICE_IPV4_OFOS,	14 },
-	{ ICE_VXLAN,		34 },
+	{ ICE_NVGRE,		34 },
 	{ ICE_MAC_IL,		42 },
 	{ ICE_IPV4_IL,		54 },
 	{ ICE_PROTOCOL_LAST,	0 },
@@ -75,7 +78,7 @@ u8 dummy_gre_packet[] = { 0, 0, 0, 0,		/* ICE_MAC_OFOS 0 */
 			  0, 0x2F, 0, 0,
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,
-			  0x80, 0, 0x65, 0x58,	/* ICE_VXLAN_GRE 34 */
+			  0x80, 0, 0x65, 0x58,	/* ICE_NVGRE 34 */
 			  0, 0, 0, 0,
 			  0, 0, 0, 0,		/* ICE_MAC_IL 42 */
 			  0, 0, 0, 0,
@@ -5380,6 +5383,9 @@ ice_fill_adv_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt,
 		case ICE_SCTP_IL:
 			len = sizeof(struct ice_sctp_hdr);
 			break;
+		case ICE_NVGRE:
+			len = sizeof(struct ice_nvgre);
+			break;
 		case ICE_VXLAN:
 		case ICE_GENEVE:
 		case ICE_VXLAN_GPE:
-- 
2.17.1



* Re: [dpdk-dev] [PATCH v3 00/69] shared code update
  2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
                       ` (68 preceding siblings ...)
  2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 69/69] net/ice/base: fixes for GRE Leyi Rong
@ 2019-06-20  1:55     ` Zhang, Qi Z
  2019-06-20 20:18       ` Ferruh Yigit
  69 siblings, 1 reply; 225+ messages in thread
From: Zhang, Qi Z @ 2019-06-20  1:55 UTC (permalink / raw)
  To: Rong, Leyi; +Cc: dev



> -----Original Message-----
> From: Rong, Leyi
> Sent: Wednesday, June 19, 2019 11:18 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>
> Subject: [PATCH v3 00/69] shared code update
> 
> Main changes:
> 1. Advanced switch rule support.
> 2. Add more APIs for tunnel management.
> 3. Add some minor features.
> 4. Code clean and bug fix.
> 

Acked-by: Qi Zhang <qi.z.zhang@intel.com>

Applied to dpdk-next-net-intel with below changes

1. Patches 66 and 69 fix patch 54, so they were merged into patch 54;
   patch 64 fixes patch 06, so it was merged into patch 06.
   In total, 66 patches were merged.
2. Fixed CamelCase titles in patches 27 and 39.

Thanks
Qi

> ---
> v3:
> - Drop some patches which are not used.
> - Split some patches which include irrelevant code.
> - Squash some patches which need to be put together.
> - Add some new patches from latest shared code release.
> 
> v2:
> - Split [03/49] into 2 commits.
> - Split [27/49] with a standalone commit for code change in ice_osdep.h.
> - Split [39/48] by kind of changes.
> - Remove [42/49].
> - Add some new patches from latest shared code release.
> 
> 
> Leyi Rong (69):
>   net/ice/base: update standard extr seq to include DIR flag
>   net/ice/base: add API to configure MIB
>   net/ice/base: add another valid DCBx state
>   net/ice/base: add more recipe commands
>   net/ice/base: add funcs to create new switch recipe
>   net/ice/base: programming a new switch recipe
>   net/ice/base: replay advanced rule after reset
>   net/ice/base: code for removing advanced rule
>   net/ice/base: save and post reset replay q bandwidth
>   net/ice/base: rollback AVF RSS configurations
>   net/ice/base: move RSS replay list
>   net/ice/base: cache the data of set PHY cfg AQ in SW
>   net/ice/base: refactor HW table init function
>   net/ice/base: add lock around profile map list
>   net/ice/base: add compatibility check for package version
>   net/ice/base: add API to init FW logging
>   net/ice/base: use macro instead of magic 8
>   net/ice/base: move and redefine ice debug cq API
>   net/ice/base: separate out control queue lock creation
>   net/ice/base: added sibling head to parse nodes
>   net/ice/base: add and fix debuglogs
>   net/ice/base: forbid VSI to remove unassociated ucast filter
>   net/ice/base: update some defines
>   net/ice/base: add hweight32 support
>   net/ice/base: call out dev/func caps when printing
>   net/ice/base: set the max number of TCs per port to 4
>   net/ice/base: make FDID available for FlexDescriptor
>   net/ice/base: use a different debug bit for FW log
>   net/ice/base: always set prefena when configuring a Rx queue
>   net/ice/base: disable Tx pacing option
>   net/ice/base: delete the index for chaining other recipe
>   net/ice/base: cleanup update link info
>   net/ice/base: add rd64 support
>   net/ice/base: track HW stat registers past rollover
>   net/ice/base: implement LLDP persistent settings
>   net/ice/base: check new FD filter duplicate location
>   net/ice/base: correct UDP/TCP PTYPE assignments
>   net/ice/base: calculate rate limit burst size correctly
>   net/ice/base: fix Flow Director VSI count
>   net/ice/base: use more efficient structures
>   net/ice/base: silent semantic parser warnings
>   net/ice/base: fix for signed package download
>   net/ice/base: add new API to dealloc flow entry
>   net/ice/base: check RSS flow profile list
>   net/ice/base: protect list add with lock
>   net/ice/base: fix Rx functionality for ethertype filters
>   net/ice/base: introduce some new macros
>   net/ice/base: new marker to mark func parameters unused
>   net/ice/base: code clean up
>   net/ice/base: cleanup ice flex pipe files
>   net/ice/base: refactor VSI node sched code
>   net/ice/base: add some minor new defines
>   net/ice/base: add vxlan/generic tunnel management
>   net/ice/base: enable additional switch rules
>   net/ice/base: allow forward to Q groups in switch rule
>   net/ice/base: changes for reducing ice add adv rule time
>   net/ice/base: deduce TSA value in the CEE mode
>   net/ice/base: rework API for ice zero bitmap
>   net/ice/base: rework API for ice cp bitmap
>   net/ice/base: use ice zero bitmap instead of ice memset
>   net/ice/base: use the specified size for ice zero bitmap
>   net/ice/base: correct NVGRE header structure
>   net/ice/base: reduce calls to get profile associations
>   net/ice/base: fix for chained recipe switch ID index
>   net/ice/base: update driver unloading field
>   net/ice/base: fix for UDP and TCP related switch rules
>   net/ice/base: changes in flow and profile removal
>   net/ice/base: update Tx context struct
>   net/ice/base: fixes for GRE
> 
>  drivers/net/ice/base/ice_adminq_cmd.h    |  103 +-
>  drivers/net/ice/base/ice_bitops.h        |   36 +-
>  drivers/net/ice/base/ice_common.c        |  482 +++--
>  drivers/net/ice/base/ice_common.h        |   18 +-
>  drivers/net/ice/base/ice_controlq.c      |  247 ++-
>  drivers/net/ice/base/ice_controlq.h      |    4 +-
>  drivers/net/ice/base/ice_dcb.c           |   82 +-
>  drivers/net/ice/base/ice_dcb.h           |   12 +-
>  drivers/net/ice/base/ice_fdir.c          |   11 +-
>  drivers/net/ice/base/ice_fdir.h          |    4 -
>  drivers/net/ice/base/ice_flex_pipe.c     | 1198 +++++------
>  drivers/net/ice/base/ice_flex_pipe.h     |   73 +-
>  drivers/net/ice/base/ice_flex_type.h     |   54 +-
>  drivers/net/ice/base/ice_flow.c          |  410 +++-
>  drivers/net/ice/base/ice_flow.h          |   22 +-
>  drivers/net/ice/base/ice_lan_tx_rx.h     |    4 +-
>  drivers/net/ice/base/ice_nvm.c           |   18 +-
>  drivers/net/ice/base/ice_osdep.h         |   23 +
>  drivers/net/ice/base/ice_protocol_type.h |   12 +-
>  drivers/net/ice/base/ice_sched.c         |  219 +-
>  drivers/net/ice/base/ice_sched.h         |   24 +-
>  drivers/net/ice/base/ice_switch.c        | 2498 +++++++++++++++++++++-
>  drivers/net/ice/base/ice_switch.h        |   66 +-
>  drivers/net/ice/base/ice_type.h          |   80 +-
>  drivers/net/ice/ice_ethdev.c             |    4 +-
>  25 files changed, 4303 insertions(+), 1401 deletions(-)
> 
> --
> 2.17.1



* Re: [dpdk-dev] [PATCH v3 00/69] shared code update
  2019-06-20  1:55     ` [dpdk-dev] [PATCH v3 00/69] shared code update Zhang, Qi Z
@ 2019-06-20 20:18       ` Ferruh Yigit
  2019-06-21  1:20         ` Zhang, Qi Z
  0 siblings, 1 reply; 225+ messages in thread
From: Ferruh Yigit @ 2019-06-20 20:18 UTC (permalink / raw)
  To: Zhang, Qi Z, Rong, Leyi; +Cc: dev

On 6/20/2019 2:55 AM, Zhang, Qi Z wrote:
> 
> 
>> -----Original Message-----
>> From: Rong, Leyi
>> Sent: Wednesday, June 19, 2019 11:18 PM
>> To: Zhang, Qi Z <qi.z.zhang@intel.com>
>> Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>
>> Subject: [PATCH v3 00/69] shared code update
>>
>> Main changes:
>> 1. Advanced switch rule support.
>> 2. Add more APIs for tunnel management.
>> 3. Add some minor features.
>> 4. Code clean and bug fix.
>>
> 
> Acked-by: Qi Zhang <qi.z.zhang@intel.com>
> 
> Applied to dpdk-next-net-intel with below changes
> 
> 1. Patches 66 and 69 fix patch 54, so they were merged into patch 54;
>    patch 64 fixes patch 06, so it was merged into patch 06.
>    In total, 66 patches were merged.
> 2. Fixed CamelCase titles in patches 27 and 39.
> 
> Thanks
> Qi

There is a 32-bit build error, can you please check?



In file included from .../dpdk/drivers/net/ice/base/ice_flow.c:6:
.../dpdk/drivers/net/ice/base/ice_flow.c: In function ‘ice_flow_find_entry’:
.../dpdk/drivers/net/ice/base/ice_flow.h:239:33: error: cast from pointer to
integer of different size [-Werror=pointer-to-int-cast]
  239 | #define ICE_FLOW_ENTRY_HNDL(e) ((u64)e)
      |                                 ^
.../dpdk/drivers/net/ice/base/ice_flow.c:1309:17: note: in expansion of macro
‘ICE_FLOW_ENTRY_HNDL’
 1309 |  return found ? ICE_FLOW_ENTRY_HNDL(found) : ICE_FLOW_ENTRY_HANDLE_INVAL;
      |                 ^~~~~~~~~~~~~~~~~~~
.../dpdk/drivers/net/ice/base/ice_flow.c: In function ‘ice_flow_add_entry’:
.../dpdk/drivers/net/ice/base/ice_flow.h:239:33: error: cast from pointer to
integer of different size [-Werror=pointer-to-int-cast]
  239 | #define ICE_FLOW_ENTRY_HNDL(e) ((u64)e)
      |                                 ^
.../dpdk/drivers/net/ice/base/ice_flow.c:1387:13: note: in expansion of macro
‘ICE_FLOW_ENTRY_HNDL’
 1387 |  *entry_h = ICE_FLOW_ENTRY_HNDL(e);
      |             ^~~~~~~~~~~~~~~~~~~
.../dpdk/drivers/net/ice/base/ice_flow.c: In function ‘ice_flow_rem_entry’:
.../dpdk/drivers/net/ice/base/ice_flow.h:240:32: error: cast to pointer from
integer of different size [-Werror=int-to-pointer-cast]
  240 | #define ICE_FLOW_ENTRY_PTR(h) ((struct ice_flow_entry *)(h))
      |                                ^
.../dpdk/drivers/net/ice/base/ice_flow.c:1413:10: note: in expansion of macro
‘ICE_FLOW_ENTRY_PTR’
 1413 |  entry = ICE_FLOW_ENTRY_PTR(entry_h);
      |          ^~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make[7]: *** [.../dpdk/mk/internal/rte.compile-pre.mk:114: ice_flow.o] Error 1
make[7]: *** Waiting for unfinished jobs....
In file included from .../dpdk/drivers/net/ice/base/ice_flex_pipe.c:8:
.../dpdk/drivers/net/ice/base/ice_flex_pipe.c: In function ‘ice_free_flow_profs’:
.../dpdk/drivers/net/ice/base/ice_flow.h:239:33: error: cast from pointer to
integer of different size [-Werror=pointer-to-int-cast]
  239 | #define ICE_FLOW_ENTRY_HNDL(e) ((u64)e)
      |                                 ^
.../dpdk/drivers/net/ice/base/ice_flex_pipe.c:3027:27: note: in expansion of
macro ‘ICE_FLOW_ENTRY_HNDL’
 3027 |    ice_flow_rem_entry(hw, ICE_FLOW_ENTRY_HNDL(e));
      |                           ^~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors


* Re: [dpdk-dev] [PATCH v3 00/69] shared code update
  2019-06-20 20:18       ` Ferruh Yigit
@ 2019-06-21  1:20         ` Zhang, Qi Z
  0 siblings, 0 replies; 225+ messages in thread
From: Zhang, Qi Z @ 2019-06-21  1:20 UTC (permalink / raw)
  To: Yigit, Ferruh, Rong, Leyi; +Cc: dev



> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, June 21, 2019 4:18 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Rong, Leyi <leyi.rong@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3 00/69] shared code update
> 
> On 6/20/2019 2:55 AM, Zhang, Qi Z wrote:
> >
> >
> >> -----Original Message-----
> >> From: Rong, Leyi
> >> Sent: Wednesday, June 19, 2019 11:18 PM
> >> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> >> Cc: dev@dpdk.org; Rong, Leyi <leyi.rong@intel.com>
> >> Subject: [PATCH v3 00/69] shared code update
> >>
> >> Main changes:
> >> 1. Advanced switch rule support.
> >> 2. Add more APIs for tunnel management.
> >> 3. Add some minor features.
> >> 4. Code clean and bug fix.
> >>
> >
> > Acked-by: Qi Zhang <qi.z.zhang@intel.com>
> >
> > Applied to dpdk-next-net-intel with below changes
> >
> > 1. Patches 66 and 69 fix patch 54, so they were merged into patch 54;
> >    patch 64 fixes patch 06, so it was merged into patch 06.
> >    In total, 66 patches were merged.
> > 2. Fixed CamelCase titles in patches 27 and 39.
> >
> > Thanks
> > Qi
> 
> There is a 32-bit build error, can you please check?

My bad, I forgot to check the 32-bit build.

The reason is that some 32-bit fixes from 19.05 were not synced.
Minor fixes on patches 40/69 and 41/69 have been applied to dpdk-next-net-intel.

Thanks
Qi

> 
> 
> 
> In file included from .../dpdk/drivers/net/ice/base/ice_flow.c:6:
> .../dpdk/drivers/net/ice/base/ice_flow.c: In function ‘ice_flow_find_entry’:
> .../dpdk/drivers/net/ice/base/ice_flow.h:239:33: error: cast from pointer to
> integer of different size [-Werror=pointer-to-int-cast]
>   239 | #define ICE_FLOW_ENTRY_HNDL(e) ((u64)e)
>       |                                 ^
> .../dpdk/drivers/net/ice/base/ice_flow.c:1309:17: note: in expansion of macro
> ‘ICE_FLOW_ENTRY_HNDL’
>  1309 |  return found ? ICE_FLOW_ENTRY_HNDL(found) :
> ICE_FLOW_ENTRY_HANDLE_INVAL;
>       |                 ^~~~~~~~~~~~~~~~~~~
> .../dpdk/drivers/net/ice/base/ice_flow.c: In function ‘ice_flow_add_entry’:
> .../dpdk/drivers/net/ice/base/ice_flow.h:239:33: error: cast from pointer to
> integer of different size [-Werror=pointer-to-int-cast]
>   239 | #define ICE_FLOW_ENTRY_HNDL(e) ((u64)e)
>       |                                 ^
> .../dpdk/drivers/net/ice/base/ice_flow.c:1387:13: note: in expansion of macro
> ‘ICE_FLOW_ENTRY_HNDL’
>  1387 |  *entry_h = ICE_FLOW_ENTRY_HNDL(e);
>       |             ^~~~~~~~~~~~~~~~~~~
> .../dpdk/drivers/net/ice/base/ice_flow.c: In function ‘ice_flow_rem_entry’:
> .../dpdk/drivers/net/ice/base/ice_flow.h:240:32: error: cast to pointer from
> integer of different size [-Werror=int-to-pointer-cast]
>   240 | #define ICE_FLOW_ENTRY_PTR(h) ((struct ice_flow_entry *)(h))
>       |                                ^
> .../dpdk/drivers/net/ice/base/ice_flow.c:1413:10: note: in expansion of macro
> ‘ICE_FLOW_ENTRY_PTR’
>  1413 |  entry = ICE_FLOW_ENTRY_PTR(entry_h);
>       |          ^~~~~~~~~~~~~~~~~~
> cc1: all warnings being treated as errors
> make[7]: *** [.../dpdk/mk/internal/rte.compile-pre.mk:114: ice_flow.o] Error
> 1
> make[7]: *** Waiting for unfinished jobs....
> In file included from .../dpdk/drivers/net/ice/base/ice_flex_pipe.c:8:
> .../dpdk/drivers/net/ice/base/ice_flex_pipe.c: In function
> ‘ice_free_flow_profs’:
> .../dpdk/drivers/net/ice/base/ice_flow.h:239:33: error: cast from pointer to
> integer of different size [-Werror=pointer-to-int-cast]
>   239 | #define ICE_FLOW_ENTRY_HNDL(e) ((u64)e)
>       |                                 ^
> .../dpdk/drivers/net/ice/base/ice_flex_pipe.c:3027:27: note: in expansion of
> macro ‘ICE_FLOW_ENTRY_HNDL’
>  3027 |    ice_flow_rem_entry(hw, ICE_FLOW_ENTRY_HNDL(e));
>       |                           ^~~~~~~~~~~~~~~~~~~
> cc1: all warnings being treated as errors


end of thread, other threads:[~2019-06-21  1:20 UTC | newest]

Thread overview: 225+ messages
2019-06-04  5:41 [dpdk-dev] [PATCH 00/49] shared code update Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 01/49] net/ice/base: add macro for rounding up Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 02/49] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
2019-06-04 17:06   ` Maxime Coquelin
2019-06-04  5:42 ` [dpdk-dev] [PATCH 03/49] net/ice/base: add API to configure MIB Leyi Rong
2019-06-04 17:14   ` Maxime Coquelin
2019-06-05  0:00     ` Stillwell Jr, Paul M
2019-06-05  8:03       ` Maxime Coquelin
2019-06-04  5:42 ` [dpdk-dev] [PATCH 04/49] net/ice/base: add more recipe commands Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 05/49] net/ice/base: add funcs to create new switch recipe Leyi Rong
2019-06-04 17:27   ` Maxime Coquelin
2019-06-04  5:42 ` [dpdk-dev] [PATCH 06/49] net/ice/base: programming a " Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 07/49] net/ice/base: replay advanced rule after reset Leyi Rong
2019-06-05  8:58   ` Maxime Coquelin
2019-06-05 15:53     ` Stillwell Jr, Paul M
2019-06-05 15:59       ` Maxime Coquelin
2019-06-05 16:16         ` Stillwell Jr, Paul M
2019-06-05 16:28           ` Maxime Coquelin
2019-06-05 16:31             ` Stillwell Jr, Paul M
2019-06-04  5:42 ` [dpdk-dev] [PATCH 08/49] net/ice/base: code for removing advanced rule Leyi Rong
2019-06-05  9:07   ` Maxime Coquelin
2019-06-04  5:42 ` [dpdk-dev] [PATCH 09/49] net/ice/base: add lock around profile map list Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 10/49] net/ice/base: save and post reset replay q bandwidth Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 11/49] net/ice/base: rollback AVF RSS configurations Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 12/49] net/ice/base: move RSS replay list Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 13/49] net/ice/base: cache the data of set PHY cfg AQ in SW Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 14/49] net/ice/base: refactor HW table init function Leyi Rong
2019-06-05 10:35   ` Maxime Coquelin
2019-06-05 18:10     ` Stillwell Jr, Paul M
2019-06-04  5:42 ` [dpdk-dev] [PATCH 15/49] net/ice/base: add compatibility check for package version Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 16/49] net/ice/base: add API to init FW logging Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 17/49] net/ice/base: use macro instead of magic 8 Leyi Rong
2019-06-05 10:39   ` Maxime Coquelin
2019-06-04  5:42 ` [dpdk-dev] [PATCH 18/49] net/ice/base: move and redefine ice debug cq API Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 19/49] net/ice/base: separate out control queue lock creation Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 20/49] net/ice/base: add helper functions for PHY caching Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 21/49] net/ice/base: added sibling head to parse nodes Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 22/49] net/ice/base: add and fix debuglogs Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 23/49] net/ice/base: add support for reading REPC statistics Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 24/49] net/ice/base: move VSI to VSI group Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 25/49] net/ice/base: forbid VSI to remove unassociated ucast filter Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 26/49] net/ice/base: add some minor features Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 27/49] net/ice/base: call out dev/func caps when printing Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 28/49] net/ice/base: add some minor features Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 29/49] net/ice/base: cleanup update link info Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 30/49] net/ice/base: add rd64 support Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 31/49] net/ice/base: track HW stat registers past rollover Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 32/49] net/ice/base: implement LLDP persistent settings Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 33/49] net/ice/base: check new FD filter duplicate location Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 34/49] net/ice/base: correct UDP/TCP PTYPE assignments Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 35/49] net/ice/base: calculate rate limit burst size correctly Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 36/49] net/ice/base: add lock around profile map list Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 37/49] net/ice/base: fix Flow Director VSI count Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 38/49] net/ice/base: use more efficient structures Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 39/49] net/ice/base: slightly code update Leyi Rong
2019-06-05 12:04   ` Maxime Coquelin
2019-06-06  6:46     ` Rong, Leyi
2019-06-04  5:42 ` [dpdk-dev] [PATCH 40/49] net/ice/base: code clean up Leyi Rong
2019-06-05 12:06   ` Maxime Coquelin
2019-06-06  7:32     ` Rong, Leyi
2019-06-04  5:42 ` [dpdk-dev] [PATCH 41/49] net/ice/base: cleanup ice flex pipe files Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 42/49] net/ice/base: change how VMDq capability is wrapped Leyi Rong
     [not found]   ` <ca03c24866cdb2f45ed04b6b3e9b35bac06c5dcd.camel@intel.com>
2019-06-05  0:02     ` Stillwell Jr, Paul M
2019-06-04  5:42 ` [dpdk-dev] [PATCH 43/49] net/ice/base: refactor VSI node sched code Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 44/49] net/ice/base: add some minor new defines Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 45/49] net/ice/base: add 16-byte Flex Rx Descriptor Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 46/49] net/ice/base: add vxlan/generic tunnel management Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 47/49] net/ice/base: enable additional switch rules Leyi Rong
2019-06-05 12:24   ` Maxime Coquelin
2019-06-05 16:34     ` Stillwell Jr, Paul M
2019-06-07 12:41       ` Maxime Coquelin
2019-06-07 15:58         ` Stillwell Jr, Paul M
2019-06-04  5:42 ` [dpdk-dev] [PATCH 48/49] net/ice/base: allow forward to Q groups in switch rule Leyi Rong
2019-06-04  5:42 ` [dpdk-dev] [PATCH 49/49] net/ice/base: changes for reducing ice add adv rule time Leyi Rong
2019-06-04 16:56 ` [dpdk-dev] [PATCH 00/49] shared code update Maxime Coquelin
2019-06-06  5:44   ` Rong, Leyi
2019-06-07 12:53     ` Maxime Coquelin
2019-06-11 15:51 ` [dpdk-dev] [PATCH v2 00/66] " Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 01/66] net/ice/base: add macro for rounding up Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 02/66] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 03/66] net/ice/base: add API to configure MIB Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 04/66] net/ice/base: add another valid DCBx state Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 05/66] net/ice/base: add more recipe commands Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 06/66] net/ice/base: add funcs to create new switch recipe Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 07/66] net/ice/base: programming a " Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 08/66] net/ice/base: replay advanced rule after reset Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 09/66] net/ice/base: code for removing advanced rule Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 10/66] net/ice/base: add lock around profile map list Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 11/66] net/ice/base: save and post reset replay q bandwidth Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 12/66] net/ice/base: rollback AVF RSS configurations Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 13/66] net/ice/base: move RSS replay list Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 14/66] net/ice/base: cache the data of set PHY cfg AQ in SW Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 15/66] net/ice/base: refactor HW table init function Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 16/66] net/ice/base: add compatibility check for package version Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 17/66] net/ice/base: add API to init FW logging Leyi Rong
2019-06-11 16:23     ` Stillwell Jr, Paul M
2019-06-12 14:38       ` Rong, Leyi
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 18/66] net/ice/base: use macro instead of magic 8 Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 19/66] net/ice/base: move and redefine ice debug cq API Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 20/66] net/ice/base: separate out control queue lock creation Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 21/66] net/ice/base: add helper functions for PHY caching Leyi Rong
2019-06-11 16:26     ` Stillwell Jr, Paul M
2019-06-12 14:45       ` Rong, Leyi
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 22/66] net/ice/base: added sibling head to parse nodes Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 23/66] net/ice/base: add and fix debuglogs Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 24/66] net/ice/base: add support for reading REPC statistics Leyi Rong
2019-06-11 16:28     ` Stillwell Jr, Paul M
2019-06-12 14:48       ` Rong, Leyi
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 25/66] net/ice/base: move VSI to VSI group Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 26/66] net/ice/base: forbid VSI to remove unassociated ucast filter Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 27/66] net/ice/base: add some minor features Leyi Rong
2019-06-11 16:30     ` Stillwell Jr, Paul M
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 28/66] net/ice/base: add hweight32 support Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 29/66] net/ice/base: call out dev/func caps when printing Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 30/66] net/ice/base: add some minor features Leyi Rong
2019-06-11 16:30     ` Stillwell Jr, Paul M
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 31/66] net/ice/base: cleanup update link info Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 32/66] net/ice/base: add rd64 support Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 33/66] net/ice/base: track HW stat registers past rollover Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 34/66] net/ice/base: implement LLDP persistent settings Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 35/66] net/ice/base: check new FD filter duplicate location Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 36/66] net/ice/base: correct UDP/TCP PTYPE assignments Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 37/66] net/ice/base: calculate rate limit burst size correctly Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 38/66] net/ice/base: add lock around profile map list Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 39/66] net/ice/base: fix Flow Director VSI count Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 40/66] net/ice/base: use more efficient structures Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 41/66] net/ice/base: silent semantic parser warnings Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 42/66] net/ice/base: fix for signed package download Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 43/66] net/ice/base: add new API to dealloc flow entry Leyi Rong
2019-06-11 15:51   ` [dpdk-dev] [PATCH v2 44/66] net/ice/base: check RSS flow profile list Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 45/66] net/ice/base: protect list add with lock Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 46/66] net/ice/base: fix Rx functionality for ethertype filters Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 47/66] net/ice/base: introduce some new macros Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 48/66] net/ice/base: add init for SW recipe member rg list Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 49/66] net/ice/base: code clean up Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 50/66] net/ice/base: cleanup ice flex pipe files Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 51/66] net/ice/base: refactor VSI node sched code Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 52/66] net/ice/base: add some minor new defines Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 53/66] net/ice/base: add 16-byte Flex Rx Descriptor Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 54/66] net/ice/base: add vxlan/generic tunnel management Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 55/66] net/ice/base: enable additional switch rules Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 56/66] net/ice/base: allow forward to Q groups in switch rule Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 57/66] net/ice/base: changes for reducing ice add adv rule time Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 58/66] net/ice/base: deduce TSA value in the CEE mode Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 59/66] net/ice/base: rework API for ice zero bitmap Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 60/66] net/ice/base: rework API for ice cp bitmap Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 61/66] net/ice/base: use ice zero bitmap instead of ice memset Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 62/66] net/ice/base: use the specified size for ice zero bitmap Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 63/66] net/ice/base: fix potential memory leak in destroy tunnel Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 64/66] net/ice/base: correct NVGRE header structure Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 65/66] net/ice/base: add link event defines Leyi Rong
2019-06-11 15:52   ` [dpdk-dev] [PATCH v2 66/66] net/ice/base: reduce calls to get profile associations Leyi Rong
2019-06-19 15:17   ` [dpdk-dev] [PATCH v3 00/69] shared code update Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 01/69] net/ice/base: update standard extr seq to include DIR flag Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 02/69] net/ice/base: add API to configure MIB Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 03/69] net/ice/base: add another valid DCBx state Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 04/69] net/ice/base: add more recipe commands Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 05/69] net/ice/base: add funcs to create new switch recipe Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 06/69] net/ice/base: programming a " Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 07/69] net/ice/base: replay advanced rule after reset Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 08/69] net/ice/base: code for removing advanced rule Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 09/69] net/ice/base: save and post reset replay q bandwidth Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 10/69] net/ice/base: rollback AVF RSS configurations Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 11/69] net/ice/base: move RSS replay list Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 12/69] net/ice/base: cache the data of set PHY cfg AQ in SW Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 13/69] net/ice/base: refactor HW table init function Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 14/69] net/ice/base: add lock around profile map list Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 15/69] net/ice/base: add compatibility check for package version Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 16/69] net/ice/base: add API to init FW logging Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 17/69] net/ice/base: use macro instead of magic 8 Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 18/69] net/ice/base: move and redefine ice debug cq API Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 19/69] net/ice/base: separate out control queue lock creation Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 20/69] net/ice/base: added sibling head to parse nodes Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 21/69] net/ice/base: add and fix debuglogs Leyi Rong
2019-06-19 15:17     ` [dpdk-dev] [PATCH v3 22/69] net/ice/base: forbid VSI to remove unassociated ucast filter Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 23/69] net/ice/base: update some defines Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 24/69] net/ice/base: add hweight32 support Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 25/69] net/ice/base: call out dev/func caps when printing Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 26/69] net/ice/base: set the max number of TCs per port to 4 Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 27/69] net/ice/base: make FDID available for FlexDescriptor Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 28/69] net/ice/base: use a different debug bit for FW log Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 29/69] net/ice/base: always set prefena when configuring a Rx queue Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 30/69] net/ice/base: disable Tx pacing option Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 31/69] net/ice/base: delete the index for chaining other recipe Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 32/69] net/ice/base: cleanup update link info Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 33/69] net/ice/base: add rd64 support Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 34/69] net/ice/base: track HW stat registers past rollover Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 35/69] net/ice/base: implement LLDP persistent settings Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 36/69] net/ice/base: check new FD filter duplicate location Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 37/69] net/ice/base: correct UDP/TCP PTYPE assignments Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 38/69] net/ice/base: calculate rate limit burst size correctly Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 39/69] net/ice/base: fix Flow Director VSI count Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 40/69] net/ice/base: use more efficient structures Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 41/69] net/ice/base: silent semantic parser warnings Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 42/69] net/ice/base: fix for signed package download Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 43/69] net/ice/base: add new API to dealloc flow entry Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 44/69] net/ice/base: check RSS flow profile list Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 45/69] net/ice/base: protect list add with lock Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 46/69] net/ice/base: fix Rx functionality for ethertype filters Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 47/69] net/ice/base: introduce some new macros Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 48/69] net/ice/base: new marker to mark func parameters unused Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 49/69] net/ice/base: code clean up Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 50/69] net/ice/base: cleanup ice flex pipe files Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 51/69] net/ice/base: refactor VSI node sched code Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 52/69] net/ice/base: add some minor new defines Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 53/69] net/ice/base: add vxlan/generic tunnel management Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 54/69] net/ice/base: enable additional switch rules Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 55/69] net/ice/base: allow forward to Q groups in switch rule Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 56/69] net/ice/base: changes for reducing ice add adv rule time Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 57/69] net/ice/base: deduce TSA value in the CEE mode Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 58/69] net/ice/base: rework API for ice zero bitmap Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 59/69] net/ice/base: rework API for ice cp bitmap Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 60/69] net/ice/base: use ice zero bitmap instead of ice memset Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 61/69] net/ice/base: use the specified size for ice zero bitmap Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 62/69] net/ice/base: correct NVGRE header structure Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 63/69] net/ice/base: reduce calls to get profile associations Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 64/69] net/ice/base: fix for chained recipe switch ID index Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 65/69] net/ice/base: update driver unloading field Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 66/69] net/ice/base: fix for UDP and TCP related switch rules Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 67/69] net/ice/base: changes in flow and profile removal Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 68/69] net/ice/base: update Tx context struct Leyi Rong
2019-06-19 15:18     ` [dpdk-dev] [PATCH v3 69/69] net/ice/base: fixes for GRE Leyi Rong
2019-06-20  1:55     ` [dpdk-dev] [PATCH v3 00/69] shared code update Zhang, Qi Z
2019-06-20 20:18       ` Ferruh Yigit
2019-06-21  1:20         ` Zhang, Qi Z
