DPDK patches and discussions
* [dpdk-dev] [PATCH v3 00/21] ice: update base code
@ 2020-10-28  3:22 Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 01/21] net/ice/base: add tunnel support for FDIR Qi Zhang
                   ` (21 more replies)
  0 siblings, 22 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:22 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang

Main changes:
1. Refactor the RSS configure API.
2. Add global LUT support.
3. A couple of fixes and code cleanups.

v3:
- fix GTPU RSS bug in patch 19/21.

v2:
- fix missing code in patch 19/21.


Qi Zhang (21):
  net/ice/base: add tunnel support for FDIR
  net/ice/base: add NVM Write Response flags
  net/ice/base: modify ptype bitmap for outer MAC
  net/ice/base: rename ptype bitmap
  net/ice/base: move sched function prototypes
  net/ice/base: read security revision
  net/ice/base: add functions to allocate and free a RSS global LUT
  net/ice/base: add more capability to admin queue
  net/ice/base: update to use package info from ice segment
  net/ice/base: use malloc instead of calloc
  net/ice/base: add support for class 5+ modules
  net/ice/base: return error directly
  net/ice/base: implement shared rate limiter
  net/ice/base: recognize 860 as iSCSI port in CEE mode
  net/ice/base: fix parameter name in comment
  net/ice/base: support extended GPIO access
  net/ice/base: remove duplicated AQ command flag setting
  net/ice/base: introduce and use FLEX_ARRAY_SIZE where possible
  net/ice/base: refactor RSS configure API
  net/ice/base: add support for get/set RSS LUT to specify global LUT
  net/ice/base: update version

 drivers/net/ice/base/README           |   2 +-
 drivers/net/ice/base/ice_adminq_cmd.h |  35 +-
 drivers/net/ice/base/ice_common.c     |  58 ++-
 drivers/net/ice/base/ice_common.h     |  13 +-
 drivers/net/ice/base/ice_dcb.c        |  38 +-
 drivers/net/ice/base/ice_fdir.c       |   8 +
 drivers/net/ice/base/ice_fdir.h       |   9 +
 drivers/net/ice/base/ice_flex_pipe.c  |  46 +--
 drivers/net/ice/base/ice_flex_type.h  |   8 +
 drivers/net/ice/base/ice_flow.c       | 265 ++++++++------
 drivers/net/ice/base/ice_flow.h       |  34 +-
 drivers/net/ice/base/ice_nvm.c        | 174 +++++++++
 drivers/net/ice/base/ice_sched.c      | 493 +++++++++++++++++---------
 drivers/net/ice/base/ice_sched.h      |  29 +-
 drivers/net/ice/base/ice_switch.c     |  68 +++-
 drivers/net/ice/base/ice_switch.h     |   2 +
 drivers/net/ice/base/ice_type.h       |  64 +++-
 drivers/net/ice/ice_ethdev.c          | 346 +++++++++---------
 drivers/net/ice/ice_ethdev.h          |  18 +-
 drivers/net/ice/ice_hash.c            |  14 +-
 20 files changed, 1138 insertions(+), 586 deletions(-)

-- 
2.25.4



* [dpdk-dev] [PATCH v3 01/21] net/ice/base: add tunnel support for FDIR
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 02/21] net/ice/base: add NVM Write Response flags Qi Zhang
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Zhirun Yan

Add a struct to store the outer part of a tunnel rule.
Add the VXLAN ptype to the IPv4 MAC bitmap, so that when a VXLAN rule
is created, the ptype group will be valid.
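
For illustration, a hypothetical caller sketch (not part of this patch)
showing how the new outer fields might be filled for a VXLAN rule; the
helper name and all values are made up, and the byte-order macro is
assumed from the base code's osdep layer:

/* Illustrative only: populate the outer L2/L3 keys of a tunnel FDIR
 * rule. ip_outer and ext_data_outer are the fields added below.
 */
static void example_fill_outer_keys(struct ice_fdir_fltr *input)
{
	static const u8 dst_mac[ETH_ALEN] = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55};

	ice_memcpy(input->ext_data_outer.dst_mac, dst_mac, ETH_ALEN,
		   ICE_NONDMA_TO_NONDMA);
	/* addresses are stored in network byte order; values are placeholders */
	input->ip_outer.v4.src_ip = CPU_TO_BE32(0xC0A80001); /* 192.168.0.1 */
	input->ip_outer.v4.dst_ip = CPU_TO_BE32(0xC0A80002); /* 192.168.0.2 */
	input->ip_outer.v4.tos = 0;
}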

Signed-off-by: Zhirun Yan <zhirun.yan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_fdir.c | 8 ++++++++
 drivers/net/ice/base/ice_fdir.h | 9 +++++++++
 drivers/net/ice/base/ice_flow.c | 6 +++---
 3 files changed, 20 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c
index aea388f2ab..aeff7af55d 100644
--- a/drivers/net/ice/base/ice_fdir.c
+++ b/drivers/net/ice/base/ice_fdir.c
@@ -1057,6 +1057,14 @@ ice_fdir_get_gen_prgm_pkt(struct ice_hw *hw, struct ice_fdir_fltr *input,
 			loc[20] = ICE_FDIR_IPV4_PKT_FLAG_DF;
 		break;
 	case ICE_FLTR_PTYPE_NONF_IPV4_UDP:
+		ice_pkt_insert_mac_addr(pkt, input->ext_data_outer.dst_mac);
+		ice_pkt_insert_mac_addr(pkt + ETH_ALEN,
+					input->ext_data_outer.src_mac);
+		ice_pkt_insert_u32(pkt, ICE_IPV4_SRC_ADDR_OFFSET,
+				   input->ip_outer.v4.dst_ip);
+		ice_pkt_insert_u32(pkt, ICE_IPV4_DST_ADDR_OFFSET,
+				   input->ip_outer.v4.src_ip);
+		ice_pkt_insert_u8(pkt, ICE_IPV4_TOS_OFFSET, input->ip_outer.v4.tos);
 		ice_pkt_insert_u32(loc, ICE_IPV4_DST_ADDR_OFFSET,
 				   input->ip.v4.src_ip);
 		ice_pkt_insert_u16(loc, ICE_IPV4_UDP_DST_PORT_OFFSET,
diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h
index 7e00bb2730..d363de385d 100644
--- a/drivers/net/ice/base/ice_fdir.h
+++ b/drivers/net/ice/base/ice_fdir.h
@@ -181,6 +181,15 @@ struct ice_fdir_fltr {
 		struct ice_fdir_v6 v6;
 	} ip, mask;
 
+	/* for tunnel outer part */
+	union {
+		struct ice_fdir_v4 v4;
+		struct ice_fdir_v6 v6;
+	} ip_outer, mask_outer;
+
+	struct ice_fdir_extra ext_data_outer;
+	struct ice_fdir_extra ext_mask_outer;
+
 	struct ice_fdir_udp_gtp gtpu_data;
 	struct ice_fdir_udp_gtp gtpu_mask;
 
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 80ac0b662e..601ca251d3 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -222,7 +222,7 @@ static const u32 ice_ptypes_macvlan_il[] = {
  * include IPV4 other PTYPEs
  */
 static const u32 ice_ptypes_ipv4_ofos[] = {
-	0x1DC00000, 0x04000800, 0x00000000, 0x00000000,
+	0x1DC00000, 0x24000800, 0x00000000, 0x00000000,
 	0x00000000, 0x00000155, 0x00000000, 0x00000000,
 	0x00000000, 0x000FC000, 0x000002A0, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -236,7 +236,7 @@ static const u32 ice_ptypes_ipv4_ofos[] = {
  * IPV4 other PTYPEs
  */
 static const u32 ice_ptypes_ipv4_ofos_all[] = {
-	0x1DC00000, 0x04000800, 0x00000000, 0x00000000,
+	0x1DC00000, 0x24000800, 0x00000000, 0x00000000,
 	0x00000000, 0x00000155, 0x00000000, 0x00000000,
 	0x00000000, 0x000FC000, 0x83E0FAA0, 0x00000101,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -434,7 +434,7 @@ static const u32 ice_ptypes_gre_of[] = {
 
 /* Packet types for packets with an Innermost/Last MAC header */
 static const u32 ice_ptypes_mac_il[] = {
-	0x00000000, 0x00000000, 0x00000000, 0x00000000,
+	0x00000000, 0x20000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-- 
2.25.4



* [dpdk-dev] [PATCH v3 02/21] net/ice/base: add NVM Write Response flags
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 01/21] net/ice/base: add tunnel support for FDIR Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 03/21] net/ice/base: modify ptype bitmap for outer MAC Qi Zhang
                   ` (19 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Amir Shay

Add NVM Write admin command (0x703) ARQ response flags, as
returned in the "Response flags" field.
Three flags are supported: POR, PERST and EMPR. Each indicates the
type of reset required to make the NVM bank update effective.
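
Not defined by this patch, but for illustration, a minimal sketch of
decoding the response flags into the required reset type; the 2-bit
field layout and the local mask below are assumptions:

/* Hypothetical decode of the NVM Write completion "Response flags".
 * EXAMPLE_NVM_RESET_LVL_M is an assumed mask; only the three flag
 * values themselves come from this patch.
 */
#define EXAMPLE_NVM_RESET_LVL_M	0x3

static const char *example_nvm_reset_str(u8 response_flags)
{
	switch (response_flags & EXAMPLE_NVM_RESET_LVL_M) {
	case ICE_AQC_NVM_POR_FLAG:
		return "power-on reset (POR)";
	case ICE_AQC_NVM_PERST_FLAG:
		return "PCIe reset (PERST)";
	case ICE_AQC_NVM_EMPR_FLAG:
		return "EMP reset (EMPR)";
	default:
		return "unknown";
	}
}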

Signed-off-by: Amir Shay <shay.amir@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index bc71ec5317..9db50de11c 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -1654,6 +1654,9 @@ struct ice_aqc_nvm {
 #define ICE_AQC_NVM_REVERT_LAST_ACTIV	BIT(6) /* Write Activate only */
 #define ICE_AQC_NVM_ACTIV_SEL_MASK	MAKEMASK(0x7, 3)
 #define ICE_AQC_NVM_FLASH_ONLY		BIT(7)
+#define ICE_AQC_NVM_POR_FLAG	0	/* Used by NVM Write completion on ARQ */
+#define ICE_AQC_NVM_PERST_FLAG	1
+#define ICE_AQC_NVM_EMPR_FLAG	2
 	__le16 module_typeid;
 	__le16 length;
 #define ICE_AQC_NVM_ERASE_LEN	0xFFFF
-- 
2.25.4



* [dpdk-dev] [PATCH v3 03/21] net/ice/base: modify ptype bitmap for outer MAC
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 01/21] net/ice/base: add tunnel support for FDIR Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 02/21] net/ice/base: add NVM Write Response flags Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 04/21] net/ice/base: rename ptype bitmap Qi Zhang
                   ` (18 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang

Add the ptypes below into ice_ptypes_mac_ofos:

MAC_IPV4[6]_ESP
MAC_IPV4[6]_AH
MAC_IPV4[6]_NAT_T_ESP
MAC_IPV4[6]_NAT_T_IKE
MAC_IPV4[6]_NAT_T_KEEP
MAC_IPV4[6]_PFCP_NODE
MAC_IPV4[6]_PFCP_SESSION
MAC_IPV4[6]_L2TPV3

With this, the ptypes above can also be selected by a filter when an
outer MAC header is required.
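
As background (not from the patch): each ice_ptypes_* table is a bitmap
over hardware ptype numbers packed into 32-bit words, so ptype N lives
in word N / 32 at bit N % 32. A small sketch of that indexing, with a
hypothetical helper name:

/* Sketch of the ptype bitmap indexing. For example, changing word 10
 * of ice_ptypes_mac_ofos from 0x7FFFFFE0 to 0xFFFFFFE0 sets bit 31 of
 * that word, i.e. ptype 10 * 32 + 31 = 351.
 */
static inline bool example_ptype_is_set(const u32 *ptypes, unsigned int ptype)
{
	return (ptypes[ptype / 32] >> (ptype % 32)) & 0x1;
}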

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 601ca251d3..45990aeca0 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -197,8 +197,8 @@ struct ice_flow_field_info ice_flds_info[ICE_FLOW_FIELD_IDX_MAX] = {
  */
 static const u32 ice_ptypes_mac_ofos[] = {
 	0xFDC00846, 0xBFBF7F7E, 0xF70001DF, 0xFEFDFDFB,
-	0x0000077E, 0x00000000, 0x00000000, 0x00000000,
-	0x00400000, 0x03FFF000, 0x7FFFFFE0, 0x00000000,
+	0x0000077E, 0x000003FF, 0x00000000, 0x00000000,
+	0x00400000, 0x03FFF000, 0xFFFFFFE0, 0x00000307,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
-- 
2.25.4



* [dpdk-dev] [PATCH v3 04/21] net/ice/base: rename ptype bitmap
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (2 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 03/21] net/ice/base: modify ptype bitmap for outer MAC Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 05/21] net/ice/base: move sched function prototypes Qi Zhang
                   ` (17 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang

Align all ptype bitmaps to follow the ice_ptypes_xxx prefix.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_flow.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 45990aeca0..4512b12368 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -299,7 +299,7 @@ static const u32 ice_ptypes_ipv6_il[] = {
 };
 
 /* Packet types for packets with an Outer/First/Single IPv4 header - no L4 */
-static const u32 ice_ipv4_ofos_no_l4[] = {
+static const u32 ice_ptypes_ipv4_ofos_no_l4[] = {
 	0x10C00000, 0x04000800, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x000cc000, 0x000002A0, 0x00000000,
@@ -311,7 +311,7 @@ static const u32 ice_ipv4_ofos_no_l4[] = {
 };
 
 /* Packet types for packets with an Innermost/Last IPv4 header - no L4 */
-static const u32 ice_ipv4_il_no_l4[] = {
+static const u32 ice_ptypes_ipv4_il_no_l4[] = {
 	0x60000000, 0x18043008, 0x80000002, 0x6010c021,
 	0x00000008, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00139800, 0x00000000,
@@ -323,7 +323,7 @@ static const u32 ice_ipv4_il_no_l4[] = {
 };
 
 /* Packet types for packets with an Outer/First/Single IPv6 header - no L4 */
-static const u32 ice_ipv6_ofos_no_l4[] = {
+static const u32 ice_ptypes_ipv6_ofos_no_l4[] = {
 	0x00000000, 0x00000000, 0x43000000, 0x10002000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x02300000, 0x00000540, 0x00000000,
@@ -335,7 +335,7 @@ static const u32 ice_ipv6_ofos_no_l4[] = {
 };
 
 /* Packet types for packets with an Innermost/Last IPv6 header - no L4 */
-static const u32 ice_ipv6_il_no_l4[] = {
+static const u32 ice_ptypes_ipv6_il_no_l4[] = {
 	0x00000000, 0x02180430, 0x0000010c, 0x086010c0,
 	0x00000430, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x4e600000, 0x00000000,
@@ -853,8 +853,8 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 				       ICE_FLOW_PTYPE_MAX);
 		} else if ((hdrs & ICE_FLOW_SEG_HDR_IPV4) &&
 			   !(hdrs & ICE_FLOW_SEG_HDRS_L4_MASK_NO_OTHER)) {
-			src = !i ? (const ice_bitmap_t *)ice_ipv4_ofos_no_l4 :
-				(const ice_bitmap_t *)ice_ipv4_il_no_l4;
+			src = !i ? (const ice_bitmap_t *)ice_ptypes_ipv4_ofos_no_l4 :
+				(const ice_bitmap_t *)ice_ptypes_ipv4_il_no_l4;
 			ice_and_bitmap(params->ptypes, params->ptypes, src,
 				       ICE_FLOW_PTYPE_MAX);
 		} else if (hdrs & ICE_FLOW_SEG_HDR_IPV4) {
@@ -864,8 +864,8 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 				       ICE_FLOW_PTYPE_MAX);
 		} else if ((hdrs & ICE_FLOW_SEG_HDR_IPV6) &&
 			   !(hdrs & ICE_FLOW_SEG_HDRS_L4_MASK_NO_OTHER)) {
-			src = !i ? (const ice_bitmap_t *)ice_ipv6_ofos_no_l4 :
-				(const ice_bitmap_t *)ice_ipv6_il_no_l4;
+			src = !i ? (const ice_bitmap_t *)ice_ptypes_ipv6_ofos_no_l4 :
+				(const ice_bitmap_t *)ice_ptypes_ipv6_il_no_l4;
 			ice_and_bitmap(params->ptypes, params->ptypes, src,
 				       ICE_FLOW_PTYPE_MAX);
 		} else if (hdrs & ICE_FLOW_SEG_HDR_IPV6) {
-- 
2.25.4



* [dpdk-dev] [PATCH v3 05/21] net/ice/base: move sched function prototypes
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (3 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 04/21] net/ice/base: rename ptype bitmap Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 06/21] net/ice/base: read security revision Qi Zhang
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Tony Nguyen

These functions reside in ice_sched.c but the function prototypes are
declared in ice_common.h. Move the function prototypes to ice_sched.h.

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_common.h | 7 -------
 drivers/net/ice/base/ice_sched.h  | 8 ++++++++
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 393e2d3f6d..0288fb73e0 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -198,13 +198,6 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle,
 		struct ice_sq_cd *cd);
 enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
 void ice_replay_post(struct ice_hw *hw);
-void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
-void ice_sched_replay_agg(struct ice_hw *hw);
-enum ice_status ice_sched_replay_tc_node_bw(struct ice_port_info *pi);
-enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
-enum ice_status ice_sched_replay_root_node_bw(struct ice_port_info *pi);
-enum ice_status
-ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
 struct ice_q_ctx *
 ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle);
 void
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index da2604c75e..501a4c499e 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -194,4 +194,12 @@ enum ice_status
 ice_cfg_tc_node_bw_alloc(struct ice_port_info *pi, u8 tc,
 			 enum ice_rl_type rl_type, u8 bw_alloc);
 enum ice_status ice_cfg_rl_burst_size(struct ice_hw *hw, u32 bytes);
+void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
+void ice_sched_replay_agg(struct ice_hw *hw);
+enum ice_status ice_sched_replay_tc_node_bw(struct ice_port_info *pi);
+enum ice_status ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
+enum ice_status ice_sched_replay_root_node_bw(struct ice_port_info *pi);
+enum ice_status
+ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
+
 #endif /* _ICE_SCHED_H_ */
-- 
2.25.4



* [dpdk-dev] [PATCH v3 06/21] net/ice/base: read security revision
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (4 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 05/21] net/ice/base: move sched function prototypes Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 07/21] net/ice/base: add functions to allocate and free a RSS global LUT Qi Zhang
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Jacob Keller

The main NVM module and the Option ROM module contain a security
revision in their CSS header. This security revision is used to
determine whether or not the signed module should be loaded at bootup.
If the module security revision is lower than the associated minimum
security revision, it will not be loaded.

The CSS header does not have a module id associated with it, and thus
requires flat NVM reads in order to access it. To do this, take
advantage of the cached bank information. Introduce a new
"ice_read_flash_module" function that takes the module and bank to read.
Implement both ice_read_active_nvm_module and
ice_read_active_orom_module. These functions will use the cached values
to determine the active bank and calculate the appropriate offset.

Using these new access functions, extract the security revision for both
the main NVM bank and the Option ROM into the associated info structure.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_nvm.c  | 174 ++++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_type.h |   9 ++
 2 files changed, 183 insertions(+)

diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index 61af767edd..7b76af7b6f 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -212,6 +212,107 @@ void ice_release_nvm(struct ice_hw *hw)
 	ice_release_res(hw, ICE_NVM_RES_ID);
 }
 
+/**
+ * ice_read_flash_module - Read a word from one of the main NVM modules
+ * @hw: pointer to the HW structure
+ * @bank: which bank of the module to read
+ * @module: the module to read
+ * @offset: the offset into the module in words
+ * @data: storage for the word read from the flash
+ *
+ * Read a word from the specified bank of the module. The bank must be either
+ * the 1st or 2nd bank. The word will be read using flat NVM access, and
+ * relies on the hw->flash.banks data being setup by
+ * ice_determine_active_flash_banks() during initialization.
+ */
+static enum ice_status
+ice_read_flash_module(struct ice_hw *hw, enum ice_flash_bank bank, u16 module,
+		      u32 offset, u16 *data)
+{
+	struct ice_bank_info *banks = &hw->flash.banks;
+	u32 bytes = sizeof(u16);
+	enum ice_status status;
+	__le16 data_local;
+	bool second_bank;
+	u32 start;
+
+	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
+
+	switch (bank) {
+	case ICE_1ST_FLASH_BANK:
+		second_bank = false;
+		break;
+	case ICE_2ND_FLASH_BANK:
+		second_bank = true;
+		break;
+	case ICE_INVALID_FLASH_BANK:
+	default:
+		ice_debug(hw, ICE_DBG_NVM, "Unexpected flash bank %u\n", bank);
+		return ICE_ERR_PARAM;
+	}
+
+	switch (module) {
+	case ICE_SR_1ST_NVM_BANK_PTR:
+		start = banks->nvm_ptr + (second_bank ? banks->nvm_size : 0);
+		break;
+	case ICE_SR_1ST_OROM_BANK_PTR:
+		start = banks->orom_ptr + (second_bank ? banks->orom_size : 0);
+		break;
+	case ICE_SR_NETLIST_BANK_PTR:
+		start = banks->netlist_ptr + (second_bank ? banks->netlist_size : 0);
+		break;
+	default:
+		ice_debug(hw, ICE_DBG_NVM, "Unexpected flash module 0x%04x\n", module);
+		return ICE_ERR_PARAM;
+	}
+
+	status = ice_acquire_nvm(hw, ICE_RES_READ);
+	if (status)
+		return status;
+
+	status = ice_read_flat_nvm(hw, start + offset * sizeof(u16), &bytes,
+				   (_FORCE_ u8 *)&data_local, false);
+	if (!status)
+		*data = LE16_TO_CPU(data_local);
+
+	ice_release_nvm(hw);
+
+	return status;
+}
+
+/**
+ * ice_read_active_nvm_module - Read from the active main NVM module
+ * @hw: pointer to the HW structure
+ * @offset: offset into the NVM module to read, in words
+ * @data: storage for returned word value
+ *
+ * Read the specified word from the active NVM module. This includes the CSS
+ * header at the start of the NVM module.
+ */
+static enum ice_status
+ice_read_active_nvm_module(struct ice_hw *hw, u32 offset, u16 *data)
+{
+	return ice_read_flash_module(hw, hw->flash.banks.nvm_bank,
+				     ICE_SR_1ST_NVM_BANK_PTR, offset, data);
+}
+
+/**
+ * ice_read_active_orom_module - Read from the active Option ROM module
+ * @hw: pointer to the HW structure
+ * @offset: offset into the OROM module to read, in words
+ * @data: storage for returned word value
+ *
+ * Read the specified word from the active Option ROM module of the flash.
+ * Note that unlike the NVM module, the CSS data is stored at the end of the
+ * module instead of at the beginning.
+ */
+static enum ice_status
+ice_read_active_orom_module(struct ice_hw *hw, u32 offset, u16 *data)
+{
+	return ice_read_flash_module(hw, hw->flash.banks.orom_bank,
+				     ICE_SR_1ST_OROM_BANK_PTR, offset, data);
+}
+
 /**
  * ice_read_sr_word - Reads Shadow RAM word and acquire NVM if necessary
  * @hw: pointer to the HW structure
@@ -358,6 +459,32 @@ ice_read_pba_string(struct ice_hw *hw, u8 *pba_num, u32 pba_num_size)
 	return status;
 }
 
+/**
+ * ice_get_nvm_srev - Read the security revision from the NVM CSS header
+ * @hw: pointer to the HW struct
+ * @srev: storage for security revision
+ *
+ * Read the security revision out of the CSS header of the active NVM module
+ * bank.
+ */
+static enum ice_status ice_get_nvm_srev(struct ice_hw *hw, u32 *srev)
+{
+	enum ice_status status;
+	u16 srev_l, srev_h;
+
+	status = ice_read_active_nvm_module(hw, ICE_NVM_CSS_SREV_L, &srev_l);
+	if (status)
+		return status;
+
+	status = ice_read_active_nvm_module(hw, ICE_NVM_CSS_SREV_H, &srev_h);
+	if (status)
+		return status;
+
+	*srev = srev_h << 16 | srev_l;
+
+	return ICE_SUCCESS;
+}
+
 /**
  * ice_get_nvm_ver_info - Read NVM version information
  * @hw: pointer to the HW struct
@@ -393,6 +520,49 @@ ice_get_nvm_ver_info(struct ice_hw *hw, struct ice_nvm_info *nvm)
 
 	nvm->eetrack = (eetrack_hi << 16) | eetrack_lo;
 
+	status = ice_get_nvm_srev(hw, &nvm->srev);
+	if (status)
+		ice_debug(hw, ICE_DBG_NVM, "Failed to read NVM security revision.\n");
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_get_orom_srev - Read the security revision from the OROM CSS header
+ * @hw: pointer to the HW struct
+ * @srev: storage for security revision
+ *
+ * Read the security revision out of the CSS header of the active OROM module
+ * bank.
+ */
+static enum ice_status ice_get_orom_srev(struct ice_hw *hw, u32 *srev)
+{
+	enum ice_status status;
+	u16 srev_l, srev_h;
+	u32 css_start;
+
+	if (hw->flash.banks.orom_size < ICE_NVM_OROM_TRAILER_LENGTH) {
+		ice_debug(hw, ICE_DBG_NVM, "Unexpected Option ROM Size of %u\n",
+			  hw->flash.banks.orom_size);
+		return ICE_ERR_CFG;
+	}
+
+	/* calculate how far into the Option ROM the CSS header starts. Note
+	 * that ice_read_active_orom_module takes a word offset so we need to
+	 * divide by 2 here.
+	 */
+	css_start = (hw->flash.banks.orom_size - ICE_NVM_OROM_TRAILER_LENGTH) / 2;
+
+	status = ice_read_active_orom_module(hw, css_start + ICE_NVM_CSS_SREV_L, &srev_l);
+	if (status)
+		return status;
+
+	status = ice_read_active_orom_module(hw, css_start + ICE_NVM_CSS_SREV_H, &srev_h);
+	if (status)
+		return status;
+
+	*srev = srev_h << 16 | srev_l;
+
 	return ICE_SUCCESS;
 }
 
@@ -448,6 +618,10 @@ ice_get_orom_ver_info(struct ice_hw *hw, struct ice_orom_info *orom)
 	orom->build = (u16)((combo_ver & ICE_OROM_VER_BUILD_MASK) >>
 			    ICE_OROM_VER_BUILD_SHIFT);
 
+	status = ice_get_orom_srev(hw, &orom->srev);
+	if (status)
+		ice_debug(hw, ICE_DBG_NVM, "Failed to read Option ROM security revision.\n");
+
 	return ICE_SUCCESS;
 }
 
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 1e1c672cbd..fb350faa60 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -509,11 +509,13 @@ struct ice_orom_info {
 	u8 major;			/* Major version of OROM */
 	u8 patch;			/* Patch version of OROM */
 	u16 build;			/* Build version of OROM */
+	u32 srev;			/* Security revision */
 };
 
 /* NVM version information */
 struct ice_nvm_info {
 	u32 eetrack;
+	u32 srev;
 	u8 major;
 	u8 minor;
 };
@@ -1117,6 +1119,13 @@ enum ice_sw_fwd_act_type {
 #define ICE_SR_LINK_DEFAULT_OVERRIDE_PTR	0x134
 #define ICE_SR_POR_REGISTERS_AUTOLOAD_PTR	0x118
 
+/* CSS Header words */
+#define ICE_NVM_CSS_SREV_L			0x14
+#define ICE_NVM_CSS_SREV_H			0x15
+
+/* Size in bytes of Option ROM trailer */
+#define ICE_NVM_OROM_TRAILER_LENGTH		660
+
 /* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */
 #define ICE_SR_VPD_SIZE_WORDS		512
 #define ICE_SR_PCIE_ALT_SIZE_WORDS	512
-- 
2.25.4



* [dpdk-dev] [PATCH v3 07/21] net/ice/base: add functions to allocate and free a RSS global LUT
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (5 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 06/21] net/ice/base: read security revision Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 08/21] net/ice/base: add more capability to admin queue Qi Zhang
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Brett Creeley

Currently there is no API to allocate and free an RSS global LUT.
Incoming changes to support VFs having >16 queues will require using
RSS global LUT resources. The functions included will allow a PF to
configure an RSS global LUT for VFs that request >16 queues.
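
A usage sketch (not in the patch) of the alloc/free pair added below;
the caller and the omitted VSI programming step are hypothetical:

/* Illustrative only: reserve a shared RSS global LUT, use its ID, then
 * release it. The two ice_*_rss_global_lut calls are the functions
 * added by this patch.
 */
static enum ice_status example_use_rss_global_lut(struct ice_hw *hw)
{
	enum ice_status status;
	u16 lut_id;

	status = ice_alloc_rss_global_lut(hw, true, &lut_id);
	if (status)
		return status;

	/* ... point the VF's VSI at global LUT 'lut_id' here ... */

	return ice_free_rss_global_lut(hw, lut_id);
}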

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 65 +++++++++++++++++++++++++++++++
 drivers/net/ice/base/ice_switch.h |  2 +
 2 files changed, 67 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 01d59edf42..cd78685735 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -1848,6 +1848,71 @@ ice_aq_get_sw_cfg(struct ice_hw *hw, struct ice_aqc_get_sw_cfg_resp_elem *buf,
 	return status;
 }
 
+/**
+ * ice_alloc_rss_global_lut - allocate a RSS global LUT
+ * @hw: pointer to the HW struct
+ * @shared_res: true to allocate as a shared resource and false to allocate as a dedicated resource
+ * @global_lut_id: output parameter for the RSS global LUT's ID
+ */
+enum ice_status ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 *global_lut_id)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	enum ice_status status;
+	u16 buf_len;
+
+	buf_len = ice_struct_size(sw_buf, elem, 1);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+
+	sw_buf->num_elems = CPU_TO_LE16(1);
+	sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_GLOBAL_RSS_HASH |
+				       (shared_res ? ICE_AQC_RES_TYPE_FLAG_SHARED :
+				       ICE_AQC_RES_TYPE_FLAG_DEDICATED));
+
+	status = ice_aq_alloc_free_res(hw, 1, sw_buf, buf_len, ice_aqc_opc_alloc_res, NULL);
+	if (status) {
+		ice_debug(hw, ICE_DBG_RES, "Failed to allocate %s RSS global LUT, status %d\n",
+			  shared_res ? "shared" : "dedicated", status);
+		goto ice_alloc_global_lut_exit;
+	}
+
+	*global_lut_id = LE16_TO_CPU(sw_buf->elem[0].e.sw_resp);
+
+ice_alloc_global_lut_exit:
+	ice_free(hw, sw_buf);
+	return status;
+}
+
+/**
+ * ice_free_rss_global_lut - free a RSS global LUT
+ * @hw: pointer to the HW struct
+ * @global_lut_id: ID of the RSS global LUT to free
+ */
+enum ice_status ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id)
+{
+	struct ice_aqc_alloc_free_res_elem *sw_buf;
+	u16 buf_len, num_elems = 1;
+	enum ice_status status;
+
+	buf_len = ice_struct_size(sw_buf, elem, num_elems);
+	sw_buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
+	if (!sw_buf)
+		return ICE_ERR_NO_MEMORY;
+
+	sw_buf->num_elems = CPU_TO_LE16(num_elems);
+	sw_buf->res_type = CPU_TO_LE16(ICE_AQC_RES_TYPE_GLOBAL_RSS_HASH);
+	sw_buf->elem[0].e.sw_resp = CPU_TO_LE16(global_lut_id);
+
+	status = ice_aq_alloc_free_res(hw, num_elems, sw_buf, buf_len, ice_aqc_opc_free_res, NULL);
+	if (status)
+		ice_debug(hw, ICE_DBG_RES, "Failed to free RSS global LUT %d, status %d\n",
+			  global_lut_id, status);
+
+	ice_free(hw, sw_buf);
+	return status;
+}
+
 /**
  * ice_alloc_sw - allocate resources specific to switch
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h
index a7e94344c1..be9b74fd4c 100644
--- a/drivers/net/ice/base/ice_switch.h
+++ b/drivers/net/ice/base/ice_switch.h
@@ -418,6 +418,8 @@ ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
 
 /* Switch/bridge related commands */
 enum ice_status ice_update_sw_rule_bridge_mode(struct ice_hw *hw);
+enum ice_status ice_alloc_rss_global_lut(struct ice_hw *hw, bool shared_res, u16 *global_lut_id);
+enum ice_status ice_free_rss_global_lut(struct ice_hw *hw, u16 global_lut_id);
 enum ice_status
 ice_alloc_sw(struct ice_hw *hw, bool ena_stats, bool shared_res, u16 *sw_id,
 	     u16 *counter_id);
-- 
2.25.4



* [dpdk-dev] [PATCH v3 08/21] net/ice/base: add more capability to admin queue
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (6 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 07/21] net/ice/base: add functions to allocate and free a RSS global LUT Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 09/21] net/ice/base: update to use package info from ice segment Qi Zhang
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Amir Shay

Add the three new capabilities below to the "Get Capabilities" AQ
commands 0x000A and 0x000B; a usage sketch follows the list.

ICE_AQC_CAPS_IWARP
ICE_AQC_CAPS_PCIE_RESET_AVOIDANCE
ICE_AQC_CAPS_NVM_MGMT
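
A hedged sketch of consuming one of these IDs; the helper itself is
hypothetical, while the element layout follows
struct ice_aqc_list_caps_elem:

/* Illustrative only: scan a "Get Capabilities" response buffer for the
 * new NVM management capability ID added by this patch.
 */
static bool example_has_nvm_mgmt(struct ice_aqc_list_caps_elem *elems,
				 u32 cap_count)
{
	u32 i;

	for (i = 0; i < cap_count; i++)
		if (LE16_TO_CPU(elems[i].cap) == ICE_AQC_CAPS_NVM_MGMT)
			return true;
	return false;
}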

Signed-off-by: Amir Shay <shay.amir@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 9db50de11c..fd34be2524 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -109,6 +109,9 @@ struct ice_aqc_list_caps_elem {
 #define ICE_AQC_CAPS_MSIX				0x0043
 #define ICE_AQC_CAPS_FD					0x0045
 #define ICE_AQC_CAPS_MAX_MTU				0x0047
+#define ICE_AQC_CAPS_IWARP				0x0051
+#define ICE_AQC_CAPS_PCIE_RESET_AVOIDANCE		0x0076
+#define ICE_AQC_CAPS_NVM_MGMT				0x0080
 
 	u8 major_ver;
 	u8 minor_ver;
-- 
2.25.4



* [dpdk-dev] [PATCH v3 09/21] net/ice/base: update to use package info from ice segment
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (7 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 08/21] net/ice/base: add more capability to admin queue Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 10/21] net/ice/base: use malloc instead of calloc Qi Zhang
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Dan Nowlin

There are two package versions in the package binary. Today, these two
version numbers are the same. However, in the future that may change.

Update code to use the package info from the ice segment metadata
section, which is the package information that is actually downloaded to
the firmware during the download package process.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h |  1 +
 drivers/net/ice/base/ice_flex_pipe.c  | 44 +++++++++++++++------------
 drivers/net/ice/base/ice_flex_type.h  |  8 +++++
 drivers/net/ice/base/ice_type.h       |  8 ++---
 4 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index fd34be2524..cadd6df384 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -2558,6 +2558,7 @@ struct ice_pkg_ver {
 };
 
 #define ICE_PKG_NAME_SIZE	32
+#define ICE_SEG_ID_SIZE	28
 #define ICE_SEG_NAME_SIZE	28
 
 struct ice_aqc_get_pkg_info {
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 8d918eff7d..4a27061b3d 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1075,34 +1075,40 @@ ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 static enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 {
-	struct ice_global_metadata_seg *meta_seg;
 	struct ice_generic_seg_hdr *seg_hdr;
 
 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);
 	if (!pkg_hdr)
 		return ICE_ERR_PARAM;
 
-	meta_seg = (struct ice_global_metadata_seg *)
-		   ice_find_seg_in_pkg(hw, SEGMENT_TYPE_METADATA, pkg_hdr);
-	if (meta_seg) {
-		hw->pkg_ver = meta_seg->pkg_ver;
-		ice_memcpy(hw->pkg_name, meta_seg->pkg_name,
-			   sizeof(hw->pkg_name), ICE_NONDMA_TO_NONDMA);
+	seg_hdr = (struct ice_generic_seg_hdr *)
+		ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg_hdr);
+	if (seg_hdr) {
+		struct ice_meta_sect *meta;
+		struct ice_pkg_enum state;
+
+		ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM);
+
+		/* Get package information from the Metadata Section */
+		meta = (struct ice_meta_sect *)
+			ice_pkg_enum_section((struct ice_seg *)seg_hdr, &state,
+					     ICE_SID_METADATA);
+		if (!meta) {
+			ice_debug(hw, ICE_DBG_INIT, "Did not find ice metadata section in package\n");
+			return ICE_ERR_CFG;
+		}
+
+		hw->pkg_ver = meta->ver;
+		ice_memcpy(hw->pkg_name, meta->name, sizeof(meta->name),
+			   ICE_NONDMA_TO_NONDMA);
 
 		ice_debug(hw, ICE_DBG_PKG, "Pkg: %d.%d.%d.%d, %s\n",
-			  meta_seg->pkg_ver.major, meta_seg->pkg_ver.minor,
-			  meta_seg->pkg_ver.update, meta_seg->pkg_ver.draft,
-			  meta_seg->pkg_name);
-	} else {
-		ice_debug(hw, ICE_DBG_INIT, "Did not find metadata segment in driver package\n");
-		return ICE_ERR_CFG;
-	}
+			  meta->ver.major, meta->ver.minor, meta->ver.update,
+			  meta->ver.draft, meta->name);
 
-	seg_hdr = ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg_hdr);
-	if (seg_hdr) {
-		hw->ice_pkg_ver = seg_hdr->seg_format_ver;
-		ice_memcpy(hw->ice_pkg_name, seg_hdr->seg_id,
-			   sizeof(hw->ice_pkg_name), ICE_NONDMA_TO_NONDMA);
+		hw->ice_seg_fmt_ver = seg_hdr->seg_format_ver;
+		ice_memcpy(hw->ice_seg_id, seg_hdr->seg_id,
+			   sizeof(hw->ice_seg_id), ICE_NONDMA_TO_NONDMA);
 
 		ice_debug(hw, ICE_DBG_PKG, "Ice Seg: %d.%d.%d.%d, %s\n",
 			  seg_hdr->seg_format_ver.major,
diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h
index 8f33efdd62..1dd57baccd 100644
--- a/drivers/net/ice/base/ice_flex_type.h
+++ b/drivers/net/ice/base/ice_flex_type.h
@@ -114,6 +114,7 @@ struct ice_buf_hdr {
 	(ent_sz))
 
 /* ice package section IDs */
+#define ICE_SID_METADATA		1
 #define ICE_SID_XLT0_SW			10
 #define ICE_SID_XLT_KEY_BUILDER_SW	11
 #define ICE_SID_XLT1_SW			12
@@ -343,6 +344,13 @@ struct ice_ptype_attributes {
 	enum ice_ptype_attrib_type attrib;
 };
 
+struct ice_meta_sect {
+	struct ice_pkg_ver ver;
+#define ICE_META_SECT_NAME_SIZE	28
+	char name[ICE_META_SECT_NAME_SIZE];
+	__le32 track_id;
+};
+
 /* Packet Type Groups (PTG) - Inner Most fields (IM) */
 #define ICE_PTG_IM_IPV4_TCP		16
 #define ICE_PTG_IM_IPV4_UDP		17
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index fb350faa60..3d231db61a 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -930,13 +930,13 @@ struct ice_hw {
 
 	enum ice_aq_err pkg_dwnld_status;
 
-	/* Driver's package ver - (from the Metadata seg) */
+	/* Driver's package ver - (from the Ice Metadata section) */
 	struct ice_pkg_ver pkg_ver;
 	u8 pkg_name[ICE_PKG_NAME_SIZE];
 
-	/* Driver's Ice package version (from the Ice seg) */
-	struct ice_pkg_ver ice_pkg_ver;
-	u8 ice_pkg_name[ICE_PKG_NAME_SIZE];
+	/* Driver's Ice segment format version and id (from the Ice seg) */
+	struct ice_pkg_ver ice_seg_fmt_ver;
+	u8 ice_seg_id[ICE_SEG_ID_SIZE];
 
 	/* Pointer to the ice segment */
 	struct ice_seg *seg;
-- 
2.25.4



* [dpdk-dev] [PATCH v3 10/21] net/ice/base: use malloc instead of calloc
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (8 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 09/21] net/ice/base: update to use package info from ice segment Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 11/21] net/ice/base: add support for class 5+ modules Qi Zhang
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Bruce Allan

Use *malloc() instead of *calloc() when allocating only a single object as
opposed to an array of objects.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_switch.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index cd78685735..dc55d7e3ce 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -3366,8 +3366,7 @@ ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
 	struct ice_vsi_list_map_info *v_map;
 	int i;
 
-	v_map = (struct ice_vsi_list_map_info *)ice_calloc(hw, 1,
-		sizeof(*v_map));
+	v_map = (struct ice_vsi_list_map_info *)ice_malloc(hw, sizeof(*v_map));
 	if (!v_map)
 		return NULL;
 
-- 
2.25.4



* [dpdk-dev] [PATCH v3 11/21] net/ice/base: add support for class 5+ modules
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (9 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 10/21] net/ice/base: use malloc instead of calloc Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 12/21] net/ice/base: return error directly Qi Zhang
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Amir Shay

Currently QSFP/SFP modules up to power class 4 are supported.
100G modules require higher power in many cases.
Also, low power mode requires support of power classes 7 and even 8.

This change extends "Get Link Status" AQ command (0x0607) to
support class 5+ modules.

The patch also adds a couple of other missing link status bits.
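
For illustration (not from the patch): with the widened mask, the QSFP
power class could be decoded from the power_desc byte roughly as below;
the helper is hypothetical:

/* Hypothetical decode of the external device power class from the
 * power_desc field of struct ice_aqc_get_link_status_data. The 0x3F
 * mask introduced by this patch makes room for classes above 4.
 * ICE_AQ_LINK_PWR_QSFP_CLASS_1 is 0, so class = field value + 1.
 */
static u8 example_qsfp_power_class(u8 power_desc)
{
	return (power_desc & ICE_AQ_PWR_CLASS_M) + 1;
}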

Signed-off-by: Amir Shay <shay.amir@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index cadd6df384..3b75cf577a 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -1442,6 +1442,10 @@ struct ice_aqc_get_link_status_data {
 #define ICE_AQ_LINK_TOPO_UNSUPP_MEDIA	BIT(7)
 	u8 link_cfg_err;
 #define ICE_AQ_LINK_CFG_ERR		BIT(0)
+#define ICE_AQ_LINK_ACT_PORT_OPT_INVAL	BIT(2)
+#define ICE_AQ_LINK_FEAT_ID_OR_CONFIG_ID_INVAL	BIT(3)
+#define ICE_AQ_LINK_TOPO_CRITICAL_SDP_ERR	BIT(4)
+#define ICE_AQ_LINK_MODULE_POWER_UNSUPPORTED	BIT(5)
 	u8 link_info;
 #define ICE_AQ_LINK_UP			BIT(0)	/* Link Status */
 #define ICE_AQ_LINK_FAULT		BIT(1)
@@ -1489,7 +1493,7 @@ struct ice_aqc_get_link_status_data {
 #define ICE_AQ_CFG_PACING_TYPE_FIXED	ICE_AQ_CFG_PACING_TYPE_M
 	/* External Device Power Ability */
 	u8 power_desc;
-#define ICE_AQ_PWR_CLASS_M		0x3
+#define ICE_AQ_PWR_CLASS_M		0x3F
 #define ICE_AQ_LINK_PWR_BASET_LOW_HIGH	0
 #define ICE_AQ_LINK_PWR_BASET_HIGH	1
 #define ICE_AQ_LINK_PWR_QSFP_CLASS_1	0
-- 
2.25.4



* [dpdk-dev] [PATCH v3 12/21] net/ice/base: return error directly
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (10 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 11/21] net/ice/base: add support for class 5+ modules Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 13/21] net/ice/base: implement shared rate limiter Qi Zhang
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Tony Nguyen

As there is nothing to unroll, return the error directly. Remove the label
as this is the only reference to that label.

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 16314ef18e..7867d4fda8 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -2662,10 +2662,9 @@ ice_sched_cfg_agg(struct ice_port_info *pi, u32 agg_id,
 		/* Create new entry for new aggregator ID */
 		agg_info = (struct ice_sched_agg_info *)
 			ice_malloc(hw, sizeof(*agg_info));
-		if (!agg_info) {
-			status = ICE_ERR_NO_MEMORY;
-			goto exit_reg_agg;
-		}
+		if (!agg_info)
+			return ICE_ERR_NO_MEMORY;
+
 		agg_info->agg_id = agg_id;
 		agg_info->agg_type = agg_type;
 		agg_info->tc_bitmap[0] = 0;
@@ -2698,7 +2697,7 @@ ice_sched_cfg_agg(struct ice_port_info *pi, u32 agg_id,
 		/* Save aggregator node's TC information */
 		ice_set_bit(tc, agg_info->tc_bitmap);
 	}
-exit_reg_agg:
+
 	return status;
 }
 
-- 
2.25.4



* [dpdk-dev] [PATCH v3 13/21] net/ice/base: implement shared rate limiter
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (11 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 12/21] net/ice/base: return error directly Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 14/21] net/ice/base: recognize 860 as iSCSI port in CEE mode Qi Zhang
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Tarun Singh

Implement shared bandwidth rate limit functionality to account for
dedicated bandwidth and minimum bandwidth. It requires that a
non-default profile be programmed for CIR, EIR/PIR, and SRL.
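
A caller sketch (not part of the patch) of the extended public API,
which now takes minimum, maximum, and shared bandwidth together; the
bandwidth values are placeholders:

/* Illustrative only: apply a 500 Mbps shared rate limit to all VSI
 * nodes of a handle, with 100 Mbps committed and 1 Gbps peak. The
 * ice_cfg_vsi_bw_shared_lmt signature is the one introduced below;
 * all values are in Kbps.
 */
static enum ice_status example_set_vsi_srl(struct ice_port_info *pi,
					   u16 vsi_handle)
{
	return ice_cfg_vsi_bw_shared_lmt(pi, vsi_handle,
					 100000,	/* min_bw */
					 1000000,	/* max_bw */
					 500000);	/* shared_bw */
}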

Signed-off-by: Tarun Singh <tarun.k.singh@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_sched.c | 484 +++++++++++++++++++++----------
 drivers/net/ice/base/ice_sched.h |  21 +-
 2 files changed, 343 insertions(+), 162 deletions(-)

diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 7867d4fda8..ac48bbe279 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -3132,12 +3132,6 @@ static void ice_set_clear_eir_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
 		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
 		bw_t_info->eir_bw.bw = 0;
 	} else {
-		/* EIR BW and Shared BW profiles are mutually exclusive and
-		 * hence only one of them may be set for any given element.
-		 * First clear earlier saved shared BW information.
-		 */
-		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
-		bw_t_info->shared_bw = 0;
 		/* save EIR BW information */
 		ice_set_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
 		bw_t_info->eir_bw.bw = bw;
@@ -3157,12 +3151,6 @@ static void ice_set_clear_shared_bw(struct ice_bw_type_info *bw_t_info, u32 bw)
 		ice_clear_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
 		bw_t_info->shared_bw = 0;
 	} else {
-		/* EIR BW and Shared BW profiles are mutually exclusive and
-		 * hence only one of them may be set for any given element.
-		 * First clear earlier saved EIR BW information.
-		 */
-		ice_clear_bit(ICE_BW_TYPE_EIR, bw_t_info->bw_t_bitmap);
-		bw_t_info->eir_bw.bw = 0;
 		/* save shared BW information */
 		ice_set_bit(ICE_BW_TYPE_SHARED, bw_t_info->bw_t_bitmap);
 		bw_t_info->shared_bw = bw;
@@ -3435,15 +3423,19 @@ ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
  * ice_cfg_vsi_bw_shared_lmt - configure VSI BW shared limit
  * @pi: port information structure
  * @vsi_handle: software VSI handle
- * @bw: bandwidth in Kbps
+ * @min_bw: minimum bandwidth in Kbps
+ * @max_bw: maximum bandwidth in Kbps
+ * @shared_bw: shared bandwidth in Kbps
  *
- * This function Configures shared rate limiter(SRL) of all VSI type nodes
- * across all traffic classes for VSI matching handle.
+ * Configure shared rate limiter(SRL) of all VSI type nodes across all traffic
+ * classes for VSI matching handle.
  */
 enum ice_status
-ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 bw)
+ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw,
+			  u32 max_bw, u32 shared_bw)
 {
-	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, bw);
+	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle, min_bw, max_bw,
+					       shared_bw);
 }
 
 /**
@@ -3458,6 +3450,8 @@ enum ice_status
 ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle)
 {
 	return ice_sched_set_vsi_bw_shared_lmt(pi, vsi_handle,
+					       ICE_SCHED_DFLT_BW,
+					       ICE_SCHED_DFLT_BW,
 					       ICE_SCHED_DFLT_BW);
 }
 
@@ -3465,15 +3459,19 @@ ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle)
  * ice_cfg_agg_bw_shared_lmt - configure aggregator BW shared limit
  * @pi: port information structure
  * @agg_id: aggregator ID
- * @bw: bandwidth in Kbps
+ * @min_bw: minimum bandwidth in Kbps
+ * @max_bw: maximum bandwidth in Kbps
+ * @shared_bw: shared bandwidth in Kbps
  *
  * This function configures the shared rate limiter(SRL) of all aggregator type
  * nodes across all traffic classes for aggregator matching agg_id.
  */
 enum ice_status
-ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
+ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw,
+			  u32 max_bw, u32 shared_bw)
 {
-	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, bw);
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, min_bw, max_bw,
+					       shared_bw);
 }
 
 /**
@@ -3487,7 +3485,47 @@ ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
 enum ice_status
 ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id)
 {
-	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, ICE_SCHED_DFLT_BW);
+	return ice_sched_set_agg_bw_shared_lmt(pi, agg_id, ICE_SCHED_DFLT_BW,
+					       ICE_SCHED_DFLT_BW,
+					       ICE_SCHED_DFLT_BW);
+}
+
+/**
+ * ice_cfg_agg_bw_shared_lmt_per_tc - configure aggregator BW shared limit per tc
+ * @pi: port information structure
+ * @agg_id: aggregator ID
+ * @tc: traffic class
+ * @min_bw: minimum bandwidth in Kbps
+ * @max_bw: maximum bandwidth in Kbps
+ * @shared_bw: shared bandwidth in Kbps
+ *
+ * This function configures the shared rate limiter(SRL) of all aggregator type
+ * nodes across the requested traffic class for aggregator matching agg_id.
+ */
+enum ice_status
+ice_cfg_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+				 u32 min_bw, u32 max_bw, u32 shared_bw)
+{
+	return ice_sched_set_agg_bw_shared_lmt_per_tc(pi, agg_id, tc, min_bw,
+						      max_bw, shared_bw);
+}
+
+/**
+ * ice_cfg_agg_bw_no_shared_lmt_per_tc - remove aggregator BW shared limit per tc
+ * @pi: port information structure
+ * @agg_id: aggregator ID
+ * @tc: traffic class
+ *
+ * This function removes the shared rate limiter(SRL) of all aggregator type
+ * nodes across the requested traffic class for aggregator matching agg_id.
+ */
+enum ice_status
+ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc)
+{
+	return ice_sched_set_agg_bw_shared_lmt_per_tc(pi, agg_id, tc,
+						      ICE_SCHED_DFLT_BW,
+						      ICE_SCHED_DFLT_BW,
+						      ICE_SCHED_DFLT_BW);
 }
 
 /**
@@ -3946,37 +3984,10 @@ ice_sched_cfg_node_bw_lmt(struct ice_hw *hw, struct ice_sched_node *node,
 		data->cir_bw.bw_profile_idx = CPU_TO_LE16(rl_prof_id);
 		break;
 	case ICE_MAX_BW:
-		/* EIR BW and Shared BW profiles are mutually exclusive and
-		 * hence only one of them may be set for any given element
-		 */
-		if (data->valid_sections & ICE_AQC_ELEM_VALID_SHARED)
-			return ICE_ERR_CFG;
 		data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
 		data->eir_bw.bw_profile_idx = CPU_TO_LE16(rl_prof_id);
 		break;
 	case ICE_SHARED_BW:
-		/* Check for removing shared BW */
-		if (rl_prof_id == ICE_SCHED_NO_SHARED_RL_PROF_ID) {
-			/* remove shared profile */
-			data->valid_sections &= ~ICE_AQC_ELEM_VALID_SHARED;
-			data->srl_id = 0; /* clear SRL field */
-
-			/* enable back EIR to default profile */
-			data->valid_sections |= ICE_AQC_ELEM_VALID_EIR;
-			data->eir_bw.bw_profile_idx =
-				CPU_TO_LE16(ICE_SCHED_DFLT_RL_PROF_ID);
-			break;
-		}
-		/* EIR BW and Shared BW profiles are mutually exclusive and
-		 * hence only one of them may be set for any given element
-		 */
-		if ((data->valid_sections & ICE_AQC_ELEM_VALID_EIR) &&
-		    (LE16_TO_CPU(data->eir_bw.bw_profile_idx) !=
-			    ICE_SCHED_DFLT_RL_PROF_ID))
-			return ICE_ERR_CFG;
-		/* EIR BW is set to default, disable it */
-		data->valid_sections &= ~ICE_AQC_ELEM_VALID_EIR;
-		/* Okay to enable shared BW now */
 		data->valid_sections |= ICE_AQC_ELEM_VALID_SHARED;
 		data->srl_id = CPU_TO_LE16(rl_prof_id);
 		break;
@@ -4187,51 +4198,6 @@ ice_sched_set_node_bw_dflt(struct ice_port_info *pi,
 	return ice_sched_rm_rl_profile(hw, layer_num, profile_type, old_id);
 }
 
-/**
- * ice_sched_set_eir_srl_excl - set EIR/SRL exclusiveness
- * @pi: port information structure
- * @node: pointer to node structure
- * @layer_num: layer number where rate limit profiles are saved
- * @rl_type: rate limit type min, max, or shared
- * @bw: bandwidth value
- *
- * This function prepares node element's bandwidth to SRL or EIR exclusively.
- * EIR BW and Shared BW profiles are mutually exclusive and hence only one of
- * them may be set for any given element. This function needs to be called
- * with the scheduler lock held.
- */
-static enum ice_status
-ice_sched_set_eir_srl_excl(struct ice_port_info *pi,
-			   struct ice_sched_node *node,
-			   u8 layer_num, enum ice_rl_type rl_type, u32 bw)
-{
-	if (rl_type == ICE_SHARED_BW) {
-		/* SRL node passed in this case, it may be different node */
-		if (bw == ICE_SCHED_DFLT_BW)
-			/* SRL being removed, ice_sched_cfg_node_bw_lmt()
-			 * enables EIR to default. EIR is not set in this
-			 * case, so no additional action is required.
-			 */
-			return ICE_SUCCESS;
-
-		/* SRL being configured, set EIR to default here.
-		 * ice_sched_cfg_node_bw_lmt() disables EIR when it
-		 * configures SRL
-		 */
-		return ice_sched_set_node_bw_dflt(pi, node, ICE_MAX_BW,
-						  layer_num);
-	} else if (rl_type == ICE_MAX_BW &&
-		   node->info.data.valid_sections & ICE_AQC_ELEM_VALID_SHARED) {
-		/* Remove Shared profile. Set default shared BW call
-		 * removes shared profile for a node.
-		 */
-		return ice_sched_set_node_bw_dflt(pi, node,
-						  ICE_SHARED_BW,
-						  layer_num);
-	}
-	return ICE_SUCCESS;
-}
-
 /**
  * ice_sched_set_node_bw - set node's bandwidth
  * @pi: port information structure
@@ -4289,14 +4255,14 @@ ice_sched_set_node_bw(struct ice_port_info *pi, struct ice_sched_node *node,
  *
  * It updates node's BW limit parameters like BW RL profile ID of type CIR,
  * EIR, or SRL. The caller needs to hold scheduler lock.
+ *
+ * NOTE: Caller provides the correct SRL node in case of shared profile
+ * settings.
  */
 static enum ice_status
 ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
 			  enum ice_rl_type rl_type, u32 bw)
 {
-	struct ice_sched_node *cfg_node = node;
-	enum ice_status status;
-
 	struct ice_hw *hw;
 	u8 layer_num;
 
@@ -4305,28 +4271,15 @@ ice_sched_set_node_bw_lmt(struct ice_port_info *pi, struct ice_sched_node *node,
 	hw = pi->hw;
 	/* Remove unused RL profile IDs from HW and SW DB */
 	ice_sched_rm_unused_rl_prof(hw);
+
 	layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
-						node->tx_sched_layer);
+		node->tx_sched_layer);
 	if (layer_num >= hw->num_tx_sched_layers)
 		return ICE_ERR_PARAM;
 
-	if (rl_type == ICE_SHARED_BW) {
-		/* SRL node may be different */
-		cfg_node = ice_sched_get_srl_node(node, layer_num);
-		if (!cfg_node)
-			return ICE_ERR_CFG;
-	}
-	/* EIR BW and Shared BW profiles are mutually exclusive and
-	 * hence only one of them may be set for any given element
-	 */
-	status = ice_sched_set_eir_srl_excl(pi, cfg_node, layer_num, rl_type,
-					    bw);
-	if (status)
-		return status;
 	if (bw == ICE_SCHED_DFLT_BW)
-		return ice_sched_set_node_bw_dflt(pi, cfg_node, rl_type,
-						  layer_num);
-	return ice_sched_set_node_bw(pi, cfg_node, rl_type, bw, layer_num);
+		return ice_sched_set_node_bw_dflt(pi, node, rl_type, layer_num);
+	return ice_sched_set_node_bw(pi, node, rl_type, bw, layer_num);
 }
 
 /**
@@ -4886,19 +4839,108 @@ ice_sched_validate_vsi_srl_node(struct ice_port_info *pi, u16 vsi_handle)
 	return ICE_SUCCESS;
 }
 
+/**
+ * ice_sched_set_save_vsi_srl_node_bw - set VSI shared limit values
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: traffic class
+ * @srl_node: sched node to configure
+ * @rl_type: rate limit type minimum, maximum, or shared
+ * @bw: minimum, maximum, or shared bandwidth in Kbps
+ *
+ * Configure shared rate limiter(SRL) of VSI type nodes across given traffic
+ * class, and saves those values for later use for replay purposes. The
+ * caller holds the scheduler lock.
+ */
+static enum ice_status
+ice_sched_set_save_vsi_srl_node_bw(struct ice_port_info *pi, u16 vsi_handle,
+				   u8 tc, struct ice_sched_node *srl_node,
+				   enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status;
+
+	if (bw == ICE_SCHED_DFLT_BW) {
+		status = ice_sched_set_node_bw_dflt_lmt(pi, srl_node, rl_type);
+	} else {
+		status = ice_sched_set_node_bw_lmt(pi, srl_node, rl_type, bw);
+		if (status)
+			return status;
+		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
+	}
+	return status;
+}
+
+/**
+ * ice_sched_set_vsi_node_srl_per_tc - set VSI node BW shared limit for tc
+ * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: traffic class
+ * @min_bw: minimum bandwidth in Kbps
+ * @max_bw: maximum bandwidth in Kbps
+ * @shared_bw: shared bandwidth in Kbps
+ *
+ * Configure shared rate limiter(SRL) of VSI type nodes across the requested
+ * traffic class for VSI matching handle. When BW value of ICE_SCHED_DFLT_BW
+ * is passed, it removes the corresponding bw from the node. The caller
+ * holds scheduler lock.
+ */
+static enum ice_status
+ice_sched_set_vsi_node_srl_per_tc(struct ice_port_info *pi, u16 vsi_handle,
+				  u8 tc, u32 min_bw, u32 max_bw, u32 shared_bw)
+{
+	struct ice_sched_node *tc_node, *vsi_node, *cfg_node;
+	enum ice_status status;
+	u8 layer_num;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	vsi_node = ice_sched_get_vsi_node(pi, tc_node, vsi_handle);
+	if (!vsi_node)
+		return ICE_ERR_CFG;
+
+	layer_num = ice_sched_get_rl_prof_layer(pi, ICE_SHARED_BW,
+						vsi_node->tx_sched_layer);
+	if (layer_num >= pi->hw->num_tx_sched_layers)
+		return ICE_ERR_PARAM;
+
+	/* SRL node may be different */
+	cfg_node = ice_sched_get_srl_node(vsi_node, layer_num);
+	if (!cfg_node)
+		return ICE_ERR_CFG;
+
+	status = ice_sched_set_save_vsi_srl_node_bw(pi, vsi_handle, tc,
+						    cfg_node, ICE_MIN_BW,
+						    min_bw);
+	if (status)
+		return status;
+
+	status = ice_sched_set_save_vsi_srl_node_bw(pi, vsi_handle, tc,
+						    cfg_node, ICE_MAX_BW,
+						    max_bw);
+	if (status)
+		return status;
+
+	return ice_sched_set_save_vsi_srl_node_bw(pi, vsi_handle, tc, cfg_node,
+						  ICE_SHARED_BW, shared_bw);
+}
+
 /**
  * ice_sched_set_vsi_bw_shared_lmt - set VSI BW shared limit
  * @pi: port information structure
  * @vsi_handle: software VSI handle
- * @bw: bandwidth in Kbps
+ * @min_bw: minimum bandwidth in Kbps
+ * @max_bw: maximum bandwidth in Kbps
+ * @shared_bw: shared bandwidth in Kbps
  *
- * This function Configures shared rate limiter(SRL) of all VSI type nodes
- * across all traffic classes for VSI matching handle. When BW value of
- * ICE_SCHED_DFLT_BW is passed, it removes the SRL from the node.
+ * Configure the shared rate limiter (SRL) of all VSI type nodes across all
+ * traffic classes for the VSI matching the handle. When a BW value of
+ * ICE_SCHED_DFLT_BW is passed, it removes those value(s) from the node.
  */
 enum ice_status
 ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
-				u32 bw)
+				u32 min_bw, u32 max_bw, u32 shared_bw)
 {
 	enum ice_status status = ICE_SUCCESS;
 	u8 tc;
@@ -4916,7 +4958,6 @@ ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
 	/* Return success if no nodes are present across TC */
 	ice_for_each_traffic_class(tc) {
 		struct ice_sched_node *tc_node, *vsi_node;
-		enum ice_rl_type rl_type = ICE_SHARED_BW;
 
 		tc_node = ice_sched_get_tc_node(pi, tc);
 		if (!tc_node)
@@ -4926,16 +4967,9 @@ ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
 		if (!vsi_node)
 			continue;
 
-		if (bw == ICE_SCHED_DFLT_BW)
-			/* It removes existing SRL from the node */
-			status = ice_sched_set_node_bw_dflt_lmt(pi, vsi_node,
-								rl_type);
-		else
-			status = ice_sched_set_node_bw_lmt(pi, vsi_node,
-							   rl_type, bw);
-		if (status)
-			break;
-		status = ice_sched_save_vsi_bw(pi, vsi_handle, tc, rl_type, bw);
+		status = ice_sched_set_vsi_node_srl_per_tc(pi, vsi_handle, tc,
+							   min_bw, max_bw,
+							   shared_bw);
 		if (status)
 			break;
 	}
@@ -5003,32 +5037,23 @@ ice_sched_validate_agg_srl_node(struct ice_port_info *pi, u32 agg_id)
 }
 
 /**
- * ice_sched_set_agg_bw_shared_lmt - set aggregator BW shared limit
+ * ice_sched_validate_agg_id - validate aggregator ID
  * @pi: port information structure
  * @agg_id: aggregator ID
- * @bw: bandwidth in Kbps
  *
- * This function configures the shared rate limiter(SRL) of all aggregator type
- * nodes across all traffic classes for aggregator matching agg_id. When
- * BW value of ICE_SCHED_DFLT_BW is passed, it removes SRL from the
- * node(s).
+ * This function validates the aggregator ID. Caller holds the scheduler lock.
  */
-enum ice_status
-ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
+static enum ice_status
+ice_sched_validate_agg_id(struct ice_port_info *pi, u32 agg_id)
 {
 	struct ice_sched_agg_info *agg_info;
 	struct ice_sched_agg_info *tmp;
 	bool agg_id_present = false;
-	enum ice_status status = ICE_SUCCESS;
-	u8 tc;
-
-	if (!pi)
-		return ICE_ERR_PARAM;
+	enum ice_status status;
 
-	ice_acquire_lock(&pi->sched_lock);
 	status = ice_sched_validate_agg_srl_node(pi, agg_id);
 	if (status)
-		goto exit_agg_bw_shared_lmt;
+		return status;
 
 	LIST_FOR_EACH_ENTRY_SAFE(agg_info, tmp, &pi->hw->agg_list,
 				 ice_sched_agg_info, list_entry)
@@ -5037,14 +5062,129 @@ ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
 			break;
 		}
 
-	if (!agg_id_present) {
-		status = ICE_ERR_PARAM;
-		goto exit_agg_bw_shared_lmt;
+	if (!agg_id_present)
+		return ICE_ERR_PARAM;
+
+	return ICE_SUCCESS;
+}
+
+/**
+ * ice_sched_set_save_agg_srl_node_bw - set aggregator shared limit values
+ * @pi: port information structure
+ * @agg_id: aggregator ID
+ * @tc: traffic class
+ * @srl_node: sched node to configure
+ * @rl_type: rate limit type minimum, maximum, or shared
+ * @bw: minimum, maximum, or shared bandwidth in Kbps
+ *
+ * Configure the shared rate limiter (SRL) of aggregator type nodes across
+ * the requested traffic class, and save those values for later use when
+ * replaying. The caller holds the scheduler lock.
+ */
+static enum ice_status
+ice_sched_set_save_agg_srl_node_bw(struct ice_port_info *pi, u32 agg_id, u8 tc,
+				   struct ice_sched_node *srl_node,
+				   enum ice_rl_type rl_type, u32 bw)
+{
+	enum ice_status status;
+
+	if (bw == ICE_SCHED_DFLT_BW) {
+		status = ice_sched_set_node_bw_dflt_lmt(pi, srl_node, rl_type);
+	} else {
+		status = ice_sched_set_node_bw_lmt(pi, srl_node, rl_type, bw);
+		if (status)
+			return status;
+		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
 	}
+	return status;
+}
+
+/**
+ * ice_sched_set_agg_node_srl_per_tc - set aggregator SRL per tc
+ * @pi: port information structure
+ * @agg_id: aggregator ID
+ * @tc: traffic class
+ * @min_bw: minimum bandwidth in Kbps
+ * @max_bw: maximum bandwidth in Kbps
+ * @shared_bw: shared bandwidth in Kbps
+ *
+ * This function configures the shared rate limiter (SRL) of an aggregator
+ * type node for a given traffic class for the aggregator matching agg_id.
+ * When a BW value of ICE_SCHED_DFLT_BW is passed, it removes the SRL from
+ * the node. The caller holds the scheduler lock.
+ */
+static enum ice_status
+ice_sched_set_agg_node_srl_per_tc(struct ice_port_info *pi, u32 agg_id,
+				  u8 tc, u32 min_bw, u32 max_bw, u32 shared_bw)
+{
+	struct ice_sched_node *tc_node, *agg_node, *cfg_node;
+	enum ice_rl_type rl_type = ICE_SHARED_BW;
+	enum ice_status status = ICE_ERR_CFG;
+	u8 layer_num;
+
+	tc_node = ice_sched_get_tc_node(pi, tc);
+	if (!tc_node)
+		return ICE_ERR_CFG;
+
+	agg_node = ice_sched_get_agg_node(pi, tc_node, agg_id);
+	if (!agg_node)
+		return ICE_ERR_CFG;
+
+	layer_num = ice_sched_get_rl_prof_layer(pi, rl_type,
+						agg_node->tx_sched_layer);
+	if (layer_num >= pi->hw->num_tx_sched_layers)
+		return ICE_ERR_PARAM;
+
+	/* SRL node may be different */
+	cfg_node = ice_sched_get_srl_node(agg_node, layer_num);
+	if (!cfg_node)
+		return ICE_ERR_CFG;
+
+	status = ice_sched_set_save_agg_srl_node_bw(pi, agg_id, tc, cfg_node,
+						    ICE_MIN_BW, min_bw);
+	if (status)
+		return status;
+
+	status = ice_sched_set_save_agg_srl_node_bw(pi, agg_id, tc, cfg_node,
+						    ICE_MAX_BW, max_bw);
+	if (status)
+		return status;
+
+	status = ice_sched_set_save_agg_srl_node_bw(pi, agg_id, tc, cfg_node,
+						    ICE_SHARED_BW, shared_bw);
+	return status;
+}
+
+/**
+ * ice_sched_set_agg_bw_shared_lmt - set aggregator BW shared limit
+ * @pi: port information structure
+ * @agg_id: aggregator ID
+ * @min_bw: minimum bandwidth in Kbps
+ * @max_bw: maximum bandwidth in Kbps
+ * @shared_bw: shared bandwidth in Kbps
+ *
+ * This function configures the shared rate limiter (SRL) of all aggregator
+ * type nodes across all traffic classes for the aggregator matching agg_id.
+ * When a BW value of ICE_SCHED_DFLT_BW is passed, it removes the SRL from
+ * the node(s).
+ */
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id,
+				u32 min_bw, u32 max_bw, u32 shared_bw)
+{
+	enum ice_status status;
+	u8 tc;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_validate_agg_id(pi, agg_id);
+	if (status)
+		goto exit_agg_bw_shared_lmt;
 
 	/* Return success if no nodes are present across TC */
 	ice_for_each_traffic_class(tc) {
-		enum ice_rl_type rl_type = ICE_SHARED_BW;
 		struct ice_sched_node *tc_node, *agg_node;
 
 		tc_node = ice_sched_get_tc_node(pi, tc);
@@ -5055,16 +5195,9 @@ ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
 		if (!agg_node)
 			continue;
 
-		if (bw == ICE_SCHED_DFLT_BW)
-			/* It removes existing SRL from the node */
-			status = ice_sched_set_node_bw_dflt_lmt(pi, agg_node,
-								rl_type);
-		else
-			status = ice_sched_set_node_bw_lmt(pi, agg_node,
-							   rl_type, bw);
-		if (status)
-			break;
-		status = ice_sched_save_agg_bw(pi, agg_id, tc, rl_type, bw);
+		status = ice_sched_set_agg_node_srl_per_tc(pi, agg_id, tc,
+							   min_bw, max_bw,
+							   shared_bw);
 		if (status)
 			break;
 	}
@@ -5074,6 +5207,41 @@ ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw)
 	return status;
 }
 
+/**
+ * ice_sched_set_agg_bw_shared_lmt_per_tc - set aggregator BW shared lmt per tc
+ * @pi: port information structure
+ * @agg_id: aggregator ID
+ * @tc: traffic class
+ * @min_bw: minimum bandwidth in Kbps
+ * @max_bw: maximum bandwidth in Kbps
+ * @shared_bw: shared bandwidth in Kbps
+ *
+ * This function configures the shared rate limiter (SRL) of an aggregator
+ * type node for a given traffic class for the aggregator matching agg_id.
+ * When a BW of ICE_SCHED_DFLT_BW is passed, it removes the SRL from the node.
+ */
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
+				       u8 tc, u32 min_bw, u32 max_bw,
+				       u32 shared_bw)
+{
+	enum ice_status status;
+
+	if (!pi)
+		return ICE_ERR_PARAM;
+	ice_acquire_lock(&pi->sched_lock);
+	status = ice_sched_validate_agg_id(pi, agg_id);
+	if (status)
+		goto exit_agg_bw_shared_lmt_per_tc;
+
+	status = ice_sched_set_agg_node_srl_per_tc(pi, agg_id, tc, min_bw,
+						   max_bw, shared_bw);
+
+exit_agg_bw_shared_lmt_per_tc:
+	ice_release_lock(&pi->sched_lock);
+	return status;
+}
+
 /**
  * ice_sched_cfg_sibl_node_prio - configure node sibling priority
  * @pi: port information structure
diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index 501a4c499e..8b275637a4 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -153,14 +153,22 @@ enum ice_status
 ice_cfg_agg_bw_dflt_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
 			       enum ice_rl_type rl_type);
 enum ice_status
-ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 bw);
+ice_cfg_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle, u32 min_bw,
+			  u32 max_bw, u32 shared_bw);
 enum ice_status
 ice_cfg_vsi_bw_no_shared_lmt(struct ice_port_info *pi, u16 vsi_handle);
 enum ice_status
-ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw);
+ice_cfg_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw,
+			  u32 max_bw, u32 shared_bw);
 enum ice_status
 ice_cfg_agg_bw_no_shared_lmt(struct ice_port_info *pi, u32 agg_id);
 enum ice_status
+ice_cfg_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id, u8 tc,
+				 u32 min_bw, u32 max_bw, u32 shared_bw);
+enum ice_status
+ice_cfg_agg_bw_no_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
+				    u8 tc);
+enum ice_status
 ice_cfg_vsi_q_priority(struct ice_port_info *pi, u16 num_qs, u32 *q_ids,
 		       u8 *q_prio);
 enum ice_status
@@ -184,9 +192,14 @@ ice_sched_set_node_bw_lmt_per_tc(struct ice_port_info *pi, u32 id,
 				 enum ice_rl_type rl_type, u32 bw);
 enum ice_status
 ice_sched_set_vsi_bw_shared_lmt(struct ice_port_info *pi, u16 vsi_handle,
-				u32 bw);
+				u32 min_bw, u32 max_bw, u32 shared_bw);
+enum ice_status
+ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 min_bw,
+				u32 max_bw, u32 shared_bw);
 enum ice_status
-ice_sched_set_agg_bw_shared_lmt(struct ice_port_info *pi, u32 agg_id, u32 bw);
+ice_sched_set_agg_bw_shared_lmt_per_tc(struct ice_port_info *pi, u32 agg_id,
+				       u8 tc, u32 min_bw, u32 max_bw,
+				       u32 shared_bw);
 enum ice_status
 ice_sched_cfg_sibl_node_prio(struct ice_port_info *pi,
 			     struct ice_sched_node *node, u8 priority);
-- 
2.25.4


^ permalink raw reply	[flat|nested] 24+ messages in thread
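
A minimal usage sketch of the per-TC shared-limit API introduced by the
patch above; the aggregator ID, traffic class and bandwidth values are
hypothetical and only illustrate the call shape:

	enum ice_status status;

	/* Cap TC 0 of aggregator 1 at a 500 Mbps shared rate while leaving
	 * the minimum and maximum rate limiters at their defaults.
	 */
	status = ice_sched_set_agg_bw_shared_lmt_per_tc(pi, 1, 0,
							ICE_SCHED_DFLT_BW,
							ICE_SCHED_DFLT_BW,
							500000 /* Kbps */);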

* [dpdk-dev] [PATCH v3 14/21] net/ice/base: recognize 860 as iSCSI port in CEE mode
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (12 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 13/21] net/ice/base: implement shared rate limiter Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 15/21] net/ice/base: fix parameter name in comment Qi Zhang
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Chinh T Cao

iSCSI can use both TCP ports 860 and 3260. However, in our current
implementation, the ice_aqc_opc_get_cee_dcb_cfg (0x0A07) AQ command
doesn't provide a way to communicate the protocol port number to the
AQ's caller. Thus, we assume that 3260 is the iSCSI port number at the
AQ's caller layer.

In this patch, we rely on the dcbx-willing mode, desired QoS and
remote QoS configuration to determine which port number iSCSI
will use.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
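
For clarity, a condensed sketch of the selection this patch adds in
ice_cee_to_dcb_cfg(); see the diff below for the exact code:

	/* In non-willing mode compare against the locally desired DCBX
	 * config, otherwise against the remote peer's config; the iSCSI
	 * protocol ID found there (860 or 3260) is the one used.
	 */
	cmp_dcbcfg = (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING) ?
		     &pi->qos_cfg.desired_dcbx_cfg :
		     &pi->qos_cfg.remote_dcbx_cfg;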
---
 drivers/net/ice/base/ice_dcb.c  | 38 +++++++++++++++++++++++++--------
 drivers/net/ice/base/ice_type.h | 35 +++++++++++++++++++-----------
 2 files changed, 51 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c
index f5f375a7a5..351038528b 100644
--- a/drivers/net/ice/base/ice_dcb.c
+++ b/drivers/net/ice/base/ice_dcb.c
@@ -738,22 +738,27 @@ ice_aq_get_cee_dcb_cfg(struct ice_hw *hw,
 /**
  * ice_cee_to_dcb_cfg
  * @cee_cfg: pointer to CEE configuration struct
- * @dcbcfg: DCB configuration struct
+ * @pi: port information structure
  *
  * Convert CEE configuration from firmware to DCB configuration
  */
 static void
 ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
-		   struct ice_dcbx_cfg *dcbcfg)
+		   struct ice_port_info *pi)
 {
 	u32 status, tlv_status = LE32_TO_CPU(cee_cfg->tlv_status);
 	u32 ice_aqc_cee_status_mask, ice_aqc_cee_status_shift;
+	u8 i, j, err, sync, oper, app_index, ice_app_sel_type;
 	u16 app_prio = LE16_TO_CPU(cee_cfg->oper_app_prio);
-	u8 i, err, sync, oper, app_index, ice_app_sel_type;
 	u16 ice_aqc_cee_app_mask, ice_aqc_cee_app_shift;
+	struct ice_dcbx_cfg *cmp_dcbcfg, *dcbcfg;
 	u16 ice_app_prot_id_type;
 
-	/* CEE PG data to ETS config */
+	dcbcfg = &pi->qos_cfg.local_dcbx_cfg;
+	dcbcfg->dcbx_mode = ICE_DCBX_MODE_CEE;
+	dcbcfg->tlv_status = tlv_status;
+
+	/* CEE PG data */
 	dcbcfg->etscfg.maxtcs = cee_cfg->oper_num_tc;
 
 	/* Note that the FW creates the oper_prio_tc nibbles reversed
@@ -780,10 +785,16 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
 		}
 	}
 
-	/* CEE PFC data to ETS config */
+	/* CEE PFC data */
 	dcbcfg->pfc.pfcena = cee_cfg->oper_pfc_en;
 	dcbcfg->pfc.pfccap = ICE_MAX_TRAFFIC_CLASS;
 
+	/* CEE APP TLV data */
+	if (dcbcfg->app_mode == ICE_DCBX_APPS_NON_WILLING)
+		cmp_dcbcfg = &pi->qos_cfg.desired_dcbx_cfg;
+	else
+		cmp_dcbcfg = &pi->qos_cfg.remote_dcbx_cfg;
+
 	app_index = 0;
 	for (i = 0; i < 3; i++) {
 		if (i == 0) {
@@ -802,6 +813,18 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
 			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_ISCSI_S;
 			ice_app_sel_type = ICE_APP_SEL_TCPIP;
 			ice_app_prot_id_type = ICE_APP_PROT_ID_ISCSI;
+
+			for (j = 0; j < cmp_dcbcfg->numapps; j++) {
+				u16 prot_id = cmp_dcbcfg->app[j].prot_id;
+				u8 sel = cmp_dcbcfg->app[j].selector;
+
+				if  (sel == ICE_APP_SEL_TCPIP &&
+				     (prot_id == ICE_APP_PROT_ID_ISCSI ||
+				      prot_id == ICE_APP_PROT_ID_ISCSI_860)) {
+					ice_app_prot_id_type = prot_id;
+					break;
+				}
+			}
 		} else {
 			/* FIP APP */
 			ice_aqc_cee_status_mask = ICE_AQC_CEE_FIP_STATUS_M;
@@ -892,11 +915,8 @@ enum ice_status ice_get_dcb_cfg(struct ice_port_info *pi)
 	ret = ice_aq_get_cee_dcb_cfg(pi->hw, &cee_cfg, NULL);
 	if (ret == ICE_SUCCESS) {
 		/* CEE mode */
-		dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
-		dcbx_cfg->dcbx_mode = ICE_DCBX_MODE_CEE;
-		dcbx_cfg->tlv_status = LE32_TO_CPU(cee_cfg.tlv_status);
-		ice_cee_to_dcb_cfg(&cee_cfg, dcbx_cfg);
 		ret = ice_get_ieee_or_cee_dcb_cfg(pi, ICE_DCBX_MODE_CEE);
+		ice_cee_to_dcb_cfg(&cee_cfg, pi);
 	} else if (pi->hw->adminq.sq_last_status == ICE_AQ_RC_ENOENT) {
 		/* CEE mode not enabled try querying IEEE data */
 		dcbx_cfg = &pi->qos_cfg.local_dcbx_cfg;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 3d231db61a..7545235635 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -755,19 +755,28 @@ struct ice_dcb_app_priority_table {
 	u8 selector;
 };
 
-#define ICE_MAX_USER_PRIORITY	8
-#define ICE_DCBX_MAX_APPS	32
-#define ICE_LLDPDU_SIZE		1500
-#define ICE_TLV_STATUS_OPER	0x1
-#define ICE_TLV_STATUS_SYNC	0x2
-#define ICE_TLV_STATUS_ERR	0x4
-#define ICE_APP_PROT_ID_FCOE	0x8906
-#define ICE_APP_PROT_ID_ISCSI	0x0cbc
-#define ICE_APP_PROT_ID_FIP	0x8914
-#define ICE_APP_SEL_ETHTYPE	0x1
-#define ICE_APP_SEL_TCPIP	0x2
-#define ICE_CEE_APP_SEL_ETHTYPE	0x0
-#define ICE_CEE_APP_SEL_TCPIP	0x1
+#define ICE_MAX_USER_PRIORITY		8
+#define ICE_DCBX_MAX_APPS		32
+#define ICE_LLDPDU_SIZE			1500
+#define ICE_TLV_STATUS_OPER		0x1
+#define ICE_TLV_STATUS_SYNC		0x2
+#define ICE_TLV_STATUS_ERR		0x4
+#ifndef ICE_APP_PROT_ID_FCOE
+#define ICE_APP_PROT_ID_FCOE		0x8906
+#endif /* ICE_APP_PROT_ID_FCOE */
+#ifndef ICE_APP_PROT_ID_ISCSI
+#define ICE_APP_PROT_ID_ISCSI		0x0cbc
+#endif /* ICE_APP_PROT_ID_ISCSI */
+#ifndef ICE_APP_PROT_ID_ISCSI_860
+#define ICE_APP_PROT_ID_ISCSI_860	0x035c
+#endif /* ICE_APP_PROT_ID_ISCSI_860 */
+#ifndef ICE_APP_PROT_ID_FIP
+#define ICE_APP_PROT_ID_FIP		0x8914
+#endif /* ICE_APP_PROT_ID_FIP */
+#define ICE_APP_SEL_ETHTYPE		0x1
+#define ICE_APP_SEL_TCPIP		0x2
+#define ICE_CEE_APP_SEL_ETHTYPE		0x0
+#define ICE_CEE_APP_SEL_TCPIP		0x1
 
 struct ice_dcbx_cfg {
 	u32 numapps;
-- 
2.25.4


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v3 15/21] net/ice/base: fix parameter name in comment
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (13 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 14/21] net/ice/base: recognize 860 as iSCSI port in CEE mode Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 16/21] net/ice/base: support extended GPIO access Qi Zhang
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, stable, Jesse Brandeburg

Fix the parameter names for cookie_high and cookie_low.

Fixes: a90fae1d0755 ("net/ice/base: add admin queue structures and commands")
Cc: stable@dpdk.org

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_adminq_cmd.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 3b75cf577a..c105a445ee 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -2652,8 +2652,8 @@ struct ice_aqc_clear_health_status {
  * @opcode: AQ command opcode
  * @datalen: length in bytes of indirect/external data buffer
  * @retval: return value from firmware
- * @cookie_h: opaque data high-half
- * @cookie_l: opaque data low-half
+ * @cookie_high: opaque data high-half
+ * @cookie_low: opaque data low-half
  * @params: command-specific parameters
  *
  * Descriptor format for commands the driver posts on the Admin Transmit Queue
-- 
2.25.4


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v3 16/21] net/ice/base: support extended GPIO access
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (14 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 15/21] net/ice/base: fix parameter name in comment Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 17/21] net/ice/base: remove duplicated AQ command flag setting Qi Zhang
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Amir Shay

Added two new admin commands, SW Set GPIO and SW Get GPIO (0x6EF and
0x6F0 respectively), which extend the SW driver's GPIO handling
capabilities.

Signed-off-by: Amir Shay <shay.amir@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
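
Since this patch only adds the command structure and opcodes, here is a
hedged sketch (not part of the patch) of how a helper might issue SW Get
GPIO; it assumes the structure is reachable through the raw descriptor
parameter bytes and, for simplicity, sends it as a direct command:

	static enum ice_status
	ice_aq_sw_get_gpio(struct ice_hw *hw, u16 gpio_ctrl_handle, u8 pin,
			   bool *value)
	{
		struct ice_aqc_sw_gpio *cmd;
		struct ice_aq_desc desc;
		enum ice_status status;

		ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_sw_get_gpio);
		cmd = (struct ice_aqc_sw_gpio *)&desc.params.raw;
		cmd->gpio_ctrl_handle = CPU_TO_LE16(gpio_ctrl_handle);
		cmd->gpio_num = pin & ICE_AQC_SW_GPIO_NUMBER_M;

		status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
		if (!status && value)
			/* FW reports the pin state in the params byte */
			*value = !!(cmd->gpio_params &
				    ICE_AQC_SW_GPIO_PARAMS_VALUE);
		return status;
	}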
---
 drivers/net/ice/base/ice_adminq_cmd.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index c105a445ee..f715fb0910 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -1635,6 +1635,22 @@ struct ice_aqc_sff_eeprom {
 	__le32 addr_low;
 };
 
+/* SW Set GPIO command (indirect 0x6EF)
+ * SW Get GPIO command (indirect 0x6F0)
+ */
+struct ice_aqc_sw_gpio {
+	__le16 gpio_ctrl_handle;
+#define ICE_AQC_SW_GPIO_CONTROLLER_HANDLE_S	0
+#define ICE_AQC_SW_GPIO_CONTROLLER_HANDLE_M	(0x3FF << ICE_AQC_SW_GPIO_CONTROLLER_HANDLE_S)
+	u8 gpio_num;
+#define ICE_AQC_SW_GPIO_NUMBER_S	0
+#define ICE_AQC_SW_GPIO_NUMBER_M	(0x1F << ICE_AQC_SW_GPIO_NUMBER_S)
+	u8 gpio_params;
+#define ICE_AQC_SW_GPIO_PARAMS_DIRECTION    BIT(1)
+#define ICE_AQC_SW_GPIO_PARAMS_VALUE        BIT(0)
+	u8 rsvd[12];
+};
+
 /* NVM Read command (indirect 0x0701)
  * NVM Erase commands (direct 0x0702)
  * NVM Write commands (indirect 0x0703)
@@ -2925,6 +2941,8 @@ enum ice_adminq_opc {
 	ice_aqc_opc_set_gpio				= 0x06EC,
 	ice_aqc_opc_get_gpio				= 0x06ED,
 	ice_aqc_opc_sff_eeprom				= 0x06EE,
+	ice_aqc_opc_sw_set_gpio				= 0x06EF,
+	ice_aqc_opc_sw_get_gpio				= 0x06F0,
 
 	/* NVM commands */
 	ice_aqc_opc_nvm_read				= 0x0701,
-- 
2.25.4


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v3 17/21] net/ice/base: remove duplicated AQ command flag setting
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (15 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 16/21] net/ice/base: support extended GPIO access Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 18/21] net/ice/base: introduce and use FLEX_ARRAY_SIZE where possible Qi Zhang
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Bruce Allan

Do not set the ICE_AQ_FLAG_BUF flag when sending the indirect Read/Write
SFF EEPROM AQ command. The flag is already added later in the code flow
for all indirect AQ commands, i.e. commands that provide an additional
data buffer.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
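
For reference, a condensed illustration of why the flag was redundant: the
common send path already tags every command that carries a data buffer.
This mirrors the logic in ice_sq_send_cmd(); the exact upstream code may
differ slightly:

	if (buf) {
		/* every indirect command gets the buffer flag here */
		desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_BUF);
		if (buf_size > ICE_AQ_LG_BUF)
			desc->flags |= CPU_TO_LE16(ICE_AQ_FLAG_LB);
	}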
---
 drivers/net/ice/base/ice_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 9d8f78fdcc..2a9185f570 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -3198,7 +3198,7 @@ ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
 
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_sff_eeprom);
 	cmd = &desc.params.read_write_sff_param;
-	desc.flags = CPU_TO_LE16(ICE_AQ_FLAG_RD | ICE_AQ_FLAG_BUF);
+	desc.flags = CPU_TO_LE16(ICE_AQ_FLAG_RD);
 	cmd->lport_num = (u8)(lport & 0xff);
 	cmd->lport_num_valid = (u8)((lport >> 8) & 0x01);
 	cmd->i2c_bus_addr = CPU_TO_LE16(((bus_addr >> 1) &
-- 
2.25.4


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v3 18/21] net/ice/base: introduce and use FLEX_ARRAY_SIZE where possible
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (16 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 17/21] net/ice/base: remove duplicated AQ command flag setting Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 19/21] net/ice/base: refactor RSS configure API Qi Zhang
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Bruce Allan

Use the FLEX_ARRAY_SIZE() helper with the recently added flexible array
members in structures.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
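
A self-contained sketch of the helper with a hypothetical structure, showing
that FLEX_ARRAY_SIZE() counts only the flexible member and not the fixed
header (contrast ice_struct_size(), which includes it):

	#include <stdint.h>
	#include <stdio.h>

	#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0]))

	struct demo_buf {		/* hypothetical, for illustration */
		uint16_t num_elems;
		uint16_t elem[];	/* flexible array member */
	};

	int main(void)
	{
		struct demo_buf *buf = NULL;

		/* sizeof() does not evaluate its operand, so a NULL pointer
		 * is fine here; prints 8 (four u16 elements, header excluded)
		 */
		printf("%zu\n", FLEX_ARRAY_SIZE(buf, elem, 4));
		return 0;
	}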
---
 drivers/net/ice/base/ice_common.c    | 2 +-
 drivers/net/ice/base/ice_flex_pipe.c | 2 +-
 drivers/net/ice/base/ice_type.h      | 4 ++++
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 2a9185f570..0c1259b42a 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1704,7 +1704,7 @@ ice_aq_alloc_free_res(struct ice_hw *hw, u16 num_entries,
 	if (!buf)
 		return ICE_ERR_PARAM;
 
-	if (buf_size < (num_entries * sizeof(buf->elem[0])))
+	if (buf_size < FLEX_ARRAY_SIZE(buf, elem, num_entries))
 		return ICE_ERR_PARAM;
 
 	ice_fill_dflt_direct_cmd_desc(&desc, opc);
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 4a27061b3d..7594df1696 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1833,7 +1833,7 @@ ice_pkg_buf_reserve_section(struct ice_buf_build *bld, u16 count)
 	bld->reserved_section_table_entries += count;
 
 	data_end = LE16_TO_CPU(buf->data_end) +
-		   (count * sizeof(buf->section_entry[0]));
+		FLEX_ARRAY_SIZE(buf, section_entry, count);
 	buf->data_end = CPU_TO_LE16(data_end);
 
 	return ICE_SUCCESS;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 7545235635..f93baed8d9 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -44,6 +44,10 @@
 #define ice_struct_size(ptr, field, num) \
 	(sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
 
+#ifndef FLEX_ARRAY_SIZE
+#define FLEX_ARRAY_SIZE(_ptr, _mem, cnt) ((cnt) * sizeof(_ptr->_mem[0]))
+#endif /* FLEX_ARRAY_SIZE */
+
 #include "ice_status.h"
 #include "ice_hw_autogen.h"
 #include "ice_devids.h"
-- 
2.25.4


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v3 19/21] net/ice/base: refactor RSS configure API
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (17 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 18/21] net/ice/base: introduce and use FLEX_ARRAY_SIZE where possible Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 20/21] net/ice/base: add support for get/set RSS LUT to specify global LUT Qi Zhang
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang

Use struct ice_rss_hash_cfg as the parameter for
ice_add_rss_cfg, ice_add_rss_cfg_sync,
ice_rem_rss_cfg and ice_rem_rss_cfg_sync.

Introduce enum ice_rss_cfg_hdr_type to allow the user to specify a more
flexible RSS configuration:

ICE_RSS_OUTER_HEADERS - take outer layer as RSS inputset
ICE_RSS_INNER_HEADERS - take inner layer as RSS inputset
ICE_RSS_INNER_HEADERS_W_OUTER_IPV4
	- take inner layer as RSS inputset for packet with outer IPV4
ICE_RSS_INNER_HEADERS_W_OUTER_IPV6
	- take inner layer as RSS inputset for packet with outer IPV6
ICE_RSS_ANY_HEADERS - try with outer first then inner
		(same as the behaviour without this change)

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
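
A minimal usage sketch of the refactored API; hw and vsi_handle are assumed
to come from the caller's context:

	struct ice_rss_hash_cfg cfg = { 0 };
	enum ice_status status;

	/* hash GTPU-encapsulated traffic on the inner IPv4 src/dst */
	cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_IPV4 |
			ICE_FLOW_SEG_HDR_IPV_OTHER;
	cfg.hash_flds = ICE_FLOW_HASH_IPV4;
	cfg.hdr_type = ICE_RSS_INNER_HEADERS;
	cfg.symm = false;

	status = ice_add_rss_cfg(hw, vsi_handle, &cfg);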
---
 drivers/net/ice/base/ice_flow.c | 229 ++++++++++++++----------
 drivers/net/ice/base/ice_flow.h |  34 +++-
 drivers/net/ice/ice_ethdev.c    | 298 ++++++++++++++------------------
 drivers/net/ice/ice_ethdev.h    |  18 +-
 drivers/net/ice/ice_hash.c      |  14 +-
 5 files changed, 301 insertions(+), 292 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 4512b12368..1b36c2b897 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -3244,37 +3244,49 @@ ice_flow_add_fld_raw(struct ice_flow_seg_info *seg, u16 off, u8 len,
 /**
  * ice_flow_set_rss_seg_info - setup packet segments for RSS
  * @segs: pointer to the flow field segment(s)
- * @hash_fields: fields to be hashed on for the segment(s)
- * @flow_hdr: protocol header fields within a packet segment
+ * @seg_cnt: segment count
+ * @cfg: configure parameters
  *
  * Helper function to extract fields from hash bitmap and use flow
  * header value to set flow field segment for further use in flow
  * profile entry or removal.
  */
 static enum ice_status
-ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u64 hash_fields,
-			  u32 flow_hdr)
+ice_flow_set_rss_seg_info(struct ice_flow_seg_info *segs, u8 seg_cnt,
+			  const struct ice_rss_hash_cfg *cfg)
 {
+	struct ice_flow_seg_info *seg;
 	u64 val;
 	u8 i;
 
-	ice_for_each_set_bit(i, (ice_bitmap_t *)&hash_fields,
+	/* set the innermost segment */
+	seg = &segs[seg_cnt - 1];
+
+	ice_for_each_set_bit(i, (const ice_bitmap_t *)&cfg->hash_flds,
 			     ICE_FLOW_FIELD_IDX_MAX)
-		ice_flow_set_fld(segs, (enum ice_flow_field)i,
+		ice_flow_set_fld(seg, (enum ice_flow_field)i,
 				 ICE_FLOW_FLD_OFF_INVAL, ICE_FLOW_FLD_OFF_INVAL,
 				 ICE_FLOW_FLD_OFF_INVAL, false);
 
-	ICE_FLOW_SET_HDRS(segs, flow_hdr);
+	ICE_FLOW_SET_HDRS(seg, cfg->addl_hdrs);
+
+	/* set the outermost header */
+	if (cfg->hdr_type == ICE_RSS_INNER_HEADERS_W_OUTER_IPV4)
+		segs[ICE_RSS_OUTER_HEADERS].hdrs |= ICE_FLOW_SEG_HDR_IPV4 |
+						   ICE_FLOW_SEG_HDR_IPV_OTHER;
+	else if (cfg->hdr_type == ICE_RSS_INNER_HEADERS_W_OUTER_IPV6)
+		segs[ICE_RSS_OUTER_HEADERS].hdrs |= ICE_FLOW_SEG_HDR_IPV6 |
+						   ICE_FLOW_SEG_HDR_IPV_OTHER;
 
-	if (segs->hdrs & ~ICE_FLOW_RSS_SEG_HDR_VAL_MASKS &
+	if (seg->hdrs & ~ICE_FLOW_RSS_SEG_HDR_VAL_MASKS &
 	    ~ICE_FLOW_RSS_HDRS_INNER_MASK & ~ICE_FLOW_SEG_HDR_IPV_OTHER)
 		return ICE_ERR_PARAM;
 
-	val = (u64)(segs->hdrs & ICE_FLOW_RSS_SEG_HDR_L3_MASKS);
+	val = (u64)(seg->hdrs & ICE_FLOW_RSS_SEG_HDR_L3_MASKS);
 	if (val && !ice_is_pow2(val))
 		return ICE_ERR_CFG;
 
-	val = (u64)(segs->hdrs & ICE_FLOW_RSS_SEG_HDR_L4_MASKS);
+	val = (u64)(seg->hdrs & ICE_FLOW_RSS_SEG_HDR_L4_MASKS);
 	if (val && !ice_is_pow2(val))
 		return ICE_ERR_CFG;
 
@@ -3346,6 +3358,29 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 	return status;
 }
 
+/**
+ * ice_get_rss_hdr_type - get a RSS profile's header type
+ * @prof: RSS flow profile
+ */
+static enum ice_rss_cfg_hdr_type
+ice_get_rss_hdr_type(struct ice_flow_prof *prof)
+{
+	enum ice_rss_cfg_hdr_type hdr_type = ICE_RSS_ANY_HEADERS;
+
+	if (prof->segs_cnt == ICE_FLOW_SEG_SINGLE) {
+		hdr_type = ICE_RSS_OUTER_HEADERS;
+	} else if (prof->segs_cnt == ICE_FLOW_SEG_MAX) {
+		if (prof->segs[ICE_RSS_OUTER_HEADERS].hdrs == ICE_FLOW_SEG_HDR_NONE)
+			hdr_type = ICE_RSS_INNER_HEADERS;
+		if (prof->segs[ICE_RSS_OUTER_HEADERS].hdrs & ICE_FLOW_SEG_HDR_IPV4)
+			hdr_type = ICE_RSS_INNER_HEADERS_W_OUTER_IPV4;
+		if (prof->segs[ICE_RSS_OUTER_HEADERS].hdrs & ICE_FLOW_SEG_HDR_IPV6)
+			hdr_type = ICE_RSS_INNER_HEADERS_W_OUTER_IPV6;
+	}
+
+	return hdr_type;
+}
+
 /**
  * ice_rem_rss_list - remove RSS configuration from list
  * @hw: pointer to the hardware structure
@@ -3357,16 +3392,19 @@ enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 static void
 ice_rem_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 {
+	enum ice_rss_cfg_hdr_type hdr_type;
 	struct ice_rss_cfg *r, *tmp;
 
 	/* Search for RSS hash fields associated to the VSI that match the
 	 * hash configurations associated to the flow profile. If found
 	 * remove from the RSS entry list of the VSI context and delete entry.
 	 */
+	hdr_type = ice_get_rss_hdr_type(prof);
 	LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head,
 				 ice_rss_cfg, l_entry)
-		if (r->hashed_flds == prof->segs[prof->segs_cnt - 1].match &&
-		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs) {
+		if (r->hash.hash_flds == prof->segs[prof->segs_cnt - 1].match &&
+		    r->hash.addl_hdrs == prof->segs[prof->segs_cnt - 1].hdrs &&
+		    r->hash.hdr_type == hdr_type) {
 			ice_clear_bit(vsi_handle, r->vsis);
 			if (!ice_is_any_bit_set(r->vsis, ICE_MAX_VSI)) {
 				LIST_DEL(&r->l_entry);
@@ -3387,12 +3425,15 @@ ice_rem_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 static enum ice_status
 ice_add_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 {
+	enum ice_rss_cfg_hdr_type hdr_type;
 	struct ice_rss_cfg *r, *rss_cfg;
 
+	hdr_type = ice_get_rss_hdr_type(prof);
 	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry)
-		if (r->hashed_flds == prof->segs[prof->segs_cnt - 1].match &&
-		    r->packet_hdr == prof->segs[prof->segs_cnt - 1].hdrs) {
+		if (r->hash.hash_flds == prof->segs[prof->segs_cnt - 1].match &&
+		    r->hash.addl_hdrs == prof->segs[prof->segs_cnt - 1].hdrs &&
+		    r->hash.hdr_type == hdr_type) {
 			ice_set_bit(vsi_handle, r->vsis);
 			return ICE_SUCCESS;
 		}
@@ -3401,9 +3442,10 @@ ice_add_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 	if (!rss_cfg)
 		return ICE_ERR_NO_MEMORY;
 
-	rss_cfg->hashed_flds = prof->segs[prof->segs_cnt - 1].match;
-	rss_cfg->packet_hdr = prof->segs[prof->segs_cnt - 1].hdrs;
-	rss_cfg->symm = prof->cfg.symm;
+	rss_cfg->hash.hash_flds = prof->segs[prof->segs_cnt - 1].match;
+	rss_cfg->hash.addl_hdrs = prof->segs[prof->segs_cnt - 1].hdrs;
+	rss_cfg->hash.hdr_type = hdr_type;
+	rss_cfg->hash.symm = prof->cfg.symm;
 	ice_set_bit(vsi_handle, rss_cfg->vsis);
 
 	LIST_ADD_TAIL(&rss_cfg->l_entry, &hw->rss_list_head);
@@ -3415,21 +3457,22 @@ ice_add_rss_list(struct ice_hw *hw, u16 vsi_handle, struct ice_flow_prof *prof)
 #define ICE_FLOW_PROF_HASH_M	(0xFFFFFFFFULL << ICE_FLOW_PROF_HASH_S)
 #define ICE_FLOW_PROF_HDR_S	32
 #define ICE_FLOW_PROF_HDR_M	(0x3FFFFFFFULL << ICE_FLOW_PROF_HDR_S)
-#define ICE_FLOW_PROF_ENCAP_S	63
-#define ICE_FLOW_PROF_ENCAP_M	(BIT_ULL(ICE_FLOW_PROF_ENCAP_S))
-
-#define ICE_RSS_OUTER_HEADERS	1
-#define ICE_RSS_INNER_HEADERS	2
+#define ICE_FLOW_PROF_ENCAP_S	62
+#define ICE_FLOW_PROF_ENCAP_M	(0x3ULL << ICE_FLOW_PROF_ENCAP_S)
 
 /* Flow profile ID format:
  * [0:31] - Packet match fields
- * [32:62] - Protocol header
- * [63] - Encapsulation flag, 0 if non-tunneled, 1 if tunneled
+ * [32:61] - Protocol header
+ * [62:63] - Encapsulation flag:
+ *	     0 if non-tunneled
+ *	     1 if tunneled
+ *	     2 for tunneled with outer ipv4
+ *	     3 for tunneled with outer ipv6
  */
-#define ICE_FLOW_GEN_PROFID(hash, hdr, segs_cnt) \
+#define ICE_FLOW_GEN_PROFID(hash, hdr, encap) \
 	(u64)(((u64)(hash) & ICE_FLOW_PROF_HASH_M) | \
 	      (((u64)(hdr) << ICE_FLOW_PROF_HDR_S) & ICE_FLOW_PROF_HDR_M) | \
-	      ((u8)((segs_cnt) - 1) ? ICE_FLOW_PROF_ENCAP_M : 0))
+	      (((u64)(encap) << ICE_FLOW_PROF_ENCAP_S) & ICE_FLOW_PROF_ENCAP_M))
 
 static void
 ice_rss_config_xor_word(struct ice_hw *hw, u8 prof_id, u8 src, u8 dst)
@@ -3540,24 +3583,22 @@ ice_rss_update_symm(struct ice_hw *hw,
  * ice_add_rss_cfg_sync - add an RSS configuration
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
- * @hashed_flds: hash bit fields (ICE_FLOW_HASH_*) to configure
- * @addl_hdrs: protocol header fields
- * @segs_cnt: packet segment count
- * @symm: symmetric hash enable/disable
+ * @cfg: configure parameters
  *
  * Assumption: lock has already been acquired for RSS list
  */
 static enum ice_status
-ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
-		     u32 addl_hdrs, u8 segs_cnt, bool symm)
+ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle,
+		     const struct ice_rss_hash_cfg *cfg)
 {
 	const enum ice_block blk = ICE_BLK_RSS;
 	struct ice_flow_prof *prof = NULL;
 	struct ice_flow_seg_info *segs;
 	enum ice_status status;
+	u8 segs_cnt;
 
-	if (!segs_cnt || segs_cnt > ICE_FLOW_SEG_MAX)
-		return ICE_ERR_PARAM;
+	segs_cnt = (cfg->hdr_type == ICE_RSS_OUTER_HEADERS) ?
+			ICE_FLOW_SEG_SINGLE : ICE_FLOW_SEG_MAX;
 
 	segs = (struct ice_flow_seg_info *)ice_calloc(hw, segs_cnt,
 						      sizeof(*segs));
@@ -3565,13 +3606,12 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 		return ICE_ERR_NO_MEMORY;
 
 	/* Construct the packet segment info from the hashed fields */
-	status = ice_flow_set_rss_seg_info(&segs[segs_cnt - 1], hashed_flds,
-					   addl_hdrs);
+	status = ice_flow_set_rss_seg_info(segs, segs_cnt, cfg);
 	if (status)
 		goto exit;
 
 	/* Don't do RSS for GTPU Outer */
-	if (segs_cnt == ICE_RSS_OUTER_HEADERS &&
+	if (segs_cnt == ICE_FLOW_SEG_SINGLE &&
 	    segs[segs_cnt - 1].hdrs & ICE_FLOW_SEG_HDR_GTPU) {
 		status = ICE_SUCCESS;
 		goto exit;
@@ -3586,9 +3626,9 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 					ICE_FLOW_FIND_PROF_CHK_FLDS |
 					ICE_FLOW_FIND_PROF_CHK_VSI);
 	if (prof) {
-		if (prof->cfg.symm == symm)
+		if (prof->cfg.symm == cfg->symm)
 			goto exit;
-		prof->cfg.symm = symm;
+		prof->cfg.symm = cfg->symm;
 		goto update_symm;
 	}
 
@@ -3621,7 +3661,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 					vsi_handle,
 					ICE_FLOW_FIND_PROF_CHK_FLDS);
 	if (prof) {
-		if (prof->cfg.symm == symm) {
+		if (prof->cfg.symm == cfg->symm) {
 			status = ice_flow_assoc_prof(hw, blk, prof,
 						     vsi_handle);
 			if (!status)
@@ -3640,9 +3680,9 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 	 * segment information.
 	 */
 	status = ice_flow_add_prof(hw, blk, ICE_FLOW_RX,
-				   ICE_FLOW_GEN_PROFID(hashed_flds,
+				   ICE_FLOW_GEN_PROFID(cfg->hash_flds,
 						       segs[segs_cnt - 1].hdrs,
-						       segs_cnt),
+						       cfg->hdr_type),
 				   segs, segs_cnt, NULL, 0, &prof);
 	if (status)
 		goto exit;
@@ -3658,8 +3698,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
 
 	status = ice_add_rss_list(hw, vsi_handle, prof);
 
-	prof->cfg.symm = symm;
-
+	prof->cfg.symm = cfg->symm;
 update_symm:
 	ice_rss_update_symm(hw, prof);
 
@@ -3672,32 +3711,40 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
  * ice_add_rss_cfg - add an RSS configuration with specified hashed fields
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
- * @hashed_flds: hash bit fields (ICE_FLOW_HASH_*) to configure
- * @addl_hdrs: protocol header fields
- * @symm: symmetric hash enable/disable
+ * @cfg: configure parameters
  *
  * This function will generate a flow profile based on fields associated with
  * the input fields to hash on, the flow type and use the VSI number to add
  * a flow entry to the profile.
  */
 enum ice_status
-ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
-		u32 addl_hdrs, bool symm)
+ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle,
+		const struct ice_rss_hash_cfg *cfg)
 {
+	struct ice_rss_hash_cfg local_cfg;
 	enum ice_status status;
 
-	if (hashed_flds == ICE_HASH_INVALID ||
-	    !ice_is_vsi_valid(hw, vsi_handle))
+	if (!ice_is_vsi_valid(hw, vsi_handle) ||
+	    !cfg || cfg->hdr_type > ICE_RSS_ANY_HEADERS ||
+	    cfg->hash_flds == ICE_HASH_INVALID)
 		return ICE_ERR_PARAM;
 
-	ice_acquire_lock(&hw->rss_locks);
-	status = ice_add_rss_cfg_sync(hw, vsi_handle, hashed_flds, addl_hdrs,
-				      ICE_RSS_OUTER_HEADERS, symm);
-	if (!status)
-		status = ice_add_rss_cfg_sync(hw, vsi_handle, hashed_flds,
-					      addl_hdrs, ICE_RSS_INNER_HEADERS,
-					      symm);
-	ice_release_lock(&hw->rss_locks);
+	local_cfg = *cfg;
+	if (cfg->hdr_type < ICE_RSS_ANY_HEADERS) {
+		ice_acquire_lock(&hw->rss_locks);
+		status = ice_add_rss_cfg_sync(hw, vsi_handle, &local_cfg);
+		ice_release_lock(&hw->rss_locks);
+	} else {
+		ice_acquire_lock(&hw->rss_locks);
+		local_cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
+		status = ice_add_rss_cfg_sync(hw, vsi_handle, &local_cfg);
+		if (!status) {
+			local_cfg.hdr_type = ICE_RSS_INNER_HEADERS;
+			status = ice_add_rss_cfg_sync(hw, vsi_handle,
+						      &local_cfg);
+		}
+		ice_release_lock(&hw->rss_locks);
+	}
 
 	return status;
 }
@@ -3706,34 +3753,34 @@ ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
  * ice_rem_rss_cfg_sync - remove an existing RSS configuration
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
- * @hashed_flds: Packet hash types (ICE_FLOW_HASH_*) to remove
- * @addl_hdrs: Protocol header fields within a packet segment
- * @segs_cnt: packet segment count
+ * @cfg: configure parameters
  *
  * Assumption: lock has already been acquired for RSS list
  */
 static enum ice_status
-ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
-		     u32 addl_hdrs, u8 segs_cnt)
+ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle,
+		     const struct ice_rss_hash_cfg *cfg)
 {
 	const enum ice_block blk = ICE_BLK_RSS;
 	struct ice_flow_seg_info *segs;
 	struct ice_flow_prof *prof;
 	enum ice_status status;
+	u8 segs_cnt;
 
+	segs_cnt = (cfg->hdr_type == ICE_RSS_OUTER_HEADERS) ?
+			ICE_FLOW_SEG_SINGLE : ICE_FLOW_SEG_MAX;
 	segs = (struct ice_flow_seg_info *)ice_calloc(hw, segs_cnt,
 						      sizeof(*segs));
 	if (!segs)
 		return ICE_ERR_NO_MEMORY;
 
 	/* Construct the packet segment info from the hashed fields */
-	status = ice_flow_set_rss_seg_info(&segs[segs_cnt - 1], hashed_flds,
-					   addl_hdrs);
+	status = ice_flow_set_rss_seg_info(segs, segs_cnt, cfg);
 	if (status)
 		goto out;
 
 	/* Don't do RSS for GTPU Outer */
-	if (segs_cnt == ICE_RSS_OUTER_HEADERS &&
+	if (segs_cnt == ICE_FLOW_SEG_SINGLE &&
 	    segs[segs_cnt - 1].hdrs & ICE_FLOW_SEG_HDR_GTPU) {
 		status = ICE_SUCCESS;
 		goto out;
@@ -3768,8 +3815,7 @@ ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
  * ice_rem_rss_cfg - remove an existing RSS config with matching hashed fields
  * @hw: pointer to the hardware structure
  * @vsi_handle: software VSI handle
- * @hashed_flds: Packet hash types (ICE_FLOW_HASH_*) to remove
- * @addl_hdrs: Protocol header fields within a packet segment
+ * @cfg: configure parameters
  *
  * This function will lookup the flow profile based on the input
  * hash field bitmap, iterate through the profile entry list of
@@ -3778,21 +3824,31 @@ ice_rem_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
  * turn build or update buffers for RSS XLT1 section.
  */
 enum ice_status
-ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
-		u32 addl_hdrs)
+ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle,
+		const struct ice_rss_hash_cfg *cfg)
 {
+	struct ice_rss_hash_cfg local_cfg;
 	enum ice_status status;
 
-	if (hashed_flds == ICE_HASH_INVALID ||
-	    !ice_is_vsi_valid(hw, vsi_handle))
+	if (!ice_is_vsi_valid(hw, vsi_handle) ||
+	    !cfg || cfg->hdr_type > ICE_RSS_ANY_HEADERS ||
+	    cfg->hash_flds == ICE_HASH_INVALID)
 		return ICE_ERR_PARAM;
 
 	ice_acquire_lock(&hw->rss_locks);
-	status = ice_rem_rss_cfg_sync(hw, vsi_handle, hashed_flds, addl_hdrs,
-				      ICE_RSS_OUTER_HEADERS);
-	if (!status)
-		status = ice_rem_rss_cfg_sync(hw, vsi_handle, hashed_flds,
-					      addl_hdrs, ICE_RSS_INNER_HEADERS);
+	local_cfg = *cfg;
+	if (cfg->hdr_type < ICE_RSS_ANY_HEADERS) {
+		status = ice_rem_rss_cfg_sync(hw, vsi_handle, &local_cfg);
+	} else {
+		local_cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
+		status = ice_rem_rss_cfg_sync(hw, vsi_handle, &local_cfg);
+
+		if (!status) {
+			local_cfg.hdr_type = ICE_RSS_INNER_HEADERS;
+			status = ice_rem_rss_cfg_sync(hw, vsi_handle,
+						      &local_cfg);
+		}
+	}
 	ice_release_lock(&hw->rss_locks);
 
 	return status;
@@ -3815,18 +3871,7 @@ enum ice_status ice_replay_rss_cfg(struct ice_hw *hw, u16 vsi_handle)
 	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry) {
 		if (ice_is_bit_set(r->vsis, vsi_handle)) {
-			status = ice_add_rss_cfg_sync(hw, vsi_handle,
-						      r->hashed_flds,
-						      r->packet_hdr,
-						      ICE_RSS_OUTER_HEADERS,
-						      r->symm);
-			if (status)
-				break;
-			status = ice_add_rss_cfg_sync(hw, vsi_handle,
-						      r->hashed_flds,
-						      r->packet_hdr,
-						      ICE_RSS_INNER_HEADERS,
-						      r->symm);
+			status = ice_add_rss_cfg_sync(hw, vsi_handle, &r->hash);
 			if (status)
 				break;
 		}
@@ -3858,8 +3903,8 @@ u64 ice_get_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u32 hdrs)
 	LIST_FOR_EACH_ENTRY(r, &hw->rss_list_head,
 			    ice_rss_cfg, l_entry)
 		if (ice_is_bit_set(r->vsis, vsi_handle) &&
-		    r->packet_hdr == hdrs) {
-			rss_hash = r->hashed_flds;
+		    r->hash.addl_hdrs == hdrs) {
+			rss_hash = r->hash.hash_flds;
 			break;
 		}
 	ice_release_lock(&hw->rss_locks);
diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h
index 698a2303bc..2a9ae66454 100644
--- a/drivers/net/ice/base/ice_flow.h
+++ b/drivers/net/ice/base/ice_flow.h
@@ -323,6 +323,24 @@ enum ice_flow_avf_hdr_field {
 	BIT_ULL(ICE_AVF_FLOW_FIELD_UNICAST_IPV6_UDP) | \
 	BIT_ULL(ICE_AVF_FLOW_FIELD_MULTICAST_IPV6_UDP))
 
+enum ice_rss_cfg_hdr_type {
+	ICE_RSS_OUTER_HEADERS, /* take outer headers as inputset. */
+	ICE_RSS_INNER_HEADERS, /* take inner headers as inputset. */
+	/* take inner headers as inputset for packet with outer ipv4. */
+	ICE_RSS_INNER_HEADERS_W_OUTER_IPV4,
+	/* take inner headers as inputset for packet with outer ipv6. */
+	ICE_RSS_INNER_HEADERS_W_OUTER_IPV6,
+	/* take outer headers first then inner headers as inputset */
+	ICE_RSS_ANY_HEADERS
+};
+
+struct ice_rss_hash_cfg {
+	u32 addl_hdrs; /* protocol header fields */
+	u64 hash_flds; /* hash bit field (ICE_FLOW_HASH_*) to configure */
+	enum ice_rss_cfg_hdr_type hdr_type; /* to specify inner or outer */
+	bool symm; /* symmetric or asymmetric hash */
+};
+
 enum ice_flow_dir {
 	ICE_FLOW_DIR_UNDEFINED	= 0,
 	ICE_FLOW_TX		= 0x01,
@@ -336,6 +354,7 @@ enum ice_flow_priority {
 	ICE_FLOW_PRIO_HIGH
 };
 
+#define ICE_FLOW_SEG_SINGLE		1
 #define ICE_FLOW_SEG_MAX		2
 #define ICE_FLOW_SEG_RAW_FLD_MAX	2
 #define ICE_FLOW_PROFILE_MAX		1024
@@ -440,8 +459,7 @@ struct ice_flow_prof {
 		struct ice_acl_scen *scen;
 		/* struct fd */
 		u32 data;
-		/* Symmetric Hash for RSS */
-		bool symm;
+		bool symm; /* Symmetric Hash for RSS */
 	} cfg;
 
 	/* Default actions */
@@ -452,9 +470,7 @@ struct ice_rss_cfg {
 	struct LIST_ENTRY_TYPE l_entry;
 	/* bitmap of VSIs added to the RSS entry */
 	ice_declare_bitmap(vsis, ICE_MAX_VSI);
-	u64 hashed_flds;
-	u32 packet_hdr;
-	bool symm;
+	struct ice_rss_hash_cfg hash;
 };
 
 enum ice_flow_action_type {
@@ -531,10 +547,10 @@ enum ice_status
 ice_add_avf_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds);
 enum ice_status ice_rem_vsi_rss_cfg(struct ice_hw *hw, u16 vsi_handle);
 enum ice_status
-ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
-		u32 addl_hdrs, bool symm);
+ice_add_rss_cfg(struct ice_hw *hw, u16 vsi_handle,
+		const struct ice_rss_hash_cfg *cfg);
 enum ice_status
-ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds,
-		u32 addl_hdrs);
+ice_rem_rss_cfg(struct ice_hw *hw, u16 vsi_handle,
+		const struct ice_rss_hash_cfg *cfg);
 u64 ice_get_rss_cfg(struct ice_hw *hw, u16 vsi_handle, u32 hdrs);
 #endif /* _ICE_FLOW_H_ */
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index d51f3faba4..bd0bd5206c 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2436,10 +2436,7 @@ ice_dev_uninit(struct rte_eth_dev *dev)
 static bool
 is_hash_cfg_valid(struct ice_rss_hash_cfg *cfg)
 {
-	return ((cfg->hash_func >= ICE_RSS_HASH_TOEPLITZ &&
-		 cfg->hash_func <= ICE_RSS_HASH_JHASH) &&
-		(cfg->hash_flds != 0 && cfg->addl_hdrs != 0)) ?
-		true : false;
+	return (cfg->hash_flds != 0 && cfg->addl_hdrs != 0) ? true : false;
 }
 
 static void
@@ -2447,7 +2444,8 @@ hash_cfg_reset(struct ice_rss_hash_cfg *cfg)
 {
 	cfg->hash_flds = 0;
 	cfg->addl_hdrs = 0;
-	cfg->hash_func = 0;
+	cfg->symm = 0;
+	cfg->hdr_type = ICE_RSS_ANY_HEADERS;
 }
 
 static int
@@ -2460,8 +2458,7 @@ ice_hash_moveout(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
 	if (!is_hash_cfg_valid(cfg))
 		return -ENOENT;
 
-	status = ice_rem_rss_cfg(hw, vsi->idx, cfg->hash_flds,
-				 cfg->addl_hdrs);
+	status = ice_rem_rss_cfg(hw, vsi->idx, cfg);
 	if (status && status != ICE_ERR_DOES_NOT_EXIST) {
 		PMD_DRV_LOG(ERR,
 			    "ice_rem_rss_cfg failed for VSI:%d, error:%d\n",
@@ -2478,16 +2475,11 @@ ice_hash_moveback(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
 	enum ice_status status = ICE_SUCCESS;
 	struct ice_hw *hw = ICE_PF_TO_HW(pf);
 	struct ice_vsi *vsi = pf->main_vsi;
-	bool symm;
 
 	if (!is_hash_cfg_valid(cfg))
 		return -ENOENT;
 
-	symm = (cfg->hash_func == ICE_RSS_HASH_TOEPLITZ_SYMMETRIC) ?
-		true : false;
-
-	status = ice_add_rss_cfg(hw, vsi->idx, cfg->hash_flds,
-				 cfg->addl_hdrs, symm);
+	status = ice_add_rss_cfg(hw, vsi->idx, cfg);
 	if (status) {
 		PMD_DRV_LOG(ERR,
 			    "ice_add_rss_cfg failed for VSI:%d, error:%d\n",
@@ -2764,15 +2756,12 @@ ice_add_rss_cfg_pre(struct ice_pf *pf, uint32_t hdr)
 
 static int
 ice_add_rss_cfg_post_gtpu(struct ice_pf *pf, struct ice_hash_gtpu_ctx *ctx,
-			  u32 hdr, u64 fld, bool symm, u8 ctx_idx)
+			  u8 ctx_idx, struct ice_rss_hash_cfg *cfg)
 {
 	int ret;
 
-	if (ctx_idx < ICE_HASH_GTPU_CTX_MAX) {
-		ctx->ctx[ctx_idx].addl_hdrs = hdr;
-		ctx->ctx[ctx_idx].hash_flds = fld;
-		ctx->ctx[ctx_idx].hash_func = symm;
-	}
+	if (ctx_idx < ICE_HASH_GTPU_CTX_MAX)
+		ctx->ctx[ctx_idx] = *cfg;
 
 	switch (ctx_idx) {
 	case ICE_HASH_GTPU_CTX_EH_IP:
@@ -2851,16 +2840,16 @@ ice_add_rss_cfg_post_gtpu(struct ice_pf *pf, struct ice_hash_gtpu_ctx *ctx,
 }
 
 static int
-ice_add_rss_cfg_post(struct ice_pf *pf, uint32_t hdr, uint64_t fld, bool symm)
+ice_add_rss_cfg_post(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
 {
-	u8 gtpu_ctx_idx = calc_gtpu_ctx_idx(hdr);
+	u8 gtpu_ctx_idx = calc_gtpu_ctx_idx(cfg->addl_hdrs);
 
-	if (hdr & ICE_FLOW_SEG_HDR_IPV4)
-		return ice_add_rss_cfg_post_gtpu(pf, &pf->hash_ctx.gtpu4, hdr,
-						 fld, symm, gtpu_ctx_idx);
-	else if (hdr & ICE_FLOW_SEG_HDR_IPV6)
-		return ice_add_rss_cfg_post_gtpu(pf, &pf->hash_ctx.gtpu6, hdr,
-						 fld, symm, gtpu_ctx_idx);
+	if (cfg->addl_hdrs & ICE_FLOW_SEG_HDR_IPV4)
+		return ice_add_rss_cfg_post_gtpu(pf, &pf->hash_ctx.gtpu4,
+						 gtpu_ctx_idx, cfg);
+	else if (cfg->addl_hdrs & ICE_FLOW_SEG_HDR_IPV6)
+		return ice_add_rss_cfg_post_gtpu(pf, &pf->hash_ctx.gtpu6,
+						 gtpu_ctx_idx, cfg);
 
 	return 0;
 }
@@ -2881,36 +2870,36 @@ ice_rem_rss_cfg_post(struct ice_pf *pf, uint32_t hdr)
 
 int
 ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
-		uint64_t fld, uint32_t hdr)
+		     struct ice_rss_hash_cfg *cfg)
 {
 	struct ice_hw *hw = ICE_PF_TO_HW(pf);
 	int ret;
 
-	ret = ice_rem_rss_cfg(hw, vsi_id, fld, hdr);
+	ret = ice_rem_rss_cfg(hw, vsi_id, cfg);
 	if (ret && ret != ICE_ERR_DOES_NOT_EXIST)
 		PMD_DRV_LOG(ERR, "remove rss cfg failed\n");
 
-	ice_rem_rss_cfg_post(pf, hdr);
+	ice_rem_rss_cfg_post(pf, cfg->addl_hdrs);
 
 	return 0;
 }
 
 int
 ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
-		uint64_t fld, uint32_t hdr, bool symm)
+		     struct ice_rss_hash_cfg *cfg)
 {
 	struct ice_hw *hw = ICE_PF_TO_HW(pf);
 	int ret;
 
-	ret = ice_add_rss_cfg_pre(pf, hdr);
+	ret = ice_add_rss_cfg_pre(pf, cfg->addl_hdrs);
 	if (ret)
 		PMD_DRV_LOG(ERR, "add rss cfg pre failed\n");
 
-	ret = ice_add_rss_cfg(hw, vsi_id, fld, hdr, symm);
+	ret = ice_add_rss_cfg(hw, vsi_id, cfg);
 	if (ret)
 		PMD_DRV_LOG(ERR, "add rss cfg failed\n");
 
-	ret = ice_add_rss_cfg_post(pf, hdr, fld, symm);
+	ret = ice_add_rss_cfg_post(pf, cfg);
 	if (ret)
 		PMD_DRV_LOG(ERR, "add rss cfg post failed\n");
 
@@ -2921,13 +2910,16 @@ static void
 ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 {
 	struct ice_vsi *vsi = pf->main_vsi;
+	struct ice_rss_hash_cfg cfg;
 	int ret;
 
+	cfg.symm = 0;
+	cfg.hdr_type = ICE_RSS_ANY_HEADERS;
 	/* Configure RSS for IPv4 with src/dst addr as input set */
 	if (rss_hf & ETH_RSS_IPV4) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_FLOW_HASH_IPV4,
-				      ICE_FLOW_SEG_HDR_IPV4 |
-				      ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s IPV4 rss flow fail %d",
 				    __func__, ret);
@@ -2935,9 +2927,9 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 
 	/* Configure RSS for IPv6 with src/dst addr as input set */
 	if (rss_hf & ETH_RSS_IPV6) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_FLOW_HASH_IPV6,
-				      ICE_FLOW_SEG_HDR_IPV6 |
-				      ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s IPV6 rss flow fail %d",
 				    __func__, ret);
@@ -2945,10 +2937,10 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 
 	/* Configure RSS for udp4 with src/dst addr and port as input set */
 	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_UDP_IPV4,
-				      ICE_FLOW_SEG_HDR_UDP |
-				      ICE_FLOW_SEG_HDR_IPV4 |
-				      ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_HASH_UDP_IPV4;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s UDP_IPV4 rss flow fail %d",
 				    __func__, ret);
@@ -2956,10 +2948,10 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 
 	/* Configure RSS for udp6 with src/dst addr and port as input set */
 	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_UDP_IPV6,
-				      ICE_FLOW_SEG_HDR_UDP |
-				      ICE_FLOW_SEG_HDR_IPV6 |
-				      ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_HASH_UDP_IPV6;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s UDP_IPV6 rss flow fail %d",
 				    __func__, ret);
@@ -2967,10 +2959,10 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 
 	/* Configure RSS for tcp4 with src/dst addr and port as input set */
 	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_TCP_IPV4,
-				      ICE_FLOW_SEG_HDR_TCP |
-				      ICE_FLOW_SEG_HDR_IPV4 |
-				      ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_HASH_TCP_IPV4;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s TCP_IPV4 rss flow fail %d",
 				    __func__, ret);
@@ -2978,10 +2970,10 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 
 	/* Configure RSS for tcp6 with src/dst addr and port as input set */
 	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_TCP_IPV6,
-				      ICE_FLOW_SEG_HDR_TCP |
-				      ICE_FLOW_SEG_HDR_IPV6 |
-				      ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_HASH_TCP_IPV6;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s TCP_IPV6 rss flow fail %d",
 				    __func__, ret);
@@ -2989,10 +2981,10 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 
 	/* Configure RSS for sctp4 with src/dst addr and port as input set */
 	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_SCTP_IPV4,
-				      ICE_FLOW_SEG_HDR_SCTP |
-				      ICE_FLOW_SEG_HDR_IPV4 |
-				      ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_HASH_SCTP_IPV4;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s SCTP_IPV4 rss flow fail %d",
 				    __func__, ret);
@@ -3000,218 +2992,188 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 
 	/* Configure RSS for sctp6 with src/dst addr and port as input set */
 	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_SCTP_IPV6,
-				      ICE_FLOW_SEG_HDR_SCTP |
-				      ICE_FLOW_SEG_HDR_IPV6 |
-				      ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_HASH_SCTP_IPV6;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s SCTP_IPV6 rss flow fail %d",
 				    __func__, ret);
 	}
 
 	if (rss_hf & ETH_RSS_IPV4) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_FLOW_HASH_IPV4,
-				ICE_FLOW_SEG_HDR_GTPU_IP |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_IPV4 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_IPV4 rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_FLOW_HASH_IPV4,
-				ICE_FLOW_SEG_HDR_GTPU_EH |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_IPV4 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV4 rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_FLOW_HASH_IPV4,
-				ICE_FLOW_SEG_HDR_PPPOE |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s PPPoE_IPV4 rss flow fail %d",
 				    __func__, ret);
 	}
 
 	if (rss_hf & ETH_RSS_IPV6) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_FLOW_HASH_IPV6,
-				ICE_FLOW_SEG_HDR_GTPU_IP |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_IPV6 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_IPV6 rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_FLOW_HASH_IPV6,
-				ICE_FLOW_SEG_HDR_GTPU_EH |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_IPV6 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV6 rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_FLOW_HASH_IPV6,
-				ICE_FLOW_SEG_HDR_PPPOE |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
+				ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s PPPoE_IPV6 rss flow fail %d",
 				    __func__, ret);
 	}
 
 	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_UDP_IPV4,
-				ICE_FLOW_SEG_HDR_GTPU_IP |
-				ICE_FLOW_SEG_HDR_UDP |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_UDP |
+				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_IPV4_UDP rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_UDP_IPV4,
-				ICE_FLOW_SEG_HDR_GTPU_EH |
-				ICE_FLOW_SEG_HDR_UDP |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_UDP |
+				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV4_UDP rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_UDP_IPV4,
-				ICE_FLOW_SEG_HDR_PPPOE |
-				ICE_FLOW_SEG_HDR_UDP |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
+				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s PPPoE_IPV4_UDP rss flow fail %d",
 				    __func__, ret);
 	}
 
 	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_UDP_IPV6,
-				ICE_FLOW_SEG_HDR_GTPU_IP |
-				ICE_FLOW_SEG_HDR_UDP |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_UDP |
+				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_IPV6_UDP rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_UDP_IPV6,
-				ICE_FLOW_SEG_HDR_GTPU_EH |
-				ICE_FLOW_SEG_HDR_UDP |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_UDP |
+				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV6_UDP rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_UDP_IPV6,
-				ICE_FLOW_SEG_HDR_PPPOE |
-				ICE_FLOW_SEG_HDR_UDP |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
+				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s PPPoE_IPV6_UDP rss flow fail %d",
 				    __func__, ret);
 	}
 
 	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_TCP_IPV4,
-				ICE_FLOW_SEG_HDR_GTPU_IP |
-				ICE_FLOW_SEG_HDR_TCP |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_TCP |
+				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_IPV4_TCP rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_TCP_IPV4,
-				ICE_FLOW_SEG_HDR_GTPU_EH |
-				ICE_FLOW_SEG_HDR_TCP |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_TCP |
+				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV4_TCP rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_TCP_IPV4,
-				ICE_FLOW_SEG_HDR_PPPOE |
-				ICE_FLOW_SEG_HDR_TCP |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
+				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s PPPoE_IPV4_TCP rss flow fail %d",
 				    __func__, ret);
 	}
 
 	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_TCP_IPV6,
-				ICE_FLOW_SEG_HDR_GTPU_IP |
-				ICE_FLOW_SEG_HDR_TCP |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_TCP |
+				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_IPV6_TCP rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_TCP_IPV6,
-				ICE_FLOW_SEG_HDR_GTPU_EH |
-				ICE_FLOW_SEG_HDR_TCP |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_TCP |
+				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV6_TCP rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_TCP_IPV6,
-				ICE_FLOW_SEG_HDR_PPPOE |
-				ICE_FLOW_SEG_HDR_TCP |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
+				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s PPPoE_IPV6_TCP rss flow fail %d",
 				    __func__, ret);
 	}
 
 	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_SCTP_IPV4,
-				ICE_FLOW_SEG_HDR_GTPU_IP |
-				ICE_FLOW_SEG_HDR_SCTP |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_SCTP |
+				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_IPV4_SCTP rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_SCTP_IPV4,
-				ICE_FLOW_SEG_HDR_GTPU_EH |
-				ICE_FLOW_SEG_HDR_SCTP |
-				ICE_FLOW_SEG_HDR_IPV4 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_SCTP |
+				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV4_SCTP rss flow fail %d",
 				    __func__, ret);
 	}
 
 	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_SCTP_IPV6,
-				ICE_FLOW_SEG_HDR_GTPU_IP |
-				ICE_FLOW_SEG_HDR_SCTP |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_IP | ICE_FLOW_SEG_HDR_SCTP |
+				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_IPV6_SCTP rss flow fail %d",
 				    __func__, ret);
 
-		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, ICE_HASH_SCTP_IPV6,
-				ICE_FLOW_SEG_HDR_GTPU_EH |
-				ICE_FLOW_SEG_HDR_SCTP |
-				ICE_FLOW_SEG_HDR_IPV6 |
-				ICE_FLOW_SEG_HDR_IPV_OTHER, 0);
+		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_GTPU_EH | ICE_FLOW_SEG_HDR_SCTP |
+				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
+		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
 		if (ret)
 			PMD_DRV_LOG(ERR, "%s GTPU_EH_IPV6_SCTP rss flow fail %d",
 				    __func__, ret);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 05218af05e..71d1454685 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -11,6 +11,7 @@
 
 #include "base/ice_common.h"
 #include "base/ice_adminq_cmd.h"
+#include "base/ice_flow.h"
 
 #define ICE_VLAN_TAG_SIZE        4
 
@@ -378,19 +379,6 @@ struct ice_fdir_info {
 #define ICE_HASH_GTPU_CTX_DW_IP_TCP	8
 #define ICE_HASH_GTPU_CTX_MAX		9
 
-enum ice_rss_hash_func {
-	ICE_RSS_HASH_TOEPLITZ			= 0,
-	ICE_RSS_HASH_TOEPLITZ_SYMMETRIC		= 1,
-	ICE_RSS_HASH_XOR			= 2,
-	ICE_RSS_HASH_JHASH			= 3,
-};
-
-struct ice_rss_hash_cfg {
-	u32 addl_hdrs;
-	u64 hash_flds;
-	enum ice_rss_hash_func hash_func;
-};
-
 struct ice_hash_gtpu_ctx {
 	struct ice_rss_hash_cfg ctx[ICE_HASH_GTPU_CTX_MAX];
 };
@@ -542,9 +530,9 @@ void ice_vsi_enable_queues_intr(struct ice_vsi *vsi);
 void ice_vsi_disable_queues_intr(struct ice_vsi *vsi);
 void ice_vsi_queues_bind_intr(struct ice_vsi *vsi);
 int ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
-		uint64_t hash_fld, uint32_t pkt_hdr, bool symm);
+			 struct ice_rss_hash_cfg *cfg);
 int ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
-		uint64_t hash_fld, uint32_t pkt_hdr);
+			 struct ice_rss_hash_cfg *cfg);
 
 static inline int
 ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 45c69e6bfd..fe3e06c579 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -1274,16 +1274,15 @@ ice_hash_create(struct ice_adapter *ad,
 
 		goto out;
 	} else {
-		filter_ptr->rss_cfg.packet_hdr = headermask;
-		filter_ptr->rss_cfg.hashed_flds = hash_field;
-		filter_ptr->rss_cfg.symm =
+		filter_ptr->rss_cfg.hash.addl_hdrs = headermask;
+		filter_ptr->rss_cfg.hash.hash_flds = hash_field;
+		filter_ptr->rss_cfg.hash.symm =
 			(hash_function ==
 				RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ);
+		filter_ptr->rss_cfg.hash.hdr_type = ICE_RSS_ANY_HEADERS;
 
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx,
-				filter_ptr->rss_cfg.hashed_flds,
-				filter_ptr->rss_cfg.packet_hdr,
-				filter_ptr->rss_cfg.symm);
+					   &filter_ptr->rss_cfg.hash);
 		if (ret) {
 			rte_flow_error_set(error, EINVAL,
 					RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
@@ -1325,8 +1324,7 @@ ice_hash_destroy(struct ice_adapter *ad,
 		ICE_WRITE_REG(hw, VSIQF_HASH_CTL(vsi->vsi_id), reg);
 	} else {
 		ret = ice_rem_rss_cfg_wrap(pf, vsi->idx,
-				filter_ptr->rss_cfg.hashed_flds,
-				filter_ptr->rss_cfg.packet_hdr);
+					   &filter_ptr->rss_cfg.hash);
 		/* Fixme: Ignore the error if a rule does not exist.
 		 * Currently a rule for inputset change or symm turn on/off
 		 * will overwrite an exist rule, while application still
-- 
2.25.4


* [dpdk-dev] [PATCH v3 20/21] net/ice/base: add support for get/set RSS LUT to specify global LUT
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (18 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 19/21] net/ice/base: refactor RSS configure API Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 21/21] net/ice/base: update version Qi Zhang
  2020-10-29  8:20 ` [dpdk-dev] [PATCH v3 00/21] ice: update base code Yang, Qiming
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang, Brett Creeley

There is no way to specify a global RSS LUT ID with the current API, and
0 is the only global LUT ID that can be supported since it is hard-coded.
Upcoming support for specifying a global LUT ID will require this
flexibility. To fix this, update the API of ice_aq_get_rss_lut() and
ice_aq_set_rss_lut() to take the new structure
ice_aq_get_set_rss_lut_params, which includes a global_lut_id member. A
new structure was introduced instead of adding another parameter to the
previously mentioned functions for two reasons:

1. Reduce the number of parameters passed to the functions.
2. Reduce the amount of change required if the arguments ever need to be
   updated in the future.

Also, reduce duplicate code that checked for an invalid vsi_handle and
lut parameter by moving the checks into the lower-level
__ice_aq_get_set_rss_lut().
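
For illustration, a minimal sketch of how a caller fills the new
parameter structure and invokes the reworked API (struct, field and
function names are taken from this patch; hw and vsi are assumed to be
the usual driver handles):

	struct ice_aq_get_set_rss_lut_params params = { 0 };
	enum ice_status status;

	params.vsi_handle = vsi->idx;		/* software VSI handle */
	params.lut = vsi->rss_lut;		/* caller-provided LUT buffer */
	params.lut_size = vsi->rss_lut_size;
	params.lut_type = ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF;
	params.global_lut_id = 0;	/* only used when lut_type is global */

	status = ice_aq_set_rss_lut(hw, &params);	/* write the LUT */
	if (!status)
		status = ice_aq_get_rss_lut(hw, &params); /* read it back */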

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/ice_common.c | 54 ++++++++++++++-----------------
 drivers/net/ice/base/ice_common.h |  6 ++--
 drivers/net/ice/base/ice_type.h   |  8 +++++
 drivers/net/ice/ice_ethdev.c      | 28 ++++++++++++----
 4 files changed, 55 insertions(+), 41 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 0c1259b42a..304e55e210 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -3218,23 +3218,33 @@ ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
 /**
  * __ice_aq_get_set_rss_lut
  * @hw: pointer to the hardware structure
- * @vsi_id: VSI FW index
- * @lut_type: LUT table type
- * @lut: pointer to the LUT buffer provided by the caller
- * @lut_size: size of the LUT buffer
- * @glob_lut_idx: global LUT index
+ * @params: RSS LUT parameters
  * @set: set true to set the table, false to get the table
  *
  * Internal function to get (0x0B05) or set (0x0B03) RSS look up table
  */
 static enum ice_status
-__ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
-			 u16 lut_size, u8 glob_lut_idx, bool set)
+__ice_aq_get_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *params, bool set)
 {
+	u16 flags = 0, vsi_id, lut_type, lut_size, glob_lut_idx, vsi_handle;
 	struct ice_aqc_get_set_rss_lut *cmd_resp;
 	struct ice_aq_desc desc;
 	enum ice_status status;
-	u16 flags = 0;
+	u8 *lut;
+
+	if (!params)
+		return ICE_ERR_PARAM;
+
+	vsi_handle = params->vsi_handle;
+	lut = params->lut;
+
+	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
+		return ICE_ERR_PARAM;
+
+	lut_size = params->lut_size;
+	lut_type = params->lut_type;
+	glob_lut_idx = params->global_lut_id;
+	vsi_id = ice_get_hw_vsi_num(hw, vsi_handle);
 
 	cmd_resp = &desc.params.get_set_rss_lut;
 
@@ -3311,43 +3321,27 @@ __ice_aq_get_set_rss_lut(struct ice_hw *hw, u16 vsi_id, u8 lut_type, u8 *lut,
 /**
  * ice_aq_get_rss_lut
  * @hw: pointer to the hardware structure
- * @vsi_handle: software VSI handle
- * @lut_type: LUT table type
- * @lut: pointer to the LUT buffer provided by the caller
- * @lut_size: size of the LUT buffer
+ * @get_params: RSS LUT parameters used to specify which RSS LUT to get
  *
  * get the RSS lookup table, PF or VSI type
  */
 enum ice_status
-ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
-		   u8 *lut, u16 lut_size)
+ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params)
 {
-	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
-		return ICE_ERR_PARAM;
-
-	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
-					lut_type, lut, lut_size, 0, false);
+	return __ice_aq_get_set_rss_lut(hw, get_params, false);
 }
 
 /**
  * ice_aq_set_rss_lut
  * @hw: pointer to the hardware structure
- * @vsi_handle: software VSI handle
- * @lut_type: LUT table type
- * @lut: pointer to the LUT buffer provided by the caller
- * @lut_size: size of the LUT buffer
+ * @set_params: RSS LUT parameters used to specify how to set the RSS LUT
  *
  * set the RSS lookup table, PF or VSI type
  */
 enum ice_status
-ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type,
-		   u8 *lut, u16 lut_size)
+ice_aq_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *set_params)
 {
-	if (!ice_is_vsi_valid(hw, vsi_handle) || !lut)
-		return ICE_ERR_PARAM;
-
-	return __ice_aq_get_set_rss_lut(hw, ice_get_hw_vsi_num(hw, vsi_handle),
-					lut_type, lut, lut_size, 0, true);
+	return __ice_aq_get_set_rss_lut(hw, set_params, true);
 }
 
 /**
diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h
index 0288fb73e0..8c16c7a024 100644
--- a/drivers/net/ice/base/ice_common.h
+++ b/drivers/net/ice/base/ice_common.h
@@ -86,11 +86,9 @@ ice_write_tx_drbell_q_ctx(struct ice_hw *hw,
 			  u32 tx_drbell_q_index);
 
 enum ice_status
-ice_aq_get_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
-		   u16 lut_size);
+ice_aq_get_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *get_params);
 enum ice_status
-ice_aq_set_rss_lut(struct ice_hw *hw, u16 vsi_handle, u8 lut_type, u8 *lut,
-		   u16 lut_size);
+ice_aq_set_rss_lut(struct ice_hw *hw, struct ice_aq_get_set_rss_lut_params *set_params);
 enum ice_status
 ice_aq_get_rss_key(struct ice_hw *hw, u16 vsi_handle,
 		   struct ice_aqc_get_set_rss_keys *keys);
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index f93baed8d9..6b8d44f0b4 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -1069,6 +1069,14 @@ enum ice_sw_fwd_act_type {
 	ICE_INVAL_ACT
 };
 
+struct ice_aq_get_set_rss_lut_params {
+	u16 vsi_handle;		/* software VSI handle */
+	u16 lut_size;		/* size of the LUT buffer */
+	u8 lut_type;		/* type of the LUT (i.e. VSI, PF, Global) */
+	u8 *lut;		/* input RSS LUT for set and output RSS LUT for get */
+	u8 global_lut_id;	/* only valid when lut_type is global */
+};
+
 /* Checksum and Shadow RAM pointers */
 #define ICE_SR_NVM_CTRL_WORD			0x00
 #define ICE_SR_PHY_ANALOG_PTR			0x04
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index bd0bd5206c..317d8ad0bc 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -3185,6 +3185,7 @@ static int ice_init_rss(struct ice_pf *pf)
 	struct ice_hw *hw = ICE_PF_TO_HW(pf);
 	struct ice_vsi *vsi = pf->main_vsi;
 	struct rte_eth_dev *dev = pf->adapter->eth_dev;
+	struct ice_aq_get_set_rss_lut_params lut_params;
 	struct rte_eth_rss_conf *rss_conf;
 	struct ice_aqc_get_set_rss_keys key;
 	uint16_t i, nb_q;
@@ -3239,9 +3240,12 @@ static int ice_init_rss(struct ice_pf *pf)
 	for (i = 0; i < vsi->rss_lut_size; i++)
 		vsi->rss_lut[i] = i % nb_q;
 
-	ret = ice_aq_set_rss_lut(hw, vsi->idx,
-				 ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF,
-				 vsi->rss_lut, vsi->rss_lut_size);
+	lut_params.vsi_handle = vsi->idx;
+	lut_params.lut_size = vsi->rss_lut_size;
+	lut_params.lut_type = ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF;
+	lut_params.lut = vsi->rss_lut;
+	lut_params.global_lut_id = 0;
+	ret = ice_aq_set_rss_lut(hw, &lut_params);
 	if (ret)
 		goto out;
 
@@ -4166,6 +4170,7 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 static int
 ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
+	struct ice_aq_get_set_rss_lut_params lut_params;
 	struct ice_pf *pf = ICE_VSI_TO_PF(vsi);
 	struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
 	int ret;
@@ -4174,8 +4179,12 @@ ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 		return -EINVAL;
 
 	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
-		ret = ice_aq_get_rss_lut(hw, vsi->idx,
-			ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF, lut, lut_size);
+		lut_params.vsi_handle = vsi->idx;
+		lut_params.lut_size = lut_size;
+		lut_params.lut_type = ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF;
+		lut_params.lut = lut;
+		lut_params.global_lut_id = 0;
+		ret = ice_aq_get_rss_lut(hw, &lut_params);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to get RSS lookup table");
 			return -EINVAL;
@@ -4194,6 +4203,7 @@ ice_get_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 static int
 ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 {
+	struct ice_aq_get_set_rss_lut_params lut_params;
 	struct ice_pf *pf;
 	struct ice_hw *hw;
 	int ret;
@@ -4205,8 +4215,12 @@ ice_set_rss_lut(struct ice_vsi *vsi, uint8_t *lut, uint16_t lut_size)
 	hw = ICE_VSI_TO_HW(vsi);
 
 	if (pf->flags & ICE_FLAG_RSS_AQ_CAPABLE) {
-		ret = ice_aq_set_rss_lut(hw, vsi->idx,
-			ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF, lut, lut_size);
+		lut_params.vsi_handle = vsi->idx;
+		lut_params.lut_size = lut_size;
+		lut_params.lut_type = ICE_AQC_GSET_RSS_LUT_TABLE_TYPE_PF;
+		lut_params.lut = lut;
+		lut_params.global_lut_id = 0;
+		ret = ice_aq_set_rss_lut(hw, &lut_params);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to set RSS lookup table");
 			return -EINVAL;
-- 
2.25.4


* [dpdk-dev] [PATCH v3 21/21] net/ice/base: update version
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (19 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 20/21] net/ice/base: add support for get/set RSS LUT to specify global LUT Qi Zhang
@ 2020-10-28  3:23 ` Qi Zhang
  2020-10-29  8:20 ` [dpdk-dev] [PATCH v3 00/21] ice: update base code Yang, Qiming
  21 siblings, 0 replies; 24+ messages in thread
From: Qi Zhang @ 2020-10-28  3:23 UTC (permalink / raw)
  To: qiming.yang; +Cc: dev, Qi Zhang

Update the base code version in the README.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ice/base/README | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/README b/drivers/net/ice/base/README
index 1e9c854ae8..5229e5fe7d 100644
--- a/drivers/net/ice/base/README
+++ b/drivers/net/ice/base/README
@@ -6,7 +6,7 @@ Intel® ICE driver
 ==================
 
 This directory contains source code of FreeBSD ice driver of version
-2020.06.17 released by the team which develops
+2020.10.21 released by the team which develops
 basic drivers for any ice NIC. The directory of base/ contains the
 original source package.
 This driver is valid for the product(s) listed below
-- 
2.25.4


* Re: [dpdk-dev] [PATCH v3 00/21] ice: update base code
  2020-10-28  3:22 [dpdk-dev] [PATCH v3 00/21] ice: update base code Qi Zhang
                   ` (20 preceding siblings ...)
  2020-10-28  3:23 ` [dpdk-dev] [PATCH v3 21/21] net/ice/base: update version Qi Zhang
@ 2020-10-29  8:20 ` Yang, Qiming
  2020-10-29  8:31   ` Zhang, Qi Z
  21 siblings, 1 reply; 24+ messages in thread
From: Yang, Qiming @ 2020-10-29  8:20 UTC (permalink / raw)
  To: Zhang, Qi Z; +Cc: dev

> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: October 28, 2020 11:23
> To: Yang, Qiming <qiming.yang@intel.com>
> Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>
> Subject: [PATCH v3 00/21] ice: update base code
> 
> main change:
> 1. Refactor the RSS configure API.
> 2. Add global LUT support .
> 3. copule fix and code clean
> 
> v3:
> - fix gtpu rss bug in patch 19/22
> 
> v2:
> - fix missing code in patch 19/21.
> 
> *** BLURB HERE ***
> 
> Qi Zhang (21):
>   net/ice/base: add tunnel support for FDIR
>   net/ice/base: add NVM Write Response flags
>   net/ice/base: modify ptype bitmap for outer MAC
>   net/ice/base: rename ptype bitmap
>   net/ice/base: move sched function prototypes
>   net/ice/base: read security revision
>   net/ice/base: add functions to allocate and free a RSS global LUT
>   net/ice/base: add more capability to admin queue
>   net/ice/base: update to use package info from ice segment
>   net/ice/base: use malloc instead of calloc
>   net/ice/base: add support for class 5+ modules
>   net/ice/base: return error directly
>   net/ice/base: implement shared rate limiter
>   net/ice/base: recognize 860 as iSCSI port in CEE mode
>   net/ice/base: fix parameter name in comment
>   net/ice/base: support extended GPIO access
>   net/ice/base: remove duplicated AQ command flag setting
>   net/ice/base: introduce and use FLEX_ARRAY_SIZE where possible
>   net/ice/base: refactor RSS configure API
>   net/ice/base: add support for get/set RSS LUT to specify global LUT
>   net/ice/base: update version
> 
>  drivers/net/ice/base/README           |   2 +-
>  drivers/net/ice/base/ice_adminq_cmd.h |  35 +-
>  drivers/net/ice/base/ice_common.c     |  58 ++-
>  drivers/net/ice/base/ice_common.h     |  13 +-
>  drivers/net/ice/base/ice_dcb.c        |  38 +-
>  drivers/net/ice/base/ice_fdir.c       |   8 +
>  drivers/net/ice/base/ice_fdir.h       |   9 +
>  drivers/net/ice/base/ice_flex_pipe.c  |  46 +--
>  drivers/net/ice/base/ice_flex_type.h  |   8 +
>  drivers/net/ice/base/ice_flow.c       | 265 ++++++++------
>  drivers/net/ice/base/ice_flow.h       |  34 +-
>  drivers/net/ice/base/ice_nvm.c        | 174 +++++++++
>  drivers/net/ice/base/ice_sched.c      | 493 +++++++++++++++++---------
>  drivers/net/ice/base/ice_sched.h      |  29 +-
>  drivers/net/ice/base/ice_switch.c     |  68 +++-
>  drivers/net/ice/base/ice_switch.h     |   2 +
>  drivers/net/ice/base/ice_type.h       |  64 +++-
>  drivers/net/ice/ice_ethdev.c          | 346 +++++++++---------
>  drivers/net/ice/ice_ethdev.h          |  18 +-
>  drivers/net/ice/ice_hash.c            |  14 +-
>  20 files changed, 1138 insertions(+), 586 deletions(-)
> 
> --
> 2.25.4

Acked-by: Qiming Yang <qiming.yang@intel.com>

* Re: [dpdk-dev] [PATCH v3 00/21] ice: update base code
  2020-10-29  8:20 ` [dpdk-dev] [PATCH v3 00/21] ice: update base code Yang, Qiming
@ 2020-10-29  8:31   ` Zhang, Qi Z
  0 siblings, 0 replies; 24+ messages in thread
From: Zhang, Qi Z @ 2020-10-29  8:31 UTC (permalink / raw)
  To: Yang, Qiming; +Cc: dev



> -----Original Message-----
> From: Yang, Qiming <qiming.yang@intel.com>
> Sent: Thursday, October 29, 2020 4:20 PM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v3 00/21] ice: update base code
> 
> > -----Original Message-----
> > From: Zhang, Qi Z <qi.z.zhang@intel.com>
> > Sent: October 28, 2020 11:23
> > To: Yang, Qiming <qiming.yang@intel.com>
> > Cc: dev@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>
> > Subject: [PATCH v3 00/21] ice: update base code
> >
> > main change:
> > 1. Refactor the RSS configure API.
> > 2. Add global LUT support .
> > 3. copule fix and code clean
> >
> > v3:
> > - fix gtpu rss bug in patch 19/22
> >
> > v2:
> > - fix missing code in patch 19/21.
> >
> > *** BLURB HERE ***
> >
> > Qi Zhang (21):
> >   net/ice/base: add tunnel support for FDIR
> >   net/ice/base: add NVM Write Response flags
> >   net/ice/base: modify ptype bitmap for outer MAC
> >   net/ice/base: rename ptype bitmap
> >   net/ice/base: move sched function prototypes
> >   net/ice/base: read security revision
> >   net/ice/base: add functions to allocate and free a RSS global LUT
> >   net/ice/base: add more capability to admin queue
> >   net/ice/base: update to use package info from ice segment
> >   net/ice/base: use malloc instead of calloc
> >   net/ice/base: add support for class 5+ modules
> >   net/ice/base: return error directly
> >   net/ice/base: implement shared rate limiter
> >   net/ice/base: recognize 860 as iSCSI port in CEE mode
> >   net/ice/base: fix parameter name in comment
> >   net/ice/base: support extended GPIO access
> >   net/ice/base: remove duplicated AQ command flag setting
> >   net/ice/base: introduce and use FLEX_ARRAY_SIZE where possible
> >   net/ice/base: refactor RSS configure API
> >   net/ice/base: add support for get/set RSS LUT to specify global LUT
> >   net/ice/base: update version
> >
> >  drivers/net/ice/base/README           |   2 +-
> >  drivers/net/ice/base/ice_adminq_cmd.h |  35 +-
> >  drivers/net/ice/base/ice_common.c     |  58 ++-
> >  drivers/net/ice/base/ice_common.h     |  13 +-
> >  drivers/net/ice/base/ice_dcb.c        |  38 +-
> >  drivers/net/ice/base/ice_fdir.c       |   8 +
> >  drivers/net/ice/base/ice_fdir.h       |   9 +
> >  drivers/net/ice/base/ice_flex_pipe.c  |  46 +--
> >  drivers/net/ice/base/ice_flex_type.h  |   8 +
> >  drivers/net/ice/base/ice_flow.c       | 265 ++++++++------
> >  drivers/net/ice/base/ice_flow.h       |  34 +-
> >  drivers/net/ice/base/ice_nvm.c        | 174 +++++++++
> >  drivers/net/ice/base/ice_sched.c      | 493 +++++++++++++++++---------
> >  drivers/net/ice/base/ice_sched.h      |  29 +-
> >  drivers/net/ice/base/ice_switch.c     |  68 +++-
> >  drivers/net/ice/base/ice_switch.h     |   2 +
> >  drivers/net/ice/base/ice_type.h       |  64 +++-
> >  drivers/net/ice/ice_ethdev.c          | 346 +++++++++---------
> >  drivers/net/ice/ice_ethdev.h          |  18 +-
> >  drivers/net/ice/ice_hash.c            |  14 +-
> >  20 files changed, 1138 insertions(+), 586 deletions(-)
> >
> > --
> > 2.25.4
> 
> Acked-by: Qiming Yang <qiming.yang@intel.com>

Applied to dpdk-next-net-intel.

Thanks
Qi

