DPDK patches and discussions
* [dpdk-dev] [PATCH 00/20] bnxt patches
@ 2020-07-06  8:24 Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 01/20] net/bnxt: vxlan encap and decap with src property enabled Somnath Kotur
                   ` (19 more replies)
  0 siblings, 20 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Somnath Kotur <somnath.kotur@broadcom.com>

Added support for the following in host-based flow management
(a brief rte_flow usage sketch follows the list):
* VLAN push
* VLAN pop
* TF-ULP support for NAT (L3/L4 rewrite based)
* Flow control ops on the VF representor device for full offload
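
For readers unfamiliar with how these actions surface to applications,
here is a minimal, hedged sketch using only the generic rte_flow API.
The port number, VLAN TCI, NAT address and flow attributes below are
hypothetical, not taken from this series:

#include <rte_byteorder.h>
#include <rte_errno.h>
#include <rte_ether.h>
#include <rte_flow.h>
#include <rte_ip.h>

static int
offload_push_vlan_and_nat(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* VLAN push: add an outer 802.1Q tag with VID 100 */
	struct rte_flow_action_of_push_vlan push = {
		.ethertype = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN),
	};
	struct rte_flow_action_of_set_vlan_vid vid = {
		.vlan_vid = rte_cpu_to_be_16(100),
	};
	/* L3 NAT: rewrite the IPv4 source address */
	struct rte_flow_action_set_ipv4 nat_src = {
		.ipv4_addr = rte_cpu_to_be_32(RTE_IPV4(10, 0, 0, 1)),
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, .conf = &push },
		{ .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &vid },
		{ .type = RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC, .conf = &nat_src },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	if (!rte_flow_create(port_id, &attr, pattern, actions, &err))
		return -rte_errno;
	return 0;
}

The reverse direction would use RTE_FLOW_ACTION_TYPE_OF_POP_VLAN, which
takes no configuration structure.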

Jay Ding (2):
  net/bnxt: updated hsi_struct_def_dpdk.h
  net/bnxt: added HWRM support for global cfg

Kishore Padmanabha (16):
  net/bnxt: vxlan encap and decap with src property enabled
  net/bnxt: add support vlan header bitmap
  net/bnxt: add support for negative conditional opcodes
  net/bnxt: add validations to dpdk port id and phy port parsing
  net/bnxt: add support for index opcode constant
  net/bnxt: cleanup and refactoring
  net/bnxt: add support for vlan push and vlan pop actions
  net/bnxt: remove vnic and vport act bits from template matching
  net/bnxt: fix vxlan outer ip protocol id encapsulation
  net/bnxt: add number of vlan tags in the computed field list
  net/bnxt: enable support for PF and VF port action items
  net/bnxt: port configuration changes to support full offload
  net/bnxt: add support for conditional opcodes for mapper result table
  net/bnxt: add support for nat rte action items
  net/bnxt: add support for tp src/dst rte action items
  net/bnxt: use VF vnic when port action is for a VF rep port

Somnath Kotur (2):
  net/bnxt: enable flow ctrl ops for the VF-rep device
  net/bnxt: use byte/pkt count shift/masks from the device template

 drivers/net/bnxt/bnxt.h                        |    5 +
 drivers/net/bnxt/bnxt_ethdev.c                 |    9 +-
 drivers/net/bnxt/bnxt_reps.c                   |    3 +-
 drivers/net/bnxt/hsi_struct_def_dpdk.h         | 1486 +++++++++++++++++++++++-
 drivers/net/bnxt/tf_core/tf_em_host.c          |    2 +-
 drivers/net/bnxt/tf_core/tf_msg.c              |  118 +-
 drivers/net/bnxt/tf_core/tf_session.c          |    8 +
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h       |    8 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c             |   17 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c        |   34 +-
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c        |   11 +-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c           |   27 +-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h           |    6 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c           |  101 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h           |    2 +-
 drivers/net/bnxt/tf_ulp/ulp_matcher.c          |   10 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.c          |  105 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.h          |   55 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c       |  739 +++++++++---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h       |   52 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c  |    6 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h |  179 +--
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c  |   44 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h  |    9 +-
 24 files changed, 2680 insertions(+), 356 deletions(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 01/20] net/bnxt: vxlan encap and decap with src property enabled
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 02/20] net/bnxt: add support vlan header bitmap Somnath Kotur
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

The VXLAN encap and decap flows need to allocate the source
property record and populate the corresponding action fields during
flow parsing.
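
Not part of the patch, just an illustrative sketch: an application-side
VXLAN_ENCAP action whose Ethernet and IPv4 source fields are what this
parser change now copies into the source-property (SMAC / source IP)
action fields. Addresses and VNI below are hypothetical.

#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_flow.h>
#include <rte_ip.h>

static struct rte_flow_item_eth enc_eth = {
	.src.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 },
	.dst.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 },
};
static struct rte_flow_item_ipv4 enc_ip = {
	.hdr.src_addr = RTE_BE32(RTE_IPV4(192, 168, 1, 1)),
	.hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 1, 2)),
};
static struct rte_flow_item_udp enc_udp = {
	.hdr.dst_port = RTE_BE16(4789),	/* VXLAN */
};
static struct rte_flow_item_vxlan enc_vxlan = {
	.vni = { 0x00, 0x00, 0x64 },	/* VNI 100 */
};
static struct rte_flow_item enc_def[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH,   .spec = &enc_eth },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4,  .spec = &enc_ip },
	{ .type = RTE_FLOW_ITEM_TYPE_UDP,   .spec = &enc_udp },
	{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &enc_vxlan },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
static const struct rte_flow_action_vxlan_encap encap = {
	.definition = enc_def,
};
/* Used as: { .type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, .conf = &encap } */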

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c       |  7 ++++++-
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 10 ++++++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index c058611..fc29ff1 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -103,6 +103,9 @@ ulp_ctx_session_open(struct bnxt *bp,
 	/* EM */
 	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 2048;
 
+	/* EEM */
+	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
+
 	/** TX **/
 	/* Identifiers */
 	resources->ident_cnt[TF_DIR_TX].cnt[TF_IDENT_TYPE_L2_CTXT] = 8;
@@ -127,9 +130,11 @@ ulp_ctx_session_open(struct bnxt *bp,
 	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_EM_RECORD] = 8;
 
 	/* EEM */
-	resources->em_cnt[TF_DIR_RX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
 	resources->em_cnt[TF_DIR_TX].cnt[TF_EM_TBL_TYPE_TBL_SCOPE] = 1;
 
+	/* SP */
+	resources->tbl_cnt[TF_DIR_TX].cnt[TF_TBL_TYPE_ACT_SP_SMAC_IPV4] = 128;
+
 	rc = tf_open_session(&bp->tfp, &params);
 	if (rc) {
 		BNXT_TF_DBG(ERR, "Failed to open TF session - %s, rc = %d\n",
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 073b353..1bf0b76 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1058,6 +1058,11 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 			      eth_spec->dst.addr_bytes,
 			      BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_DMAC);
 
+	buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC];
+	ulp_encap_buffer_copy(buff,
+			      eth_spec->src.addr_bytes,
+			      BNXT_ULP_ACT_PROP_SZ_ENCAP_L2_SMAC);
+
 	/* Goto the next item */
 	if (!ulp_rte_item_skip_void(&item, 1))
 		return BNXT_TF_RC_ERROR;
@@ -1131,6 +1136,11 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 				      (const uint8_t *)&ipv4_spec->hdr.dst_addr,
 				      BNXT_ULP_ENCAP_IPV4_DEST_IP);
 
+		buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC];
+		ulp_encap_buffer_copy(buff,
+				      (const uint8_t *)&ipv4_spec->hdr.src_addr,
+				      BNXT_ULP_ACT_PROP_SZ_ENCAP_IP_SRC);
+
 		/* Update the ip size details */
 		ip_size = tfp_cpu_to_be_32(ip_size);
 		memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SZ],
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 02/20] net/bnxt: add support vlan header bitmap
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 01/20] net/bnxt: vxlan encap and decap with src property enabled Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 03/20] net/bnxt: add support for negative conditional opcodes Somnath Kotur
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Add support for VLAN header bits in the header bitmap used for
matching flow patterns.
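
As an aside, a hedged sketch of a QinQ match that would exercise the
new outer VLAN header bits; the TCI values are hypothetical.

#include <rte_byteorder.h>
#include <rte_flow.h>

static struct rte_flow_item_vlan outer_vlan = { .tci = RTE_BE16(100) };
static struct rte_flow_item_vlan inner_vlan = { .tci = RTE_BE16(200) };

static const struct rte_flow_item qinq_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	/* first VLAN after the outer ETH -> BNXT_ULP_HDR_BIT_OO_VLAN */
	{ .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &outer_vlan },
	/* second VLAN after the outer ETH -> BNXT_ULP_HDR_BIT_OI_VLAN */
	{ .type = RTE_FLOW_ITEM_TYPE_VLAN, .spec = &inner_vlan },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};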

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c       | 20 +++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h | 84 +++++++++++++-------------
 2 files changed, 53 insertions(+), 51 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 1bf0b76..a4dbd84 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -442,43 +442,43 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 	/* Update the hdr_bitmap of the vlans */
 	hdr_bit = &params->hdr_bitmap;
 	if (ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
+	    !ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_I_ETH) &&
 	    !outer_vtag_num) {
 		/* Update the vlan tag num */
 		outer_vtag_num++;
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_VTAG_NUM,
 				    outer_vtag_num);
-		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_VTAG_PRESENT, 1);
+		ULP_BITMAP_SET(params->hdr_bitmap.bits,
+			       BNXT_ULP_HDR_BIT_OO_VLAN);
 	} else if (ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
-		   ULP_COMP_FLD_IDX_RD(params,
-				       BNXT_ULP_CF_IDX_O_VTAG_PRESENT) &&
+		   !ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_I_ETH) &&
 		   outer_vtag_num == 1) {
 		/* update the vlan tag num */
 		outer_vtag_num++;
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_VTAG_NUM,
 				    outer_vtag_num);
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_TWO_VTAGS, 1);
+		ULP_BITMAP_SET(params->hdr_bitmap.bits,
+			       BNXT_ULP_HDR_BIT_OI_VLAN);
 	} else if (ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
-		   ULP_COMP_FLD_IDX_RD(params,
-				       BNXT_ULP_CF_IDX_O_VTAG_PRESENT) &&
 		   ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_I_ETH) &&
 		   !inner_vtag_num) {
 		/* update the vlan tag num */
 		inner_vtag_num++;
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_VTAG_NUM,
 				    inner_vtag_num);
-		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_VTAG_PRESENT, 1);
+		ULP_BITMAP_SET(params->hdr_bitmap.bits,
+			       BNXT_ULP_HDR_BIT_IO_VLAN);
 	} else if (ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
-		   ULP_COMP_FLD_IDX_RD(params,
-				       BNXT_ULP_CF_IDX_O_VTAG_PRESENT) &&
 		   ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_I_ETH) &&
-		   ULP_COMP_FLD_IDX_RD(params,
-				       BNXT_ULP_CF_IDX_O_VTAG_PRESENT) &&
 		   inner_vtag_num == 1) {
 		/* update the vlan tag num */
 		inner_vtag_num++;
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_VTAG_NUM,
 				    inner_vtag_num);
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_TWO_VTAGS, 1);
+		ULP_BITMAP_SET(params->hdr_bitmap.bits,
+			       BNXT_ULP_HDR_BIT_II_VLAN);
 	} else {
 		BNXT_TF_DBG(ERR, "Error Parsing:Vlan hdr found withtout eth\n");
 		return BNXT_TF_RC_ERROR;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 6955464..e13d20b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -61,18 +61,22 @@ enum bnxt_ulp_action_bit {
 
 enum bnxt_ulp_hdr_bit {
 	BNXT_ULP_HDR_BIT_O_ETH               = 0x0000000000000001,
-	BNXT_ULP_HDR_BIT_O_IPV4              = 0x0000000000000002,
-	BNXT_ULP_HDR_BIT_O_IPV6              = 0x0000000000000004,
-	BNXT_ULP_HDR_BIT_O_TCP               = 0x0000000000000008,
-	BNXT_ULP_HDR_BIT_O_UDP               = 0x0000000000000010,
-	BNXT_ULP_HDR_BIT_T_VXLAN             = 0x0000000000000020,
-	BNXT_ULP_HDR_BIT_T_GRE               = 0x0000000000000040,
-	BNXT_ULP_HDR_BIT_I_ETH               = 0x0000000000000080,
-	BNXT_ULP_HDR_BIT_I_IPV4              = 0x0000000000000100,
-	BNXT_ULP_HDR_BIT_I_IPV6              = 0x0000000000000200,
-	BNXT_ULP_HDR_BIT_I_TCP               = 0x0000000000000400,
-	BNXT_ULP_HDR_BIT_I_UDP               = 0x0000000000000800,
-	BNXT_ULP_HDR_BIT_LAST                = 0x0000000000001000
+	BNXT_ULP_HDR_BIT_OO_VLAN             = 0x0000000000000002,
+	BNXT_ULP_HDR_BIT_OI_VLAN             = 0x0000000000000004,
+	BNXT_ULP_HDR_BIT_O_IPV4              = 0x0000000000000008,
+	BNXT_ULP_HDR_BIT_O_IPV6              = 0x0000000000000010,
+	BNXT_ULP_HDR_BIT_O_TCP               = 0x0000000000000020,
+	BNXT_ULP_HDR_BIT_O_UDP               = 0x0000000000000040,
+	BNXT_ULP_HDR_BIT_T_VXLAN             = 0x0000000000000080,
+	BNXT_ULP_HDR_BIT_T_GRE               = 0x0000000000000100,
+	BNXT_ULP_HDR_BIT_I_ETH               = 0x0000000000000200,
+	BNXT_ULP_HDR_BIT_IO_VLAN             = 0x0000000000000400,
+	BNXT_ULP_HDR_BIT_II_VLAN             = 0x0000000000000800,
+	BNXT_ULP_HDR_BIT_I_IPV4              = 0x0000000000001000,
+	BNXT_ULP_HDR_BIT_I_IPV6              = 0x0000000000002000,
+	BNXT_ULP_HDR_BIT_I_TCP               = 0x0000000000004000,
+	BNXT_ULP_HDR_BIT_I_UDP               = 0x0000000000008000,
+	BNXT_ULP_HDR_BIT_LAST                = 0x0000000000010000
 };
 
 enum bnxt_ulp_act_type {
@@ -92,35 +96,33 @@ enum bnxt_ulp_cf_idx {
 	BNXT_ULP_CF_IDX_NOT_USED = 0,
 	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 1,
 	BNXT_ULP_CF_IDX_O_VTAG_NUM = 2,
-	BNXT_ULP_CF_IDX_O_VTAG_PRESENT = 3,
-	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 4,
-	BNXT_ULP_CF_IDX_I_VTAG_NUM = 5,
-	BNXT_ULP_CF_IDX_I_VTAG_PRESENT = 6,
-	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 7,
-	BNXT_ULP_CF_IDX_INCOMING_IF = 8,
-	BNXT_ULP_CF_IDX_DIRECTION = 9,
-	BNXT_ULP_CF_IDX_SVIF_FLAG = 10,
-	BNXT_ULP_CF_IDX_O_L3 = 11,
-	BNXT_ULP_CF_IDX_I_L3 = 12,
-	BNXT_ULP_CF_IDX_O_L4 = 13,
-	BNXT_ULP_CF_IDX_I_L4 = 14,
-	BNXT_ULP_CF_IDX_DEV_PORT_ID = 15,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 16,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 17,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 18,
-	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 19,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 20,
-	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 21,
-	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 22,
-	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 23,
-	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 24,
-	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 25,
-	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 26,
-	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 27,
-	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 28,
-	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG = 29,
-	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG = 30,
-	BNXT_ULP_CF_IDX_LAST = 31
+	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 3,
+	BNXT_ULP_CF_IDX_I_VTAG_NUM = 4,
+	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 5,
+	BNXT_ULP_CF_IDX_INCOMING_IF = 6,
+	BNXT_ULP_CF_IDX_DIRECTION = 7,
+	BNXT_ULP_CF_IDX_SVIF_FLAG = 8,
+	BNXT_ULP_CF_IDX_O_L3 = 9,
+	BNXT_ULP_CF_IDX_I_L3 = 10,
+	BNXT_ULP_CF_IDX_O_L4 = 11,
+	BNXT_ULP_CF_IDX_I_L4 = 12,
+	BNXT_ULP_CF_IDX_DEV_PORT_ID = 13,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 14,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 15,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 16,
+	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 17,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 18,
+	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 19,
+	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 20,
+	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 21,
+	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 22,
+	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 23,
+	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 24,
+	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 25,
+	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 26,
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG = 27,
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG = 28,
+	BNXT_ULP_CF_IDX_LAST = 29
 };
 
 enum bnxt_ulp_cond_opcode {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 03/20] net/bnxt: add support for negative conditional opcodes
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 01/20] net/bnxt: vxlan encap and decap with src property enabled Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 02/20] net/bnxt: add support vlan header bitmap Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 04/20] net/bnxt: add validations to dpdk port id and phy port parsing Somnath Kotur
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for negative conditional opcodes in the mapper table
processing, so a table can be executed only when a given computed
field, action bit or header bit is not set.
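
For context, a hypothetical template table entry (not part of the
generated template database, driver-internal headers assumed) showing
how a negative opcode gates a table on an action bit being clear; only
the fields relevant to the condition are shown:

/* Hypothetical entry in ulp_act_tbl_list[]: run this table only when
 * the COUNT action bit was NOT requested by the flow.
 */
struct bnxt_ulp_mapper_tbl_info example_tbl = {
	.cond_opcode  = BNXT_ULP_COND_OPCODE_ACTION_BIT_NOT_SET,
	.cond_operand = BNXT_ULP_ACTION_BIT_COUNT,
	.direction    = TF_DIR_RX,
};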

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c           | 21 ++++++++++++++++++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_act.c  |  6 +++---
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h | 11 +++++++----
 3 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index 3f175fb..eb77328 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -2006,21 +2006,36 @@ ulp_mapper_tbl_cond_opcode_process(struct bnxt_ulp_mapper_parms *parms,
 	case BNXT_ULP_COND_OPCODE_NOP:
 		rc = 0;
 		break;
-	case BNXT_ULP_COND_OPCODE_COMP_FIELD:
+	case BNXT_ULP_COND_OPCODE_COMP_FIELD_IS_SET:
 		if (tbl->cond_operand < BNXT_ULP_CF_IDX_LAST &&
 		    ULP_COMP_FLD_IDX_RD(parms, tbl->cond_operand))
 			rc = 0;
 		break;
-	case BNXT_ULP_COND_OPCODE_ACTION_BIT:
+	case BNXT_ULP_COND_OPCODE_ACTION_BIT_IS_SET:
 		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits,
 				     tbl->cond_operand))
 			rc = 0;
 		break;
-	case BNXT_ULP_COND_OPCODE_HDR_BIT:
+	case BNXT_ULP_COND_OPCODE_HDR_BIT_IS_SET:
 		if (ULP_BITMAP_ISSET(parms->hdr_bitmap->bits,
 				     tbl->cond_operand))
 			rc = 0;
 		break;
+	case BNXT_ULP_COND_OPCODE_COMP_FIELD_NOT_SET:
+		if (tbl->cond_operand < BNXT_ULP_CF_IDX_LAST &&
+		    !ULP_COMP_FLD_IDX_RD(parms, tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_ACTION_BIT_NOT_SET:
+		if (!ULP_BITMAP_ISSET(parms->act_bitmap->bits,
+				      tbl->cond_operand))
+			rc = 0;
+		break;
+	case BNXT_ULP_COND_OPCODE_HDR_BIT_NOT_SET:
+		if (!ULP_BITMAP_ISSET(parms->hdr_bitmap->bits,
+				      tbl->cond_operand))
+			rc = 0;
+		break;
 	default:
 		BNXT_TF_DBG(ERR,
 			    "Invalid arg in mapper tbl for cond opcode\n");
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
index 3d65073..c587ff5 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_act.c
@@ -284,7 +284,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.resource_type = TF_TBL_TYPE_ACT_STATS_64,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_INT_COUNT,
-	.cond_opcode = BNXT_ULP_COND_OPCODE_ACTION_BIT,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_ACTION_BIT_IS_SET,
 	.cond_operand = BNXT_ULP_ACTION_BIT_COUNT,
 	.direction = TF_DIR_RX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
@@ -331,7 +331,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
-	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD_IS_SET,
 	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG,
 	.direction = TF_DIR_TX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
@@ -348,7 +348,7 @@ struct bnxt_ulp_mapper_tbl_info ulp_act_tbl_list[] = {
 	.resource_type = TF_TBL_TYPE_ACT_SP_SMAC_IPV4,
 	.resource_sub_type =
 		BNXT_ULP_RESOURCE_SUB_TYPE_INDEX_TYPE_NORMAL,
-	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD,
+	.cond_opcode = BNXT_ULP_COND_OPCODE_COMP_FIELD_IS_SET,
 	.cond_operand = BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG,
 	.direction = TF_DIR_TX,
 	.srch_b4_alloc = BNXT_ULP_SEARCH_BEFORE_ALLOC_NO,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index e13d20b..6d6a734 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -127,10 +127,13 @@ enum bnxt_ulp_cf_idx {
 
 enum bnxt_ulp_cond_opcode {
 	BNXT_ULP_COND_OPCODE_NOP = 0,
-	BNXT_ULP_COND_OPCODE_COMP_FIELD = 1,
-	BNXT_ULP_COND_OPCODE_ACTION_BIT = 2,
-	BNXT_ULP_COND_OPCODE_HDR_BIT = 3,
-	BNXT_ULP_COND_OPCODE_LAST = 4
+	BNXT_ULP_COND_OPCODE_COMP_FIELD_IS_SET = 1,
+	BNXT_ULP_COND_OPCODE_ACTION_BIT_IS_SET = 2,
+	BNXT_ULP_COND_OPCODE_HDR_BIT_IS_SET = 3,
+	BNXT_ULP_COND_OPCODE_COMP_FIELD_NOT_SET = 4,
+	BNXT_ULP_COND_OPCODE_ACTION_BIT_NOT_SET = 5,
+	BNXT_ULP_COND_OPCODE_HDR_BIT_NOT_SET = 6,
+	BNXT_ULP_COND_OPCODE_LAST = 7
 };
 
 enum bnxt_ulp_critical_resource {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 04/20] net/bnxt: add validations to dpdk port id and phy port parsing
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (2 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 03/20] net/bnxt: add support for negative conditional opcodes Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 05/20] net/bnxt: add support for index opcode constant Somnath Kotur
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added validations in the ULP parser for the DPDK port ID and physical
port index during flow creation.
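
For comparison, a hedged sketch of the equivalent application-side
guard using the public ethdev API (the helper name is hypothetical);
the parser now applies a similar RTE_MAX_ETHPORTS bound before the
VNIC lookup:

#include <errno.h>
#include <rte_ethdev.h>

/* Hypothetical helper: reject a destination port id before asking the
 * PMD to offload a flow that redirects to it.
 */
static int
check_dst_port(uint16_t dst_port_id)
{
	if (!rte_eth_dev_is_valid_port(dst_port_id))
		return -EINVAL;
	return 0;
}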

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 50 ++++++++++++++++++++++++++++++--
 1 file changed, 48 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index a4dbd84..b8146c8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -297,8 +297,13 @@ ulp_rte_port_id_hdr_handler(const struct rte_flow_item *item,
 	 * Copy the rte_flow_item for Port into hdr_field using port id
 	 * header fields.
 	 */
-	if (port_spec)
+	if (port_spec) {
 		svif = (uint16_t)port_spec->id;
+		if (svif >= RTE_MAX_ETHPORTS) {
+			BNXT_TF_DBG(ERR, "ParseErr:Portid is not valid\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+	}
 	if (port_mask)
 		mask = (uint16_t)port_mask->id;
 
@@ -314,6 +319,8 @@ ulp_rte_phy_port_hdr_handler(const struct rte_flow_item *item,
 	const struct rte_flow_item_phy_port *port_spec = item->spec;
 	const struct rte_flow_item_phy_port *port_mask = item->mask;
 	uint32_t svif = 0, mask = 0;
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id;
 
 	/* Copy the rte_flow_item for phy port into hdr_field */
 	if (port_spec)
@@ -321,6 +328,22 @@ ulp_rte_phy_port_hdr_handler(const struct rte_flow_item *item,
 	if (port_mask)
 		mask = port_mask->index;
 
+	if (bnxt_ulp_cntxt_dev_id_get(params->ulp_ctx, &dev_id)) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+		return -EINVAL;
+	}
+
+	dparms = bnxt_ulp_device_params_get(dev_id);
+	if (!dparms) {
+		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+		return -EINVAL;
+	}
+
+	if (svif > dparms->num_phy_ports) {
+		BNXT_TF_DBG(ERR, "ParseErr:Phy Port is not valid\n");
+		return BNXT_TF_RC_PARSE_ERR;
+	}
+
 	/* Update the SVIF details */
 	return ulp_rte_parser_svif_set(params, item->type, svif, mask);
 }
@@ -1330,7 +1353,12 @@ ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
 				    "ParseErr:Portid Original not supported\n");
 			return BNXT_TF_RC_PARSE_ERR;
 		}
-		/* TBD: Update the computed VNIC using port conversion */
+		/* Update the computed VNIC using port conversion */
+		if (port_id->id >= RTE_MAX_ETHPORTS) {
+			BNXT_TF_DBG(ERR,
+				    "ParseErr:Portid is not valid\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
 		pid = bnxt_get_vnic_id(port_id->id, BNXT_ULP_INTF_TYPE_INVALID);
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
@@ -1349,6 +1377,8 @@ ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
 {
 	const struct rte_flow_action_phy_port *phy_port;
 	uint32_t vport;
+	struct bnxt_ulp_device_params *dparms;
+	uint32_t dev_id;
 
 	phy_port = action_item->conf;
 	if (phy_port) {
@@ -1357,6 +1387,22 @@ ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
 				    "Parse Err:Port Original not supported\n");
 			return BNXT_TF_RC_PARSE_ERR;
 		}
+		if (bnxt_ulp_cntxt_dev_id_get(prm->ulp_ctx, &dev_id)) {
+			BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
+			return -EINVAL;
+		}
+
+		dparms = bnxt_ulp_device_params_get(dev_id);
+		if (!dparms) {
+			BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
+			return -EINVAL;
+		}
+
+		if (phy_port->index > dparms->num_phy_ports) {
+			BNXT_TF_DBG(ERR, "ParseErr:Phy Port is not valid\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+
 		/* Get the vport of the physical port */
 		/* TBD: shall be changed later to portdb call */
 		vport = 1 << phy_port->index;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 05/20] net/bnxt: add support for index opcode constant
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (3 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 04/20] net/bnxt: add validations to dpdk port id and phy port parsing Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 06/20] net/bnxt: updated hsi_struct_def_dpdk.h Somnath Kotur
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Add support for the constant index opcode so that the PARIF
configuration can be taken from a constant value.
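
To illustrate (hypothetical entry, driver-internal headers assumed): an
interface table instance can now take its index straight from the
template, e.g. the VF function PARIF constant added in this patch,
instead of reading a computed field:

/* Hypothetical if-table entry: index comes from a template constant. */
struct bnxt_ulp_mapper_tbl_info example_if_tbl = {
	.index_opcode  = BNXT_ULP_INDEX_OPCODE_CONSTANT,
	.index_operand = BNXT_ULP_SYM_VF_FUNC_PARIF,
	.direction     = TF_DIR_RX,
};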

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_def_rules.c        | 11 ++++++++---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c           |  7 +++++--
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h |  4 +++-
 3 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
index 46b558f..b01ad0b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_def_rules.c
@@ -12,6 +12,8 @@
 #include "ulp_flow_db.h"
 #include "ulp_mapper.h"
 
+#define BNXT_ULP_FREE_PARIF_BASE 11
+
 struct bnxt_ulp_def_param_handler {
 	int32_t (*vfr_func)(struct bnxt_ulp_context *ulp_ctx,
 			    struct ulp_tlv_param *param,
@@ -81,12 +83,15 @@ ulp_set_parif_in_comp_fld(struct bnxt_ulp_context *ulp_ctx,
 	if (rc)
 		return rc;
 
-	if (parif_type == BNXT_ULP_PHY_PORT_PARIF)
+	if (parif_type == BNXT_ULP_PHY_PORT_PARIF) {
 		idx = BNXT_ULP_CF_IDX_PHY_PORT_PARIF;
-	else if (parif_type == BNXT_ULP_DRV_FUNC_PARIF)
+	} else if (parif_type == BNXT_ULP_DRV_FUNC_PARIF) {
 		idx = BNXT_ULP_CF_IDX_DRV_FUNC_PARIF;
-	else
+		/* Parif needs to be reset to a free partition */
+		parif += BNXT_ULP_FREE_PARIF_BASE;
+	} else {
 		idx = BNXT_ULP_CF_IDX_VF_FUNC_PARIF;
+	}
 
 	ULP_COMP_FLD_IDX_WR(mapper_params, idx, parif);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index eb77328..b0d31a8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -1922,11 +1922,14 @@ ulp_mapper_if_tbl_process(struct bnxt_ulp_mapper_parms *parms,
 	}
 
 	/* Get the index details from computed field */
-	if (tbl->index_opcode != BNXT_ULP_INDEX_OPCODE_COMP_FIELD) {
+	if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_COMP_FIELD) {
+		idx = ULP_COMP_FLD_IDX_RD(parms, tbl->index_operand);
+	} else if (tbl->index_opcode == BNXT_ULP_INDEX_OPCODE_CONSTANT) {
+		idx = tbl->index_operand;
+	} else {
 		BNXT_TF_DBG(ERR, "Invalid tbl index opcode\n");
 		return -EINVAL;
 	}
-	idx = ULP_COMP_FLD_IDX_RD(parms, tbl->index_operand);
 
 	/* Perform the tf table set by filling the set params */
 	iftbl_params.dir = tbl->direction;
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 6d6a734..892d8ea 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -188,7 +188,8 @@ enum bnxt_ulp_index_opcode {
 	BNXT_ULP_INDEX_OPCODE_ALLOCATE = 1,
 	BNXT_ULP_INDEX_OPCODE_GLOBAL = 2,
 	BNXT_ULP_INDEX_OPCODE_COMP_FIELD = 3,
-	BNXT_ULP_INDEX_OPCODE_LAST = 4
+	BNXT_ULP_INDEX_OPCODE_CONSTANT = 4,
+	BNXT_ULP_INDEX_OPCODE_LAST = 5
 };
 
 enum bnxt_ulp_mapper_opc {
@@ -511,6 +512,7 @@ enum bnxt_ulp_sym {
 	BNXT_ULP_SYM_IP_PROTO_IP_IN_IP = 4,
 	BNXT_ULP_SYM_IP_PROTO_TCP = 6,
 	BNXT_ULP_SYM_IP_PROTO_UDP = 17,
+	BNXT_ULP_SYM_VF_FUNC_PARIF = 15,
 	BNXT_ULP_SYM_NO = 0,
 	BNXT_ULP_SYM_YES = 1
 };
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 06/20] net/bnxt: updated hsi_struct_def_dpdk.h
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (4 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 05/20] net/bnxt: add support for index opcode constant Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 07/20] net/bnxt: added HWRM support for global cfg Somnath Kotur
                   ` (13 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Jay Ding <jay.ding@broadcom.com>

Brought in the latest hsi_struct_def_dpdk.h in order to pick up the
TF global cfg set/get HWRM commands.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 1486 +++++++++++++++++++++++++++++++-
 1 file changed, 1468 insertions(+), 18 deletions(-)

diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index 30516eb..b76aee2 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -341,9 +341,14 @@ struct cmd_nums {
 	#define HWRM_RING_CMPL_RING_QAGGINT_PARAMS        UINT32_C(0x52)
 	#define HWRM_RING_CMPL_RING_CFG_AGGINT_PARAMS     UINT32_C(0x53)
 	#define HWRM_RING_AGGINT_QCAPS                    UINT32_C(0x54)
+	#define HWRM_RING_SQ_ALLOC                        UINT32_C(0x55)
+	#define HWRM_RING_SQ_CFG                          UINT32_C(0x56)
+	#define HWRM_RING_SQ_FREE                         UINT32_C(0x57)
 	#define HWRM_RING_RESET                           UINT32_C(0x5e)
 	#define HWRM_RING_GRP_ALLOC                       UINT32_C(0x60)
 	#define HWRM_RING_GRP_FREE                        UINT32_C(0x61)
+	#define HWRM_RING_CFG                             UINT32_C(0x62)
+	#define HWRM_RING_QCFG                            UINT32_C(0x63)
 	/* Reserved for future use. */
 	#define HWRM_RESERVED5                            UINT32_C(0x64)
 	/* Reserved for future use. */
@@ -695,6 +700,10 @@ struct cmd_nums {
 	/* Experimental */
 	#define HWRM_TF_TCAM_FREE                         UINT32_C(0x2fb)
 	/* Experimental */
+	#define HWRM_TF_GLOBAL_CFG_SET                    UINT32_C(0x2fc)
+	/* Experimental */
+	#define HWRM_TF_GLOBAL_CFG_GET                    UINT32_C(0x2fd)
+	/* Experimental */
 	#define HWRM_SV                                   UINT32_C(0x400)
 	/* Experimental */
 	#define HWRM_DBG_READ_DIRECT                      UINT32_C(0xff10)
@@ -933,8 +942,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 1
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 45
-#define HWRM_VERSION_STR "1.10.1.45"
+#define HWRM_VERSION_RSVD 48
+#define HWRM_VERSION_STR "1.10.1.48"
 
 /****************
  * hwrm_ver_get *
@@ -8889,6 +8898,16 @@ struct hwrm_func_vf_cfg_input {
 	 */
 	#define HWRM_FUNC_VF_CFG_INPUT_FLAGS_L2_CTX_ASSETS_TEST \
 		UINT32_C(0x80)
+	/*
+	 * If this bit is set to 1, the VF driver is requesting FW to enable
+	 * PPP TX PUSH feature on all the TX rings specified in the
+	 * num_tx_rings field. By default, the PPP TX push feature is
+	 * disabled for all the TX rings of the VF. This flag is ignored if
+	 * the num_tx_rings field is not specified or the VF doesn't support
+	 * PPP tx push feature.
+	 */
+	#define HWRM_FUNC_VF_CFG_INPUT_FLAGS_PPP_PUSH_MODE_ENABLE \
+		UINT32_C(0x100)
 	/* The number of RSS/COS contexts requested for the VF. */
 	uint16_t	num_rsscos_ctxs;
 	/* The number of completion rings requested for the VF. */
@@ -9367,7 +9386,30 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_HOT_RESET_IF_SUPPORT \
 		UINT32_C(0x8)
-	uint8_t	unused_1[3];
+	/* If 1, the proxy mode is supported on this function */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_PROXY_MODE_SUPPORT \
+		UINT32_C(0x10)
+	/*
+	 * If 1, the tx rings source interface override feature is supported
+	 * on this function.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_TX_PROXY_SRC_INTF_OVERRIDE_SUPPORT \
+		UINT32_C(0x20)
+	/*
+	 * If 1, the device supports scheduler queues. SQs can be managed
+	 * using RING_SQ_ALLOC/CFG/FREE commands.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_SQ_SUPPORTED \
+		UINT32_C(0x40)
+	/*
+	 * If set to 1, then this function supports the TX push mode that
+	 * uses ping-pong buffers from the push pages.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_PPP_PUSH_MODE_SUPPORTED \
+		UINT32_C(0x80)
+	/* The maximum number of SQs supported by this device. */
+	uint8_t	max_sqs;
+	uint8_t	unused_1[2];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -9538,6 +9580,13 @@ struct hwrm_func_qcfg_output {
 	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_HOT_RESET_ALLOWED \
 		UINT32_C(0x200)
 	/*
+	 * If set to 1, then the PPP tx push mode is enabled for all the
+	 * reserved TX rings of this function. If set to 0, then PPP tx push
+	 * mode is disabled for all the reserved TX rings of this function.
+	 */
+	#define HWRM_FUNC_QCFG_OUTPUT_FLAGS_PPP_PUSH_MODE_ENABLED \
+		UINT32_C(0x400)
+	/*
 	 * This value is current MAC address configured for this
 	 * function. A value of 00-00-00-00-00-00 indicates no
 	 * MAC address is currently configured.
@@ -9869,7 +9918,7 @@ struct hwrm_func_qcfg_output {
  *****************/
 
 
-/* hwrm_func_cfg_input (size:704b/88B) */
+/* hwrm_func_cfg_input (size:768b/96B) */
 struct hwrm_func_cfg_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -10100,6 +10149,16 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_FLAGS_HOT_RESET_IF_EN_DIS \
 		UINT32_C(0x4000000)
+	/*
+	 * If this bit is set to 1, the PF driver is requesting FW
+	 * to enable PPP TX PUSH feature on all the TX rings specified in
+	 * the num_tx_rings field. By default, the PPP TX push feature is
+	 * disabled for all the TX rings of the function. This flag is
+	 * ignored if num_tx_rings field is not specified or the function
+	 * doesn't support PPP tx push feature.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_FLAGS_PPP_PUSH_MODE_ENABLE \
+		UINT32_C(0x8000000)
 	uint32_t	enables;
 	/*
 	 * This bit must be '1' for the mtu field to be
@@ -10246,6 +10305,12 @@ struct hwrm_func_cfg_input {
 	#define HWRM_FUNC_CFG_INPUT_ENABLES_HOT_RESET_IF_SUPPORT \
 		UINT32_C(0x800000)
 	/*
+	 * This bit must be '1' for the sq_id field to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES_SQ_ID \
+		UINT32_C(0x1000000)
+	/*
 	 * The maximum transmission unit of the function.
 	 * The HWRM should make sure that the mtu of
 	 * the function does not exceed the mtu of the physical
@@ -10509,6 +10574,9 @@ struct hwrm_func_cfg_input {
 	 * be reserved for this function on the RX side.
 	 */
 	uint16_t	num_mcast_filters;
+	/* Used by a PF driver to associate a SQ with a VF. */
+	uint16_t	sq_id;
+	uint8_t	unused_0[6];
 } __rte_packed;
 
 /* hwrm_func_cfg_output (size:128b/16B) */
@@ -10682,7 +10750,7 @@ struct hwrm_func_qstats_output {
  ************************/
 
 
-/* hwrm_func_qstats_ext_input (size:192b/24B) */
+/* hwrm_func_qstats_ext_input (size:256b/32B) */
 struct hwrm_func_qstats_ext_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -10737,7 +10805,22 @@ struct hwrm_func_qstats_ext_input {
 	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK UINT32_C(0x2)
 	#define HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_LAST \
 		HWRM_FUNC_QSTATS_EXT_INPUT_FLAGS_COUNTER_MASK
-	uint8_t	unused_0[5];
+	uint8_t	unused_0[1];
+	uint32_t	enables;
+	/*
+	 * This bit must be '1' for the sq_id and traffic_class fields to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_QSTATS_EXT_INPUT_ENABLES_SQ_ID     UINT32_C(0x1)
+	/* Specifies the SQ for which to gather statistics */
+	uint16_t	sq_id;
+	/*
+	 * Specifies the traffic class for which to gather statistics. Valid
+	 * values are 0 through (max_configurable_queues - 1), where
+	 * max_configurable_queues is in the response of HWRM_QUEUE_QPORTCFG
+	 */
+	uint16_t	traffic_class;
+	uint8_t	unused_1[4];
 } __rte_packed;
 
 /* hwrm_func_qstats_ext_output (size:1472b/184B) */
@@ -15128,8 +15211,13 @@ struct hwrm_port_phy_cfg_input {
 	#define HWRM_PORT_PHY_CFG_INPUT_FLAGS_DEPRECATED \
 		UINT32_C(0x2)
 	/*
-	 * When this bit is set to '1', the link shall be forced to
-	 * the force_link_speed value.
+	 * When this bit is set to '1', and the force_pam4_link_speed
+	 * bit in the 'enables' field is '0', the link shall be forced
+	 * to the force_link_speed value.
+	 *
+	 * When this bit is set to '1', and the force_pam4_link_speed
+	 * bit in the 'enables' field is '1', the link shall be forced
+	 * to the force_pam4_link_speed value.
 	 *
 	 * When this bit is set to '1', the HWRM client should
 	 * not enable any of the auto negotiation related
@@ -15602,7 +15690,23 @@ struct hwrm_port_phy_cfg_input {
 	/* 10Gb link speed */
 	#define HWRM_PORT_PHY_CFG_INPUT_EEE_LINK_SPEED_MASK_10GB \
 		UINT32_C(0x40)
-	uint8_t	unused_2[2];
+	/*
+	 * This is the speed that will be used if the force and force_pam4
+	 * bits are '1'.  If unsupported speed is selected, an error
+	 * will be generated.
+	 */
+	uint16_t	force_pam4_link_speed;
+	/* 50Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB \
+		UINT32_C(0x1f4)
+	/* 100Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB \
+		UINT32_C(0x3e8)
+	/* 200Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB \
+		UINT32_C(0x7d0)
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_LAST \
+		HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB
 	/*
 	 * Requested setting of TX LPI timer in microseconds.
 	 * This field is valid only when EEE is enabled and TX LPI is
@@ -25307,6 +25411,9 @@ struct hwrm_vnic_cfg_input {
 	/* This bit must be '1' for the queue_id field to be configured. */
 	#define HWRM_VNIC_CFG_INPUT_ENABLES_QUEUE_ID \
 		UINT32_C(0x80)
+	/* This bit must be '1' for the rx_csum_v2_mode field to be configured. */
+	#define HWRM_VNIC_CFG_INPUT_ENABLES_RX_CSUM_V2_MODE \
+		UINT32_C(0x100)
 	/* Logical vnic ID */
 	uint16_t	vnic_id;
 	/*
@@ -25364,7 +25471,41 @@ struct hwrm_vnic_cfg_input {
 	 * filter rules with destination VNIC specified.
 	 */
 	uint16_t	queue_id;
-	uint8_t	unused0[6];
+	/*
+	 * If the device supports the RX V2 and RX TPA start V2 completion
+	 * records as indicated by the HWRM_VNIC_QCAPS command, this field is
+	 * used to specify the two RX checksum modes supported by these
+	 * completion records.
+	 */
+	uint8_t	rx_csum_v2_mode;
+	/*
+	 * When configured with this checksum mode, the number of header
+	 * groups in the delivered packet with a valid IP checksum and
+	 * the number of header groups in the delivered packet with a valid
+	 * L4 checksum are reported. Valid checksums are counted from the
+	 * outermost header group to the innermost header group, stopping at
+	 * the first error.  This is the default checksum mode supported if
+	 * the driver doesn't explicitly configure the RX checksum mode.
+	 */
+	#define HWRM_VNIC_CFG_INPUT_RX_CSUM_V2_MODE_DEFAULT UINT32_C(0x0)
+	/*
+	 * When configured with this checksum mode, the checksum status is
+	 * reported using 'all ok' mode. In the RX completion record, one
+	 * bit indicates if the IP checksum is valid for all the parsed
+	 * header groups with an IP checksum. Another bit indicates if the
+	 * L4 checksum is valid for all the parsed header groups with an L4
+	 * checksum. The number of header groups that were parsed by the
+	 * chip and passed in the delivered packet is also reported.
+	 */
+	#define HWRM_VNIC_CFG_INPUT_RX_CSUM_V2_MODE_ALL_OK  UINT32_C(0x1)
+	/*
+	 * Any rx_csum_v2_mode value larger than or equal to this is not
+	 * valid
+	 */
+	#define HWRM_VNIC_CFG_INPUT_RX_CSUM_V2_MODE_MAX     UINT32_C(0x2)
+	#define HWRM_VNIC_CFG_INPUT_RX_CSUM_V2_MODE_LAST \
+		HWRM_VNIC_CFG_INPUT_RX_CSUM_V2_MODE_MAX
+	uint8_t	unused0[5];
 } __rte_packed;
 
 /* hwrm_vnic_cfg_output (size:128b/16B) */
@@ -25539,7 +25680,33 @@ struct hwrm_vnic_qcfg_output {
 	 * queue association.
 	 */
 	uint16_t	queue_id;
-	uint8_t	unused_1[5];
+	/*
+	 * If the device supports the RX V2 and RX TPA start V2 completion
+	 * records as indicated by the HWRM_VNIC_QCAPS command, this field is
+	 * used to specify the current RX checksum mode configured for all the
+	 * RX rings of a VNIC.
+	 */
+	uint8_t	rx_csum_v2_mode;
+	/*
+	 * This value indicates that the VNIC is configured to use the
+	 * default RX checksum mode for all the rings associated with this
+	 * VNIC.
+	 */
+	#define HWRM_VNIC_QCFG_OUTPUT_RX_CSUM_V2_MODE_DEFAULT UINT32_C(0x0)
+	/*
+	 * This value indicates that the VNIC is configured to use the RX
+	 * checksum 'all_ok' mode for all the rings associated with this
+	 * VNIC.
+	 */
+	#define HWRM_VNIC_QCFG_OUTPUT_RX_CSUM_V2_MODE_ALL_OK  UINT32_C(0x1)
+	/*
+	 * Any rx_csum_v2_mode value larger than or equal to this is not
+	 * valid
+	 */
+	#define HWRM_VNIC_QCFG_OUTPUT_RX_CSUM_V2_MODE_MAX     UINT32_C(0x2)
+	#define HWRM_VNIC_QCFG_OUTPUT_RX_CSUM_V2_MODE_LAST \
+		HWRM_VNIC_QCFG_OUTPUT_RX_CSUM_V2_MODE_MAX
+	uint8_t	unused_1[4];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -25677,6 +25844,17 @@ struct hwrm_vnic_qcaps_output {
 	#define HWRM_VNIC_QCAPS_OUTPUT_FLAGS_COS_ASSIGNMENT_CAP \
 		UINT32_C(0x100)
 	/*
+	 * When this bit is '1', it indicates that HW and firmware supports
+	 * the use of RX V2 and RX TPA start V2 completion records for all
+	 * the RX rings of a VNIC. Once set, this feature is mandatory to
+	 * be used for the RX rings of the VNIC. Additionally, two new RX
+	 * checksum features supported by these completion records can be
+	 * configured using the HWRM_VNIC_CFG on a VNIC. If set to '0', the
+	 * HW and the firmware does not support this feature.
+	 */
+	#define HWRM_VNIC_QCAPS_OUTPUT_FLAGS_RX_CMPL_V2_CAP \
+		UINT32_C(0x200)
+	/*
 	 * This field advertises the maximum concurrent TPA aggregations
 	 * supported by the VNIC on new devices that support TPA v2.
 	 * '0' means that TPA v2 is not supported.
@@ -26308,6 +26486,13 @@ struct hwrm_vnic_plcmodes_cfg_input {
 	 */
 	#define HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_HDS_ROCE \
 		UINT32_C(0x20)
+	/*
+	 * When this bit is '1', the VNIC shall be configured use the virtio
+	 * placement algorithm. This feature can only be configured when
+	 * proxy mode is supported on the function.
+	 */
+	#define HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_VIRTIO_PLACEMENT \
+		UINT32_C(0x40)
 	uint32_t	enables;
 	/*
 	 * This bit must be '1' for the jumbo_thresh_valid field to be
@@ -26327,6 +26512,12 @@ struct hwrm_vnic_plcmodes_cfg_input {
 	 */
 	#define HWRM_VNIC_PLCMODES_CFG_INPUT_ENABLES_HDS_THRESHOLD_VALID \
 		UINT32_C(0x4)
+	/*
+	 * This bit must be '1' for the max_bds_valid field to be
+	 * configured.
+	 */
+	#define HWRM_VNIC_PLCMODES_CFG_INPUT_ENABLES_MAX_BDS_VALID \
+		UINT32_C(0x8)
 	/* Logical vnic ID */
 	uint32_t	vnic_id;
 	/*
@@ -26354,7 +26545,21 @@ struct hwrm_vnic_plcmodes_cfg_input {
 	 * This value shall be in multiple of 4 bytes.
 	 */
 	uint16_t	hds_threshold;
-	uint8_t	unused_0[6];
+	/*
+	 * When virtio placement algorithm is enabled, this
+	 * value is used to determine the maximum number of BDs
+	 * that can be used to place an Rx Packet.
+	 * If an incoming packet does not fit in the buffers described
+	 * by the max BDs, the packet will be dropped and an error
+	 * will be reported in the completion. Valid values for this
+	 * field are between 1 and 8. If the VNIC uses header-data-
+	 * separation and/or TPA with buffer spanning enabled, valid
+	 * values for this field are between 2 and 8.
+	 * This feature can only be configured when proxy mode is
+	 * supported on the function.
+	 */
+	uint16_t	max_bds;
+	uint8_t	unused_0[4];
 } __rte_packed;
 
 /* hwrm_vnic_plcmodes_cfg_output (size:128b/16B) */
@@ -26372,8 +26577,9 @@ struct hwrm_vnic_plcmodes_cfg_output {
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
 	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
 	 */
 	uint8_t	valid;
 } __rte_packed;
@@ -26472,6 +26678,13 @@ struct hwrm_vnic_plcmodes_qcfg_output {
 	#define HWRM_VNIC_PLCMODES_QCFG_OUTPUT_FLAGS_DFLT_VNIC \
 		UINT32_C(0x40)
 	/*
+	 * When this bit is '1', the VNIC is configured to use the virtio
+	 * placement algorithm. This feature can only be configured when
+	 * proxy mode is supported on the function.
+	 */
+	#define HWRM_VNIC_PLCMODES_QCFG_OUTPUT_FLAGS_VIRTIO_PLACEMENT \
+		UINT32_C(0x80)
+	/*
 	 * When jumbo placement algorithm is enabled, this value
 	 * is used to determine the threshold for jumbo placement.
 	 * Packets with length larger than this value will be
@@ -26496,13 +26709,28 @@ struct hwrm_vnic_plcmodes_qcfg_output {
 	 * This value shall be in multiple of 4 bytes.
 	 */
 	uint16_t	hds_threshold;
-	uint8_t	unused_0[5];
+	/*
+	 * When virtio placement algorithm is enabled, this
+	 * value is used to determine the maximum number of BDs
+	 * that can be used to place an Rx Packet.
+	 * If an incoming packet does not fit in the buffers described
+	 * by the max BDs, the packet will be dropped and an error
+	 * will be reported in the completion. Valid values for this
+	 * field are between 1 and 8. If the VNIC uses header-data-
+	 * separation and/or TPA with buffer spanning enabled, valid
+	 * values for this field are between 2 and 8.
+	 * This feature can only be configured when proxy mode is supported
+	 * on the function
+	 */
+	uint16_t	max_bds;
+	uint8_t	unused_0[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
 	 * to indicate that the output has been completely written.
-	 * When writing a command completion or response to an internal processor,
-	 * the order of writes has to be such that this field is written last.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
 	 */
 	uint8_t	valid;
 } __rte_packed;
@@ -26700,6 +26928,12 @@ struct hwrm_ring_alloc_input {
 	 */
 	#define HWRM_RING_ALLOC_INPUT_ENABLES_RX_BUF_SIZE_VALID \
 		UINT32_C(0x100)
+	/*
+	 * This bit must be '1' for the sq_id field to be
+	 * configured.
+	 */
+	#define HWRM_RING_ALLOC_INPUT_ENABLES_SQ_ID \
+		UINT32_C(0x200)
 	/* Ring Type. */
 	uint8_t	ring_type;
 	/* L2 Completion Ring (CR) */
@@ -26765,7 +26999,8 @@ struct hwrm_ring_alloc_input {
 	 *    element of the ring.
 	 */
 	uint8_t	page_tbl_depth;
-	uint8_t	unused_1[2];
+	/* Used by a PF driver to associate a SQ with one of its TX rings. */
+	uint16_t	sq_id;
 	/*
 	 * Number of 16B units in the ring.  Minimum size for
 	 * a ring is 16 16B entries.
@@ -27132,6 +27367,290 @@ struct hwrm_ring_reset_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/*****************
+ * hwrm_ring_cfg *
+ *****************/
+
+
+/* hwrm_ring_cfg_input (size:256b/32B) */
+struct hwrm_ring_cfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Ring Type. */
+	uint8_t	ring_type;
+	/* TX Ring (TR) */
+	#define HWRM_RING_CFG_INPUT_RING_TYPE_TX UINT32_C(0x1)
+	/* RX Ring (RR) */
+	#define HWRM_RING_CFG_INPUT_RING_TYPE_RX UINT32_C(0x2)
+	#define HWRM_RING_CFG_INPUT_RING_TYPE_LAST \
+		HWRM_RING_CFG_INPUT_RING_TYPE_RX
+	uint8_t	unused_0;
+	/* Physical number of the ring. */
+	uint16_t	ring_id;
+	/* Ring config enable bits. */
+	uint16_t	enables;
+	/*
+	 * For Rx rings, the incoming packet data can be placed at either
+	 * a 0B, 2B, 10B or 12B offset from the start of the Rx packet
+	 * buffer.
+	 * When '1', the received packet will be padded with 2B, 10B or 12B
+	 * of zeros at the front of the packet. The exact offset is specified
+	 * by rx_sop_pad_bytes parameter.
+	 * When '0', the received packet will not be padded.
+	 * Note that this flag is only used for Rx rings and is ignored
+	 * for all other rings including Rx Aggregation rings.
+	 */
+	#define HWRM_RING_CFG_INPUT_ENABLES_RX_SOP_PAD_ENABLE \
+		UINT32_C(0x1)
+	/*
+	 * Proxy mode enable, for Tx, Rx and Rx aggregation rings only.
+	 * When rings are allocated, the PCI function on which driver issues
+	 * HWRM_RING_CFG command is assumed to own the rings. Hardware takes
+	 * the buffer descriptors (BDs) from those rings is assumed to issue
+	 * packet payload DMA using same PCI function. When proxy mode is
+	 * enabled, hardware can perform payload DMA using another PCI
+	 * function on same or different host.
+	 * When set to '0', the PCI function on which driver issues
+	 * HWRM_RING_CFG command is used for host payload DMA operation.
+	 * When set to '1', the host PCI function specified by proxy_fid is
+	 * used for host payload DMA operation.
+	 */
+	#define HWRM_RING_CFG_INPUT_ENABLES_PROXY_MODE_ENABLE \
+		UINT32_C(0x2)
+	/*
+	 * Tx ring packet source interface override, for Tx rings only.
+	 * When TX rings are allocated, the PCI function on which driver
+	 * issues HWRM_RING_CFG is assumed to be source interface of
+	 * packets sent from TX ring.
+	 * When set to '1', the host PCI function specified by proxy_fid
+	 * is used as source interface of the transmitted packets.
+	 */
+	#define HWRM_RING_CFG_INPUT_ENABLES_TX_PROXY_SRC_INTF_OVERRIDE \
+		UINT32_C(0x4)
+	/* The sq_id field is valid */
+	#define HWRM_RING_CFG_INPUT_ENABLES_SQ_ID \
+		UINT32_C(0x8)
+	/* Update completion ring ID associated with Tx or Rx ring. */
+	#define HWRM_RING_CFG_INPUT_ENABLES_CMPL_RING_ID_UPDATE \
+		UINT32_C(0x10)
+	/*
+	 * Proxy function FID value.
+	 * This value is only used when either proxy_mode_enable flag or
+	 * tx_proxy_svif_override is set to '1'.
+	 * When proxy_mode_enable is set to '1', it identifies a host PCI
+	 * function used for host payload DMA operations.
+	 * When tx_proxy_src_intf is set to '1', it identifies a host PCI
+	 * function as source interface for all transmitted packets from
+	 * the TX ring.
+	 */
+	uint16_t	proxy_fid;
+	/*
+	 * Identifies the new scheduler queue (SQ) to associate with the ring.
+	 * Only valid for Tx rings.
+	 * A value of zero indicates that the Tx ring should be associated
+	 * with the default scheduler queue (SQ).
+	 */
+	uint16_t	sq_id;
+	/*
+	 * This field is valid for TX or Rx rings. This value identifies the
+	 * new completion ring ID to associate with the TX or Rx ring.
+	 */
+	uint16_t	cmpl_ring_id;
+	/*
+	 * Rx SOP padding amount in bytes.
+	 * This value is only used when rx_sop_pad_enable flag is set to '1'.
+	 */
+	uint8_t	rx_sop_pad_bytes;
+	uint8_t	unused_1[3];
+} __rte_packed;
+
+/* hwrm_ring_cfg_output (size:128b/16B) */
+struct hwrm_ring_cfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/******************
+ * hwrm_ring_qcfg *
+ ******************/
+
+
+/* hwrm_ring_qcfg_input (size:192b/24B) */
+struct hwrm_ring_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer that the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Ring Type. */
+	uint8_t	ring_type;
+	/* TX Ring (TR) */
+	#define HWRM_RING_QCFG_INPUT_RING_TYPE_TX UINT32_C(0x1)
+	/* RX Ring (RR) */
+	#define HWRM_RING_QCFG_INPUT_RING_TYPE_RX UINT32_C(0x2)
+	#define HWRM_RING_QCFG_INPUT_RING_TYPE_LAST \
+		HWRM_RING_QCFG_INPUT_RING_TYPE_RX
+	uint8_t	unused_0[5];
+	/* Physical number of the ring. */
+	uint16_t	ring_id;
+} __rte_packed;
+
+/* hwrm_ring_qcfg_output (size:192b/24B) */
+struct hwrm_ring_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Ring config enable bits. */
+	uint16_t	enables;
+	/*
+	 * For Rx rings, the incoming packet data can be placed at either
+	 * a 0B, 2B, 10B or 12B offset from the start of the Rx packet
+	 * buffer.
+	 * When '1', the received packet will be padded with 2B, 10B or 12B
+	 * of zeros at the front of the packet. The exact offset is specified
+	 * by rx_sop_pad_bytes parameter.
+	 * When '0', the received packet will not be padded.
+	 * Note that this flag is only used for Rx rings and is ignored
+	 * for all other rings including Rx Aggregation rings.
+	 */
+	#define HWRM_RING_QCFG_OUTPUT_ENABLES_RX_SOP_PAD_ENABLE \
+		UINT32_C(0x1)
+	/*
+	 * Proxy mode enable, for Tx, Rx and Rx aggregation rings only.
+	 * When rings are allocated, the PCI function on which driver issues
+	 * HWRM_RING_CFG command is assumed to own the rings. Hardware takes
+	 * the buffer descriptors (BDs) from those rings is assumed to issue
+	 * packet payload DMA using same PCI function. When proxy mode is
+	 * enabled, hardware can perform payload DMA using another PCI
+	 * function on same or different host.
+	 * When set to '0', the PCI function on which driver issues
+	 * HWRM_RING_CFG command is used for host payload DMA operation.
+	 * When set to '1', the host PCI function specified by proxy_fid is
+	 * used for host payload DMA operation.
+	 */
+	#define HWRM_RING_QCFG_OUTPUT_ENABLES_PROXY_MODE_ENABLE \
+		UINT32_C(0x2)
+	/*
+	 * Tx ring packet source interface override, for Tx rings only.
+	 * When TX rings are allocated, the PCI function on which the driver
+	 * issues HWRM_RING_CFG is assumed to be the source interface of
+	 * packets sent from the TX ring.
+	 * When set to '1', the host PCI function specified by proxy_fid is
+	 * used as source interface of the transmitted packets.
+	 */
+	#define HWRM_RING_QCFG_OUTPUT_ENABLES_TX_PROXY_SRC_INTF_OVERRIDE \
+		UINT32_C(0x4)
+	/*
+	 * Proxy function FID value.
+	 * This value is only used when either the proxy_mode_enable flag or
+	 * tx_proxy_src_intf_override is set to '1'.
+	 * When proxy_mode_enable is set to '1', it identifies a host PCI
+	 * function used for host payload DMA operations.
+	 * When tx_proxy_src_intf_override is set to '1', it identifies a host
+	 * PCI function as the source interface for all transmitted packets
+	 * from the TX ring.
+	 */
+	uint16_t	proxy_fid;
+	/*
+	 * Identifies the new scheduler queue (SQ) to associate with the ring.
+	 * Only valid for Tx rings.
+	 * A value of zero indicates that the Tx ring should be associated with
+	 * the default scheduler queue (SQ).
+	 */
+	uint16_t	sq_id;
+	/*
+	 * This field is used when ring_type is a TX or Rx ring.
+	 * This value indicates what completion ring the TX or Rx ring
+	 * is associated with.
+	 */
+	uint16_t	cmpl_ring_id;
+	/*
+	 * Rx SOP padding amount in bytes.
+	 * This value is only used when rx_sop_pad_enable flag is set to '1'.
+	 */
+	uint8_t	rx_sop_pad_bytes;
+	uint8_t	unused_0[6];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal
+	 * processor, the order of writes has to be such that this field is
+	 * written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /**************************
  * hwrm_ring_aggint_qcaps *
  **************************/
@@ -27702,6 +28221,780 @@ struct hwrm_ring_grp_free_output {
 	 */
 	uint8_t	valid;
 } __rte_packed;
+
+/**********************
+ * hwrm_ring_sq_alloc *
+ **********************/
+
+
+/* hwrm_ring_sq_alloc_input (size:1088b/136B) */
+struct hwrm_ring_sq_alloc_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	uint32_t	enables;
+	/*
+	 * This bit must be '1' for the tqm_ring0 fields to be
+	 * configured.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_ENABLES_TQM_RING0     UINT32_C(0x1)
+	/*
+	 * This bit must be '1' for the tqm_ring1 fields to be
+	 * configured.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_ENABLES_TQM_RING1     UINT32_C(0x2)
+	/*
+	 * This bit must be '1' for the tqm_ring2 fields to be
+	 * configured.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_ENABLES_TQM_RING2     UINT32_C(0x4)
+	/*
+	 * This bit must be '1' for the tqm_ring3 fields to be
+	 * configured.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_ENABLES_TQM_RING3     UINT32_C(0x8)
+	/*
+	 * This bit must be '1' for the tqm_ring4 fields to be
+	 * configured.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_ENABLES_TQM_RING4     UINT32_C(0x10)
+	/*
+	 * This bit must be '1' for the tqm_ring5 fields to be
+	 * configured.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_ENABLES_TQM_RING5     UINT32_C(0x20)
+	/*
+	 * This bit must be '1' for the tqm_ring6 fields to be
+	 * configured.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_ENABLES_TQM_RING6     UINT32_C(0x40)
+	/*
+	 * This bit must be '1' for the tqm_ring7 fields to be
+	 * configured.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_ENABLES_TQM_RING7     UINT32_C(0x80)
+	/* Reserved for future use. */
+	uint32_t	reserved;
+	/* TQM ring 0 page size and level. */
+	uint8_t	tqm_ring0_pg_size_tqm_ring0_lvl;
+	/* TQM ring 0 PBL indirect levels. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_LVL_MASK      UINT32_C(0xf)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_LVL_SFT       0
+	/* PBL pointer is physical start address. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_LVL_LVL_0 \
+		UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_LVL_LVL_1 \
+		UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing to PTE
+	 * tables.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_LVL_LVL_2 \
+		UINT32_C(0x2)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_LVL_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_LVL_LVL_2
+	/* TQM ring 0 page size. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_PG_SIZE_MASK  UINT32_C(0xf0)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_PG_SIZE_SFT   4
+	/* 4KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_PG_SIZE_PG_4K \
+		(UINT32_C(0x0) << 4)
+	/* 8KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_PG_SIZE_PG_8K \
+		(UINT32_C(0x1) << 4)
+	/* 64KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_PG_SIZE_PG_64K \
+		(UINT32_C(0x2) << 4)
+	/* 2MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_PG_SIZE_PG_2M \
+		(UINT32_C(0x3) << 4)
+	/* 8MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_PG_SIZE_PG_8M \
+		(UINT32_C(0x4) << 4)
+	/* 1GB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_PG_SIZE_PG_1G \
+		(UINT32_C(0x5) << 4)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_PG_SIZE_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING0_PG_SIZE_PG_1G
+	/* TQM ring 1 page size and level. */
+	uint8_t	tqm_ring1_pg_size_tqm_ring1_lvl;
+	/* TQM ring 1 PBL indirect levels. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_LVL_MASK      UINT32_C(0xf)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_LVL_SFT       0
+	/* PBL pointer is physical start address. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_LVL_LVL_0 \
+		UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_LVL_LVL_1 \
+		UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing to PTE
+	 * tables.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_LVL_LVL_2 \
+		UINT32_C(0x2)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_LVL_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_LVL_LVL_2
+	/* TQM ring 1 page size. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_PG_SIZE_MASK  UINT32_C(0xf0)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_PG_SIZE_SFT   4
+	/* 4KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_PG_SIZE_PG_4K \
+		(UINT32_C(0x0) << 4)
+	/* 8KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_PG_SIZE_PG_8K \
+		(UINT32_C(0x1) << 4)
+	/* 64KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_PG_SIZE_PG_64K \
+		(UINT32_C(0x2) << 4)
+	/* 2MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_PG_SIZE_PG_2M \
+		(UINT32_C(0x3) << 4)
+	/* 8MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_PG_SIZE_PG_8M \
+		(UINT32_C(0x4) << 4)
+	/* 1GB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_PG_SIZE_PG_1G \
+		(UINT32_C(0x5) << 4)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_PG_SIZE_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING1_PG_SIZE_PG_1G
+	/* TQM ring 2 page size and level. */
+	uint8_t	tqm_ring2_pg_size_tqm_ring2_lvl;
+	/* TQM ring 2 PBL indirect levels. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_LVL_MASK      UINT32_C(0xf)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_LVL_SFT       0
+	/* PBL pointer is physical start address. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_LVL_LVL_0 \
+		UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_LVL_LVL_1 \
+		UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing to PTE
+	 * tables.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_LVL_LVL_2 \
+		UINT32_C(0x2)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_LVL_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_LVL_LVL_2
+	/* TQM ring 2 page size. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_PG_SIZE_MASK  UINT32_C(0xf0)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_PG_SIZE_SFT   4
+	/* 4KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_PG_SIZE_PG_4K \
+		(UINT32_C(0x0) << 4)
+	/* 8KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_PG_SIZE_PG_8K \
+		(UINT32_C(0x1) << 4)
+	/* 64KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_PG_SIZE_PG_64K \
+		(UINT32_C(0x2) << 4)
+	/* 2MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_PG_SIZE_PG_2M \
+		(UINT32_C(0x3) << 4)
+	/* 8MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_PG_SIZE_PG_8M \
+		(UINT32_C(0x4) << 4)
+	/* 1GB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_PG_SIZE_PG_1G \
+		(UINT32_C(0x5) << 4)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_PG_SIZE_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING2_PG_SIZE_PG_1G
+	/* TQM ring 3 page size and level. */
+	uint8_t	tqm_ring3_pg_size_tqm_ring3_lvl;
+	/* TQM ring 3 PBL indirect levels. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_LVL_MASK      UINT32_C(0xf)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_LVL_SFT       0
+	/* PBL pointer is physical start address. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_LVL_LVL_0 \
+		UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_LVL_LVL_1 \
+		UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing to PTE
+	 * tables.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_LVL_LVL_2 \
+		UINT32_C(0x2)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_LVL_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_LVL_LVL_2
+	/* TQM ring 3 page size. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_PG_SIZE_MASK  UINT32_C(0xf0)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_PG_SIZE_SFT   4
+	/* 4KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_PG_SIZE_PG_4K \
+		(UINT32_C(0x0) << 4)
+	/* 8KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_PG_SIZE_PG_8K \
+		(UINT32_C(0x1) << 4)
+	/* 64KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_PG_SIZE_PG_64K \
+		(UINT32_C(0x2) << 4)
+	/* 2MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_PG_SIZE_PG_2M \
+		(UINT32_C(0x3) << 4)
+	/* 8MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_PG_SIZE_PG_8M \
+		(UINT32_C(0x4) << 4)
+	/* 1GB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_PG_SIZE_PG_1G \
+		(UINT32_C(0x5) << 4)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_PG_SIZE_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING3_PG_SIZE_PG_1G
+	/* TQM ring 4 page size and level. */
+	uint8_t	tqm_ring4_pg_size_tqm_ring4_lvl;
+	/* TQM ring 4 PBL indirect levels. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_LVL_MASK      UINT32_C(0xf)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_LVL_SFT       0
+	/* PBL pointer is physical start address. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_LVL_LVL_0 \
+		UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_LVL_LVL_1 \
+		UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing to PTE
+	 * tables.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_LVL_LVL_2 \
+		UINT32_C(0x2)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_LVL_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_LVL_LVL_2
+	/* TQM ring 4 page size. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_PG_SIZE_MASK  UINT32_C(0xf0)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_PG_SIZE_SFT   4
+	/* 4KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_PG_SIZE_PG_4K \
+		(UINT32_C(0x0) << 4)
+	/* 8KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_PG_SIZE_PG_8K \
+		(UINT32_C(0x1) << 4)
+	/* 64KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_PG_SIZE_PG_64K \
+		(UINT32_C(0x2) << 4)
+	/* 2MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_PG_SIZE_PG_2M \
+		(UINT32_C(0x3) << 4)
+	/* 8MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_PG_SIZE_PG_8M \
+		(UINT32_C(0x4) << 4)
+	/* 1GB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_PG_SIZE_PG_1G \
+		(UINT32_C(0x5) << 4)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_PG_SIZE_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING4_PG_SIZE_PG_1G
+	/* TQM ring 5 page size and level. */
+	uint8_t	tqm_ring5_pg_size_tqm_ring5_lvl;
+	/* TQM ring 5 PBL indirect levels. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_LVL_MASK      UINT32_C(0xf)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_LVL_SFT       0
+	/* PBL pointer is physical start address. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_LVL_LVL_0 \
+		UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_LVL_LVL_1 \
+		UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing to PTE
+	 * tables.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_LVL_LVL_2 \
+		UINT32_C(0x2)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_LVL_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_LVL_LVL_2
+	/* TQM ring 5 page size. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_PG_SIZE_MASK  UINT32_C(0xf0)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_PG_SIZE_SFT   4
+	/* 4KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_PG_SIZE_PG_4K \
+		(UINT32_C(0x0) << 4)
+	/* 8KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_PG_SIZE_PG_8K \
+		(UINT32_C(0x1) << 4)
+	/* 64KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_PG_SIZE_PG_64K \
+		(UINT32_C(0x2) << 4)
+	/* 2MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_PG_SIZE_PG_2M \
+		(UINT32_C(0x3) << 4)
+	/* 8MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_PG_SIZE_PG_8M \
+		(UINT32_C(0x4) << 4)
+	/* 1GB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_PG_SIZE_PG_1G \
+		(UINT32_C(0x5) << 4)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_PG_SIZE_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING5_PG_SIZE_PG_1G
+	/* TQM ring 6 page size and level. */
+	uint8_t	tqm_ring6_pg_size_tqm_ring6_lvl;
+	/* TQM ring 6 PBL indirect levels. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_LVL_MASK      UINT32_C(0xf)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_LVL_SFT       0
+	/* PBL pointer is physical start address. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_LVL_LVL_0 \
+		UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_LVL_LVL_1 \
+		UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing to PTE
+	 * tables.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_LVL_LVL_2 \
+		UINT32_C(0x2)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_LVL_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_LVL_LVL_2
+	/* TQM ring 6 page size. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_PG_SIZE_MASK  UINT32_C(0xf0)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_PG_SIZE_SFT   4
+	/* 4KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_PG_SIZE_PG_4K \
+		(UINT32_C(0x0) << 4)
+	/* 8KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_PG_SIZE_PG_8K \
+		(UINT32_C(0x1) << 4)
+	/* 64KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_PG_SIZE_PG_64K \
+		(UINT32_C(0x2) << 4)
+	/* 2MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_PG_SIZE_PG_2M \
+		(UINT32_C(0x3) << 4)
+	/* 8MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_PG_SIZE_PG_8M \
+		(UINT32_C(0x4) << 4)
+	/* 1GB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_PG_SIZE_PG_1G \
+		(UINT32_C(0x5) << 4)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_PG_SIZE_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING6_PG_SIZE_PG_1G
+	/* TQM ring 7 page size and level. */
+	uint8_t	tqm_ring7_pg_size_tqm_ring7_lvl;
+	/* TQM ring 7 PBL indirect levels. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_LVL_MASK      UINT32_C(0xf)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_LVL_SFT       0
+	/* PBL pointer is physical start address. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_LVL_LVL_0 \
+		UINT32_C(0x0)
+	/* PBL pointer points to PTE table. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_LVL_LVL_1 \
+		UINT32_C(0x1)
+	/*
+	 * PBL pointer points to PDE table with each entry pointing to PTE
+	 * tables.
+	 */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_LVL_LVL_2 \
+		UINT32_C(0x2)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_LVL_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_LVL_LVL_2
+	/* TQM ring 7 page size. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_PG_SIZE_MASK  UINT32_C(0xf0)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_PG_SIZE_SFT   4
+	/* 4KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_PG_SIZE_PG_4K \
+		(UINT32_C(0x0) << 4)
+	/* 8KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_PG_SIZE_PG_8K \
+		(UINT32_C(0x1) << 4)
+	/* 64KB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_PG_SIZE_PG_64K \
+		(UINT32_C(0x2) << 4)
+	/* 2MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_PG_SIZE_PG_2M \
+		(UINT32_C(0x3) << 4)
+	/* 8MB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_PG_SIZE_PG_8M \
+		(UINT32_C(0x4) << 4)
+	/* 1GB. */
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_PG_SIZE_PG_1G \
+		(UINT32_C(0x5) << 4)
+	#define HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_PG_SIZE_LAST \
+		HWRM_RING_SQ_ALLOC_INPUT_TQM_RING7_PG_SIZE_PG_1G
+	/* TQM ring 0 page directory. */
+	uint64_t	tqm_ring0_page_dir;
+	/* TQM ring 1 page directory. */
+	uint64_t	tqm_ring1_page_dir;
+	/* TQM ring 2 page directory. */
+	uint64_t	tqm_ring2_page_dir;
+	/* TQM ring 3 page directory. */
+	uint64_t	tqm_ring3_page_dir;
+	/* TQM ring 4 page directory. */
+	uint64_t	tqm_ring4_page_dir;
+	/* TQM ring 5 page directory. */
+	uint64_t	tqm_ring5_page_dir;
+	/* TQM ring 6 page directory. */
+	uint64_t	tqm_ring6_page_dir;
+	/* TQM ring 7 page directory. */
+	uint64_t	tqm_ring7_page_dir;
+	/*
+	 * Number of TQM ring 0 entries.
+	 *
+	 * TQM fastpath rings should be sized large enough to accommodate the
+	 * maximum number of QPs (either L2 or RoCE, or both if shared)
+	 * that can be enqueued to the TQM ring.
+	 *
+	 * Note that TQM ring sizes cannot be extended while the system is
+	 * operational. If a PF driver needs to extend a TQM ring, it needs
+	 * to delete the SQ and then reallocate it.
+	 */
+	uint32_t	tqm_ring0_num_entries;
+	/*
+	 * Number of TQM ring 1 entries.
+	 *
+	 * TQM fastpath rings should be sized large enough to accommodate the
+	 * maximum number of QPs (either L2 or RoCE, or both if shared)
+	 * that can be enqueued to the TQM ring.
+	 *
+	 * Note that TQM ring sizes cannot be extended while the system is
+	 * operational. If a PF driver needs to extend a TQM ring, it needs
+	 * to delete the SQ and then reallocate it.
+	 */
+	uint32_t	tqm_ring1_num_entries;
+	/*
+	 * Number of TQM ring 2 entries.
+	 *
+	 * TQM fastpath rings should be sized large enough to accommodate the
+	 * maximum number of QPs (either L2 or RoCE, or both if shared)
+	 * that can be enqueued to the TQM ring.
+	 *
+	 * Note that TQM ring sizes cannot be extended while the system is
+	 * operational. If a PF driver needs to extend a TQM ring, it needs
+	 * to delete the SQ and then reallocate it.
+	 */
+	uint32_t	tqm_ring2_num_entries;
+	/*
+	 * Number of TQM ring 3 entries.
+	 *
+	 * TQM fastpath rings should be sized large enough to accommodate the
+	 * maximum number of QPs (either L2 or RoCE, or both if shared)
+	 * that can be enqueued to the TQM ring.
+	 *
+	 * Note that TQM ring sizes cannot be extended while the system is
+	 * operational. If a PF driver needs to extend a TQM ring, it needs
+	 * to delete the SQ and then reallocate it.
+	 */
+	uint32_t	tqm_ring3_num_entries;
+	/*
+	 * Number of TQM ring 4 entries.
+	 *
+	 * TQM fastpath rings should be sized large enough to accommodate the
+	 * maximum number of QPs (either L2 or RoCE, or both if shared)
+	 * that can be enqueued to the TQM ring.
+	 *
+	 * Note that TQM ring sizes cannot be extended while the system is
+	 * operational. If a PF driver needs to extend a TQM ring, it needs
+	 * to delete the SQ and then reallocate it.
+	 */
+	uint32_t	tqm_ring4_num_entries;
+	/*
+	 * Number of TQM ring 5 entries.
+	 *
+	 * TQM fastpath rings should be sized large enough to accommodate the
+	 * maximum number of QPs (either L2 or RoCE, or both if shared)
+	 * that can be enqueued to the TQM ring.
+	 *
+	 * Note that TQM ring sizes cannot be extended while the system is
+	 * operational. If a PF driver needs to extend a TQM ring, it needs
+	 * to delete the SQ and then reallocate it.
+	 */
+	uint32_t	tqm_ring5_num_entries;
+	/*
+	 * Number of TQM ring 6 entries.
+	 *
+	 * TQM fastpath rings should be sized large enough to accommodate the
+	 * maximum number of QPs (either L2 or RoCE, or both if shared)
+	 * that can be enqueued to the TQM ring.
+	 *
+	 * Note that TQM ring sizes cannot be extended while the system is
+	 * operational. If a PF driver needs to extend a TQM ring, it needs
+	 * to delete the SQ and then reallocate it.
+	 */
+	uint32_t	tqm_ring6_num_entries;
+	/*
+	 * Number of TQM ring 7 entries.
+	 *
+	 * TQM fastpath rings should be sized large enough to accommodate the
+	 * maximum number of QPs (either L2 or RoCE, or both if shared)
+	 * that can be enqueued to the TQM ring.
+	 *
+	 * Note that TQM ring sizes cannot be extended while the system is
+	 * operational. If a PF driver needs to extend a TQM ring, it needs
+	 * to delete the SQ and then reallocate it.
+	 */
+	uint32_t	tqm_ring7_num_entries;
+	/* Number of bytes that have been allocated for each context entry. */
+	uint16_t	tqm_entry_size;
+	uint8_t	unused_0[6];
+} __rte_packed;
+
+/* hwrm_ring_sq_alloc_output (size:128b/16B) */
+struct hwrm_ring_sq_alloc_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/*
+	 * This is an identifier for the SQ to be used in other HWRM commands
+	 * that need to reference this SQ. This value is greater than zero
+	 * (i.e. a sq_id of zero references the default SQ).
+	 */
+	uint16_t	sq_id;
+	uint8_t	unused_0[5];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/********************
+ * hwrm_ring_sq_cfg *
+ ********************/
+
+
+/* hwrm_ring_sq_cfg_input (size:768b/96B) */
+struct hwrm_ring_sq_cfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/*
+	 * Identifies the SQ being configured. A sq_id of zero refers to the
+	 * default SQ.
+	 */
+	uint16_t	sq_id;
+	/*
+	 * This field is an 8 bit bitmap that indicates which TCs are enabled
+	 * in this SQ. Bit 0 represents traffic class 0 and bit 7 represents
+	 * traffic class 7.
+	 */
+	uint8_t	tc_enabled;
+	uint8_t	unused_0;
+	uint32_t	flags;
+	/* The tc_max_bw array and the max_bw parameters are valid */
+	#define HWRM_RING_SQ_CFG_INPUT_FLAGS_TC_MAX_BW_ENABLED \
+		UINT32_C(0x1)
+	/* The tc_min_bw array is valid */
+	#define HWRM_RING_SQ_CFG_INPUT_FLAGS_TC_MIN_BW_ENABLED \
+		UINT32_C(0x2)
+	/* Maximum bandwidth of the traffic class, specified in Mbps. */
+	uint32_t	max_bw_tc0;
+	/* Maximum bandwidth of the traffic class, specified in Mbps. */
+	uint32_t	max_bw_tc1;
+	/* Maximum bandwidth of the traffic class, specified in Mbps. */
+	uint32_t	max_bw_tc2;
+	/* Maximum bandwidth of the traffic class, specified in Mbps. */
+	uint32_t	max_bw_tc3;
+	/* Maximum bandwidth of the traffic class, specified in Mbps. */
+	uint32_t	max_bw_tc4;
+	/* Maximum bandwidth of the traffic class, specified in Mbps. */
+	uint32_t	max_bw_tc5;
+	/* Maximum bandwidth of the traffic class, specified in Mbps. */
+	uint32_t	max_bw_tc6;
+	/* Maximum bandwidth of the traffic class, specified in Mbps. */
+	uint32_t	max_bw_tc7;
+	/*
+	 * Bandwidth reservation for the traffic class, specified in Mbps.
+	 * A value of zero signifies that traffic belonging to this class
+	 * shares the bandwidth reservation for the same traffic class of
+	 * the default SQ.
+	 */
+	uint32_t	min_bw_tc0;
+	/*
+	 * Bandwidth reservation for the traffic class, specified in Mbps.
+	 * A value of zero signifies that traffic belonging to this class
+	 * shares the bandwidth reservation for the same traffic class of
+	 * the default SQ.
+	 */
+	uint32_t	min_bw_tc1;
+	/*
+	 * Bandwidth reservation for the traffic class, specified in Mbps.
+	 * A value of zero signifies that traffic belonging to this class
+	 * shares the bandwidth reservation for the same traffic class of
+	 * the default SQ.
+	 */
+	uint32_t	min_bw_tc2;
+	/*
+	 * Bandwidth reservation for the traffic class, specified in Mbps.
+	 * A value of zero signifies that traffic belonging to this class
+	 * shares the bandwidth reservation for the same traffic class of
+	 * the default SQ.
+	 */
+	uint32_t	min_bw_tc3;
+	/*
+	 * Bandwidth reservation for the traffic class, specified in Mbps.
+	 * A value of zero signifies that traffic belonging to this class
+	 * shares the bandwidth reservation for the same traffic class of
+	 * the default SQ.
+	 */
+	uint32_t	min_bw_tc4;
+	/*
+	 * Bandwidth reservation for the traffic class, specified in Mbps.
+	 * A value of zero signifies that traffic belonging to this class
+	 * shares the bandwidth reservation for the same traffic class of
+	 * the default SQ.
+	 */
+	uint32_t	min_bw_tc5;
+	/*
+	 * Bandwidth reservation for the traffic class, specified in Mbps.
+	 * A value of zero signifies that traffic belonging to this class
+	 * shares the bandwidth reservation for the same traffic class of
+	 * the default SQ.
+	 */
+	uint32_t	min_bw_tc6;
+	/*
+	 * Bandwidth reservation for the traffic class, specified in Mbps.
+	 * A value of zero signifies that traffic belonging to this class
+	 * shares the bandwidth reservation for the same traffic class of
+	 * the default SQ.
+	 */
+	uint32_t	min_bw_tc7;
+	/*
+	 * Indicates the max bandwidth for all enabled traffic classes in
+	 * this SQ, specified in Mbps.
+	 */
+	uint32_t	max_bw;
+	uint8_t	unused_1[4];
+} __rte_packed;
+
+/* hwrm_ring_sq_cfg_output (size:128b/16B) */
+struct hwrm_ring_sq_cfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/*********************
+ * hwrm_ring_sq_free *
+ *********************/
+
+
+/* hwrm_ring_sq_free_input (size:192b/24B) */
+struct hwrm_ring_sq_free_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Identifies the SQ being freed. */
+	uint16_t	sq_id;
+	uint8_t	unused_0[6];
+} __rte_packed;
+
+/* hwrm_ring_sq_free_output (size:128b/16B) */
+struct hwrm_ring_sq_free_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint8_t	unused_0[7];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
 /*
  * special reserved flow ID to identify per function default
  * flows for vSwitch offload
@@ -37315,6 +38608,163 @@ struct hwrm_tf_tcam_free_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/**************************
+ * hwrm_tf_global_cfg_set *
+ **************************/
+
+
+/* hwrm_tf_global_cfg_set_input (size:448b/56B) */
+struct hwrm_tf_global_cfg_set_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_GLOBAL_CFG_SET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_GLOBAL_CFG_SET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_GLOBAL_CFG_SET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_GLOBAL_CFG_SET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_GLOBAL_CFG_SET_INPUT_FLAGS_DIR_TX
+	/* Global Cfg type */
+	uint32_t	type;
+	/* Offset of the type */
+	uint32_t	offset;
+	/* Size of the data to set in bytes */
+	uint16_t	size;
+	/* unused. */
+	uint8_t	unused0[6];
+	/* Data to set */
+	uint8_t	data[16];
+} __rte_packed;
+
+/* hwrm_tf_global_cfg_set_output (size:128b/16B) */
+struct hwrm_tf_global_cfg_set_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* unused. */
+	uint8_t	unused0[7];
+	/*
+	 * This field is used in Output records to indicate that the
+	 * output is completely written to RAM. This field should be
+	 * read as '1' to indicate that the output has been
+	 * completely written.  When writing a command completion or
+	 * response to an internal processor, the order of writes has
+	 * to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
+/**************************
+ * hwrm_tf_global_cfg_get *
+ **************************/
+
+
+/* hwrm_tf_global_cfg_get_input (size:320b/40B) */
+struct hwrm_tf_global_cfg_get_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Firmware session id returned when HWRM_TF_SESSION_OPEN is sent. */
+	uint32_t	fw_session_id;
+	/* Control flags. */
+	uint32_t	flags;
+	/* Indicates the flow direction. */
+	#define HWRM_TF_GLOBAL_CFG_GET_INPUT_FLAGS_DIR     UINT32_C(0x1)
+	/* If this bit is set to 0, then it indicates an rx flow. */
+	#define HWRM_TF_GLOBAL_CFG_GET_INPUT_FLAGS_DIR_RX    UINT32_C(0x0)
+	/* If this bit is set to 1, then it indicates a tx flow. */
+	#define HWRM_TF_GLOBAL_CFG_GET_INPUT_FLAGS_DIR_TX    UINT32_C(0x1)
+	#define HWRM_TF_GLOBAL_CFG_GET_INPUT_FLAGS_DIR_LAST \
+		HWRM_TF_GLOBAL_CFG_GET_INPUT_FLAGS_DIR_TX
+	/* Global Cfg type */
+	uint32_t	type;
+	/* Offset of the type */
+	uint32_t	offset;
+	/* Size of the data to read in bytes */
+	uint16_t	size;
+	/* unused. */
+	uint8_t	unused0[6];
+} __rte_packed;
+
+/* hwrm_tf_global_cfg_get_output (size:256b/32B) */
+struct hwrm_tf_global_cfg_get_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	/* Size of the data read in bytes */
+	uint16_t	size;
+	/* unused. */
+	uint8_t	unused0[6];
+	/* Data read */
+	uint8_t	data[16];
+} __rte_packed;
+
 /******************************
  * hwrm_tunnel_dst_port_query *
  ******************************/
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread
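
As a quick orientation for the new SQ messages defined above, the following
is a minimal sketch of how a caller might fill hwrm_ring_sq_cfg_input to cap
per-TC bandwidth. The helper name and the chosen values are illustrative
only, and the HWRM send/completion plumbing, which is driver-specific, is
left out.

#include <string.h>
#include <rte_byteorder.h>
#include "hsi_struct_def_dpdk.h"

/* Illustrative only: cap TC0/TC1 bandwidth on a non-default SQ. */
static void
example_fill_ring_sq_cfg(struct hwrm_ring_sq_cfg_input *req, uint16_t sq_id)
{
	memset(req, 0, sizeof(*req));
	req->sq_id = rte_cpu_to_le_16(sq_id);
	/* Bitmap of enabled TCs: bit 0 = TC0, bit 1 = TC1. */
	req->tc_enabled = 0x3;
	/* The max_bw_tcN and max_bw fields are valid only with this flag. */
	req->flags = rte_cpu_to_le_32(
		HWRM_RING_SQ_CFG_INPUT_FLAGS_TC_MAX_BW_ENABLED);
	req->max_bw_tc0 = rte_cpu_to_le_32(10000);	/* Mbps */
	req->max_bw_tc1 = rte_cpu_to_le_32(5000);	/* Mbps */
	req->max_bw = rte_cpu_to_le_32(15000);		/* Mbps */
}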

* [dpdk-dev] [PATCH 07/20] net/bnxt: added HWRM support for global cfg
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (5 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 06/20] net/bnxt: updated hsi_struct_def_dpdk.h Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 08/20] net/bnxt: cleanup and refactoring Somnath Kotur
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Jay Ding <jay.ding@broadcom.com>

Convert the global cfg get/set messages from tunneled to
non-tunneled (direct) HWRM commands.

Signed-off-by: Jay Ding <jay.ding@broadcom.com>
Reviewed-by: Michael Wildt <michael.wildt@broadcom.com>
Reviewed-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_msg.c | 43 ++++++++++++++++++++-------------------
 1 file changed, 22 insertions(+), 21 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index ed506de..77c9659 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1021,8 +1021,8 @@ tf_msg_get_global_cfg(struct tf *tfp,
 {
 	int rc = 0;
 	struct tfp_send_msg_parms parms = { 0 };
-	tf_get_global_cfg_input_t req = { 0 };
-	tf_get_global_cfg_output_t resp = { 0 };
+	struct hwrm_tf_global_cfg_get_input req = { 0 };
+	struct hwrm_tf_global_cfg_get_output resp = { 0 };
 	uint32_t flags = 0;
 	uint8_t fw_session_id;
 	uint16_t resp_size = 0;
@@ -1037,8 +1037,8 @@ tf_msg_get_global_cfg(struct tf *tfp,
 	}
 
 	flags = (params->dir == TF_DIR_TX ?
-		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
-		 TF_GET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+		 HWRM_TF_GLOBAL_CFG_GET_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_GLOBAL_CFG_GET_INPUT_FLAGS_DIR_RX);
 
 	/* Populate the request */
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
@@ -1047,15 +1047,14 @@ tf_msg_get_global_cfg(struct tf *tfp,
 	req.offset = tfp_cpu_to_le_32(params->offset);
 	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
 
-	MSG_PREP(parms,
-		 TF_KONG_MB,
-		 HWRM_TF,
-		 HWRM_TFT_GET_GLOBAL_CFG,
-		 req,
-		 resp);
-
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	parms.tf_type = HWRM_TF_GLOBAL_CFG_GET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
+	rc = tfp_send_msg_direct(tfp, &parms);
 	if (rc != 0)
 		return rc;
 
@@ -1080,7 +1079,8 @@ tf_msg_set_global_cfg(struct tf *tfp,
 {
 	int rc = 0;
 	struct tfp_send_msg_parms parms = { 0 };
-	tf_set_global_cfg_input_t req = { 0 };
+	struct hwrm_tf_global_cfg_set_input req = { 0 };
+	struct hwrm_tf_global_cfg_set_output resp = { 0 };
 	uint32_t flags = 0;
 	uint8_t fw_session_id;
 
@@ -1094,8 +1094,8 @@ tf_msg_set_global_cfg(struct tf *tfp,
 	}
 
 	flags = (params->dir == TF_DIR_TX ?
-		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_TX :
-		 TF_SET_GLOBAL_CFG_INPUT_FLAGS_DIR_RX);
+		 HWRM_TF_GLOBAL_CFG_SET_INPUT_FLAGS_DIR_TX :
+		 HWRM_TF_GLOBAL_CFG_SET_INPUT_FLAGS_DIR_RX);
 
 	/* Populate the request */
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
@@ -1106,13 +1106,14 @@ tf_msg_set_global_cfg(struct tf *tfp,
 		   params->config_sz_in_bytes);
 	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
 
-	MSG_PREP_NO_RESP(parms,
-			 TF_KONG_MB,
-			 HWRM_TF,
-			 HWRM_TFT_SET_GLOBAL_CFG,
-			 req);
+	parms.tf_type = HWRM_TF_GLOBAL_CFG_SET;
+	parms.req_data = (uint32_t *)&req;
+	parms.req_size = sizeof(req);
+	parms.resp_data = (uint32_t *)&resp;
+	parms.resp_size = sizeof(resp);
+	parms.mailbox = TF_KONG_MB;
 
-	rc = tfp_send_msg_tunneled(tfp, &parms);
+	rc = tfp_send_msg_direct(tfp, &parms);
 
 	if (rc != 0)
 		return rc;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread
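
To show where these set/get messages sit above the wire format, here is a
hedged sketch of a caller of tf_msg_set_global_cfg(). The parameter fields
mirror the ones referenced in the diff; the parameter struct name, the
config type/offset values and the already-opened tf handle are assumptions
made only for illustration.

/*
 * Sketch only: tfp is an already-opened struct tf handle, and cfg_type /
 * offset stand in for a real global config type and offset.
 */
static int
example_set_global_cfg(struct tf *tfp, uint32_t cfg_type, uint32_t offset)
{
	uint32_t value = 0x1;	/* data written into the global config */
	struct tf_global_cfg_parms parms = {	/* struct name assumed */
		.dir = TF_DIR_TX,
		.type = cfg_type,
		.offset = offset,
		.config = (uint8_t *)&value,
		.config_sz_in_bytes = sizeof(value),
	};

	return tf_msg_set_global_cfg(tfp, &parms);
}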

* [dpdk-dev] [PATCH 08/20] net/bnxt: cleanup and refactoring
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (6 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 07/20] net/bnxt: added HWRM support for global cfg Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 09/20] net/bnxt: add support for vlan push and vlan pop actions Somnath Kotur
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

The return value of bnxt_ulp_cntxt_tbl_scope_id_get() is explicitly
ignored in this case, since the scope id may not be valid for internal
EM entries.
Additional minor refactoring and cleanups.

Signed-off-by: Michael Wildt <michael.wildt@broadcom.com>
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/tf_core/tf_em_host.c |  2 +-
 drivers/net/bnxt/tf_core/tf_msg.c     | 75 +++++++++++++++++++++++++++++++++--
 drivers/net/bnxt/tf_core/tf_session.c |  8 ++++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c    |  2 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.c  |  4 +-
 5 files changed, 84 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/tf_core/tf_em_host.c b/drivers/net/bnxt/tf_core/tf_em_host.c
index 8cc92c4..24287c0 100644
--- a/drivers/net/bnxt/tf_core/tf_em_host.c
+++ b/drivers/net/bnxt/tf_core/tf_em_host.c
@@ -333,7 +333,7 @@ tf_em_ctx_reg(struct tf *tfp,
 {
 	struct hcapi_cfa_em_ctx_mem_info *ctxp = &tbl_scope_cb->em_ctx_info[dir];
 	struct hcapi_cfa_em_table *tbl;
-	int rc;
+	int rc = 0;
 	int i;
 
 	for (i = TF_KEY0_TABLE; i < TF_MAX_TABLE; i++) {
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 77c9659..1e14d92 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -3,6 +3,7 @@
  * All rights reserved.
  */
 
+#include <assert.h>
 #include <inttypes.h>
 #include <stdbool.h>
 #include <stdlib.h>
@@ -21,6 +22,40 @@
 /* Logging defines */
 #define TF_RM_MSG_DEBUG  0
 
+/* Specific msg size defines are needed as we cannot use defines from
+ * tf.yaml. This means we have to manually keep the hwrm definitions in
+ * sync with these defines if tf.yaml changes.
+ */
+#define TF_MSG_SET_GLOBAL_CFG_DATA_SIZE  16
+#define TF_MSG_EM_INSERT_KEY_SIZE        8
+#define TF_MSG_TCAM_SET_DEV_DATA_SIZE    88
+#define TF_MSG_TBL_TYPE_SET_DATA_SIZE    88
+
+/* Compile check - Catch any msg changes that we depend on, like the
+ * defines listed above for array size checking.
+ *
+ * Checking array size is dangerous in that the type could change and
+ * we wouldn't be able to catch it. Thus we check if the complete msg
+ * changed instead. Best we can do.
+ *
+ * If failure is observed then both msg size (defines below) and the
+ * array size (define above) should be checked and compared.
+ */
+#define TF_MSG_SIZE_HWRM_TF_GLOBAL_CFG_SET 56
+static_assert(sizeof(struct hwrm_tf_global_cfg_set_input) ==
+	      TF_MSG_SIZE_HWRM_TF_GLOBAL_CFG_SET,
+	      "HWRM message size changed: hwrm_tf_global_cfg_set_input");
+
+#define TF_MSG_SIZE_HWRM_TF_EM_INSERT      104
+static_assert(sizeof(struct hwrm_tf_em_insert_input) ==
+	      TF_MSG_SIZE_HWRM_TF_EM_INSERT,
+	      "HWRM message size changed: hwrm_tf_em_insert_input");
+
+#define TF_MSG_SIZE_HWRM_TF_TBL_TYPE_SET   128
+static_assert(sizeof(struct hwrm_tf_tbl_type_set_input) ==
+	      TF_MSG_SIZE_HWRM_TF_TBL_TYPE_SET,
+	      "HWRM message size changed: hwrm_tf_tbl_type_set_input");
+
 /**
  * This is the MAX data we can transport across regular HWRM
  */
@@ -321,7 +356,7 @@ tf_msg_session_resc_qcaps(struct tf *tfp,
 		TFP_DRV_LOG(ERR,
 			    "%s: QCAPS message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
-			    strerror(-EINVAL));
+			    strerror(EINVAL));
 		rc = -EINVAL;
 		goto cleanup;
 	}
@@ -436,7 +471,7 @@ tf_msg_session_resc_alloc(struct tf *tfp,
 		TFP_DRV_LOG(ERR,
 			    "%s: Alloc message size error, rc:%s\n",
 			    tf_dir_2_str(dir),
-			    strerror(-EINVAL));
+			    strerror(EINVAL));
 		rc = -EINVAL;
 		goto cleanup;
 	}
@@ -546,6 +581,7 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 		(struct tf_em_64b_entry *)em_parms->em_record;
 	uint16_t flags;
 	uint8_t fw_session_id;
+	uint8_t msg_key_size;
 
 	rc = tf_session_get_fw_session_id(tfp, &fw_session_id);
 	if (rc) {
@@ -558,9 +594,21 @@ tf_msg_insert_em_internal_entry(struct tf *tfp,
 
 	/* Populate the request */
 	req.fw_session_id = tfp_cpu_to_le_32(fw_session_id);
+
+	/* Check for key size conformity */
+	msg_key_size = (em_parms->key_sz_in_bits + 7) / 8;
+	if (msg_key_size > TF_MSG_EM_INSERT_KEY_SIZE) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid parameters for msg type, rc:%s\n",
+			    tf_dir_2_str(em_parms->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
 	tfp_memcpy(req.em_key,
 		   em_parms->key,
-		   ((em_parms->key_sz_in_bits + 7) / 8));
+		   msg_key_size);
 
 	flags = (em_parms->dir == TF_DIR_TX ?
 		 HWRM_TF_EM_INSERT_INPUT_FLAGS_DIR_TX :
@@ -942,6 +990,16 @@ tf_msg_set_tbl_entry(struct tf *tfp,
 	req.size = tfp_cpu_to_le_16(size);
 	req.index = tfp_cpu_to_le_32(index);
 
+	/* Check for data size conformity */
+	if (size > TF_MSG_TBL_TYPE_SET_DATA_SIZE) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid parameters for msg type, rc:%s\n",
+			    tf_dir_2_str(dir),
+			    strerror(-rc));
+		return rc;
+	}
+
 	tfp_memcpy(&req.data,
 		   data,
 		   size);
@@ -1102,6 +1160,17 @@ tf_msg_set_global_cfg(struct tf *tfp,
 	req.flags = tfp_cpu_to_le_32(flags);
 	req.type = tfp_cpu_to_le_32(params->type);
 	req.offset = tfp_cpu_to_le_32(params->offset);
+
+	/* Check for data size conformity */
+	if (params->config_sz_in_bytes > TF_MSG_SET_GLOBAL_CFG_DATA_SIZE) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "%s: Invalid parameters for msg type, rc:%s\n",
+			    tf_dir_2_str(params->dir),
+			    strerror(-rc));
+		return rc;
+	}
+
 	tfp_memcpy(req.data, params->config,
 		   params->config_sz_in_bytes);
 	req.size = tfp_cpu_to_le_32(params->config_sz_in_bytes);
diff --git a/drivers/net/bnxt/tf_core/tf_session.c b/drivers/net/bnxt/tf_core/tf_session.c
index 932a14a..ca46f9b 100644
--- a/drivers/net/bnxt/tf_core/tf_session.c
+++ b/drivers/net/bnxt/tf_core/tf_session.c
@@ -472,6 +472,14 @@ tf_session_close_session(struct tf *tfp,
 
 	client = tf_session_find_session_client_by_fid(tfs,
 						       fid);
+	if (!client) {
+		rc = -EINVAL;
+		TFP_DRV_LOG(ERR,
+			    "Client not part of the session, unable to close, rc:%s\n",
+			    strerror(-rc));
+		return rc;
+	}
+
 	/* In case multiple clients we chose to close those first */
 	if (tfs->ref_count > 1) {
 		/* Linaro gcc can't static init this structure */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index fc29ff1..469ad36 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -594,7 +594,7 @@ bnxt_ulp_init(struct bnxt *bp)
 	int rc;
 
 	if (bp->ulp_ctx) {
-		BNXT_TF_DBG(ERR, "ulp ctx already allocated\n");
+		BNXT_TF_DBG(DEBUG, "ulp ctx already allocated\n");
 		return -EINVAL;
 	}
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index b0d31a8..dd99ea3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -537,10 +537,10 @@ ulp_mapper_index_entry_free(struct bnxt_ulp_context *ulp,
 	};
 
 	/*
-	 * Just set the table scope, it will be ignored if not necessary
+	 * Just get the table scope, it will be ignored if not necessary
 	 * by the tf_free_tbl_entry
 	 */
-	bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
+	(void)bnxt_ulp_cntxt_tbl_scope_id_get(ulp, &fparms.tbl_scope_id);
 
 	return tf_free_tbl_entry(tfp, &fparms);
 }
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 09/20] net/bnxt: add support for vlan push and vlan pop actions
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (7 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 08/20] net/bnxt: cleanup and refactoring Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 10/20] net/bnxt: remove vnic and vport act bits from template matching Somnath Kotur
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Add support for the VLAN push and VLAN pop actions, along with the
set VLAN vid and set VLAN pcp actions.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c       | 84 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h       | 20 ++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h | 56 ++++++++---------
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c  | 28 ++++-----
 4 files changed, 146 insertions(+), 42 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index b8146c8..8d35429 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1415,3 +1415,87 @@ ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
 	ULP_BITMAP_SET(prm->act_bitmap.bits, BNXT_ULP_ACTION_BIT_VPORT);
 	return BNXT_TF_RC_SUCCESS;
 }
+
+/* Function to handle the parsing of RTE Flow action pop vlan. */
+int32_t
+ulp_rte_of_pop_vlan_act_handler(const struct rte_flow_action *a __rte_unused,
+				struct ulp_rte_parser_params *params)
+{
+	/* Update the act_bitmap with pop */
+	ULP_BITMAP_SET(params->act_bitmap.bits, BNXT_ULP_ACTION_BIT_POP_VLAN);
+	return BNXT_TF_RC_SUCCESS;
+}
+
+/* Function to handle the parsing of RTE Flow action push vlan. */
+int32_t
+ulp_rte_of_push_vlan_act_handler(const struct rte_flow_action *action_item,
+				 struct ulp_rte_parser_params *params)
+{
+	const struct rte_flow_action_of_push_vlan *push_vlan;
+	uint16_t ethertype;
+	struct ulp_rte_act_prop *act = &params->act_prop;
+
+	push_vlan = action_item->conf;
+	if (push_vlan) {
+		ethertype = push_vlan->ethertype;
+		if (tfp_cpu_to_be_16(ethertype) != RTE_ETHER_TYPE_VLAN) {
+			BNXT_TF_DBG(ERR,
+				    "Parse Err: Ethertype not supported\n");
+			return BNXT_TF_RC_PARSE_ERR;
+		}
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_PUSH_VLAN],
+		       &ethertype, BNXT_ULP_ACT_PROP_SZ_PUSH_VLAN);
+		/* Update the act_bitmap with push vlan */
+		ULP_BITMAP_SET(params->act_bitmap.bits,
+			       BNXT_ULP_ACTION_BIT_PUSH_VLAN);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: Push vlan arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action set vlan id. */
+int32_t
+ulp_rte_of_set_vlan_vid_act_handler(const struct rte_flow_action *action_item,
+				    struct ulp_rte_parser_params *params)
+{
+	const struct rte_flow_action_of_set_vlan_vid *vlan_vid;
+	uint32_t vid;
+	struct ulp_rte_act_prop *act = &params->act_prop;
+
+	vlan_vid = action_item->conf;
+	if (vlan_vid && vlan_vid->vlan_vid) {
+		vid = vlan_vid->vlan_vid;
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_SET_VLAN_VID],
+		       &vid, BNXT_ULP_ACT_PROP_SZ_SET_VLAN_VID);
+		/* Update the act_bitmap with vlan vid */
+		ULP_BITMAP_SET(params->act_bitmap.bits,
+			       BNXT_ULP_ACTION_BIT_SET_VLAN_VID);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: Vlan vid arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action set vlan pcp. */
+int32_t
+ulp_rte_of_set_vlan_pcp_act_handler(const struct rte_flow_action *action_item,
+				    struct ulp_rte_parser_params *params)
+{
+	const struct rte_flow_action_of_set_vlan_pcp *vlan_pcp;
+	uint8_t pcp;
+	struct ulp_rte_act_prop *act = &params->act_prop;
+
+	vlan_pcp = action_item->conf;
+	if (vlan_pcp) {
+		pcp = vlan_pcp->vlan_pcp;
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_SET_VLAN_PCP],
+		       &pcp, BNXT_ULP_ACT_PROP_SZ_SET_VLAN_PCP);
+		/* Update the act_bitmap with vlan pcp */
+		ULP_BITMAP_SET(params->act_bitmap.bits,
+			       BNXT_ULP_ACTION_BIT_SET_VLAN_PCP);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: Vlan pcp arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 8788431..1bb4721 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -170,4 +170,24 @@ int32_t
 ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
 			     struct ulp_rte_parser_params *params);
 
+/* Function to handle the parsing of RTE Flow action pop vlan. */
+int32_t
+ulp_rte_of_pop_vlan_act_handler(const struct rte_flow_action *action_item,
+				struct ulp_rte_parser_params *params);
+
+/* Function to handle the parsing of RTE Flow action push vlan. */
+int32_t
+ulp_rte_of_push_vlan_act_handler(const struct rte_flow_action *action_item,
+				 struct ulp_rte_parser_params *params);
+
+/* Function to handle the parsing of RTE Flow action set vlan id. */
+int32_t
+ulp_rte_of_set_vlan_vid_act_handler(const struct rte_flow_action *action_item,
+				    struct ulp_rte_parser_params *params);
+
+/* Function to handle the parsing of RTE Flow action set vlan pcp. */
+int32_t
+ulp_rte_of_set_vlan_pcp_act_handler(const struct rte_flow_action *action_item,
+				    struct ulp_rte_parser_params *params);
+
 #endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 892d8ea..f232bdb 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -539,9 +539,9 @@ enum bnxt_ulp_act_prop_sz {
 	BNXT_ULP_ACT_PROP_SZ_METER = 4,
 	BNXT_ULP_ACT_PROP_SZ_SET_MAC_SRC = 8,
 	BNXT_ULP_ACT_PROP_SZ_SET_MAC_DST = 8,
-	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_VLAN = 4,
-	BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_PCP = 4,
-	BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_VID = 4,
+	BNXT_ULP_ACT_PROP_SZ_PUSH_VLAN = 2,
+	BNXT_ULP_ACT_PROP_SZ_SET_VLAN_PCP = 1,
+	BNXT_ULP_ACT_PROP_SZ_SET_VLAN_VID = 2,
 	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC = 4,
 	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST = 4,
 	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_SRC = 16,
@@ -583,31 +583,31 @@ enum bnxt_ulp_act_prop_idx {
 	BNXT_ULP_ACT_PROP_IDX_METER = 52,
 	BNXT_ULP_ACT_PROP_IDX_SET_MAC_SRC = 56,
 	BNXT_ULP_ACT_PROP_IDX_SET_MAC_DST = 64,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_VLAN = 72,
-	BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_PCP = 76,
-	BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_VID = 80,
-	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC = 84,
-	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST = 88,
-	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC = 92,
-	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST = 108,
-	BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC = 124,
-	BNXT_ULP_ACT_PROP_IDX_SET_TP_DST = 128,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0 = 132,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1 = 136,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2 = 140,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3 = 144,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4 = 148,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5 = 152,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6 = 156,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7 = 160,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC = 164,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC = 170,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG = 176,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP = 184,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC = 216,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP = 232,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN = 236,
-	BNXT_ULP_ACT_PROP_IDX_LAST = 268
+	BNXT_ULP_ACT_PROP_IDX_PUSH_VLAN = 72,
+	BNXT_ULP_ACT_PROP_IDX_SET_VLAN_PCP = 74,
+	BNXT_ULP_ACT_PROP_IDX_SET_VLAN_VID = 75,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC = 77,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST = 81,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC = 85,
+	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST = 101,
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC = 117,
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_DST = 121,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0 = 125,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1 = 129,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2 = 133,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3 = 137,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4 = 141,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5 = 145,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6 = 149,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7 = 153,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC = 157,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC = 163,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG = 169,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP = 177,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC = 209,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP = 225,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN = 229,
+	BNXT_ULP_ACT_PROP_IDX_LAST = 261
 };
 
 enum bnxt_ulp_class_hid {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index f0a57cf..2cc3458 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -41,12 +41,12 @@ uint32_t ulp_act_prop_map_table[] = {
 		BNXT_ULP_ACT_PROP_SZ_SET_MAC_SRC,
 	[BNXT_ULP_ACT_PROP_IDX_SET_MAC_DST] =
 		BNXT_ULP_ACT_PROP_SZ_SET_MAC_DST,
-	[BNXT_ULP_ACT_PROP_IDX_OF_PUSH_VLAN] =
-		BNXT_ULP_ACT_PROP_SZ_OF_PUSH_VLAN,
-	[BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_PCP] =
-		BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_PCP,
-	[BNXT_ULP_ACT_PROP_IDX_OF_SET_VLAN_VID] =
-		BNXT_ULP_ACT_PROP_SZ_OF_SET_VLAN_VID,
+	[BNXT_ULP_ACT_PROP_IDX_PUSH_VLAN] =
+		BNXT_ULP_ACT_PROP_SZ_PUSH_VLAN,
+	[BNXT_ULP_ACT_PROP_IDX_SET_VLAN_PCP] =
+		BNXT_ULP_ACT_PROP_SZ_SET_VLAN_PCP,
+	[BNXT_ULP_ACT_PROP_IDX_SET_VLAN_VID] =
+		BNXT_ULP_ACT_PROP_SZ_SET_VLAN_VID,
 	[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC] =
 		BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC,
 	[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST] =
@@ -183,20 +183,20 @@ struct bnxt_ulp_rte_act_info ulp_act_info[] = {
 		.proto_act_func          = NULL
 	},
 	[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = {
-		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-		.proto_act_func          = NULL
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_of_pop_vlan_act_handler
 	},
 	[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = {
-		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-		.proto_act_func          = NULL
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_of_push_vlan_act_handler
 	},
 	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID] = {
-		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-		.proto_act_func          = NULL
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_of_set_vlan_vid_act_handler
 	},
 	[RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP] = {
-		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-		.proto_act_func          = NULL
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_of_set_vlan_pcp_act_handler
 	},
 	[RTE_FLOW_ACTION_TYPE_OF_POP_MPLS] = {
 		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 10/20] net/bnxt: remove vnic and vport act bits from template matching
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (8 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 09/20] net/bnxt: add support for vlan push and vlan pop actions Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 11/20] net/bnxt: fix vxlan outer ip protocol id encapsulation Somnath Kotur
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Removed the vnic and vport action bits from template matching. These
are now populated implicitly, and the appropriate action property
(vnic or vport) is selected based on the flow direction.
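
In other words, once the action port is resolved, the parser fills either
the vport or the vnic action property depending on direction. A condensed
sketch of the logic in the port-id action hunk below (error handling
trimmed; all names are those used in the diff):

    if (param->dir == ULP_DIR_EGRESS) {
        /* egress: resolve the physical vport of the target port */
        ulp_port_db_vport_get(param->ulp_ctx, ifindex, &pid_s);
        pid = rte_cpu_to_be_32(pid_s);
        memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
               &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
    } else {
        /* ingress: resolve the default vnic of the target function */
        ulp_port_db_default_vnic_get(param->ulp_ctx, ifindex,
                                     BNXT_ULP_DRV_FUNC_VNIC, &pid_s);
        pid = rte_cpu_to_be_32(pid_s);
        memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
               &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
    }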

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_port_db.c    |  27 ++++++-
 drivers/net/bnxt/tf_ulp/ulp_port_db.h    |  15 ++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 133 ++++++++++++++++++++-----------
 3 files changed, 127 insertions(+), 48 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 659cefa..3c5a218 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -71,7 +71,7 @@ int32_t	ulp_port_db_init(struct bnxt_ulp_context *ulp_ctxt, uint8_t port_cnt)
 			    "Failed to allocate mem for phy port list\n");
 		goto error_free;
 	}
-
+	port_db->phy_port_cnt = port_cnt;
 	return 0;
 
 error_free:
@@ -436,3 +436,28 @@ ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 	*vport = port_db->phy_port_list[phy_port_id].port_vport;
 	return 0;
 }
+
+/*
+ * Api to get the vport for a given physical port.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * phy_port [in] physical port index
+ * out_port [out] the port of the given physical index
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_phy_port_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+			       uint32_t phy_port,
+			       uint16_t *out_port)
+{
+	struct bnxt_ulp_port_db *port_db;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || phy_port >= port_db->phy_port_cnt) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	*out_port = port_db->phy_port_list[phy_port].port_vport;
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index b1419a3..e3870f9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -71,6 +71,7 @@ struct bnxt_ulp_port_db {
 	/* dpdk device external port list */
 	uint16_t			dev_port_list[RTE_MAX_ETHPORTS];
 	struct ulp_phy_port_info	*phy_port_list;
+	uint16_t			phy_port_cnt;
 	struct ulp_func_if_info		ulp_func_id_tbl[BNXT_PORT_DB_MAX_FUNC];
 };
 
@@ -203,4 +204,18 @@ int32_t
 ulp_port_db_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 		      uint32_t ifindex,	uint16_t *vport);
 
+/*
+ * Api to get the vport for a given physical port.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * phy_port [in] physical port index
+ * out_port [out] the port of the given physical index
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_phy_port_vport_get(struct bnxt_ulp_context *ulp_ctxt,
+			       uint32_t phy_port,
+			       uint16_t *out_port);
+
 #endif /* _ULP_PORT_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 8d35429..b4bf431 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -238,8 +238,17 @@ ulp_rte_parser_vnic_process(struct ulp_rte_parser_params *params)
 	struct ulp_rte_act_bitmap *act = &params->act_bitmap;
 
 	if (ULP_BITMAP_ISSET(act->bits, BNXT_ULP_ACTION_BIT_VNIC) ||
-	    ULP_BITMAP_ISSET(act->bits, BNXT_ULP_ACTION_BIT_VPORT))
+	    ULP_BITMAP_ISSET(act->bits, BNXT_ULP_ACTION_BIT_VPORT)) {
+		/*
+		 * Reset the vnic/vport action bitmaps
+		 * it is not required for match
+		 */
+		ULP_BITMAP_RESET(params->act_bitmap.bits,
+				 BNXT_ULP_ACTION_BIT_VNIC);
+		ULP_BITMAP_RESET(params->act_bitmap.bits,
+				 BNXT_ULP_ACTION_BIT_VPORT);
 		return BNXT_TF_RC_SUCCESS;
+	}
 
 	/* Update the vnic details */
 	ulp_rte_pf_act_handler(NULL, params);
@@ -1344,28 +1353,59 @@ ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
 			    struct ulp_rte_parser_params *param)
 {
 	const struct rte_flow_action_port_id *port_id;
+	struct ulp_rte_act_prop *act;
 	uint32_t pid;
+	int32_t rc;
+	uint32_t ifindex;
+	uint16_t pid_s;
 
 	port_id = act_item->conf;
-	if (port_id) {
-		if (port_id->original) {
-			BNXT_TF_DBG(ERR,
-				    "ParseErr:Portid Original not supported\n");
-			return BNXT_TF_RC_PARSE_ERR;
-		}
-		/* Update the computed VNIC using port conversion */
-		if (port_id->id >= RTE_MAX_ETHPORTS) {
-			BNXT_TF_DBG(ERR,
-				    "ParseErr:Portid is not valid\n");
-			return BNXT_TF_RC_PARSE_ERR;
-		}
-		pid = bnxt_get_vnic_id(port_id->id, BNXT_ULP_INTF_TYPE_INVALID);
+	if (!port_id) {
+		BNXT_TF_DBG(ERR,
+			    "ParseErr: Invalid Argument\n");
+		return BNXT_TF_RC_PARSE_ERR;
+	}
+	if (port_id->original) {
+		BNXT_TF_DBG(ERR,
+			    "ParseErr:Portid Original not supported\n");
+		return BNXT_TF_RC_PARSE_ERR;
+	}
+
+	/* Get the port db ifindex */
+	rc = ulp_port_db_dev_port_to_ulp_index(param->ulp_ctx,
+					       port_id->id,
+					       &ifindex);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Invalid port id\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	act = &param->act_prop;
+	if (param->dir == ULP_DIR_EGRESS) {
+		rc = ulp_port_db_vport_get(param->ulp_ctx,
+					   ifindex, &pid_s);
+		if (rc)
+			return BNXT_TF_RC_ERROR;
+
+		pid = pid_s;
 		pid = rte_cpu_to_be_32(pid);
-		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
+		       &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
+	} else {
+		rc = ulp_port_db_default_vnic_get(param->ulp_ctx,
+						  ifindex,
+						  BNXT_ULP_DRV_FUNC_VNIC,
+						  &pid_s);
+		if (rc)
+			return BNXT_TF_RC_ERROR;
+
+		pid = pid_s;
+		pid = rte_cpu_to_be_32(pid);
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
 	}
 
-	/* Update the hdr_bitmap with count */
+	/*Update the hdr_bitmap with vnic */
 	ULP_BITMAP_SET(param->act_bitmap.bits, BNXT_ULP_ACTION_BIT_VNIC);
 	return BNXT_TF_RC_SUCCESS;
 }
@@ -1376,42 +1416,41 @@ ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
 			     struct ulp_rte_parser_params *prm)
 {
 	const struct rte_flow_action_phy_port *phy_port;
-	uint32_t vport;
-	struct bnxt_ulp_device_params *dparms;
-	uint32_t dev_id;
+	uint32_t pid;
+	int32_t rc;
+	uint16_t pid_s;
 
 	phy_port = action_item->conf;
-	if (phy_port) {
-		if (phy_port->original) {
-			BNXT_TF_DBG(ERR,
-				    "Parse Err:Port Original not supported\n");
-			return BNXT_TF_RC_PARSE_ERR;
-		}
-		if (bnxt_ulp_cntxt_dev_id_get(prm->ulp_ctx, &dev_id)) {
-			BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
-			return -EINVAL;
-		}
-
-		dparms = bnxt_ulp_device_params_get(dev_id);
-		if (!dparms) {
-			BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
-			return -EINVAL;
-		}
-
-		if (phy_port->index > dparms->num_phy_ports) {
-			BNXT_TF_DBG(ERR, "ParseErr:Phy Port is not valid\n");
-			return BNXT_TF_RC_PARSE_ERR;
-		}
+	if (!phy_port) {
+		BNXT_TF_DBG(ERR,
+			    "ParseErr: Invalid Argument\n");
+		return BNXT_TF_RC_PARSE_ERR;
+	}
 
-		/* Get the vport of the physical port */
-		/* TBD: shall be changed later to portdb call */
-		vport = 1 << phy_port->index;
-		vport = rte_cpu_to_be_32(vport);
-		memcpy(&prm->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
-		       &vport, BNXT_ULP_ACT_PROP_SZ_VPORT);
+	if (phy_port->original) {
+		BNXT_TF_DBG(ERR,
+			    "Parse Err:Port Original not supported\n");
+		return BNXT_TF_RC_PARSE_ERR;
+	}
+	if (prm->dir != ULP_DIR_EGRESS) {
+		BNXT_TF_DBG(ERR,
+			    "Parse Err:Phy ports are valid only for egress\n");
+		return BNXT_TF_RC_PARSE_ERR;
+	}
+	/* Get the physical port details from port db */
+	rc = ulp_port_db_phy_port_vport_get(prm->ulp_ctx, phy_port->index,
+					    &pid_s);
+	if (rc) {
+		BNXT_TF_DBG(DEBUG, "Failed to get port details\n");
+		return -EINVAL;
 	}
 
-	/* Update the hdr_bitmap with count */
+	pid = pid_s;
+	pid = rte_cpu_to_be_32(pid);
+	memcpy(&prm->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
+	       &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
+
+	/* update the hdr_bitmap with vport */
 	ULP_BITMAP_SET(prm->act_bitmap.bits, BNXT_ULP_ACTION_BIT_VPORT);
 	return BNXT_TF_RC_SUCCESS;
 }
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 11/20] net/bnxt: fix vxlan outer ip protocol id encapsulation
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (9 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 10/20] net/bnxt: remove vnic and vport act bits from template matching Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 12/20] net/bnxt: add number of vlan tags in the computed field list Somnath Kotur
                   ` (8 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

The outer IP protocol field was not encapsulated at the right location
when the IP header is supplied by the application. The order of the
encapsulation buffer copies has to be reversed so that the id/protocol
bytes are written before the version/IHL/TOS bytes.
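
A condensed view of the corrected copy order (names as in the hunk
below): the id/protocol bytes land at the start of the ENCAP_IP action
property, and the version/IHL/TOS bytes follow at the next offset.

    buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP];
    /* id + protocol first ... */
    ulp_encap_buffer_copy(buff, (const uint8_t *)&ipv4_spec->hdr.packet_id,
                          BNXT_ULP_ENCAP_IPV4_ID_PROTO);
    /* ... then version/IHL/TOS at the following offset */
    buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
                            BNXT_ULP_ENCAP_IPV4_ID_PROTO];
    ulp_encap_buffer_copy(buff, &ipv4_spec->hdr.version_ihl,
                          BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS);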

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index b4bf431..2e310e0 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1151,15 +1151,15 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 			const uint8_t *tmp_buff;
 
 			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP];
-			ulp_encap_buffer_copy(buff,
-					      &ipv4_spec->hdr.version_ihl,
-					      BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS);
-			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
-			     BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS];
 			tmp_buff = (const uint8_t *)&ipv4_spec->hdr.packet_id;
 			ulp_encap_buffer_copy(buff,
 					      tmp_buff,
 					      BNXT_ULP_ENCAP_IPV4_ID_PROTO);
+			buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
+			     BNXT_ULP_ENCAP_IPV4_ID_PROTO];
+			ulp_encap_buffer_copy(buff,
+					      &ipv4_spec->hdr.version_ihl,
+					      BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS);
 		}
 		buff = &ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_IP +
 		    BNXT_ULP_ENCAP_IPV4_VER_HLEN_TOS +
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 12/20] net/bnxt: add number of vlan tags in the computed field list
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (10 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 11/20] net/bnxt: fix vxlan outer ip protocol id encapsulation Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 13/20] net/bnxt: enable support for PF and VF port action items Somnath Kotur
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added the number of VLAN tags to the computed field list so that
conditional table execution can be driven by the number of VLAN tags
present in the flow being created.
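
The tag counters are kept as computed fields. A minimal sketch of how
they are seeded and then updated as VLAN items are parsed (macro and
index names as in the hunks below):

    /* before parsing: assume no VLAN tags */
    ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_NO_VTAG, 1);
    ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_NO_VTAG, 1);

    /* first outer VLAN item seen */
    ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_NO_VTAG, 0);
    ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_ONE_VTAG, 1);

    /* second outer VLAN item seen */
    ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_ONE_VTAG, 0);
    ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_TWO_VTAGS, 1);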

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c       | 10 +++++
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h | 60 ++++++++++++++------------
 2 files changed, 43 insertions(+), 27 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 2e310e0..58090bf 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -88,6 +88,10 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 		ULP_BITMAP_SET(params->hdr_bitmap.bits,
 			       BNXT_ULP_FLOW_DIR_BITMASK_EGR);
 
+	/* Set the computed flags for no vlan tags before parsing */
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_NO_VTAG, 1);
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_NO_VTAG, 1);
+
 	/* Parse all the items in the pattern */
 	while (item && item->type != RTE_FLOW_ITEM_TYPE_END) {
 		/* get the header information from the flow_hdr_info table */
@@ -480,6 +484,8 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 		outer_vtag_num++;
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_VTAG_NUM,
 				    outer_vtag_num);
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_NO_VTAG, 0);
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_ONE_VTAG, 1);
 		ULP_BITMAP_SET(params->hdr_bitmap.bits,
 			       BNXT_ULP_HDR_BIT_OO_VLAN);
 	} else if (ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
@@ -490,6 +496,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_VTAG_NUM,
 				    outer_vtag_num);
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_TWO_VTAGS, 1);
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_ONE_VTAG, 0);
 		ULP_BITMAP_SET(params->hdr_bitmap.bits,
 			       BNXT_ULP_HDR_BIT_OI_VLAN);
 	} else if (ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
@@ -499,6 +506,8 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 		inner_vtag_num++;
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_VTAG_NUM,
 				    inner_vtag_num);
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_NO_VTAG, 0);
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_ONE_VTAG, 1);
 		ULP_BITMAP_SET(params->hdr_bitmap.bits,
 			       BNXT_ULP_HDR_BIT_IO_VLAN);
 	} else if (ULP_BITMAP_ISSET(hdr_bit->bits, BNXT_ULP_HDR_BIT_O_ETH) &&
@@ -509,6 +518,7 @@ ulp_rte_vlan_hdr_handler(const struct rte_flow_item *item,
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_VTAG_NUM,
 				    inner_vtag_num);
 		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_TWO_VTAGS, 1);
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_I_ONE_VTAG, 0);
 		ULP_BITMAP_SET(params->hdr_bitmap.bits,
 			       BNXT_ULP_HDR_BIT_II_VLAN);
 	} else {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index f232bdb..13d1782 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -96,33 +96,39 @@ enum bnxt_ulp_cf_idx {
 	BNXT_ULP_CF_IDX_NOT_USED = 0,
 	BNXT_ULP_CF_IDX_MPLS_TAG_NUM = 1,
 	BNXT_ULP_CF_IDX_O_VTAG_NUM = 2,
-	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 3,
-	BNXT_ULP_CF_IDX_I_VTAG_NUM = 4,
-	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 5,
-	BNXT_ULP_CF_IDX_INCOMING_IF = 6,
-	BNXT_ULP_CF_IDX_DIRECTION = 7,
-	BNXT_ULP_CF_IDX_SVIF_FLAG = 8,
-	BNXT_ULP_CF_IDX_O_L3 = 9,
-	BNXT_ULP_CF_IDX_I_L3 = 10,
-	BNXT_ULP_CF_IDX_O_L4 = 11,
-	BNXT_ULP_CF_IDX_I_L4 = 12,
-	BNXT_ULP_CF_IDX_DEV_PORT_ID = 13,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 14,
-	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 15,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 16,
-	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 17,
-	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 18,
-	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 19,
-	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 20,
-	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 21,
-	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 22,
-	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 23,
-	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 24,
-	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 25,
-	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 26,
-	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG = 27,
-	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG = 28,
-	BNXT_ULP_CF_IDX_LAST = 29
+	BNXT_ULP_CF_IDX_O_NO_VTAG = 3,
+	BNXT_ULP_CF_IDX_O_ONE_VTAG = 4,
+	BNXT_ULP_CF_IDX_O_TWO_VTAGS = 5,
+	BNXT_ULP_CF_IDX_I_VTAG_NUM = 6,
+	BNXT_ULP_CF_IDX_I_NO_VTAG = 7,
+	BNXT_ULP_CF_IDX_I_ONE_VTAG = 8,
+	BNXT_ULP_CF_IDX_I_TWO_VTAGS = 9,
+	BNXT_ULP_CF_IDX_INCOMING_IF = 10,
+	BNXT_ULP_CF_IDX_DIRECTION = 11,
+	BNXT_ULP_CF_IDX_SVIF_FLAG = 12,
+	BNXT_ULP_CF_IDX_O_L3 = 13,
+	BNXT_ULP_CF_IDX_I_L3 = 14,
+	BNXT_ULP_CF_IDX_O_L4 = 15,
+	BNXT_ULP_CF_IDX_I_L4 = 16,
+	BNXT_ULP_CF_IDX_DEV_PORT_ID = 17,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SVIF = 18,
+	BNXT_ULP_CF_IDX_DRV_FUNC_SPIF = 19,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PARIF = 20,
+	BNXT_ULP_CF_IDX_DRV_FUNC_VNIC = 21,
+	BNXT_ULP_CF_IDX_DRV_FUNC_PHY_PORT = 22,
+	BNXT_ULP_CF_IDX_VF_FUNC_SVIF = 23,
+	BNXT_ULP_CF_IDX_VF_FUNC_SPIF = 24,
+	BNXT_ULP_CF_IDX_VF_FUNC_PARIF = 25,
+	BNXT_ULP_CF_IDX_VF_FUNC_VNIC = 26,
+	BNXT_ULP_CF_IDX_PHY_PORT_SVIF = 27,
+	BNXT_ULP_CF_IDX_PHY_PORT_SPIF = 28,
+	BNXT_ULP_CF_IDX_PHY_PORT_PARIF = 29,
+	BNXT_ULP_CF_IDX_PHY_PORT_VPORT = 30,
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV4_FLAG = 31,
+	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG = 32,
+	BNXT_ULP_CF_IDX_ACT_DEC_TTL = 33,
+	BNXT_ULP_CF_IDX_ACT_T_DEC_TTL = 34,
+	BNXT_ULP_CF_IDX_LAST = 35
 };
 
 enum bnxt_ulp_cond_opcode {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 13/20] net/bnxt: enable support for PF and VF port action items
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (11 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 12/20] net/bnxt: add number of vlan tags in the computed field list Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 14/20] net/bnxt: port configuration changes to support full offload Somnath Kotur
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for the PF and VF port action items in flow create. The
output port action can now be specified as a PF or VF port; these are
parsed and converted to a vnic or vport depending on the flow
direction.
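
From the application side this maps onto the standard rte_flow PF/VF
actions. A purely illustrative action list redirecting matched traffic
to VF 1 might look as follows (the VF id is hypothetical; the vnic or
vport resolution is done by the parser changes below):

    struct rte_flow_action_vf vf_conf = { .original = 0, .id = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_VF,  .conf = &vf_conf },
        { .type = RTE_FLOW_ACTION_TYPE_END, .conf = NULL },
    };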

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_port_db.c          |  52 +++++++++
 drivers/net/bnxt/tf_ulp/ulp_port_db.h          |  26 +++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c       | 151 +++++++++++++++++--------
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h       |   4 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h |   4 +-
 5 files changed, 189 insertions(+), 48 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 3c5a218..122b5f4 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -162,6 +162,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_INVALID);
 		func->phy_port_id = bnxt_get_phy_port_id(port_id);
 		func->func_valid = true;
+		func->ifindex = ifindex;
 	}
 
 	if (intf->type == BNXT_ULP_INTF_TYPE_VF_REP) {
@@ -178,6 +179,7 @@ int32_t	ulp_port_db_dev_port_intf_update(struct bnxt_ulp_context *ulp_ctxt,
 		func->func_vnic =
 			bnxt_get_vnic_id(port_id, BNXT_ULP_INTF_TYPE_VF_REP);
 		func->phy_port_id = bnxt_get_phy_port_id(port_id);
+		func->ifindex = ifindex;
 	}
 
 	port_data = &port_db->phy_port_list[func->phy_port_id];
@@ -461,3 +463,53 @@ ulp_port_db_phy_port_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 	*out_port = port_db->phy_port_list[phy_port].port_vport;
 	return 0;
 }
+
+/*
+ * Api to get the port type for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ *
+ * Returns port type.
+ */
+enum bnxt_ulp_intf_type
+ulp_port_db_port_type_get(struct bnxt_ulp_context *ulp_ctxt,
+			  uint32_t ifindex)
+{
+	struct bnxt_ulp_port_db *port_db;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || ifindex >= port_db->ulp_intf_list_size || !ifindex) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return BNXT_ULP_INTF_TYPE_INVALID;
+	}
+	return port_db->ulp_intf_list[ifindex].type;
+}
+
+/*
+ * Api to get the ulp ifindex for a given function id.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * func_id [in].device func id
+ * ifindex [out] ulp ifindex
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_dev_func_id_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
+				     uint32_t func_id, uint32_t *ifindex)
+{
+	struct bnxt_ulp_port_db *port_db;
+
+	*ifindex = 0;
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || func_id >= BNXT_PORT_DB_MAX_FUNC) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	if (!port_db->ulp_func_id_tbl[func_id].func_valid)
+		return -ENOENT;
+
+	*ifindex = port_db->ulp_func_id_tbl[func_id].ifindex;
+	return 0;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index e3870f9..4afbb84 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -46,6 +46,7 @@ struct ulp_func_if_info {
 	uint16_t		func_parif;
 	uint16_t		func_vnic;
 	uint16_t		phy_port_id;
+	uint16_t		ifindex;
 };
 
 /* Structure for the Port database resource information. */
@@ -218,4 +219,29 @@ ulp_port_db_phy_port_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 			       uint32_t phy_port,
 			       uint16_t *out_port);
 
+/*
+ * Api to get the port type for a given ulp ifindex.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * ifindex [in] ulp ifindex
+ *
+ * Returns port type.
+ */
+enum bnxt_ulp_intf_type
+ulp_port_db_port_type_get(struct bnxt_ulp_context *ulp_ctxt,
+			  uint32_t ifindex);
+
+/*
+ * Api to get the ulp ifindex for a given function id.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * func_id [in].device func id
+ * ifindex [out] ulp ifindex
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_dev_func_id_to_ulp_index(struct bnxt_ulp_context *ulp_ctxt,
+				     uint32_t func_id, uint32_t *ifindex);
+
 #endif /* _ULP_PORT_DB_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 58090bf..3c65442 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -155,8 +155,8 @@ bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action actions[],
 		}
 		action_item++;
 	}
-	/* update the implied VNIC */
-	ulp_rte_parser_vnic_process(params);
+	/* update the implied port details */
+	ulp_rte_parser_implied_act_port_process(params);
 	return BNXT_TF_RC_SUCCESS;
 }
 
@@ -235,30 +235,26 @@ ulp_rte_parser_svif_process(struct ulp_rte_parser_params *params)
 				       port_id, svif_mask);
 }
 
-/* Function to handle the implicit VNIC RTE port id */
+/* Function to handle the implicit action port id */
 int32_t
-ulp_rte_parser_vnic_process(struct ulp_rte_parser_params *params)
+ulp_rte_parser_implied_act_port_process(struct ulp_rte_parser_params *params)
 {
-	struct ulp_rte_act_bitmap *act = &params->act_bitmap;
+	struct rte_flow_action action_item = {0};
+	struct rte_flow_action_port_id port_id = {0};
 
-	if (ULP_BITMAP_ISSET(act->bits, BNXT_ULP_ACTION_BIT_VNIC) ||
-	    ULP_BITMAP_ISSET(act->bits, BNXT_ULP_ACTION_BIT_VPORT)) {
-		/*
-		 * Reset the vnic/vport action bitmaps
-		 * it is not required for match
-		 */
-		ULP_BITMAP_RESET(params->act_bitmap.bits,
-				 BNXT_ULP_ACTION_BIT_VNIC);
-		ULP_BITMAP_RESET(params->act_bitmap.bits,
-				 BNXT_ULP_ACTION_BIT_VPORT);
+	/* Read the action port set bit */
+	if (ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET)) {
+		/* Already set, so just exit */
 		return BNXT_TF_RC_SUCCESS;
 	}
+	port_id.id = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
+	action_item.conf = &port_id;
 
-	/* Update the vnic details */
-	ulp_rte_pf_act_handler(NULL, params);
-	/* Reset the hdr_bitmap with vnic bit */
-	ULP_BITMAP_RESET(params->act_bitmap.bits, BNXT_ULP_ACTION_BIT_VNIC);
+	/* Update the action port based on incoming port */
+	ulp_rte_port_id_act_handler(&action_item, params);
 
+	/* Reset the action port set bit */
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 0);
 	return BNXT_TF_RC_SUCCESS;
 }
 
@@ -1314,46 +1310,111 @@ int32_t
 ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
 		       struct ulp_rte_parser_params *params)
 {
-	uint32_t svif;
+	uint32_t port_id, pid;
+	uint32_t ifindex;
+	uint16_t pid_s;
+	struct ulp_rte_act_prop *act = &params->act_prop;
 
-	/* Update the hdr_bitmap with vnic bit */
-	ULP_BITMAP_SET(params->act_bitmap.bits, BNXT_ULP_ACTION_BIT_VNIC);
+	/* Get the port id of the current device */
+	port_id = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
+
+	/* Get the port db ifindex */
+	if (ulp_port_db_dev_port_to_ulp_index(params->ulp_ctx, port_id,
+					      &ifindex)) {
+		BNXT_TF_DBG(ERR, "Invalid port id\n");
+		return BNXT_TF_RC_ERROR;
+	}
 
-	/* copy the PF of the current device into VNIC Property */
-	svif = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
-	svif = bnxt_get_vnic_id(svif, BNXT_ULP_INTF_TYPE_INVALID);
-	svif = rte_cpu_to_be_32(svif);
-	memcpy(&params->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
-	       &svif, BNXT_ULP_ACT_PROP_SZ_VNIC);
+	/* Check the port is PF port */
+	if (ulp_port_db_port_type_get(params->ulp_ctx,
+				      ifindex) != BNXT_ULP_INTF_TYPE_PF) {
+		BNXT_TF_DBG(ERR, "Port is not a PF port\n");
+		return BNXT_TF_RC_ERROR;
+	}
 
+	if (params->dir == ULP_DIR_EGRESS) {
+		/* For egress direction, fill vport */
+		if (ulp_port_db_vport_get(params->ulp_ctx, ifindex, &pid_s))
+			return BNXT_TF_RC_ERROR;
+		pid = pid_s;
+		pid = rte_cpu_to_be_32(pid);
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
+		       &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
+	} else {
+		/* For ingress direction, fill vnic */
+		if (ulp_port_db_default_vnic_get(params->ulp_ctx, ifindex,
+						 BNXT_ULP_DRV_FUNC_VNIC,
+						 &pid_s))
+			return BNXT_TF_RC_ERROR;
+		pid = pid_s;
+		pid = rte_cpu_to_be_32(pid);
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
+	}
+
+	/*Update the action port set bit */
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 1);
 	return BNXT_TF_RC_SUCCESS;
 }
 
 /* Function to handle the parsing of RTE Flow action VF. */
 int32_t
 ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
-		       struct ulp_rte_parser_params *param)
+		       struct ulp_rte_parser_params *params)
 {
 	const struct rte_flow_action_vf *vf_action;
 	uint32_t pid;
+	uint32_t ifindex;
+	uint16_t pid_s;
+	struct ulp_rte_act_prop *act = &params->act_prop;
+	enum bnxt_ulp_intf_type intf_type;
 
 	vf_action = action_item->conf;
-	if (vf_action) {
-		if (vf_action->original) {
-			BNXT_TF_DBG(ERR,
-				    "Parse Error:VF Original not supported\n");
-			return BNXT_TF_RC_PARSE_ERR;
-		}
-		/* TBD: Update the computed VNIC using VF conversion */
-		pid = bnxt_get_vnic_id(vf_action->id,
-				       BNXT_ULP_INTF_TYPE_INVALID);
+	if (!vf_action) {
+		BNXT_TF_DBG(ERR, "ParseErr: Invalid Argument\n");
+		return BNXT_TF_RC_PARSE_ERR;
+	}
+
+	if (vf_action->original) {
+		BNXT_TF_DBG(ERR, "ParseErr:VF Original not supported\n");
+		return BNXT_TF_RC_PARSE_ERR;
+	}
+
+	/* Check the port is VF port */
+	if (ulp_port_db_dev_func_id_to_ulp_index(params->ulp_ctx, vf_action->id,
+						 &ifindex)) {
+		BNXT_TF_DBG(ERR, "VF is not valid interface\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	intf_type = ulp_port_db_port_type_get(params->ulp_ctx, ifindex);
+	if (intf_type != BNXT_ULP_INTF_TYPE_VF &&
+	    intf_type != BNXT_ULP_INTF_TYPE_TRUSTED_VF) {
+		BNXT_TF_DBG(ERR, "Port is not a VF port\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	if (params->dir == ULP_DIR_EGRESS) {
+		/* For egress direction, fill vport */
+		if (ulp_port_db_vport_get(params->ulp_ctx, ifindex, &pid_s))
+			return BNXT_TF_RC_ERROR;
+		pid = pid_s;
 		pid = rte_cpu_to_be_32(pid);
-		memcpy(&param->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
+		       &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
+	} else {
+		/* For ingress direction, fill vnic */
+		if (ulp_port_db_default_vnic_get(params->ulp_ctx, ifindex,
+						 BNXT_ULP_DRV_FUNC_VNIC,
+						 &pid_s))
+			return BNXT_TF_RC_ERROR;
+		pid = pid_s;
+		pid = rte_cpu_to_be_32(pid);
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
 	}
 
-	/* Update the hdr_bitmap with count */
-	ULP_BITMAP_SET(param->act_bitmap.bits, BNXT_ULP_ACTION_BIT_VNIC);
+	/*Update the action port set bit */
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 1);
 	return BNXT_TF_RC_SUCCESS;
 }
 
@@ -1415,8 +1476,8 @@ ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
 	}
 
-	/*Update the hdr_bitmap with vnic */
-	ULP_BITMAP_SET(param->act_bitmap.bits, BNXT_ULP_ACTION_BIT_VNIC);
+	/*Update the action port set bit */
+	ULP_COMP_FLD_IDX_WR(param, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 1);
 	return BNXT_TF_RC_SUCCESS;
 }
 
@@ -1460,8 +1521,8 @@ ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
 	memcpy(&prm->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
 	       &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
 
-	/* update the hdr_bitmap with vport */
-	ULP_BITMAP_SET(prm->act_bitmap.bits, BNXT_ULP_ACTION_BIT_VPORT);
+	/*Update the action port set bit */
+	ULP_COMP_FLD_IDX_WR(prm, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 1);
 	return BNXT_TF_RC_SUCCESS;
 }
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 1bb4721..49e9cbb 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -35,9 +35,9 @@
 int32_t
 ulp_rte_parser_svif_process(struct ulp_rte_parser_params *params);
 
-/* Function to handle the implicit VNIC RTE port id */
+/* Function to handle the implicit action port id */
 int32_t
-ulp_rte_parser_vnic_process(struct ulp_rte_parser_params *params);
+ulp_rte_parser_implied_act_port_process(struct ulp_rte_parser_params *params);
 
 /*
  * Function to handle the parsing of RTE Flows and placing
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 13d1782..ada3a5e 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -128,7 +128,9 @@ enum bnxt_ulp_cf_idx {
 	BNXT_ULP_CF_IDX_ACT_ENCAP_IPV6_FLAG = 32,
 	BNXT_ULP_CF_IDX_ACT_DEC_TTL = 33,
 	BNXT_ULP_CF_IDX_ACT_T_DEC_TTL = 34,
-	BNXT_ULP_CF_IDX_LAST = 35
+	BNXT_ULP_CF_IDX_ACT_PORT_IS_SET = 35,
+	BNXT_ULP_CF_IDX_MATCH_PORT_TYPE = 36,
+	BNXT_ULP_CF_IDX_LAST = 37
 };
 
 enum bnxt_ulp_cond_opcode {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 14/20] net/bnxt: port configuration changes to support full offload
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (12 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 13/20] net/bnxt: enable support for PF and VF port action items Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 15/20] net/bnxt: add support for conditional opcodes for mapper result table Somnath Kotur
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added port configuration changes to support full offload rules when VF
representor ports are used. The flow direction is now determined from
the configured direction attribute together with the match and action
ports of the flow being created.
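
The direction decision itself is small; a sketch of the rule
implemented by bnxt_ulp_rte_parser_direction_compute() in the hunk
below, where an ingress-attributed flow whose match port is a VF
representor is programmed as an egress flow:

    if ((params->dir_attr & BNXT_ULP_FLOW_ATTR_INGRESS) &&
        match_port_type == BNXT_ULP_INTF_TYPE_VF_REP)
        dir = BNXT_ULP_DIR_EGRESS;   /* VF-rep match flips the direction */
    else if (params->dir_attr & BNXT_ULP_FLOW_ATTR_INGRESS)
        dir = BNXT_ULP_DIR_INGRESS;
    else
        dir = BNXT_ULP_DIR_EGRESS;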

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/bnxt_tf_common.h       |   8 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c        |  34 +-
 drivers/net/bnxt/tf_ulp/ulp_mapper.h           |   2 +-
 drivers/net/bnxt/tf_ulp/ulp_matcher.c          |  10 +-
 drivers/net/bnxt/tf_ulp/ulp_port_db.c          |  25 ++
 drivers/net/bnxt/tf_ulp/ulp_port_db.h          |  14 +
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c       | 451 +++++++++++++++----------
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h       |  10 +-
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h |   6 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h  |   7 +-
 10 files changed, 361 insertions(+), 206 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
index ebb7140..f0633f0 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_common.h
@@ -44,9 +44,10 @@ enum bnxt_ulp_eth_ip_type {
 };
 
 /* ulp direction Type */
-enum ulp_direction_type {
-	ULP_DIR_INGRESS,
-	ULP_DIR_EGRESS,
+enum bnxt_ulp_direction_type {
+	BNXT_ULP_DIR_INVALID,
+	BNXT_ULP_DIR_INGRESS,
+	BNXT_ULP_DIR_EGRESS,
 };
 
 /* enumeration of the interface types */
@@ -57,6 +58,7 @@ enum bnxt_ulp_intf_type {
 	BNXT_ULP_INTF_TYPE_VF,
 	BNXT_ULP_INTF_TYPE_PF_REP,
 	BNXT_ULP_INTF_TYPE_VF_REP,
+	BNXT_ULP_INTF_TYPE_PHY_PORT,
 	BNXT_ULP_INTF_TYPE_LAST
 };
 
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
index 36a0141..89fffcf 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp_flow.c
@@ -60,6 +60,19 @@ bnxt_ulp_flow_validate_args(const struct rte_flow_attr *attr,
 	return BNXT_TF_RC_SUCCESS;
 }
 
+static inline void
+bnxt_ulp_set_dir_attributes(struct ulp_rte_parser_params *params,
+			    const struct rte_flow_attr *attr)
+{
+	/* Set the flow attributes */
+	if (attr->egress)
+		params->dir_attr |= BNXT_ULP_FLOW_ATTR_EGRESS;
+	if (attr->ingress)
+		params->dir_attr |= BNXT_ULP_FLOW_ATTR_INGRESS;
+	if (attr->transfer)
+		params->dir_attr |= BNXT_ULP_FLOW_ATTR_TRANSFER;
+}
+
 /* Function to create the rte flow. */
 static struct rte_flow *
 bnxt_ulp_flow_create(struct rte_eth_dev *dev,
@@ -93,13 +106,12 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	memset(&params, 0, sizeof(struct ulp_rte_parser_params));
 	params.ulp_ctx = ulp_ctx;
 
-	if (attr->egress)
-		params.dir = ULP_DIR_EGRESS;
+	/* Set the flow attributes */
+	bnxt_ulp_set_dir_attributes(&params, attr);
 
 	/* copy the device port id and direction for further processing */
 	ULP_COMP_FLD_IDX_WR(&params, BNXT_ULP_CF_IDX_INCOMING_IF,
 			    dev->data->port_id);
-	ULP_COMP_FLD_IDX_WR(&params, BNXT_ULP_CF_IDX_DIRECTION, params.dir);
 	ULP_COMP_FLD_IDX_WR(&params, BNXT_ULP_CF_IDX_SVIF_FLAG,
 			    BNXT_ULP_INVALID_SVIF_VAL);
 
@@ -113,6 +125,11 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	if (ret != BNXT_TF_RC_SUCCESS)
 		goto parse_error;
 
+	/* Perform the rte flow post process */
+	ret = bnxt_ulp_rte_parser_post_process(&params);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
 	ret = ulp_matcher_pattern_match(&params, &class_id);
 	if (ret != BNXT_TF_RC_SUCCESS)
 		goto parse_error;
@@ -131,7 +148,7 @@ bnxt_ulp_flow_create(struct rte_eth_dev *dev,
 	mapper_cparms.act_tid = act_tmpl;
 	mapper_cparms.func_id = bnxt_get_fw_func_id(dev->data->port_id,
 						    BNXT_ULP_INTF_TYPE_INVALID);
-	mapper_cparms.dir = params.dir;
+	mapper_cparms.dir_attr = params.dir_attr;
 
 	/* Call the ulp mapper to create the flow in the hardware. */
 	ret = ulp_mapper_flow_create(ulp_ctx, &mapper_cparms, &fid);
@@ -176,8 +193,8 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev,
 	memset(&params, 0, sizeof(struct ulp_rte_parser_params));
 	params.ulp_ctx = ulp_ctx;
 
-	if (attr->egress)
-		params.dir = ULP_DIR_EGRESS;
+	/* Set the flow attributes */
+	bnxt_ulp_set_dir_attributes(&params, attr);
 
 	/* Parse the rte flow pattern */
 	ret = bnxt_ulp_rte_parser_hdr_parse(pattern, &params);
@@ -189,6 +206,11 @@ bnxt_ulp_flow_validate(struct rte_eth_dev *dev,
 	if (ret != BNXT_TF_RC_SUCCESS)
 		goto parse_error;
 
+	/* Perform the rte flow post process */
+	ret = bnxt_ulp_rte_parser_post_process(&params);
+	if (ret != BNXT_TF_RC_SUCCESS)
+		goto parse_error;
+
 	ret = ulp_matcher_pattern_match(&params, &class_id);
 
 	if (ret != BNXT_TF_RC_SUCCESS)
diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.h b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
index f6d5544..a19fb0d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.h
@@ -87,7 +87,7 @@ struct bnxt_ulp_mapper_create_parms {
 	uint32_t			class_tid;
 	uint32_t			act_tid;
 	uint16_t			func_id;
-	enum ulp_direction_type		dir;
+	uint32_t			dir_attr;
 };
 
 /* Function to initialize any dynamic mapper data. */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_matcher.c b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
index f665700..9112647 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_matcher.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_matcher.c
@@ -47,14 +47,8 @@ ulp_matcher_pattern_match(struct ulp_rte_parser_params *params,
 	uint8_t vf_to_vf;
 	uint16_t tmpl_id;
 
-	/* determine vf to vf flow */
-	if (params->dir == ULP_DIR_EGRESS &&
-	    ULP_BITMAP_ISSET(params->act_bitmap.bits,
-			     BNXT_ULP_ACTION_BIT_VNIC)) {
-		vf_to_vf = 1;
-	} else {
-		vf_to_vf = 0;
-	}
+	/* Get vf to vf flow */
+	vf_to_vf = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_VF_TO_VF);
 
 	/* calculate the hash of the given flow */
 	class_hid = ulp_matcher_class_hash_calculate(params->hdr_bitmap.bits,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.c b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
index 122b5f4..0fc7c0a 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.c
@@ -465,6 +465,31 @@ ulp_port_db_phy_port_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 }
 
 /*
+ * Api to get the svif for a given physical port.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * phy_port [in] physical port index
+ * svif [out] the svif of the given physical index
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_phy_port_svif_get(struct bnxt_ulp_context *ulp_ctxt,
+			      uint32_t phy_port,
+			      uint16_t *svif)
+{
+	struct bnxt_ulp_port_db *port_db;
+
+	port_db = bnxt_ulp_cntxt_ptr2_port_db_get(ulp_ctxt);
+	if (!port_db || phy_port >= port_db->phy_port_cnt) {
+		BNXT_TF_DBG(ERR, "Invalid Arguments\n");
+		return -EINVAL;
+	}
+	*svif = port_db->phy_port_list[phy_port].port_svif;
+	return 0;
+}
+
+/*
  * Api to get the port type for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
diff --git a/drivers/net/bnxt/tf_ulp/ulp_port_db.h b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
index 4afbb84..393d01b 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_port_db.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_port_db.h
@@ -220,6 +220,20 @@ ulp_port_db_phy_port_vport_get(struct bnxt_ulp_context *ulp_ctxt,
 			       uint16_t *out_port);
 
 /*
+ * Api to get the svif for a given physical port.
+ *
+ * ulp_ctxt [in] Ptr to ulp context
+ * phy_port [in] physical port index
+ * svif [out] the svif of the given physical index
+ *
+ * Returns 0 on success or negative number on failure.
+ */
+int32_t
+ulp_port_db_phy_port_svif_get(struct bnxt_ulp_context *ulp_ctxt,
+			      uint32_t phy_port,
+			      uint16_t *svif);
+
+/*
  * Api to get the port type for a given ulp ifindex.
  *
  * ulp_ctxt [in] Ptr to ulp context
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 3c65442..e828325 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -84,9 +84,6 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 	struct bnxt_ulp_rte_hdr_info *hdr_info;
 
 	params->field_idx = BNXT_ULP_PROTO_HDR_SVIF_NUM;
-	if (params->dir == ULP_DIR_EGRESS)
-		ULP_BITMAP_SET(params->hdr_bitmap.bits,
-			       BNXT_ULP_FLOW_DIR_BITMASK_EGR);
 
 	/* Set the computed flags for no vlan tags before parsing */
 	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_O_NO_VTAG, 1);
@@ -113,8 +110,7 @@ bnxt_ulp_rte_parser_hdr_parse(const struct rte_flow_item pattern[],
 		item++;
 	}
 	/* update the implied SVIF */
-	(void)ulp_rte_parser_svif_process(params);
-	return BNXT_TF_RC_SUCCESS;
+	return ulp_rte_parser_implicit_match_port_process(params);
 }
 
 /*
@@ -128,10 +124,6 @@ bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action actions[],
 	const struct rte_flow_action *action_item = actions;
 	struct bnxt_ulp_rte_act_info *hdr_info;
 
-	if (params->dir == ULP_DIR_EGRESS)
-		ULP_BITMAP_SET(params->act_bitmap.bits,
-			       BNXT_ULP_FLOW_DIR_BITMASK_EGR);
-
 	/* Parse all the items in the pattern */
 	while (action_item && action_item->type != RTE_FLOW_ACTION_TYPE_END) {
 		/* get the header information from the flow_hdr_info table */
@@ -156,24 +148,85 @@ bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action actions[],
 		action_item++;
 	}
 	/* update the implied port details */
-	ulp_rte_parser_implied_act_port_process(params);
+	ulp_rte_parser_implicit_act_port_process(params);
 	return BNXT_TF_RC_SUCCESS;
 }
 
+/*
+ * Function to handle the post processing of the parsing details
+ */
+int32_t
+bnxt_ulp_rte_parser_post_process(struct ulp_rte_parser_params *params)
+{
+	enum bnxt_ulp_direction_type dir;
+	enum bnxt_ulp_intf_type match_port_type, act_port_type;
+	uint32_t act_port_set;
+
+	/* Get the computed details */
+	dir = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_DIRECTION);
+	match_port_type = ULP_COMP_FLD_IDX_RD(params,
+					      BNXT_ULP_CF_IDX_MATCH_PORT_TYPE);
+	act_port_type = ULP_COMP_FLD_IDX_RD(params,
+					    BNXT_ULP_CF_IDX_ACT_PORT_TYPE);
+	act_port_set = ULP_COMP_FLD_IDX_RD(params,
+					   BNXT_ULP_CF_IDX_ACT_PORT_IS_SET);
+
+	/* set the flow direction in the proto and action header */
+	if (dir == BNXT_ULP_DIR_EGRESS) {
+		ULP_BITMAP_SET(params->hdr_bitmap.bits,
+			       BNXT_ULP_FLOW_DIR_BITMASK_EGR);
+		ULP_BITMAP_SET(params->act_bitmap.bits,
+			       BNXT_ULP_FLOW_DIR_BITMASK_EGR);
+	}
+
+	/* calculate the VF to VF flag */
+	if (act_port_set && act_port_type == BNXT_ULP_INTF_TYPE_VF_REP &&
+	    match_port_type == BNXT_ULP_INTF_TYPE_VF_REP)
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_VF_TO_VF, 1);
+
+	/* TBD: Handle the flow rejection scenarios */
+	return 0;
+}
+
+/*
+ * Function to compute the flow direction based on the match port details
+ */
+static void
+bnxt_ulp_rte_parser_direction_compute(struct ulp_rte_parser_params *params)
+{
+	enum bnxt_ulp_intf_type match_port_type;
+
+	/* Get the match port type */
+	match_port_type = ULP_COMP_FLD_IDX_RD(params,
+					      BNXT_ULP_CF_IDX_MATCH_PORT_TYPE);
+
+	/* If ingress flow and matchport is vf rep then dir is egress*/
+	if ((params->dir_attr & BNXT_ULP_FLOW_ATTR_INGRESS) &&
+	    match_port_type == BNXT_ULP_INTF_TYPE_VF_REP) {
+		ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_DIRECTION,
+				    BNXT_ULP_DIR_EGRESS);
+	} else {
+		/* Assign the input direction */
+		if (params->dir_attr & BNXT_ULP_FLOW_ATTR_INGRESS)
+			ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_DIRECTION,
+					    BNXT_ULP_DIR_INGRESS);
+		else
+			ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_DIRECTION,
+					    BNXT_ULP_DIR_EGRESS);
+	}
+}
+
 /* Function to handle the parsing of RTE Flow item PF Header. */
 static int32_t
 ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
-			enum rte_flow_item_type proto,
-			uint16_t svif,
+			uint32_t ifindex,
 			uint16_t mask)
 {
-	uint16_t port_id = svif;
-	uint32_t dir = 0;
+	uint16_t svif;
+	enum bnxt_ulp_direction_type dir;
 	struct ulp_rte_hdr_field *hdr_field;
 	enum bnxt_ulp_svif_type svif_type;
-	enum bnxt_ulp_intf_type if_type;
-	uint32_t ifindex;
-	int32_t rc;
+	enum bnxt_ulp_intf_type port_type;
 
 	if (ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_SVIF_FLAG) !=
 	    BNXT_ULP_INVALID_SVIF_VAL) {
@@ -182,31 +235,32 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 		return BNXT_TF_RC_ERROR;
 	}
 
-	if (proto == RTE_FLOW_ITEM_TYPE_PORT_ID) {
-		dir = ULP_COMP_FLD_IDX_RD(params,
-					  BNXT_ULP_CF_IDX_DIRECTION);
-		/* perform the conversion from dpdk port to bnxt svif */
-		rc = ulp_port_db_dev_port_to_ulp_index(params->ulp_ctx, port_id,
-						       &ifindex);
-		if (rc) {
-			BNXT_TF_DBG(ERR,
-				    "Invalid port id\n");
-			return BNXT_TF_RC_ERROR;
-		}
-
-		if (dir == ULP_DIR_INGRESS) {
-			svif_type = BNXT_ULP_PHY_PORT_SVIF;
-		} else {
-			if_type = bnxt_get_interface_type(port_id);
-			if (if_type == BNXT_ULP_INTF_TYPE_VF_REP)
-				svif_type = BNXT_ULP_VF_FUNC_SVIF;
-			else
-				svif_type = BNXT_ULP_DRV_FUNC_SVIF;
-		}
-		ulp_port_db_svif_get(params->ulp_ctx, ifindex, svif_type,
-				     &svif);
-		svif = rte_cpu_to_be_16(svif);
+	/* Get port type details */
+	port_type = ulp_port_db_port_type_get(params->ulp_ctx, ifindex);
+	if (port_type == BNXT_ULP_INTF_TYPE_INVALID) {
+		BNXT_TF_DBG(ERR, "Invalid port type\n");
+		return BNXT_TF_RC_ERROR;
 	}
+
+	/* Update the match port type */
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_MATCH_PORT_TYPE, port_type);
+
+	/* compute the direction */
+	bnxt_ulp_rte_parser_direction_compute(params);
+
+	/* Get the computed direction */
+	dir = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_DIRECTION);
+	if (dir == BNXT_ULP_DIR_INGRESS) {
+		svif_type = BNXT_ULP_PHY_PORT_SVIF;
+	} else {
+		if (port_type == BNXT_ULP_INTF_TYPE_VF_REP)
+			svif_type = BNXT_ULP_VF_FUNC_SVIF;
+		else
+			svif_type = BNXT_ULP_DRV_FUNC_SVIF;
+	}
+	ulp_port_db_svif_get(params->ulp_ctx, ifindex, svif_type,
+			     &svif);
+	svif = rte_cpu_to_be_16(svif);
 	hdr_field = &params->hdr_field[BNXT_ULP_PROTO_HDR_FIELD_SVIF_IDX];
 	memcpy(hdr_field->spec, &svif, sizeof(svif));
 	memcpy(hdr_field->mask, &mask, sizeof(mask));
@@ -218,10 +272,12 @@ ulp_rte_parser_svif_set(struct ulp_rte_parser_params *params,
 
 /* Function to handle the parsing of the RTE port id */
 int32_t
-ulp_rte_parser_svif_process(struct ulp_rte_parser_params *params)
+ulp_rte_parser_implicit_match_port_process(struct ulp_rte_parser_params *params)
 {
 	uint16_t port_id = 0;
 	uint16_t svif_mask = 0xFFFF;
+	uint32_t ifindex;
+	int32_t rc = BNXT_TF_RC_ERROR;
 
 	if (ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_SVIF_FLAG) !=
 	    BNXT_ULP_INVALID_SVIF_VAL)
@@ -230,14 +286,21 @@ ulp_rte_parser_svif_process(struct ulp_rte_parser_params *params)
 	/* SVIF not set. So get the port id */
 	port_id = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
 
+	if (ulp_port_db_dev_port_to_ulp_index(params->ulp_ctx,
+					      port_id,
+					      &ifindex)) {
+		BNXT_TF_DBG(ERR, "ParseErr:Portid is not valid\n");
+		return rc;
+	}
+
 	/* Update the SVIF details */
-	return ulp_rte_parser_svif_set(params, RTE_FLOW_ITEM_TYPE_PORT_ID,
-				       port_id, svif_mask);
+	rc = ulp_rte_parser_svif_set(params, ifindex, svif_mask);
+	return rc;
 }
 
 /* Function to handle the implicit action port id */
 int32_t
-ulp_rte_parser_implied_act_port_process(struct ulp_rte_parser_params *params)
+ulp_rte_parser_implicit_act_port_process(struct ulp_rte_parser_params *params)
 {
 	struct rte_flow_action action_item = {0};
 	struct rte_flow_action_port_id port_id = {0};
@@ -260,19 +323,26 @@ ulp_rte_parser_implied_act_port_process(struct ulp_rte_parser_params *params)
 
 /* Function to handle the parsing of RTE Flow item PF Header. */
 int32_t
-ulp_rte_pf_hdr_handler(const struct rte_flow_item *item,
+ulp_rte_pf_hdr_handler(const struct rte_flow_item *item __rte_unused,
 		       struct ulp_rte_parser_params *params)
 {
 	uint16_t port_id = 0;
 	uint16_t svif_mask = 0xFFFF;
+	uint32_t ifindex;
 
-	/* Get the port id */
+	/* Get the implicit port id */
 	port_id = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
 
+	/* perform the conversion from dpdk port to bnxt ifindex */
+	if (ulp_port_db_dev_port_to_ulp_index(params->ulp_ctx,
+					      port_id,
+					      &ifindex)) {
+		BNXT_TF_DBG(ERR, "ParseErr:Portid is not valid\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
 	/* Update the SVIF details */
-	return ulp_rte_parser_svif_set(params,
-				       item->type,
-				       port_id, svif_mask);
+	return  ulp_rte_parser_svif_set(params, ifindex, svif_mask);
 }
 
 /* Function to handle the parsing of RTE Flow item VF Header. */
@@ -282,15 +352,30 @@ ulp_rte_vf_hdr_handler(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_vf *vf_spec = item->spec;
 	const struct rte_flow_item_vf *vf_mask = item->mask;
-	uint16_t svif = 0, mask = 0;
+	uint16_t mask = 0;
+	uint32_t ifindex;
+	int32_t rc = BNXT_TF_RC_PARSE_ERR;
 
 	/* Get VF rte_flow_item for Port details */
-	if (vf_spec)
-		svif = (uint16_t)vf_spec->id;
-	if (vf_mask)
-		mask = (uint16_t)vf_mask->id;
+	if (!vf_spec) {
+		BNXT_TF_DBG(ERR, "ParseErr:VF id is not valid\n");
+		return rc;
+	}
+	if (!vf_mask) {
+		BNXT_TF_DBG(ERR, "ParseErr:VF mask is not valid\n");
+		return rc;
+	}
+	mask = vf_mask->id;
 
-	return ulp_rte_parser_svif_set(params, item->type, svif, mask);
+	/* perform the conversion from VF Func id to bnxt ifindex */
+	if (ulp_port_db_dev_func_id_to_ulp_index(params->ulp_ctx,
+						 vf_spec->id,
+						 &ifindex)) {
+		BNXT_TF_DBG(ERR, "ParseErr:Portid is not valid\n");
+		return rc;
+	}
+	/* Update the SVIF details */
+	return ulp_rte_parser_svif_set(params, ifindex, mask);
 }
 
 /* Function to handle the parsing of RTE Flow item port id  Header. */
@@ -300,24 +385,29 @@ ulp_rte_port_id_hdr_handler(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_port_id *port_spec = item->spec;
 	const struct rte_flow_item_port_id *port_mask = item->mask;
-	uint16_t svif = 0, mask = 0;
+	uint16_t mask = 0;
+	int32_t rc = BNXT_TF_RC_PARSE_ERR;
+	uint32_t ifindex;
 
-	/*
-	 * Copy the rte_flow_item for Port into hdr_field using port id
-	 * header fields.
-	 */
-	if (port_spec) {
-		svif = (uint16_t)port_spec->id;
-		if (svif >= RTE_MAX_ETHPORTS) {
-			BNXT_TF_DBG(ERR, "ParseErr:Portid is not valid\n");
-			return BNXT_TF_RC_PARSE_ERR;
-		}
+	if (!port_spec) {
+		BNXT_TF_DBG(ERR, "ParseErr:Port id is not valid\n");
+		return rc;
 	}
-	if (port_mask)
-		mask = (uint16_t)port_mask->id;
+	if (!port_mask) {
+		BNXT_TF_DBG(ERR, "ParseErr:Phy Port mask is not valid\n");
+		return rc;
+	}
+	mask = port_mask->id;
 
+	/* perform the conversion from dpdk port to bnxt ifindex */
+	if (ulp_port_db_dev_port_to_ulp_index(params->ulp_ctx,
+					      port_spec->id,
+					      &ifindex)) {
+		BNXT_TF_DBG(ERR, "ParseErr:Portid is not valid\n");
+		return rc;
+	}
 	/* Update the SVIF details */
-	return ulp_rte_parser_svif_set(params, item->type, svif, mask);
+	return ulp_rte_parser_svif_set(params, ifindex, mask);
 }
 
 /* Function to handle the parsing of RTE Flow item phy port Header. */
@@ -327,34 +417,55 @@ ulp_rte_phy_port_hdr_handler(const struct rte_flow_item *item,
 {
 	const struct rte_flow_item_phy_port *port_spec = item->spec;
 	const struct rte_flow_item_phy_port *port_mask = item->mask;
-	uint32_t svif = 0, mask = 0;
-	struct bnxt_ulp_device_params *dparms;
-	uint32_t dev_id;
+	uint16_t mask = 0;
+	int32_t rc = BNXT_TF_RC_ERROR;
+	uint16_t svif;
+	enum bnxt_ulp_direction_type dir;
+	struct ulp_rte_hdr_field *hdr_field;
 
 	/* Copy the rte_flow_item for phy port into hdr_field */
-	if (port_spec)
-		svif = port_spec->index;
-	if (port_mask)
-		mask = port_mask->index;
-
-	if (bnxt_ulp_cntxt_dev_id_get(params->ulp_ctx, &dev_id)) {
-		BNXT_TF_DBG(DEBUG, "Failed to get device id\n");
-		return -EINVAL;
+	if (!port_spec) {
+		BNXT_TF_DBG(ERR, "ParseErr:Phy Port id is not valid\n");
+		return rc;
+	}
+	if (!port_mask) {
+		BNXT_TF_DBG(ERR, "ParseErr:Phy Port mask is not valid\n");
+		return rc;
 	}
+	mask = port_mask->index;
 
-	dparms = bnxt_ulp_device_params_get(dev_id);
-	if (!dparms) {
-		BNXT_TF_DBG(DEBUG, "Failed to get device parms\n");
-		return -EINVAL;
+	/* Update the match port type */
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_MATCH_PORT_TYPE,
+			    BNXT_ULP_INTF_TYPE_PHY_PORT);
+
+	/* Compute the Hw direction */
+	bnxt_ulp_rte_parser_direction_compute(params);
+
+	/* Direction validation */
+	dir = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_DIRECTION);
+	if (dir == BNXT_ULP_DIR_EGRESS) {
+		BNXT_TF_DBG(ERR,
+			    "Parse Err:Phy ports are valid only for ingress\n");
+		return BNXT_TF_RC_PARSE_ERR;
 	}
 
-	if (svif > dparms->num_phy_ports) {
-		BNXT_TF_DBG(ERR, "ParseErr:Phy Port is not valid\n");
+	/* Get the physical port details from port db */
+	rc = ulp_port_db_phy_port_svif_get(params->ulp_ctx, port_spec->index,
+					   &svif);
+	if (rc) {
+		BNXT_TF_DBG(ERR, "Failed to get port details\n");
 		return BNXT_TF_RC_PARSE_ERR;
 	}
 
 	/* Update the SVIF details */
-	return ulp_rte_parser_svif_set(params, item->type, svif, mask);
+	svif = rte_cpu_to_be_16(svif);
+	hdr_field = &params->hdr_field[BNXT_ULP_PROTO_HDR_FIELD_SVIF_IDX];
+	memcpy(hdr_field->spec, &svif, sizeof(svif));
+	memcpy(hdr_field->mask, &mask, sizeof(mask));
+	hdr_field->size = sizeof(svif);
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_SVIF_FLAG,
+			    rte_be_to_cpu_16(svif));
+	return BNXT_TF_RC_SUCCESS;
 }
 
 /* Function to handle the parsing of RTE Flow item Ethernet Header. */
@@ -1252,7 +1363,7 @@ ulp_rte_vxlan_encap_act_handler(const struct rte_flow_action *action_item,
 	memcpy(&ap->act_details[BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN_SZ],
 	       &vxlan_size, sizeof(uint32_t));
 
-	/*update the hdr_bitmap with vxlan */
+	/* update the hdr_bitmap with vxlan */
 	ULP_BITMAP_SET(act->bits, BNXT_ULP_ACTION_BIT_VXLAN_ENCAP);
 	return BNXT_TF_RC_SUCCESS;
 }
@@ -1305,68 +1416,82 @@ ulp_rte_count_act_handler(const struct rte_flow_action *action_item,
 	return BNXT_TF_RC_SUCCESS;
 }
 
-/* Function to handle the parsing of RTE Flow action PF. */
-int32_t
-ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
-		       struct ulp_rte_parser_params *params)
+/* Function to handle the parsing of action ports. */
+static int32_t
+ulp_rte_parser_act_port_set(struct ulp_rte_parser_params *param,
+			    uint32_t ifindex)
 {
-	uint32_t port_id, pid;
-	uint32_t ifindex;
+	enum bnxt_ulp_direction_type dir;
 	uint16_t pid_s;
-	struct ulp_rte_act_prop *act = &params->act_prop;
-
-	/* Get the port id of the current device */
-	port_id = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
-
-	/* Get the port db ifindex */
-	if (ulp_port_db_dev_port_to_ulp_index(params->ulp_ctx, port_id,
-					      &ifindex)) {
-		BNXT_TF_DBG(ERR, "Invalid port id\n");
-		return BNXT_TF_RC_ERROR;
-	}
-
-	/* Check the port is PF port */
-	if (ulp_port_db_port_type_get(params->ulp_ctx,
-				      ifindex) != BNXT_ULP_INTF_TYPE_PF) {
-		BNXT_TF_DBG(ERR, "Port is not a PF port\n");
-		return BNXT_TF_RC_ERROR;
-	}
+	uint32_t pid;
+	struct ulp_rte_act_prop *act = &param->act_prop;
 
-	if (params->dir == ULP_DIR_EGRESS) {
+	/* Get the direction */
+	dir = ULP_COMP_FLD_IDX_RD(param, BNXT_ULP_CF_IDX_DIRECTION);
+	if (dir == BNXT_ULP_DIR_EGRESS) {
 		/* For egress direction, fill vport */
-		if (ulp_port_db_vport_get(params->ulp_ctx, ifindex, &pid_s))
+		if (ulp_port_db_vport_get(param->ulp_ctx, ifindex, &pid_s))
 			return BNXT_TF_RC_ERROR;
+
 		pid = pid_s;
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
 	} else {
 		/* For ingress direction, fill vnic */
-		if (ulp_port_db_default_vnic_get(params->ulp_ctx, ifindex,
+		if (ulp_port_db_default_vnic_get(param->ulp_ctx, ifindex,
 						 BNXT_ULP_DRV_FUNC_VNIC,
 						 &pid_s))
 			return BNXT_TF_RC_ERROR;
+
 		pid = pid_s;
 		pid = rte_cpu_to_be_32(pid);
 		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
 	}
 
-	/*Update the action port set bit */
-	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 1);
+	/* Update the action port set bit */
+	ULP_COMP_FLD_IDX_WR(param, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 1);
 	return BNXT_TF_RC_SUCCESS;
 }
 
+/* Function to handle the parsing of RTE Flow action PF. */
+int32_t
+ulp_rte_pf_act_handler(const struct rte_flow_action *action_item __rte_unused,
+		       struct ulp_rte_parser_params *params)
+{
+	uint32_t port_id;
+	uint32_t ifindex;
+	enum bnxt_ulp_intf_type intf_type;
+
+	/* Get the port id of the current device */
+	port_id = ULP_COMP_FLD_IDX_RD(params, BNXT_ULP_CF_IDX_INCOMING_IF);
+
+	/* Get the port db ifindex */
+	if (ulp_port_db_dev_port_to_ulp_index(params->ulp_ctx, port_id,
+					      &ifindex)) {
+		BNXT_TF_DBG(ERR, "Invalid port id\n");
+		return BNXT_TF_RC_ERROR;
+	}
+
+	/* Check the port is PF port */
+	intf_type = ulp_port_db_port_type_get(params->ulp_ctx, ifindex);
+	if (intf_type != BNXT_ULP_INTF_TYPE_PF) {
+		BNXT_TF_DBG(ERR, "Port is not a PF port\n");
+		return BNXT_TF_RC_ERROR;
+	}
+	/* Update the action properties */
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_PORT_TYPE, intf_type);
+	return ulp_rte_parser_act_port_set(params, ifindex);
+}
+
 /* Function to handle the parsing of RTE Flow action VF. */
 int32_t
 ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
 		       struct ulp_rte_parser_params *params)
 {
 	const struct rte_flow_action_vf *vf_action;
-	uint32_t pid;
 	uint32_t ifindex;
-	uint16_t pid_s;
-	struct ulp_rte_act_prop *act = &params->act_prop;
 	enum bnxt_ulp_intf_type intf_type;
 
 	vf_action = action_item->conf;
@@ -1393,29 +1518,9 @@ ulp_rte_vf_act_handler(const struct rte_flow_action *action_item,
 		return BNXT_TF_RC_ERROR;
 	}
 
-	if (params->dir == ULP_DIR_EGRESS) {
-		/* For egress direction, fill vport */
-		if (ulp_port_db_vport_get(params->ulp_ctx, ifindex, &pid_s))
-			return BNXT_TF_RC_ERROR;
-		pid = pid_s;
-		pid = rte_cpu_to_be_32(pid);
-		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
-		       &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
-	} else {
-		/* For ingress direction, fill vnic */
-		if (ulp_port_db_default_vnic_get(params->ulp_ctx, ifindex,
-						 BNXT_ULP_DRV_FUNC_VNIC,
-						 &pid_s))
-			return BNXT_TF_RC_ERROR;
-		pid = pid_s;
-		pid = rte_cpu_to_be_32(pid);
-		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
-		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
-	}
-
-	/*Update the action port set bit */
-	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 1);
-	return BNXT_TF_RC_SUCCESS;
+	/* Update the action properties */
+	ULP_COMP_FLD_IDX_WR(params, BNXT_ULP_CF_IDX_ACT_PORT_TYPE, intf_type);
+	return ulp_rte_parser_act_port_set(params, ifindex);
 }
 
 /* Function to handle the parsing of RTE Flow action port_id. */
@@ -1423,14 +1528,10 @@ int32_t
 ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
 			    struct ulp_rte_parser_params *param)
 {
-	const struct rte_flow_action_port_id *port_id;
-	struct ulp_rte_act_prop *act;
-	uint32_t pid;
-	int32_t rc;
+	const struct rte_flow_action_port_id *port_id = act_item->conf;
 	uint32_t ifindex;
-	uint16_t pid_s;
+	enum bnxt_ulp_intf_type intf_type;
 
-	port_id = act_item->conf;
 	if (!port_id) {
 		BNXT_TF_DBG(ERR,
 			    "ParseErr: Invalid Argument\n");
@@ -1443,42 +1544,22 @@ ulp_rte_port_id_act_handler(const struct rte_flow_action *act_item,
 	}
 
 	/* Get the port db ifindex */
-	rc = ulp_port_db_dev_port_to_ulp_index(param->ulp_ctx,
-					       port_id->id,
-					       &ifindex);
-	if (rc) {
+	if (ulp_port_db_dev_port_to_ulp_index(param->ulp_ctx, port_id->id,
+					      &ifindex)) {
 		BNXT_TF_DBG(ERR, "Invalid port id\n");
 		return BNXT_TF_RC_ERROR;
 	}
 
-	act = &param->act_prop;
-	if (param->dir == ULP_DIR_EGRESS) {
-		rc = ulp_port_db_vport_get(param->ulp_ctx,
-					   ifindex, &pid_s);
-		if (rc)
-			return BNXT_TF_RC_ERROR;
-
-		pid = pid_s;
-		pid = rte_cpu_to_be_32(pid);
-		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
-		       &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
-	} else {
-		rc = ulp_port_db_default_vnic_get(param->ulp_ctx,
-						  ifindex,
-						  BNXT_ULP_DRV_FUNC_VNIC,
-						  &pid_s);
-		if (rc)
-			return BNXT_TF_RC_ERROR;
-
-		pid = pid_s;
-		pid = rte_cpu_to_be_32(pid);
-		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_VNIC],
-		       &pid, BNXT_ULP_ACT_PROP_SZ_VNIC);
+	/* Get the intf type */
+	intf_type = ulp_port_db_port_type_get(param->ulp_ctx, ifindex);
+	if (!intf_type) {
+		BNXT_TF_DBG(ERR, "Invalid port type\n");
+		return BNXT_TF_RC_ERROR;
 	}
 
-	/*Update the action port set bit */
-	ULP_COMP_FLD_IDX_WR(param, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 1);
-	return BNXT_TF_RC_SUCCESS;
+	/* Set the action port */
+	ULP_COMP_FLD_IDX_WR(param, BNXT_ULP_CF_IDX_ACT_PORT_TYPE, intf_type);
+	return ulp_rte_parser_act_port_set(param, ifindex);
 }
 
 /* Function to handle the parsing of RTE Flow action phy_port. */
@@ -1490,6 +1571,7 @@ ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
 	uint32_t pid;
 	int32_t rc;
 	uint16_t pid_s;
+	enum bnxt_ulp_direction_type dir;
 
 	phy_port = action_item->conf;
 	if (!phy_port) {
@@ -1503,7 +1585,8 @@ ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
 			    "Parse Err:Port Original not supported\n");
 		return BNXT_TF_RC_PARSE_ERR;
 	}
-	if (prm->dir != ULP_DIR_EGRESS) {
+	dir = ULP_COMP_FLD_IDX_RD(prm, BNXT_ULP_CF_IDX_DIRECTION);
+	if (dir != BNXT_ULP_DIR_EGRESS) {
 		BNXT_TF_DBG(ERR,
 			    "Parse Err:Phy ports are valid only for egress\n");
 		return BNXT_TF_RC_PARSE_ERR;
@@ -1512,7 +1595,7 @@ ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
 	rc = ulp_port_db_phy_port_vport_get(prm->ulp_ctx, phy_port->index,
 					    &pid_s);
 	if (rc) {
-		BNXT_TF_DBG(DEBUG, "Failed to get port details\n");
+		BNXT_TF_DBG(ERR, "Failed to get port details\n");
 		return -EINVAL;
 	}
 
@@ -1521,8 +1604,10 @@ ulp_rte_phy_port_act_handler(const struct rte_flow_action *action_item,
 	memcpy(&prm->act_prop.act_details[BNXT_ULP_ACT_PROP_IDX_VPORT],
 	       &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
 
-	/*Update the action port set bit */
+	/* Update the action port set bit */
 	ULP_COMP_FLD_IDX_WR(prm, BNXT_ULP_CF_IDX_ACT_PORT_IS_SET, 1);
+	ULP_COMP_FLD_IDX_WR(prm, BNXT_ULP_CF_IDX_ACT_PORT_TYPE,
+			    BNXT_ULP_INTF_TYPE_PHY_PORT);
 	return BNXT_TF_RC_SUCCESS;
 }
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 49e9cbb..a440280 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -33,11 +33,11 @@
 
 /* Function to handle the parsing of the RTE port id. */
 int32_t
-ulp_rte_parser_svif_process(struct ulp_rte_parser_params *params);
+ulp_rte_parser_implicit_match_port_process(struct ulp_rte_parser_params *param);
 
 /* Function to handle the implicit action port id */
 int32_t
-ulp_rte_parser_implied_act_port_process(struct ulp_rte_parser_params *params);
+ulp_rte_parser_implicit_act_port_process(struct ulp_rte_parser_params *params);
 
 /*
  * Function to handle the parsing of RTE Flows and placing
@@ -55,6 +55,12 @@ int32_t
 bnxt_ulp_rte_parser_act_parse(const struct rte_flow_action actions[],
 			      struct ulp_rte_parser_params *params);
 
+/*
+ * Function to handle the post processing of the parsing details
+ */
+int32_t
+bnxt_ulp_rte_parser_post_process(struct ulp_rte_parser_params *params);
+
 /* Function to handle the parsing of RTE Flow item PF Header. */
 int32_t
 ulp_rte_pf_hdr_handler(const struct rte_flow_item *item,
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index ada3a5e..14c77b3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -129,8 +129,10 @@ enum bnxt_ulp_cf_idx {
 	BNXT_ULP_CF_IDX_ACT_DEC_TTL = 33,
 	BNXT_ULP_CF_IDX_ACT_T_DEC_TTL = 34,
 	BNXT_ULP_CF_IDX_ACT_PORT_IS_SET = 35,
-	BNXT_ULP_CF_IDX_MATCH_PORT_TYPE = 36,
-	BNXT_ULP_CF_IDX_LAST = 37
+	BNXT_ULP_CF_IDX_ACT_PORT_TYPE = 36,
+	BNXT_ULP_CF_IDX_MATCH_PORT_TYPE = 37,
+	BNXT_ULP_CF_IDX_VF_TO_VF = 38,
+	BNXT_ULP_CF_IDX_LAST = 39
 };
 
 enum bnxt_ulp_cond_opcode {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index df999b1..ea4f253 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -30,6 +30,11 @@
 #define BNXT_ULP_PROTO_HDR_MAX		128
 #define BNXT_ULP_PROTO_HDR_FIELD_SVIF_IDX	0
 
+/* Direction attributes */
+#define BNXT_ULP_FLOW_ATTR_TRANSFER	0x1
+#define BNXT_ULP_FLOW_ATTR_INGRESS	0x2
+#define BNXT_ULP_FLOW_ATTR_EGRESS	0x4
+
 struct ulp_rte_hdr_bitmap {
 	uint64_t	bits;
 };
@@ -65,7 +70,7 @@ struct ulp_rte_parser_params {
 	uint32_t			vlan_idx;
 	struct ulp_rte_act_bitmap	act_bitmap;
 	struct ulp_rte_act_prop		act_prop;
-	uint32_t			dir;
+	uint32_t			dir_attr;
 	struct bnxt_ulp_context		*ulp_ctx;
 };
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 15/20] net/bnxt: add support for conditional opcodes for mapper result table
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (13 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 14/20] net/bnxt: port configuration changes to support full offload Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 16/20] net/bnxt: add support for nat rte action items Somnath Kotur
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for conditional mapper result opcodes. The conditional
opcodes allow the action details in hardware to be set based on the
actions configured for the flow. This makes it possible to aggregate
multiple templates.
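
For illustration, a result-field entry in the template tables using one
of the new conditional opcodes could look like the sketch below; the
field, action bit and constants are hypothetical and only show how the
operand/operand_true/operand_false triple is consumed by
ulp_mapper_result_field_process():

	/* Hypothetical result-field entry: emit 1 when the flow carries
	 * a NAT source-address rewrite action, else emit 0.
	 */
	{
		.field_bit_size = 1,
		.result_opcode =
			BNXT_ULP_MAPPER_OPC_IF_ACT_BIT_THEN_CONST_ELSE_CONST,
		.result_operand = {
		((uint64_t)BNXT_ULP_ACTION_BIT_SET_IPV4_SRC >> 56) & 0xff,
		((uint64_t)BNXT_ULP_ACTION_BIT_SET_IPV4_SRC >> 48) & 0xff,
		((uint64_t)BNXT_ULP_ACTION_BIT_SET_IPV4_SRC >> 40) & 0xff,
		((uint64_t)BNXT_ULP_ACTION_BIT_SET_IPV4_SRC >> 32) & 0xff,
		((uint64_t)BNXT_ULP_ACTION_BIT_SET_IPV4_SRC >> 24) & 0xff,
		((uint64_t)BNXT_ULP_ACTION_BIT_SET_IPV4_SRC >> 16) & 0xff,
		((uint64_t)BNXT_ULP_ACTION_BIT_SET_IPV4_SRC >> 8) & 0xff,
		(uint64_t)BNXT_ULP_ACTION_BIT_SET_IPV4_SRC & 0xff},
		.result_operand_true  = {0x01},
		.result_operand_false = {0x00}
	}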

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_mapper.c           | 69 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h |  4 +-
 drivers/net/bnxt/tf_ulp/ulp_template_struct.h  |  2 +
 3 files changed, 74 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_mapper.c b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
index dd99ea3..44fd0ac 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_mapper.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_mapper.c
@@ -841,7 +841,73 @@ ulp_mapper_result_field_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 
 		break;
+	case BNXT_ULP_MAPPER_OPC_IF_ACT_BIT_THEN_ACT_PROP_ELSE_CONST:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&act_bit, sizeof(uint64_t))) {
+			BNXT_TF_DBG(ERR, "%s operand read failed\n", name);
+			return -EINVAL;
+		}
+		act_bit = tfp_be_to_cpu_64(act_bit);
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit)) {
+			/* Action bit is set so consider operand_true */
+			if (!ulp_operand_read(fld->result_operand_true,
+					      (uint8_t *)&idx,
+					      sizeof(uint16_t))) {
+				BNXT_TF_DBG(ERR, "%s operand read failed\n",
+					    name);
+				return -EINVAL;
+			}
+			idx = tfp_be_to_cpu_16(idx);
+			if (idx >= BNXT_ULP_ACT_PROP_IDX_LAST) {
+				BNXT_TF_DBG(ERR, "%s act_prop[%d] oob\n",
+					    name, idx);
+				return -EINVAL;
+			}
+			val = &parms->act_prop->act_details[idx];
+			field_size = ulp_mapper_act_prop_size_get(idx);
+			if (fld->field_bit_size < ULP_BYTE_2_BITS(field_size)) {
+				field_size  = field_size -
+				    ((fld->field_bit_size + 7) / 8);
+				val += field_size;
+			}
+			if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+				BNXT_TF_DBG(ERR, "%s push field failed\n",
+					    name);
+				return -EINVAL;
+			}
+		} else {
+			/* action bit is not set, use the operand false */
+			val = fld->result_operand_false;
+			if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+				BNXT_TF_DBG(ERR, "%s failed to add field\n",
+					    name);
+				return -EINVAL;
+			}
+		}
+		break;
+	case BNXT_ULP_MAPPER_OPC_IF_ACT_BIT_THEN_CONST_ELSE_CONST:
+		if (!ulp_operand_read(fld->result_operand,
+				      (uint8_t *)&act_bit, sizeof(uint64_t))) {
+			BNXT_TF_DBG(ERR, "%s operand read failed\n", name);
+			return -EINVAL;
+		}
+		act_bit = tfp_be_to_cpu_64(act_bit);
+		if (ULP_BITMAP_ISSET(parms->act_bitmap->bits, act_bit)) {
+			/* Action bit is set so consider operand_true */
+			val = fld->result_operand_true;
+		} else {
+			/* action bit is not set, use the operand false */
+			val = fld->result_operand_false;
+		}
+		if (!ulp_blob_push(blob, val, fld->field_bit_size)) {
+			BNXT_TF_DBG(ERR, "%s failed to add field\n",
+				    name);
+			return -EINVAL;
+		}
+		break;
 	default:
+		BNXT_TF_DBG(ERR, "invalid result mapper opcode 0x%x\n",
+			    fld->result_opcode);
 		return -EINVAL;
 	}
 	return 0;
@@ -973,6 +1039,9 @@ ulp_mapper_keymask_field_process(struct bnxt_ulp_mapper_parms *parms,
 		}
 		break;
 	default:
+		BNXT_TF_DBG(ERR, "invalid keymask mapper opcode 0x%x\n",
+			    opcode);
+		return -EINVAL;
 		break;
 	}
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 14c77b3..2d73ea3 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -212,7 +212,9 @@ enum bnxt_ulp_mapper_opc {
 	BNXT_ULP_MAPPER_OPC_SET_TO_ACT_BIT = 6,
 	BNXT_ULP_MAPPER_OPC_SET_TO_ACT_PROP = 7,
 	BNXT_ULP_MAPPER_OPC_SET_TO_ENCAP_ACT_PROP_SZ = 8,
-	BNXT_ULP_MAPPER_OPC_LAST = 9
+	BNXT_ULP_MAPPER_OPC_IF_ACT_BIT_THEN_ACT_PROP_ELSE_CONST = 9,
+	BNXT_ULP_MAPPER_OPC_IF_ACT_BIT_THEN_CONST_ELSE_CONST = 10,
+	BNXT_ULP_MAPPER_OPC_LAST = 11
 };
 
 enum bnxt_ulp_mark_db_opcode {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
index ea4f253..2f2f9a2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_struct.h
@@ -213,6 +213,8 @@ struct bnxt_ulp_mapper_result_field_info {
 	enum bnxt_ulp_mapper_opc	result_opcode;
 	uint16_t			field_bit_size;
 	uint8_t				result_operand[16];
+	uint8_t				result_operand_true[16];
+	uint8_t				result_operand_false[16];
 };
 
 struct bnxt_ulp_mapper_ident_info {
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 16/20] net/bnxt: add support for nat rte action items
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (14 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 15/20] net/bnxt: add support for conditional opcodes for mapper result table Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 17/20] net/bnxt: add support for tp src/dst " Somnath Kotur
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for the set IPv4 address action items. They allow the
source or destination IP address to be rewritten for a given flow.
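
For context, an application requests this rewrite through the generic
rte_flow API; a minimal sketch of the action list follows (the address
is illustrative, and a complete rule additionally needs an attr and a
match pattern):

	#include <rte_flow.h>
	#include <rte_ip.h>
	#include <rte_byteorder.h>

	/* Rewrite the source IP of matching packets to 192.168.1.10 */
	static struct rte_flow_action_set_ipv4 set_src;

	static void
	build_src_nat_actions(struct rte_flow_action actions[2])
	{
		set_src.ipv4_addr = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 10));
		actions[0].type = RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC;
		actions[0].conf = &set_src;
		actions[1].type = RTE_FLOW_ACTION_TYPE_END;
	}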

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c       | 42 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h       | 10 ++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h |  6 ++--
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c  |  8 ++---
 4 files changed, 60 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index e828325..ac432b2 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1694,3 +1694,45 @@ ulp_rte_of_set_vlan_pcp_act_handler(const struct rte_flow_action *action_item,
 	BNXT_TF_DBG(ERR, "Parse Error: Vlan pcp arg is invalid\n");
 	return BNXT_TF_RC_ERROR;
 }
+
+/* Function to handle the parsing of RTE Flow action set ipv4 src.*/
+int32_t
+ulp_rte_set_ipv4_src_act_handler(const struct rte_flow_action *action_item,
+				 struct ulp_rte_parser_params *params)
+{
+	const struct rte_flow_action_set_ipv4 *set_ipv4;
+	struct ulp_rte_act_prop *act = &params->act_prop;
+
+	set_ipv4 = action_item->conf;
+	if (set_ipv4) {
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_SRC],
+		       &set_ipv4->ipv4_addr, BNXT_ULP_ACT_PROP_SZ_SET_IPV4_SRC);
+		/* Update the hdr_bitmap with set ipv4 src */
+		ULP_BITMAP_SET(params->act_bitmap.bits,
+			       BNXT_ULP_ACTION_BIT_SET_IPV4_SRC);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: set ipv4 src arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action set ipv4 dst.*/
+int32_t
+ulp_rte_set_ipv4_dst_act_handler(const struct rte_flow_action *action_item,
+				 struct ulp_rte_parser_params *params)
+{
+	const struct rte_flow_action_set_ipv4 *set_ipv4;
+	struct ulp_rte_act_prop *act = &params->act_prop;
+
+	set_ipv4 = action_item->conf;
+	if (set_ipv4) {
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_SET_IPV4_DST],
+		       &set_ipv4->ipv4_addr, BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST);
+		/* Update the hdr_bitmap with set ipv4 dst */
+		ULP_BITMAP_SET(params->act_bitmap.bits,
+			       BNXT_ULP_ACTION_BIT_SET_IPV4_DST);
+		return BNXT_TF_RC_SUCCESS;
+	}
+	BNXT_TF_DBG(ERR, "Parse Error: set ipv4 dst arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index a440280..6cb83b9 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -196,4 +196,14 @@ int32_t
 ulp_rte_of_set_vlan_pcp_act_handler(const struct rte_flow_action *action_item,
 				    struct ulp_rte_parser_params *params);
 
+/* Function to handle the parsing of RTE Flow action set ipv4 src.*/
+int32_t
+ulp_rte_set_ipv4_src_act_handler(const struct rte_flow_action *action_item,
+				 struct ulp_rte_parser_params *params);
+
+/* Function to handle the parsing of RTE Flow action set ipv4 dst.*/
+int32_t
+ulp_rte_set_ipv4_dst_act_handler(const struct rte_flow_action *action_item,
+				 struct ulp_rte_parser_params *params);
+
 #endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 2d73ea3..436f54c 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -6,7 +6,7 @@
 #ifndef ULP_TEMPLATE_DB_H_
 #define ULP_TEMPLATE_DB_H_
 
-#define BNXT_ULP_REGFILE_MAX_SZ 17
+#define BNXT_ULP_REGFILE_MAX_SZ 19
 #define BNXT_ULP_MAX_NUM_DEVICES 4
 #define BNXT_ULP_LOG2_MAX_NUM_DEV 2
 #define BNXT_ULP_CACHE_TBL_MAX_SZ 4
@@ -261,7 +261,9 @@ enum bnxt_ulp_regfile_index {
 	BNXT_ULP_REGFILE_INDEX_CRITICAL_RESOURCE = 14,
 	BNXT_ULP_REGFILE_INDEX_FLOW_CNTR_PTR_0 = 15,
 	BNXT_ULP_REGFILE_INDEX_MAIN_SP_PTR = 16,
-	BNXT_ULP_REGFILE_INDEX_LAST = 17
+	BNXT_ULP_REGFILE_INDEX_MODIFY_IPV4_SRC_PTR_0 = 17,
+	BNXT_ULP_REGFILE_INDEX_MODIFY_IPV4_DST_PTR_0 = 18,
+	BNXT_ULP_REGFILE_INDEX_LAST = 19
 };
 
 enum bnxt_ulp_search_before_alloc {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index 2cc3458..6fa74f8 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -231,12 +231,12 @@ struct bnxt_ulp_rte_act_info ulp_act_info[] = {
 		.proto_act_func          = NULL
 	},
 	[RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC] = {
-		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-		.proto_act_func          = NULL
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_set_ipv4_src_act_handler
 	},
 	[RTE_FLOW_ACTION_TYPE_SET_IPV4_DST] = {
-		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-		.proto_act_func          = NULL
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_set_ipv4_dst_act_handler
 	},
 	[RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC] = {
 		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 17/20] net/bnxt: add support for tp src/dst rte action items
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (15 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 16/20] net/bnxt: add support for nat rte action items Somnath Kotur
@ 2020-07-06  8:24 ` Somnath Kotur
  2020-07-06  8:25 ` [dpdk-dev] [PATCH 18/20] net/bnxt: use VF vnic when port action is for a VF rep port Somnath Kotur
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:24 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Added support for the set transport port source and destination
rewrite action items. These allow the TCP or UDP source/destination
ports to be changed for a given flow.
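
As with the NAT address actions, a caller passes the new port value
through the standard rte_flow action structure; a minimal sketch (the
port number is illustrative only):

	#include <rte_flow.h>
	#include <rte_byteorder.h>

	/* Rewrite the L4 destination port of matching packets to 8080 */
	static struct rte_flow_action_set_tp set_dport;

	static void
	build_dport_rewrite_actions(struct rte_flow_action actions[2])
	{
		set_dport.port = rte_cpu_to_be_16(8080);
		actions[0].type = RTE_FLOW_ACTION_TYPE_SET_TP_DST;
		actions[0].conf = &set_dport;
		actions[1].type = RTE_FLOW_ACTION_TYPE_END;
	}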

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Michael Baucom <michael.baucom@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c       | 44 ++++++++++++++++++++++++++
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.h       | 10 ++++++
 drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h | 38 +++++++++++-----------
 drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c  |  8 ++---
 4 files changed, 77 insertions(+), 23 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index ac432b2..2f6a15d 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1736,3 +1736,47 @@ ulp_rte_set_ipv4_dst_act_handler(const struct rte_flow_action *action_item,
 	BNXT_TF_DBG(ERR, "Parse Error: set ipv4 dst arg is invalid\n");
 	return BNXT_TF_RC_ERROR;
 }
+
+/* Function to handle the parsing of RTE Flow action set tp src.*/
+int32_t
+ulp_rte_set_tp_src_act_handler(const struct rte_flow_action *action_item,
+			       struct ulp_rte_parser_params *params)
+{
+	const struct rte_flow_action_set_tp *set_tp;
+	struct ulp_rte_act_prop *act = &params->act_prop;
+
+	set_tp = action_item->conf;
+	if (set_tp) {
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC],
+		       &set_tp->port, BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC);
+		/* Update the hdr_bitmap with set tp src */
+		ULP_BITMAP_SET(params->act_bitmap.bits,
+			       BNXT_ULP_ACTION_BIT_SET_TP_SRC);
+		return BNXT_TF_RC_SUCCESS;
+	}
+
+	BNXT_TF_DBG(ERR, "Parse Error: set tp src arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
+
+/* Function to handle the parsing of RTE Flow action set tp dst.*/
+int32_t
+ulp_rte_set_tp_dst_act_handler(const struct rte_flow_action *action_item,
+			       struct ulp_rte_parser_params *params)
+{
+	const struct rte_flow_action_set_tp *set_tp;
+	struct ulp_rte_act_prop *act = &params->act_prop;
+
+	set_tp = action_item->conf;
+	if (set_tp) {
+		memcpy(&act->act_details[BNXT_ULP_ACT_PROP_IDX_SET_TP_DST],
+		       &set_tp->port, BNXT_ULP_ACT_PROP_SZ_SET_TP_DST);
+		/* Update the hdr_bitmap with set tp dst */
+		ULP_BITMAP_SET(params->act_bitmap.bits,
+			       BNXT_ULP_ACTION_BIT_SET_TP_DST);
+		return BNXT_TF_RC_SUCCESS;
+	}
+
+	BNXT_TF_DBG(ERR, "Parse Error: set tp dst arg is invalid\n");
+	return BNXT_TF_RC_ERROR;
+}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
index 6cb83b9..e155250 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.h
@@ -206,4 +206,14 @@ int32_t
 ulp_rte_set_ipv4_dst_act_handler(const struct rte_flow_action *action_item,
 				 struct ulp_rte_parser_params *params);
 
+/* Function to handle the parsing of RTE Flow action set tp src.*/
+int32_t
+ulp_rte_set_tp_src_act_handler(const struct rte_flow_action *action_item,
+			       struct ulp_rte_parser_params *params);
+
+/* Function to handle the parsing of RTE Flow action set tp dst.*/
+int32_t
+ulp_rte_set_tp_dst_act_handler(const struct rte_flow_action *action_item,
+			       struct ulp_rte_parser_params *params);
+
 #endif /* _ULP_RTE_PARSER_H_ */
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
index 436f54c..6b68b95 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_enum.h
@@ -560,8 +560,8 @@ enum bnxt_ulp_act_prop_sz {
 	BNXT_ULP_ACT_PROP_SZ_SET_IPV4_DST = 4,
 	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_SRC = 16,
 	BNXT_ULP_ACT_PROP_SZ_SET_IPV6_DST = 16,
-	BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC = 4,
-	BNXT_ULP_ACT_PROP_SZ_SET_TP_DST = 4,
+	BNXT_ULP_ACT_PROP_SZ_SET_TP_SRC = 2,
+	BNXT_ULP_ACT_PROP_SZ_SET_TP_DST = 2,
 	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_0 = 4,
 	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_1 = 4,
 	BNXT_ULP_ACT_PROP_SZ_OF_PUSH_MPLS_2 = 4,
@@ -605,23 +605,23 @@ enum bnxt_ulp_act_prop_idx {
 	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_SRC = 85,
 	BNXT_ULP_ACT_PROP_IDX_SET_IPV6_DST = 101,
 	BNXT_ULP_ACT_PROP_IDX_SET_TP_SRC = 117,
-	BNXT_ULP_ACT_PROP_IDX_SET_TP_DST = 121,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0 = 125,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1 = 129,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2 = 133,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3 = 137,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4 = 141,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5 = 145,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6 = 149,
-	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7 = 153,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC = 157,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC = 163,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG = 169,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP = 177,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC = 209,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP = 225,
-	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN = 229,
-	BNXT_ULP_ACT_PROP_IDX_LAST = 261
+	BNXT_ULP_ACT_PROP_IDX_SET_TP_DST = 119,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_0 = 121,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_1 = 125,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_2 = 129,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_3 = 133,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_4 = 137,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_5 = 141,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_6 = 145,
+	BNXT_ULP_ACT_PROP_IDX_OF_PUSH_MPLS_7 = 149,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_DMAC = 153,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_L2_SMAC = 159,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_VTAG = 165,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP = 173,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_IP_SRC = 205,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_UDP = 221,
+	BNXT_ULP_ACT_PROP_IDX_ENCAP_TUN = 225,
+	BNXT_ULP_ACT_PROP_IDX_LAST = 257
 };
 
 enum bnxt_ulp_class_hid {
diff --git a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
index 6fa74f8..5847e58 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_template_db_tbl.c
@@ -247,12 +247,12 @@ struct bnxt_ulp_rte_act_info ulp_act_info[] = {
 		.proto_act_func          = NULL
 	},
 	[RTE_FLOW_ACTION_TYPE_SET_TP_SRC] = {
-		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-		.proto_act_func          = NULL
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_set_tp_src_act_handler
 	},
 	[RTE_FLOW_ACTION_TYPE_SET_TP_DST] = {
-		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-		.proto_act_func          = NULL
+		.act_type                = BNXT_ULP_ACT_TYPE_SUPPORTED,
+		.proto_act_func          = ulp_rte_set_tp_dst_act_handler
 	},
 	[RTE_FLOW_ACTION_TYPE_MAC_SWAP] = {
 		.act_type                = BNXT_ULP_ACT_TYPE_NOT_SUPPORTED,
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 18/20] net/bnxt: use VF vnic when port action is for a VF rep port
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (16 preceding siblings ...)
  2020-07-06  8:24 ` [dpdk-dev] [PATCH 17/20] net/bnxt: add support for tp src/dst " Somnath Kotur
@ 2020-07-06  8:25 ` Somnath Kotur
  2020-07-06  8:25 ` [dpdk-dev] [PATCH 19/20] net/bnxt: enable flow ctrl ops for the VF-rep device Somnath Kotur
  2020-07-06  8:25 ` [dpdk-dev] [PATCH 20/20] net/bnxt: use byte/pkt count shift/masks from the device template Somnath Kotur
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:25 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Fix to use the VF's VNIC for ingress flows whose port action
targets a VF representor port.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_rte_parser.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
index 2f6a15d..b943465 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_rte_parser.c
@@ -1425,6 +1425,8 @@ ulp_rte_parser_act_port_set(struct ulp_rte_parser_params *param,
 	uint16_t pid_s;
 	uint32_t pid;
 	struct ulp_rte_act_prop *act = &param->act_prop;
+	enum bnxt_ulp_intf_type port_type;
+	uint32_t vnic_type;
 
 	/* Get the direction */
 	dir = ULP_COMP_FLD_IDX_RD(param, BNXT_ULP_CF_IDX_DIRECTION);
@@ -1439,9 +1441,15 @@ ulp_rte_parser_act_port_set(struct ulp_rte_parser_params *param,
 		       &pid, BNXT_ULP_ACT_PROP_SZ_VPORT);
 	} else {
 		/* For ingress direction, fill vnic */
+		port_type = ULP_COMP_FLD_IDX_RD(param,
+						BNXT_ULP_CF_IDX_ACT_PORT_TYPE);
+		if (port_type == BNXT_ULP_INTF_TYPE_VF_REP)
+			vnic_type = BNXT_ULP_VF_FUNC_VNIC;
+		else
+			vnic_type = BNXT_ULP_DRV_FUNC_VNIC;
+
 		if (ulp_port_db_default_vnic_get(param->ulp_ctx, ifindex,
-						 BNXT_ULP_DRV_FUNC_VNIC,
-						 &pid_s))
+						 vnic_type, &pid_s))
 			return BNXT_TF_RC_ERROR;
 
 		pid = pid_s;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 19/20] net/bnxt: enable flow ctrl ops for the VF-rep device
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (17 preceding siblings ...)
  2020-07-06  8:25 ` [dpdk-dev] [PATCH 18/20] net/bnxt: use VF vnic when port action is for a VF rep port Somnath Kotur
@ 2020-07-06  8:25 ` Somnath Kotur
  2020-07-06  8:25 ` [dpdk-dev] [PATCH 20/20] net/bnxt: use byte/pkt count shift/masks from the device template Somnath Kotur
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:25 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

In order to offload flows on the VF representor device, it must be
populated with rte_flow_ops.

This patch enables that.
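
With the filter_ctrl op in place, an application can install offload
rules directly against the representor's DPDK port id. A rough sketch,
assuming vf_rep_port_id, pattern and actions are set up elsewhere:

	struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
	struct rte_flow_error err;
	struct rte_flow *flow;

	flow = rte_flow_create(vf_rep_port_id, &attr, pattern, actions, &err);
	if (!flow)
		printf("flow create failed: %s\n",
		       err.message ? err.message : "unknown");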

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
---
 drivers/net/bnxt/bnxt.h            | 5 +++++
 drivers/net/bnxt/bnxt_ethdev.c     | 9 +++++++--
 drivers/net/bnxt/bnxt_reps.c       | 3 +--
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c | 8 ++++++--
 4 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index f16bf33..f69ba24 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -897,4 +897,9 @@ void bnxt_flow_cnt_alarm_cb(void *arg);
 int bnxt_flow_stats_req(struct bnxt *bp);
 int bnxt_flow_stats_cnt(struct bnxt *bp);
 uint32_t bnxt_get_speed_capabilities(struct bnxt *bp);
+
+int
+bnxt_filter_ctrl_op(struct rte_eth_dev *dev,
+		    enum rte_filter_type filter_type,
+		    enum rte_filter_op filter_op, void *arg);
 #endif
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index b21f850..5626ec3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -3702,7 +3702,7 @@ bnxt_fdir_filter(struct rte_eth_dev *dev,
 	return ret;
 }
 
-static int
+int
 bnxt_filter_ctrl_op(struct rte_eth_dev *dev,
 		    enum rte_filter_type filter_type,
 		    enum rte_filter_op filter_op, void *arg)
@@ -3710,7 +3710,12 @@ bnxt_filter_ctrl_op(struct rte_eth_dev *dev,
 	struct bnxt *bp = dev->data->dev_private;
 	int ret = 0;
 
-	ret = is_bnxt_in_error(dev->data->dev_private);
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(dev)) {
+		struct bnxt_vf_representor *vfr = dev->data->dev_private;
+		bp = vfr->parent_dev->data->dev_private;
+	}
+
+	ret = is_bnxt_in_error(bp);
 	if (ret)
 		return ret;
 
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index a37a061..875e7b0 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -29,6 +29,7 @@ static const struct eth_dev_ops bnxt_vf_rep_dev_ops = {
 	.dev_stop = bnxt_vf_rep_dev_stop_op,
 	.stats_get = bnxt_vf_rep_stats_get_op,
 	.stats_reset = bnxt_vf_rep_stats_reset_op,
+	.filter_ctrl = bnxt_filter_ctrl_op
 };
 
 uint16_t
@@ -132,8 +133,6 @@ bnxt_vf_rep_tx_burst(void *tx_queue,
 	pthread_mutex_unlock(&parent->rep_info->vfr_lock);
 
 	return rc;
-
-	return 0;
 }
 
 int bnxt_vf_representor_init(struct rte_eth_dev *eth_dev, void *params)
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index 469ad36..397d0a9 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -923,9 +923,13 @@ bnxt_ulp_cntxt_ptr2_flow_db_get(struct bnxt_ulp_context	*ulp_ctx)
 struct bnxt_ulp_context	*
 bnxt_ulp_eth_dev_ptr2_cntxt_get(struct rte_eth_dev	*dev)
 {
-	struct bnxt	*bp;
+	struct bnxt *bp = (struct bnxt *)dev->data->dev_private;
+
+	if (BNXT_ETH_DEV_IS_REPRESENTOR(dev)) {
+		struct bnxt_vf_representor *vfr = dev->data->dev_private;
+		bp = vfr->parent_dev->data->dev_private;
+	}
 
-	bp = (struct bnxt *)dev->data->dev_private;
 	if (!bp) {
 		BNXT_TF_DBG(ERR, "Bnxt private data is not initialized\n");
 		return NULL;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [dpdk-dev] [PATCH 20/20] net/bnxt: use byte/pkt count shift/masks from the device template
  2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
                   ` (18 preceding siblings ...)
  2020-07-06  8:25 ` [dpdk-dev] [PATCH 19/20] net/bnxt: enable flow ctrl ops for the VF-rep device Somnath Kotur
@ 2020-07-06  8:25 ` Somnath Kotur
  19 siblings, 0 replies; 21+ messages in thread
From: Somnath Kotur @ 2020-07-06  8:25 UTC (permalink / raw)
  To: dev; +Cc: ferruh.yigit

Instead of using hardcoded shift/mask values to extract the byte and
packet counts from the HW counters, use the shift/mask values from the
device template params.
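
As a sketch, device parameter values equivalent to the previous
hardcoded layout (28-bit packet count in the upper bits, 36-bit byte
count in the lower bits) make the new macros behave exactly as before;
the values and the raw counter word below are assumptions for
illustration:

	/* Hypothetical device template values matching the old layout */
	struct bnxt_ulp_device_params dparms = {
		.packet_count_mask  = 0xFFFFFFF000000000ULL,
		.packet_count_shift = 36,
		.byte_count_mask    = 0x0000000FFFFFFFFFULL,
		.byte_count_shift   = 0,
	};
	uint64_t stat  = raw_hw_counter;	/* 64-bit counter read from HW */
	uint64_t pkts  = FLOW_CNTR_PKTS(stat, &dparms);
	uint64_t bytes = FLOW_CNTR_BYTES(stat, &dparms);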

Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c | 27 ++++++++++++++++-----------
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h |  6 ++++--
 2 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
index 9944e9e..e90b962 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c
@@ -4,6 +4,7 @@
  */
 
 #include <rte_common.h>
+#include <rte_cycles.h>
 #include <rte_malloc.h>
 #include <rte_log.h>
 #include <rte_alarm.h>
@@ -227,9 +228,11 @@ void ulp_fc_mgr_thread_cancel(struct bnxt_ulp_context *ctxt)
  * num_counters [in] The number of counters
  *
  */
-__rte_unused static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
-				       struct bnxt_ulp_fc_info *fc_info,
-				       enum tf_dir dir, uint32_t num_counters)
+__rte_unused static int32_t
+ulp_bulk_get_flow_stats(struct tf *tfp,
+			struct bnxt_ulp_fc_info *fc_info,
+			enum tf_dir dir,
+			struct bnxt_ulp_device_params *dparms)
 /* MARK AS UNUSED FOR NOW TO AVOID COMPILATION ERRORS TILL API is RESOLVED */
 {
 	int rc = 0;
@@ -242,7 +245,7 @@ __rte_unused static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 	parms.dir = dir;
 	parms.type = stype;
 	parms.starting_idx = fc_info->shadow_hw_tbl[dir].start_idx;
-	parms.num_entries = num_counters;
+	parms.num_entries = dparms->flow_count_db_entries / 2; /* direction */
 	/*
 	 * TODO:
 	 * Size of an entry needs to obtained from template
@@ -266,13 +269,14 @@ __rte_unused static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 		return rc;
 	}
 
-	for (i = 0; i < num_counters; i++) {
+	for (i = 0; i < parms.num_entries; i++) {
 		/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
 		sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][i];
 		if (!sw_acc_tbl_entry->valid)
 			continue;
-		sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats[i]);
-		sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats[i]);
+		sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats[i], dparms);
+		sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats[i],
+								dparms);
 	}
 
 	return rc;
@@ -281,7 +285,8 @@ __rte_unused static int32_t ulp_bulk_get_flow_stats(struct tf *tfp,
 static int ulp_get_single_flow_stat(struct tf *tfp,
 				    struct bnxt_ulp_fc_info *fc_info,
 				    enum tf_dir dir,
-				    uint32_t hw_cntr_id)
+				    uint32_t hw_cntr_id,
+				    struct bnxt_ulp_device_params *dparms)
 {
 	int rc = 0;
 	struct tf_get_tbl_entry_parms parms = { 0 };
@@ -310,8 +315,8 @@ static int ulp_get_single_flow_stat(struct tf *tfp,
 	/* TBD - Get PKT/BYTE COUNT SHIFT/MASK from Template */
 	sw_cntr_indx = hw_cntr_id - fc_info->shadow_hw_tbl[dir].start_idx;
 	sw_acc_tbl_entry = &fc_info->sw_acc_tbl[dir][sw_cntr_indx];
-	sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats);
-	sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats);
+	sw_acc_tbl_entry->pkt_count += FLOW_CNTR_PKTS(stats, dparms);
+	sw_acc_tbl_entry->byte_count += FLOW_CNTR_BYTES(stats, dparms);
 
 	return rc;
 }
@@ -385,7 +390,7 @@ ulp_fc_mgr_alarm_cb(void *arg)
 				continue;
 			hw_cntr_id = ulp_fc_info->sw_acc_tbl[i][j].hw_cntr_id;
 			rc = ulp_get_single_flow_stat(tfp, ulp_fc_info, i,
-						      hw_cntr_id);
+						      hw_cntr_id, dparms);
 			if (rc)
 				break;
 		}
diff --git a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
index 2072670..9c317b0 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h
@@ -16,8 +16,10 @@
 #define FLOW_CNTR_BYTE_WIDTH 36
 #define FLOW_CNTR_BYTE_MASK  (((uint64_t)1 << FLOW_CNTR_BYTE_WIDTH) - 1)
 
-#define FLOW_CNTR_PKTS(v) ((v) >> FLOW_CNTR_BYTE_WIDTH)
-#define FLOW_CNTR_BYTES(v) ((v) & FLOW_CNTR_BYTE_MASK)
+#define FLOW_CNTR_PKTS(v, d) (((v) & (d)->packet_count_mask) >> \
+		(d)->packet_count_shift)
+#define FLOW_CNTR_BYTES(v, d) (((v) & (d)->byte_count_mask) >> \
+		(d)->byte_count_shift)
 
 struct sw_acc_counter {
 	uint64_t pkt_count;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2020-07-06  8:34 UTC | newest]

Thread overview: 21+ messages
2020-07-06  8:24 [dpdk-dev] [PATCH 00/20] bnxt patches Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 01/20] net/bnxt: vxlan encap and decap with src property enabled Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 02/20] net/bnxt: add support vlan header bitmap Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 03/20] net/bnxt: add support for negative conditional opcodes Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 04/20] net/bnxt: add validations to dpdk port id and phy port parsing Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 05/20] net/bnxt: add support for index opcode constant Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 06/20] net/bnxt: updated hsi_struct_def_dpdk.h Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 07/20] nxt/bnxt: added HWRM support for global cfg Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 08/20] net/bnxt: cleanup and refactoring Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 09/20] net/bnxt: add support for vlan push and vlan pop actions Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 10/20] net/bnxt: remove vnic and vport act bits from template matching Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 11/20] net/bnxt: fix vxlan outer ip protocol id encapsulation Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 12/20] net/bnxt: add number of vlan tags in the computed field list Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 13/20] net/bnxt: enable support for PF and VF port action items Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 14/20] net/bnxt: port configuration changes to support full offload Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 15/20] net/bnxt: add support for conditional opcodes for mapper result table Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 16/20] net/bnxt: add support for nat rte action items Somnath Kotur
2020-07-06  8:24 ` [dpdk-dev] [PATCH 17/20] net/bnxt: add support for tp src/dst " Somnath Kotur
2020-07-06  8:25 ` [dpdk-dev] [PATCH 18/20] net/bnxt: use VF vnic when port action is for a VF rep port Somnath Kotur
2020-07-06  8:25 ` [dpdk-dev] [PATCH 19/20] net/bnxt: enable flow ctrl ops for the VF-rep device Somnath Kotur
2020-07-06  8:25 ` [dpdk-dev] [PATCH 20/20] net/bnxt: use byte/pkt count shift/masks from the device template Somnath Kotur
