DPDK patches and discussions
* [PATCH v2 00/14] support new 5760X P7 devices
@ 2023-12-10  1:24 Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 01/14] net/bnxt: refactor epoch setting Ajit Khaparde
                   ` (14 more replies)
  0 siblings, 15 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev

[-- Attachment #1: Type: text/plain, Size: 1843 bytes --]

While some of the patches refactor and improve existing code,
this series adds support for the new 5760X P7 device family.
Follow-on patches will incrementally add more functionality.

v1->v2:
- Fixed unused variable error
- Fixed some spellings
- Code refactoring and fixes in backing store v2

Ajit Khaparde (12):
  net/bnxt: refactor epoch setting
  net/bnxt: update HWRM API
  net/bnxt: use the correct COS queue for Tx
  net/bnxt: refactor mem zone allocation
  net/bnxt: add support for p7 device family
  net/bnxt: refactor code to support P7 devices
  net/bnxt: fix array overflow
  net/bnxt: add support for backing store v2
  net/bnxt: modify sending new HWRM commands to firmware
  net/bnxt: retry HWRM ver get if the command fails
  net/bnxt: cap ring resources for P7 devices
  net/bnxt: add support for v3 Rx completion

Kalesh AP (1):
  net/bnxt: log a message when multicast promisc mode changes

Kishore Padmanabha (1):
  net/bnxt: refactor the ulp initialization

 drivers/net/bnxt/bnxt.h                |   97 +-
 drivers/net/bnxt/bnxt_cpr.h            |    5 +-
 drivers/net/bnxt/bnxt_ethdev.c         |  319 ++++-
 drivers/net/bnxt/bnxt_flow.c           |    2 +-
 drivers/net/bnxt/bnxt_hwrm.c           |  416 ++++++-
 drivers/net/bnxt/bnxt_hwrm.h           |   15 +
 drivers/net/bnxt/bnxt_ring.c           |   15 +-
 drivers/net/bnxt/bnxt_rxq.c            |    2 +-
 drivers/net/bnxt/bnxt_rxr.c            |   93 +-
 drivers/net/bnxt/bnxt_rxr.h            |   92 ++
 drivers/net/bnxt/bnxt_util.c           |   10 +
 drivers/net/bnxt/bnxt_util.h           |    1 +
 drivers/net/bnxt/bnxt_vnic.c           |   58 +-
 drivers/net/bnxt/bnxt_vnic.h           |    1 -
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 1531 ++++++++++++++++++++++--
 15 files changed, 2408 insertions(+), 249 deletions(-)

-- 
2.39.2 (Apple Git-143)




* [PATCH v2 01/14] net/bnxt: refactor epoch setting
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 02/14] net/bnxt: update HWRM API Ajit Khaparde
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev; +Cc: Damodharam Ammepalli

[-- Attachment #1: Type: text/plain, Size: 2035 bytes --]

Fix the epoch bit setting when ringing the doorbell.
The epoch bit must toggle between 0 and 1 each time the ring
indices wrap. Currently its value is anything but an alternating
0 and 1.

Remove the now-unnecessary db_epoch_shift field from the
bnxt_db_info structure.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
---
 drivers/net/bnxt/bnxt_cpr.h  | 5 ++---
 drivers/net/bnxt/bnxt_ring.c | 9 ++-------
 2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.h b/drivers/net/bnxt/bnxt_cpr.h
index 2de154322d..26e81a6a7e 100644
--- a/drivers/net/bnxt/bnxt_cpr.h
+++ b/drivers/net/bnxt/bnxt_cpr.h
@@ -53,11 +53,10 @@ struct bnxt_db_info {
 	bool                    db_64;
 	uint32_t		db_ring_mask;
 	uint32_t		db_epoch_mask;
-	uint32_t		db_epoch_shift;
 };
 
-#define DB_EPOCH(db, idx)	(((idx) & (db)->db_epoch_mask) <<	\
-				 ((db)->db_epoch_shift))
+#define DB_EPOCH(db, idx)	(!!((idx) & (db)->db_epoch_mask) <<	\
+				 DBR_EPOCH_SFT)
 #define DB_RING_IDX(db, idx)	(((idx) & (db)->db_ring_mask) |		\
 				 DB_EPOCH(db, idx))
 
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 34b2510d54..6dacb1b37f 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -371,9 +371,10 @@ static void bnxt_set_db(struct bnxt *bp,
 			db->db_key64 = DBR_PATH_L2;
 			break;
 		}
-		if (BNXT_CHIP_SR2(bp)) {
+		if (BNXT_CHIP_P7(bp)) {
 			db->db_key64 |= DBR_VALID;
 			db_offset = bp->legacy_db_size;
+			db->db_epoch_mask = ring_mask + 1;
 		} else if (BNXT_VF(bp)) {
 			db_offset = DB_VF_OFFSET;
 		}
@@ -397,12 +398,6 @@ static void bnxt_set_db(struct bnxt *bp,
 		db->db_64 = false;
 	}
 	db->db_ring_mask = ring_mask;
-
-	if (BNXT_CHIP_SR2(bp)) {
-		db->db_epoch_mask = db->db_ring_mask + 1;
-		db->db_epoch_shift = DBR_EPOCH_SFT -
-					rte_log2_u32(db->db_epoch_mask);
-	}
 }
 
 static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
-- 
2.39.2 (Apple Git-143)




* [PATCH v2 02/14] net/bnxt: update HWRM API
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 01/14] net/bnxt: refactor epoch setting Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 03/14] net/bnxt: log a message when multicast promisc mode changes Ajit Khaparde
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev

[-- Attachment #1: Type: text/plain, Size: 78110 bytes --]

Update the HWRM API to version 1.10.2.158.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c           |    3 -
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 1531 ++++++++++++++++++++++--
 2 files changed, 1429 insertions(+), 105 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 06f196760f..0a31b984e6 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -5175,9 +5175,6 @@ int bnxt_hwrm_set_ntuple_filter(struct bnxt *bp,
 	if (enables &
 	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_PORT_MASK)
 		req.dst_port_mask = rte_cpu_to_le_16(filter->dst_port_mask);
-	if (enables &
-	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_MIRROR_VNIC_ID)
-		req.mirror_vnic_id = filter->mirror_vnic_id;
 
 	req.enables = rte_cpu_to_le_32(enables);
 
diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index 9afdd056ce..65f3f0576b 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -1154,8 +1154,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 2
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 138
-#define HWRM_VERSION_STR "1.10.2.138"
+#define HWRM_VERSION_RSVD 158
+#define HWRM_VERSION_STR "1.10.2.158"
 
 /****************
  * hwrm_ver_get *
@@ -6329,19 +6329,14 @@ struct rx_pkt_v3_cmpl_hi {
 	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL \
 		(UINT32_C(0x5) << 9)
 	/*
-	 * Indicates that the IP checksum failed its check in the tunnel
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel header length. Valid for GTPv1-U packets.
 	 * header.
 	 */
-	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_CS_ERROR \
+	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_TOTAL_ERROR \
 		(UINT32_C(0x6) << 9)
-	/*
-	 * Indicates that the L4 checksum failed its check in the tunnel
-	 * header.
-	 */
-	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR \
-		(UINT32_C(0x7) << 9)
 	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_LAST \
-		RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR
+		RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_TOTAL_ERROR
 	/*
 	 * This indicates that there was an error in the inner
 	 * portion of the packet when this
@@ -6406,20 +6401,8 @@ struct rx_pkt_v3_cmpl_hi {
 	 */
 	#define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN \
 		(UINT32_C(0x8) << 12)
-	/*
-	 * Indicates that the IP checksum failed its check in the
-	 * inner header.
-	 */
-	#define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_IP_CS_ERROR \
-		(UINT32_C(0x9) << 12)
-	/*
-	 * Indicates that the L4 checksum failed its check in the
-	 * inner header.
-	 */
-	#define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR \
-		(UINT32_C(0xa) << 12)
 	#define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_LAST \
-		RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR
+		RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN
 	/*
 	 * This is data from the CFA block as indicated by the meta_format
 	 * field.
@@ -14157,7 +14140,7 @@ struct hwrm_func_qcaps_input {
 	uint8_t	unused_0[6];
 } __rte_packed;
 
-/* hwrm_func_qcaps_output (size:896b/112B) */
+/* hwrm_func_qcaps_output (size:1088b/136B) */
 struct hwrm_func_qcaps_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -14840,9 +14823,85 @@ struct hwrm_func_qcaps_output {
 	/*
 	 * When this bit is '1', it indicates that the hardware based
 	 * link aggregation group (L2 and RoCE) feature is supported.
+	 * This LAG feature is only supported on the THOR2 or newer NIC
+	 * with multiple ports.
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_HW_LAG_SUPPORTED \
 		UINT32_C(0x400)
+	/*
+	 * When this bit is '1', it indicates all contexts can be stored
+	 * on chip instead of using host based backing store memory.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_ON_CHIP_CTX_SUPPORTED \
+		UINT32_C(0x800)
+	/*
+	 * When this bit is '1', it indicates that the HW supports
+	 * using a steering tag in the memory transactions targeting
+	 * L2 or RoCE ring resources.
+	 * Steering Tags are system-specific values that must follow the
+	 * encoding requirements of the hardware platform. On devices that
+	 * support steering to multiple address domains, a value of 0 in
+	 * bit 0 of the steering tag specifies the address is associated
+	 * with the SOC address space, and a value of 1 indicates the
+	 * address is associated with the host address space.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_STEERING_TAG_SUPPORTED \
+		UINT32_C(0x1000)
+	/*
+	 * When this bit is '1', it indicates that driver can enable
+	 * support for an enhanced VF scale.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_ENHANCED_VF_SCALE_SUPPORTED \
+		UINT32_C(0x2000)
+	/*
+	 * When this bit is '1', it indicates that FW is capable of
+	 * supporting partition based XID management for KTLS/QUIC
+	 * Tx/Rx Key Context types.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_KEY_XID_PARTITION_SUPPORTED \
+		UINT32_C(0x4000)
+	/*
+	 * This bit is only valid on the condition that both
+	 * “ktls_supported” and “quic_supported” flags are set. When this
+	 * bit is valid, it conveys information below:
+	 * 1. If it is set to ‘1’, it indicates that the firmware allows the
+	 *    driver to run KTLS and QUIC concurrently;
+	 * 2. If it is cleared to ‘0’, it indicates that the driver has to
+	 *    make sure all crypto connections on all functions are of the
+	 *    same type, i.e., either KTLS or QUIC.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_CONCURRENT_KTLS_QUIC_SUPPORTED \
+		UINT32_C(0x8000)
+	/*
+	 * When this bit is '1', it indicates that the device supports
+	 * setting a cross TC cap on a scheduler queue.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_SCHQ_CROSS_TC_CAP_SUPPORTED \
+		UINT32_C(0x10000)
+	/*
+	 * When this bit is '1', it indicates that the device supports
+	 * setting a per TC cap on a scheduler queue.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_SCHQ_PER_TC_CAP_SUPPORTED \
+		UINT32_C(0x20000)
+	/*
+	 * When this bit is '1', it indicates that the device supports
+	 * setting a per TC reservation on a scheduler queues.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_SCHQ_PER_TC_RESERVATION_SUPPORTED \
+		UINT32_C(0x40000)
+	/*
+	 * When this bit is '1', it indicates that firmware supports query
+	 * for statistics related to invalid doorbell errors and drops.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_DB_ERROR_STATS_SUPPORTED \
+		UINT32_C(0x80000)
+	/*
+	 * When this bit is '1', it indicates that the device supports
+	 * VF RoCE resource management.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_ROCE_VF_RESOURCE_MGMT_SUPPORTED \
+		UINT32_C(0x100000)
 	uint16_t	tunnel_disable_flag;
 	/*
 	 * When this bit is '1', it indicates that the VXLAN parsing
@@ -14892,7 +14951,35 @@ struct hwrm_func_qcaps_output {
 	 */
 	#define HWRM_FUNC_QCAPS_OUTPUT_TUNNEL_DISABLE_FLAG_DISABLE_PPPOE \
 		UINT32_C(0x80)
-	uint8_t	unused_1[2];
+	uint16_t	xid_partition_cap;
+	/*
+	 * When this bit is '1', it indicates that FW is capable of
+	 * supporting partition based XID management for KTLS TX
+	 * key contexts.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_KTLS_TKC \
+		UINT32_C(0x1)
+	/*
+	 * When this bit is '1', it indicates that FW is capable of
+	 * supporting partition based XID management for KTLS RX
+	 * key contexts.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_KTLS_RKC \
+		UINT32_C(0x2)
+	/*
+	 * When this bit is '1', it indicates that FW is capable of
+	 * supporting partition based XID management for QUIC TX
+	 * key contexts.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_QUIC_TKC \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is '1', it indicates that FW is capable of
+	 * supporting partition based XID management for QUIC RX
+	 * key contexts.
+	 */
+	#define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_QUIC_RKC \
+		UINT32_C(0x8)
 	/*
 	 * This value uniquely identifies the hardware NIC used by the
 	 * function. The value returned will be the same for all functions.
@@ -14901,7 +14988,55 @@ struct hwrm_func_qcaps_output {
 	 * PCIe Capability Device Serial Number.
 	 */
 	uint8_t	device_serial_number[8];
-	uint8_t	unused_2[7];
+	/*
+	 * This field is only valid in the XID partition mode. It indicates
+	 * the number contexts per partition.
+	 */
+	uint16_t	ctxs_per_partition;
+	uint8_t	unused_2[2];
+	/*
+	 * The maximum number of address vectors that may be allocated across
+	 * all VFs for the function. This is valid only on the PF with VF RoCE
+	 * (SR-IOV) enabled. Returns zero if this command is called on a PF
+	 * with VF RoCE (SR-IOV) disabled or on a VF.
+	 */
+	uint32_t	roce_vf_max_av;
+	/*
+	 * The maximum number of completion queues that may be allocated across
+	 * all VFs for the function. This is valid only on the PF with VF RoCE
+	 * (SR-IOV) enabled. Returns zero if this command is called on a PF
+	 * with VF RoCE (SR-IOV) disabled or on a VF.
+	 */
+	uint32_t	roce_vf_max_cq;
+	/*
+	 * The maximum number of memory regions plus memory windows that may be
+	 * allocated across all VFs for the function. This is valid only on the
+	 * PF with VF RoCE (SR-IOV) enabled. Returns zero if this command is
+	 * called on a PF with VF RoCE (SR-IOV) disabled or on a VF.
+	 */
+	uint32_t	roce_vf_max_mrw;
+	/*
+	 * The maximum number of queue pairs that may be allocated across
+	 * all VFs for the function. This is valid only on the PF with VF RoCE
+	 * (SR-IOV) enabled. Returns zero if this command is called on a PF
+	 * with VF RoCE (SR-IOV) disabled or on a VF.
+	 */
+	uint32_t	roce_vf_max_qp;
+	/*
+	 * The maximum number of shared receive queues that may be allocated
+	 * across all VFs for the function. This is valid only on the PF with
+	 * VF RoCE (SR-IOV) enabled. Returns zero if this command is called on
+	 * a PF with VF RoCE (SR-IOV) disabled or on a VF.
+	 */
+	uint32_t	roce_vf_max_srq;
+	/*
+	 * The maximum number of GIDs that may be allocated across all VFs for
+	 * the function. This is valid only on the PF with VF RoCE (SR-IOV)
+	 * enabled. Returns zero if this command is called on a PF with VF RoCE
+	 * (SR-IOV) disabled or on a VF.
+	 */
+	uint32_t	roce_vf_max_gid;
+	uint8_t	unused_3[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -14959,7 +15094,7 @@ struct hwrm_func_qcfg_input {
 	uint8_t	unused_0[6];
 } __rte_packed;
 
-/* hwrm_func_qcfg_output (size:1024b/128B) */
+/* hwrm_func_qcfg_output (size:1280b/160B) */
 struct hwrm_func_qcfg_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -15604,11 +15739,68 @@ struct hwrm_func_qcfg_output {
 	 */
 	uint16_t	port_kdnet_fid;
 	uint8_t	unused_5[2];
-	/* Number of Tx Key Contexts allocated. */
-	uint32_t	alloc_tx_key_ctxs;
-	/* Number of Rx Key Contexts allocated. */
-	uint32_t	alloc_rx_key_ctxs;
-	uint8_t	unused_6[7];
+	/* Number of KTLS Tx Key Contexts allocated. */
+	uint32_t	num_ktls_tx_key_ctxs;
+	/* Number of KTLS Rx Key Contexts allocated. */
+	uint32_t	num_ktls_rx_key_ctxs;
+	/*
+	 * The LAG idx of this function. The lag_id is per port and the
+	 * valid lag_id is from 0 to 7, if there is no valid lag_id,
+	 * 0xff will be returned.
+	 * This HW lag id is used for Truflow programming only.
+	 */
+	uint8_t	lag_id;
+	/* Partition interface for this function. */
+	uint8_t	parif;
+	/*
+	 * The LAG ID of a hardware link aggregation group (LAG) whose
+	 * member ports include the port of this function.  The LAG was
+	 * previously created using HWRM_FUNC_LAG_CREATE.  If the port of this
+	 * function is not a member of any LAG, the fw_lag_id will be 0xff.
+	 */
+	uint8_t	fw_lag_id;
+	uint8_t	unused_6;
+	/* Number of QUIC Tx Key Contexts allocated. */
+	uint32_t	num_quic_tx_key_ctxs;
+	/* Number of QUIC Rx Key Contexts allocated. */
+	uint32_t	num_quic_rx_key_ctxs;
+	/*
+	 * Number of AVs per VF. Only valid for PF. This field is ignored
+	 * when the flag, l2_vf_resource_mgmt, is not set in RoCE
+	 * initialize_fw.
+	 */
+	uint32_t	roce_max_av_per_vf;
+	/*
+	 * Number of CQs per VF. Only valid for PF. This field is ignored when
+	 * the flag, l2_vf_resource_mgmt, is not set in RoCE initialize_fw.
+	 */
+	uint32_t	roce_max_cq_per_vf;
+	/*
+	 * Number of MR/MWs per VF. Only valid for PF. This field is ignored
+	 * when the flag, l2_vf_resource_mgmt, is not set in RoCE
+	 * initialize_fw.
+	 */
+	uint32_t	roce_max_mrw_per_vf;
+	/*
+	 * Number of QPs per VF. Only valid for PF. This field is ignored when
+	 * the flag, l2_vf_resource_mgmt, is not set in RoCE initialize_fw.
+	 */
+	uint32_t	roce_max_qp_per_vf;
+	/*
+	 * Number of SRQs per VF. Only valid for PF. This field is ignored
+	 * when the flag, l2_vf_resource_mgmt, is not set in RoCE
+	 * initialize_fw.
+	 */
+	uint32_t	roce_max_srq_per_vf;
+	/*
+	 * Number of GIDs per VF. Only valid for PF. This field is ignored
+	 * when the flag, l2_vf_resource_mgmt, is not set in RoCE
+	 * initialize_fw.
+	 */
+	uint32_t	roce_max_gid_per_vf;
+	/* Bitmap of context types that have XID partition enabled. */
+	uint16_t	xid_partition_cfg;
+	uint8_t	unused_7;
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -15624,7 +15816,7 @@ struct hwrm_func_qcfg_output {
  *****************/
 
 
-/* hwrm_func_cfg_input (size:1024b/128B) */
+/* hwrm_func_cfg_input (size:1280b/160B) */
 struct hwrm_func_cfg_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -15888,15 +16080,6 @@ struct hwrm_func_cfg_input {
 	 */
 	#define HWRM_FUNC_CFG_INPUT_FLAGS_BD_METADATA_DISABLE \
 		UINT32_C(0x40000000)
-	/*
-	 * If this bit is set to 1, the driver is requesting FW to see if
-	 * all the assets requested in this command (i.e. number of KTLS/
-	 * QUIC key contexts) are available. The firmware will return an
-	 * error if the requested assets are not available. The firmware
-	 * will NOT reserve the assets if they are available.
-	 */
-	#define HWRM_FUNC_CFG_INPUT_FLAGS_KEY_CTX_ASSETS_TEST \
-		UINT32_C(0x80000000)
 	uint32_t	enables;
 	/*
 	 * This bit must be '1' for the admin_mtu field to be
@@ -16080,16 +16263,16 @@ struct hwrm_func_cfg_input {
 	#define HWRM_FUNC_CFG_INPUT_ENABLES_HOST_MTU \
 		UINT32_C(0x20000000)
 	/*
-	 * This bit must be '1' for the number of Tx Key Contexts
-	 * field to be configured.
+	 * This bit must be '1' for the num_ktls_tx_key_ctxs field to be
+	 * configured.
 	 */
-	#define HWRM_FUNC_CFG_INPUT_ENABLES_TX_KEY_CTXS \
+	#define HWRM_FUNC_CFG_INPUT_ENABLES_KTLS_TX_KEY_CTXS \
 		UINT32_C(0x40000000)
 	/*
-	 * This bit must be '1' for the number of Rx Key Contexts
-	 * field to be configured.
+	 * This bit must be '1' for the num_ktls_rx_key_ctxs field to be
+	 * configured.
 	 */
-	#define HWRM_FUNC_CFG_INPUT_ENABLES_RX_KEY_CTXS \
+	#define HWRM_FUNC_CFG_INPUT_ENABLES_KTLS_RX_KEY_CTXS \
 		UINT32_C(0x80000000)
 	/*
 	 * This field can be used by the admin PF to configure
@@ -16542,19 +16725,93 @@ struct hwrm_func_cfg_input {
 	 * ring that is assigned to a function has a valid mtu.
 	 */
 	uint16_t	host_mtu;
-	uint8_t	unused_0[4];
+	uint32_t	flags2;
+	/*
+	 * If this bit is set to 1, the driver is requesting the firmware
+	 * to see if the assets (i.e., the number of KTLS key contexts)
+	 * requested in this command are available. The firmware will return
+	 * an error if the requested assets are not available. The firmware
+	 * will NOT reserve the assets if they are available.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_FLAGS2_KTLS_KEY_CTX_ASSETS_TEST \
+		UINT32_C(0x1)
+	/*
+	 * If this bit is set to 1, the driver is requesting the firmware
+	 * to see if the assets (i.e., the number of QUIC key contexts)
+	 * requested in this command are available. The firmware will return
+	 * an error if the requested assets are not available. The firmware
+	 * will NOT reserve the assets if they are available.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_FLAGS2_QUIC_KEY_CTX_ASSETS_TEST \
+		UINT32_C(0x2)
 	uint32_t	enables2;
 	/*
 	 * This bit must be '1' for the kdnet_mode field to be
 	 * configured.
 	 */
-	#define HWRM_FUNC_CFG_INPUT_ENABLES2_KDNET            UINT32_C(0x1)
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_KDNET \
+		UINT32_C(0x1)
 	/*
 	 * This bit must be '1' for the db_page_size field to be
 	 * configured. Legacy controller core FW may silently ignore
 	 * the db_page_size programming request through this command.
 	 */
-	#define HWRM_FUNC_CFG_INPUT_ENABLES2_DB_PAGE_SIZE     UINT32_C(0x2)
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_DB_PAGE_SIZE \
+		UINT32_C(0x2)
+	/*
+	 * This bit must be '1' for the num_quic_tx_key_ctxs field to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_QUIC_TX_KEY_CTXS \
+		UINT32_C(0x4)
+	/*
+	 * This bit must be '1' for the num_quic_rx_key_ctxs field to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_QUIC_RX_KEY_CTXS \
+		UINT32_C(0x8)
+	/*
+	 * This bit must be '1' for the roce_max_av_per_vf field to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_AV_PER_VF \
+		UINT32_C(0x10)
+	/*
+	 * This bit must be '1' for the roce_max_cq_per_vf field to be
+	 * configured. Only valid for PF.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_CQ_PER_VF \
+		UINT32_C(0x20)
+	/*
+	 * This bit must be '1' for the roce_max_mrw_per_vf field to be
+	 * configured. Only valid for PF.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_MRW_PER_VF \
+		UINT32_C(0x40)
+	/*
+	 * This bit must be '1' for the roce_max_qp_per_vf field to be
+	 * configured.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_QP_PER_VF \
+		UINT32_C(0x80)
+	/*
+	 * This bit must be '1' for the roce_max_srq_per_vf field to be
+	 * configured. Only valid for PF.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_SRQ_PER_VF \
+		UINT32_C(0x100)
+	/*
+	 * This bit must be '1' for the roce_max_gid_per_vf field to be
+	 * configured. Only valid for PF.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_GID_PER_VF \
+		UINT32_C(0x200)
+	/*
+	 * This bit must be '1' for the xid_partition_cfg field to be
+	 * configured. Only valid for PF.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_ENABLES2_XID_PARTITION_CFG \
+		UINT32_C(0x400)
 	/*
 	 * KDNet mode for the port for this function.  If NPAR is
 	 * also configured on this port, it takes precedence.  KDNet
@@ -16602,11 +16859,56 @@ struct hwrm_func_cfg_input {
 	#define HWRM_FUNC_CFG_INPUT_DB_PAGE_SIZE_LAST \
 		HWRM_FUNC_CFG_INPUT_DB_PAGE_SIZE_4MB
 	uint8_t	unused_1[2];
-	/* Number of Tx Key Contexts requested. */
-	uint32_t	num_tx_key_ctxs;
-	/* Number of Rx Key Contexts requested. */
-	uint32_t	num_rx_key_ctxs;
-	uint8_t	unused_2[4];
+	/* Number of KTLS Tx Key Contexts requested. */
+	uint32_t	num_ktls_tx_key_ctxs;
+	/* Number of KTLS Rx Key Contexts requested. */
+	uint32_t	num_ktls_rx_key_ctxs;
+	/* Number of QUIC Tx Key Contexts requested. */
+	uint32_t	num_quic_tx_key_ctxs;
+	/* Number of QUIC Rx Key Contexts requested. */
+	uint32_t	num_quic_rx_key_ctxs;
+	/* Number of AVs per VF. Only valid for PF. */
+	uint32_t	roce_max_av_per_vf;
+	/* Number of CQs per VF. Only valid for PF. */
+	uint32_t	roce_max_cq_per_vf;
+	/* Number of MR/MWs per VF. Only valid for PF. */
+	uint32_t	roce_max_mrw_per_vf;
+	/* Number of QPs per VF. Only valid for PF. */
+	uint32_t	roce_max_qp_per_vf;
+	/* Number of SRQs per VF. Only valid for PF. */
+	uint32_t	roce_max_srq_per_vf;
+	/* Number of GIDs per VF. Only valid for PF. */
+	uint32_t	roce_max_gid_per_vf;
+	/*
+	 * Bitmap of context kinds that have XID partition enabled.
+	 * Only valid for PF.
+	 */
+	uint16_t	xid_partition_cfg;
+	/*
+	 * When this bit is '1', it indicates that driver enables XID
+	 * partition on KTLS TX key contexts.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_KTLS_TKC \
+		UINT32_C(0x1)
+	/*
+	 * When this bit is '1', it indicates that driver enables XID
+	 * partition on KTLS RX key contexts.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_KTLS_RKC \
+		UINT32_C(0x2)
+	/*
+	 * When this bit is '1', it indicates that driver enables XID
+	 * partition on QUIC TX key contexts.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_QUIC_TKC \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is '1', it indicates that driver enables XID
+	 * partition on QUIC RX key contexts.
+	 */
+	#define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_QUIC_RKC \
+		UINT32_C(0x8)
+	uint16_t	unused_2;
 } __rte_packed;
 
 /* hwrm_func_cfg_output (size:128b/16B) */
@@ -22466,8 +22768,14 @@ struct hwrm_func_backing_store_cfg_v2_input {
 	 * which means "0" indicates the first instance. For backing
 	 * stores with single instance only, leave this field to 0.
 	 * 1. If the backing store type is MPC TQM ring, use the following
-	 *    instance value to MPC client mapping:
+	 *    instance value to map to MPC clients:
 	 *    TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4)
+	 * 2. If the backing store type is TBL_SCOPE, use the following
+	 *    instance value to map to table scope regions:
+	 *    RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3)
+	 * 3. If the backing store type is XID partition, use the following
+	 *    instance value to map to context types:
+	 *    KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3)
 	 */
 	uint16_t	instance;
 	/* Control flags. */
@@ -22578,7 +22886,8 @@ struct hwrm_func_backing_store_cfg_v2_input {
 	 * | SRQ  |             srq_split_entries                      |
 	 * | CQ   |             cq_split_entries                       |
 	 * | VINC |            vnic_split_entries                      |
-	 * | MRAV |            marv_split_entries                      |
+	 * | MRAV |            mrav_split_entries                      |
+	 * | TS   |             ts_split_entries                       |
 	 */
 	uint32_t	split_entry_0;
 	/* Split entry #1. */
@@ -22711,6 +23020,15 @@ struct hwrm_func_backing_store_qcfg_v2_input {
 	 * Instance of the backing store type. It is zero-based,
 	 * which means "0" indicates the first instance. For backing
 	 * stores with single instance only, leave this field to 0.
+	 * 1. If the backing store type is MPC TQM ring, use the following
+	 *    instance value to map to MPC clients:
+	 *    TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4)
+	 * 2. If the backing store type is TBL_SCOPE, use the following
+	 *    instance value to map to table scope regions:
+	 *    RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3)
+	 * 3. If the backing store type is XID partition, use the following
+	 *    instance value to map to context types:
+	 *    KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3)
 	 */
 	uint16_t	instance;
 	uint8_t	rsvd[4];
@@ -22779,6 +23097,15 @@ struct hwrm_func_backing_store_qcfg_v2_output {
 	 * Instance of the backing store type. It is zero-based,
 	 * which means "0" indicates the first instance. For backing
 	 * stores with single instance only, leave this field to 0.
+	 * 1. If the backing store type is MPC TQM ring, use the following
+	 *    instance value to map to MPC clients:
+	 *    TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4)
+	 * 2. If the backing store type is TBL_SCOPE, use the following
+	 *    instance value to map to table scope regions:
+	 *    RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3)
+	 * 3. If the backing store type is XID partition, use the following
+	 *    instance value to map to context types:
+	 *    KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3)
 	 */
 	uint16_t	instance;
 	/* Control flags. */
@@ -22855,7 +23182,8 @@ struct hwrm_func_backing_store_qcfg_v2_output {
 	 * | SRQ  |             srq_split_entries                      |
 	 * | CQ   |             cq_split_entries                       |
 	 * | VINC |            vnic_split_entries                      |
-	 * | MRAV |            marv_split_entries                      |
+	 * | MRAV |            mrav_split_entries                      |
+	 * | TS   |             ts_split_entries                       |
 	 */
 	uint32_t	split_entry_0;
 	/* Split entry #1. */
@@ -22876,17 +23204,20 @@ struct hwrm_func_backing_store_qcfg_v2_output {
 	uint8_t	valid;
 } __rte_packed;
 
-/* Common structure to cast QPC split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is QPC. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */
 /* qpc_split_entries (size:128b/16B) */
 struct qpc_split_entries {
 	/* Number of L2 QP backing store entries. */
 	uint32_t	qp_num_l2_entries;
 	/* Number of QP1 entries. */
 	uint32_t	qp_num_qp1_entries;
-	uint32_t	rsvd[2];
+	/*
+	 * Number of RoCE QP context entries required for this
+	 * function to support fast QP modify destroy feature.
+	 */
+	uint32_t	qp_num_fast_qpmd_entries;
+	uint32_t	rsvd;
 } __rte_packed;
 
-/* Common structure to cast SRQ split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is SRQ. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */
 /* srq_split_entries (size:128b/16B) */
 struct srq_split_entries {
 	/* Number of L2 SRQ backing store entries. */
@@ -22895,7 +23226,6 @@ struct srq_split_entries {
 	uint32_t	rsvd2[2];
 } __rte_packed;
 
-/* Common structure to cast CQ split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is CQ. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */
 /* cq_split_entries (size:128b/16B) */
 struct cq_split_entries {
 	/* Number of L2 CQ backing store entries. */
@@ -22904,7 +23234,6 @@ struct cq_split_entries {
 	uint32_t	rsvd2[2];
 } __rte_packed;
 
-/* Common structure to cast VNIC split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is VNIC. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */
 /* vnic_split_entries (size:128b/16B) */
 struct vnic_split_entries {
 	/* Number of VNIC backing store entries. */
@@ -22913,7 +23242,6 @@ struct vnic_split_entries {
 	uint32_t	rsvd2[2];
 } __rte_packed;
 
-/* Common structure to cast MRAV split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is MRAV. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */
 /* mrav_split_entries (size:128b/16B) */
 struct mrav_split_entries {
 	/* Number of AV backing store entries. */
@@ -22922,6 +23250,21 @@ struct mrav_split_entries {
 	uint32_t	rsvd2[2];
 } __rte_packed;
 
+/* ts_split_entries (size:128b/16B) */
+struct ts_split_entries {
+	/* Max number of TBL_SCOPE region entries (QCAPS). */
+	uint32_t	region_num_entries;
+	/* tsid to configure (CFG). */
+	uint8_t	tsid;
+	/*
+	 * Lookup static bucket count, expressed as a power-of-2
+	 * exponent. The array is indexed by enum cfa_dir.
+	 */
+	uint8_t	lkup_static_bkt_cnt_exp[2];
+	uint8_t	rsvd;
+	uint32_t	rsvd2[2];
+} __rte_packed;
+
 /************************************
  * hwrm_func_backing_store_qcaps_v2 *
  ************************************/
@@ -23112,12 +23455,36 @@ struct hwrm_func_backing_store_qcaps_v2_output {
 	 */
 	#define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_DRIVER_MANAGED_MEMORY \
 		UINT32_C(0x4)
+	/*
+	 * When set, it indicates the support of the following capability
+	 * that is specific to the QP type:
+	 * - For 2-port adapters, the ability to extend the RoCE QP
+	 *   entries configured on a PF during some network events such as
+	 *   Link Down. The count of these additional entries is included
+	 *   in the advertised 'max_num_entries'.
+	 * - The count of RoCE QP entries, derived from 'max_num_entries'
+	 *   (max_num_entries - qp_num_qp1_entries - qp_num_l2_entries -
+	 *   qp_num_fast_qpmd_entries; note qp_num_fast_qpmd_entries is
+	 *   always zero when QPs are pseudo-statically allocated), includes
+	 *   the count of QPs that can be migrated from the other PF (e.g.,
+	 *   during network link down). Therefore, during normal operation
+	 *   when both PFs are active, the supported number of RoCE QPs for
+	 *   each PF is half of the advertised value.
+	 */
+	#define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_ROCE_QP_PSEUDO_STATIC_ALLOC \
+		UINT32_C(0x8)
 	/*
 	 * Bit map of the valid instances associated with the
 	 * backing store type.
 	 * 1. If the backing store type is MPC TQM ring, use the following
-	 *    bit to MPC client mapping:
+	 *    bits to map to MPC clients:
 	 *    TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4)
+	 * 2. If the backing store type is TBL_SCOPE, use the following
+	 *    bits to map to table scope regions:
+	 *    RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3)
+	 * 3. If the backing store type is VF XID partition in-use table, use
+	 *    the following bits to map to context types:
+	 *    KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3)
 	 */
 	uint32_t	instance_bit_map;
 	/*
@@ -23164,7 +23531,43 @@ struct hwrm_func_backing_store_qcaps_v2_output {
 	 * |   4   | All four split entries have valid data.            |
 	 */
 	uint8_t	subtype_valid_cnt;
-	uint8_t	rsvd2;
+	/*
+	 * Bitmap that indicates whether each 'split_entry' denotes an
+	 * exact count (i.e., min = max). When an exact count bit is set,
+	 * the exact number of entries as advertised has to be
+	 * configured. Any 'split_entry' flagged as exact by this bitmap
+	 * must also be a valid split entry per 'subtype_valid_cnt'.
+	 */
+	uint8_t	exact_cnt_bit_map;
+	/*
+	 * When this bit is '1', it indicates 'split_entry_0' contains
+	 * an exact count.
+	 */
+	#define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_0_EXACT \
+		UINT32_C(0x1)
+	/*
+	 * When this bit is '1', it indicates 'split_entry_1' contains
+	 * an exact count.
+	 */
+	#define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_1_EXACT \
+		UINT32_C(0x2)
+	/*
+	 * When this bit is '1', it indicates 'split_entry_2' contains
+	 * an exact count.
+	 */
+	#define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_2_EXACT \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is '1', it indicates 'split_entry_3' contains
+	 * an exact count.
+	 */
+	#define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_3_EXACT \
+		UINT32_C(0x8)
+	#define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_UNUSED_MASK \
+		UINT32_C(0xf0)
+	#define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_UNUSED_SFT \
+		4
 	/*
 	 * Split entry #0. Note that the four split entries (as a group)
 	 * must be cast to a type-specific data structure first before
@@ -23176,7 +23579,8 @@ struct hwrm_func_backing_store_qcaps_v2_output {
 	 * | SRQ  |             srq_split_entries                      |
 	 * | CQ   |             cq_split_entries                       |
 	 * | VINC |            vnic_split_entries                      |
-	 * | MRAV |            marv_split_entries                      |
+	 * | MRAV |            mrav_split_entries                      |
+	 * | TS   |             ts_split_entries                       |
 	 */
 	uint32_t	split_entry_0;
 	/* Split entry #1. */
@@ -23471,7 +23875,9 @@ struct hwrm_func_dbr_pacing_qcfg_output {
 	 * dbr_throttling_aeq_arm_reg register.
 	 */
 	uint8_t	dbr_throttling_aeq_arm_reg_val;
-	uint8_t	unused_3[7];
+	uint8_t	unused_3[3];
+	/* This field indicates the maximum depth of the doorbell FIFO. */
+	uint32_t	dbr_stat_db_max_fifo_depth;
 	/*
 	 * Specifies primary function’s NQ ID.
 	 * A value of 0xFFFF FFFF indicates NQ ID is invalid.
@@ -25128,7 +25534,7 @@ struct hwrm_func_spd_qcfg_output {
  *********************/
 
 
-/* hwrm_port_phy_cfg_input (size:448b/56B) */
+/* hwrm_port_phy_cfg_input (size:512b/64B) */
 struct hwrm_port_phy_cfg_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -25505,6 +25911,18 @@ struct hwrm_port_phy_cfg_input {
 	 */
 	#define HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_PAM4_LINK_SPEED_MASK \
 		UINT32_C(0x1000)
+	/*
+	 * This bit must be '1' for the force_link_speeds2 field to be
+	 * configured.
+	 */
+	#define HWRM_PORT_PHY_CFG_INPUT_ENABLES_FORCE_LINK_SPEEDS2 \
+		UINT32_C(0x2000)
+	/*
+	 * This bit must be '1' for the auto_link_speeds2_mask field to
+	 * be configured.
+	 */
+	#define HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_LINK_SPEEDS2_MASK \
+		UINT32_C(0x4000)
 	/* Port ID of port that is to be configured. */
 	uint16_t	port_id;
 	/*
@@ -25808,7 +26226,99 @@ struct hwrm_port_phy_cfg_input {
 		UINT32_C(0x2)
 	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_PAM4_SPEED_MASK_200G \
 		UINT32_C(0x4)
-	uint8_t	unused_2[2];
+	/*
+	 * This is the speed that will be used if the force_link_speeds2
+	 * bit is '1'. If an unsupported speed is selected, an error
+	 * will be generated.
+	 */
+	uint16_t	force_link_speeds2;
+	/* 1Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_1GB \
+		UINT32_C(0xa)
+	/* 10Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_10GB \
+		UINT32_C(0x64)
+	/* 25Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_25GB \
+		UINT32_C(0xfa)
+	/* 40Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_40GB \
+		UINT32_C(0x190)
+	/* 50Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_50GB \
+		UINT32_C(0x1f4)
+	/* 100Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB \
+		UINT32_C(0x3e8)
+	/* 50Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_50GB_PAM4_56 \
+		UINT32_C(0x1f5)
+	/* 100Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_56 \
+		UINT32_C(0x3e9)
+	/* 200Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_56 \
+		UINT32_C(0x7d1)
+	/* 400Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_56 \
+		UINT32_C(0xfa1)
+	/* 100Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_112 \
+		UINT32_C(0x3ea)
+	/* 200Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_112 \
+		UINT32_C(0x7d2)
+	/* 400Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_112 \
+		UINT32_C(0xfa2)
+	#define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_LAST \
+		HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_112
+	/*
+	 * This is a mask of link speeds that will be used if the
+	 * auto_link_speeds2_mask bit in the "enables" field is '1'.
+	 * If an unsupported speed is enabled, an error will be generated.
+	 */
+	uint16_t	auto_link_speeds2_mask;
+	/* 1Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_1GB \
+		UINT32_C(0x1)
+	/* 10Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_10GB \
+		UINT32_C(0x2)
+	/* 25Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_25GB \
+		UINT32_C(0x4)
+	/* 40Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_40GB \
+		UINT32_C(0x8)
+	/* 50Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_50GB \
+		UINT32_C(0x10)
+	/* 100Gb link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_100GB \
+		UINT32_C(0x20)
+	/* 50Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_50GB_PAM4_56 \
+		UINT32_C(0x40)
+	/* 100Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_100GB_PAM4_56 \
+		UINT32_C(0x80)
+	/* 200Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_200GB_PAM4_56 \
+		UINT32_C(0x100)
+	/* 400Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_400GB_PAM4_56 \
+		UINT32_C(0x200)
+	/* 100Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_100GB_PAM4_112 \
+		UINT32_C(0x400)
+	/* 200Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_200GB_PAM4_112 \
+		UINT32_C(0x800)
+	/* 400Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_400GB_PAM4_112 \
+		UINT32_C(0x1000)
+	uint8_t	unused_2[6];
 } __rte_packed;
 
 /* hwrm_port_phy_cfg_output (size:128b/16B) */
@@ -25932,11 +26442,14 @@ struct hwrm_port_phy_qcfg_output {
 	/* NRZ signaling */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_NRZ \
 		UINT32_C(0x0)
-	/* PAM4 signaling */
+	/* PAM4-56 signaling */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4 \
 		UINT32_C(0x1)
+	/* PAM4-112 signaling */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4_112 \
+		UINT32_C(0x2)
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_LAST \
-		HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4
+		HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4_112
 	/* This value indicates the current active FEC mode. */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_ACTIVE_FEC_MASK \
 		UINT32_C(0xf0)
@@ -25992,6 +26505,8 @@ struct hwrm_port_phy_qcfg_output {
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB UINT32_C(0x3e8)
 	/* 200Gb link speed */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB UINT32_C(0x7d0)
+	/* 400Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_400GB UINT32_C(0xfa0)
 	/* 10Mb link speed */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10MB  UINT32_C(0xffff)
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_LAST \
@@ -26446,8 +26961,56 @@ struct hwrm_port_phy_qcfg_output {
 	/* 100G_BASEER2 */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASEER2 \
 		UINT32_C(0x27)
+	/* 100G_BASECR */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASECR \
+		UINT32_C(0x28)
+	/* 100G_BASESR */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASESR \
+		UINT32_C(0x29)
+	/* 100G_BASELR */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASELR \
+		UINT32_C(0x2a)
+	/* 100G_BASEER */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASEER \
+		UINT32_C(0x2b)
+	/* 200G_BASECR2 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASECR2 \
+		UINT32_C(0x2c)
+	/* 200G_BASESR2 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASESR2 \
+		UINT32_C(0x2d)
+	/* 200G_BASELR2 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASELR2 \
+		UINT32_C(0x2e)
+	/* 200G_BASEER2 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASEER2 \
+		UINT32_C(0x2f)
+	/* 400G_BASECR8 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASECR8 \
+		UINT32_C(0x30)
+	/* 400G_BASESR8 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASESR8 \
+		UINT32_C(0x31)
+	/* 400G_BASELR8 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASELR8 \
+		UINT32_C(0x32)
+	/* 400G_BASEER8 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASEER8 \
+		UINT32_C(0x33)
+	/* 400G_BASECR4 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASECR4 \
+		UINT32_C(0x34)
+	/* 400G_BASESR4 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASESR4 \
+		UINT32_C(0x35)
+	/* 400G_BASELR4 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASELR4 \
+		UINT32_C(0x36)
+	/* 400G_BASEER4 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASEER4 \
+		UINT32_C(0x37)
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_LAST \
-		HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASEER2
+		HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASEER4
 	/* This value represents a media type. */
 	uint8_t	media_type;
 	/* Unknown */
@@ -26855,6 +27418,12 @@ struct hwrm_port_phy_qcfg_output {
 	 */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_OPTION_FLAGS_SIGNAL_MODE_KNOWN \
 		UINT32_C(0x2)
+	/*
+	 * When this bit is '1', speeds2 fields are used to get
+	 * speed details.
+	 */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_OPTION_FLAGS_SPEEDS2_SUPPORTED \
+		UINT32_C(0x4)
 	/*
 	 * Up to 16 bytes of null padded ASCII string representing
 	 * PHY vendor.
@@ -26933,7 +27502,162 @@ struct hwrm_port_phy_qcfg_output {
 	uint8_t	link_down_reason;
 	/* Remote fault */
 	#define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_DOWN_REASON_RF     UINT32_C(0x1)
-	uint8_t	unused_0[7];
+	/*
+	 * The supported speeds for the port. This is a bit mask.
+	 * For each speed that is supported, the corresponding
+	 * bit will be set to '1'. This is valid only if speeds2_supported
+	 * is set in option_flags.
+	 */
+	uint16_t	support_speeds2;
+	/* 1Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_1GB \
+		UINT32_C(0x1)
+	/* 10Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_10GB \
+		UINT32_C(0x2)
+	/* 25Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_25GB \
+		UINT32_C(0x4)
+	/* 40Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_40GB \
+		UINT32_C(0x8)
+	/* 50Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_50GB \
+		UINT32_C(0x10)
+	/* 100Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB \
+		UINT32_C(0x20)
+	/* 50Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_50GB_PAM4_56 \
+		UINT32_C(0x40)
+	/* 100Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB_PAM4_56 \
+		UINT32_C(0x80)
+	/* 200Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_200GB_PAM4_56 \
+		UINT32_C(0x100)
+	/* 400Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_400GB_PAM4_56 \
+		UINT32_C(0x200)
+	/* 100Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB_PAM4_112 \
+		UINT32_C(0x400)
+	/* 200Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_200GB_PAM4_112 \
+		UINT32_C(0x800)
+	/* 400Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_400GB_PAM4_112 \
+		UINT32_C(0x1000)
+	/* 800Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_800GB_PAM4_112 \
+		UINT32_C(0x2000)
+	/*
+	 * Current setting of forced link speed. When the link speed is not
+	 * being forced, this value shall be set to 0.
+	 * This field is valid only if speeds2_supported is set in option_flags.
+	 */
+	uint16_t	force_link_speeds2;
+	/* 1Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_1GB \
+		UINT32_C(0xa)
+	/* 10Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_10GB \
+		UINT32_C(0x64)
+	/* 25Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_25GB \
+		UINT32_C(0xfa)
+	/* 40Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_40GB \
+		UINT32_C(0x190)
+	/* 50Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_50GB \
+		UINT32_C(0x1f4)
+	/* 100Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_100GB \
+		UINT32_C(0x3e8)
+	/* 50Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_50GB_PAM4_56 \
+		UINT32_C(0x1f5)
+	/* 100Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_56 \
+		UINT32_C(0x3e9)
+	/* 200Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_56 \
+		UINT32_C(0x7d1)
+	/* 400Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_56 \
+		UINT32_C(0xfa1)
+	/* 100Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_112 \
+		UINT32_C(0x3ea)
+	/* 200Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_112 \
+		UINT32_C(0x7d2)
+	/* 400Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_112 \
+		UINT32_C(0xfa2)
+	/* 800Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_800GB_PAM4_112 \
+		UINT32_C(0x1f42)
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_LAST \
+		HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_800GB_PAM4_112
+	/*
+	 * Current setting of auto_link speed_mask that is used to advertise
+	 * speeds during autonegotiation.
+	 * This field is only valid when auto_mode is set to "mask"
+	 * and if speeds2_supported is set in option_flags.
+	 * The speeds specified in this field shall be a subset of
+	 * supported speeds on this port.
+	 */
+	uint16_t	auto_link_speeds2;
+	/* 1Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_1GB \
+		UINT32_C(0x1)
+	/* 10Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_10GB \
+		UINT32_C(0x2)
+	/* 25Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_25GB \
+		UINT32_C(0x4)
+	/* 40Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_40GB \
+		UINT32_C(0x8)
+	/* 50Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_50GB \
+		UINT32_C(0x10)
+	/* 100Gb link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_100GB \
+		UINT32_C(0x20)
+	/* 50Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_50GB_PAM4_56 \
+		UINT32_C(0x40)
+	/* 100Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_100GB_PAM4_56 \
+		UINT32_C(0x80)
+	/* 200Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_200GB_PAM4_56 \
+		UINT32_C(0x100)
+	/* 400Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_400GB_PAM4_56 \
+		UINT32_C(0x200)
+	/* 100Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_100GB_PAM4_112 \
+		UINT32_C(0x400)
+	/* 200Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_200GB_PAM4_112 \
+		UINT32_C(0x800)
+	/* 400Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_400GB_PAM4_112 \
+		UINT32_C(0x1000)
+	/* 800Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_800GB_PAM4_112 \
+		UINT32_C(0x2000)
+	/*
+	 * This field indicates the number of lanes used to transfer
+	 * data. If the link is down, the value is zero.
+	 * This is valid only if speeds2_supported is set in option_flags.
+	 */
+	uint8_t	active_lanes;
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -28381,7 +29105,7 @@ struct tx_port_stats_ext {
 } __rte_packed;
 
 /* Port Rx Statistics extended Format */
-/* rx_port_stats_ext (size:3776b/472B) */
+/* rx_port_stats_ext (size:3904b/488B) */
 struct rx_port_stats_ext {
 	/* Number of times link state changed to down */
 	uint64_t	link_down_events;
@@ -28462,8 +29186,9 @@ struct rx_port_stats_ext {
 	/* The number of events where the port receive buffer was over 85% full */
 	uint64_t	rx_buffer_passed_threshold;
 	/*
-	 * The number of symbol errors that wasn't corrected by FEC correction
-	 * algorithm
+	 * This counter represents uncorrected symbol errors post-FEC and may not
+	 * be populated in all cases. Each uncorrected FEC block may result in
+	 * one or more symbol errors.
 	 */
 	uint64_t	rx_pcs_symbol_err;
 	/* The number of corrected bits on the port according to active FEC */
@@ -28507,6 +29232,21 @@ struct rx_port_stats_ext {
 	 * FEC function in the PHY
 	 */
 	uint64_t	rx_fec_uncorrectable_blocks;
+	/*
+	 * Total number of packets that are dropped due to not matching
+	 * any RX filter rules. This value is zero on controllers that
+	 * do not support this counter. This counter is per controller;
+	 * firmware reports the same value on all active ports. This
+	 * counter does not include packets discarded because no buffer
+	 * was available.
+	 */
+	uint64_t	rx_filter_miss;
+	/*
+	 * This field represents the number of FEC symbol errors,
+	 * counted once for each 10-bit symbol corrected by the FEC
+	 * block. rx_fec_corrected_blocks is incremented when all
+	 * symbol errors in a codeword are corrected.
+	 */
+	uint64_t	rx_fec_symbol_err;
 } __rte_packed;
 
 /*
@@ -29435,7 +30175,7 @@ struct hwrm_port_phy_qcaps_input {
 	uint8_t	unused_0[6];
 } __rte_packed;
 
-/* hwrm_port_phy_qcaps_output (size:256b/32B) */
+/* hwrm_port_phy_qcaps_output (size:320b/40B) */
 struct hwrm_port_phy_qcaps_output {
 	/* The specific error status for the command. */
 	uint16_t	error_code;
@@ -29725,6 +30465,13 @@ struct hwrm_port_phy_qcaps_output {
 	 */
 	#define HWRM_PORT_PHY_QCAPS_OUTPUT_FLAGS2_BANK_ADDR_SUPPORTED \
 		UINT32_C(0x4)
+	/*
+	 * If set to 1, then this field indicates that the
+	 * supported_speeds2 fields are to be used in lieu of all
+	 * supported_speeds variants.
+	 */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_FLAGS2_SPEEDS2_SUPPORTED \
+		UINT32_C(0x8)
 	/*
 	 * Number of internal ports for this device. This field allows the FW
 	 * to advertise how many internal ports are present. Manufacturing
@@ -29733,6 +30480,108 @@ struct hwrm_port_phy_qcaps_output {
 	 * option "HPTN_MODE" is set to 1.
 	 */
 	uint8_t	internal_port_cnt;
+	uint8_t	unused_0;
+	/*
+	 * This is a bit mask to indicate what speeds are supported
+	 * as forced speeds on this link.
+	 * For each speed that can be forced on this link, the
+	 * corresponding mask bit shall be set to '1'.
+	 * This field is valid only if the speeds2_supported bit is set
+	 * in flags2.
+	 */
+	uint16_t	supported_speeds2_force_mode;
+	/* 1Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_1GB \
+		UINT32_C(0x1)
+	/* 10Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_10GB \
+		UINT32_C(0x2)
+	/* 25Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_25GB \
+		UINT32_C(0x4)
+	/* 40Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_40GB \
+		UINT32_C(0x8)
+	/* 50Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_50GB \
+		UINT32_C(0x10)
+	/* 100Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_100GB \
+		UINT32_C(0x20)
+	/* 50Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_50GB_PAM4_56 \
+		UINT32_C(0x40)
+	/* 100Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_100GB_PAM4_56 \
+		UINT32_C(0x80)
+	/* 200Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_200GB_PAM4_56 \
+		UINT32_C(0x100)
+	/* 400Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_400GB_PAM4_56 \
+		UINT32_C(0x200)
+	/* 100Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_100GB_PAM4_112 \
+		UINT32_C(0x400)
+	/* 200Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_200GB_PAM4_112 \
+		UINT32_C(0x800)
+	/* 400Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_400GB_PAM4_112 \
+		UINT32_C(0x1000)
+	/* 800Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_800GB_PAM4_112 \
+		UINT32_C(0x2000)
+	/*
+	 * This is a bit mask to indicate what speeds are supported
+	 * for autonegotiation on this link.
+	 * For each speed that can be autonegotiated on this link, the
+	 * corresponding mask bit shall be set to '1'.
+	 * This field is valid only if the speeds2_supported bit is set
+	 * in flags2.
+	 */
+	uint16_t	supported_speeds2_auto_mode;
+	/* 1Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_1GB \
+		UINT32_C(0x1)
+	/* 10Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_10GB \
+		UINT32_C(0x2)
+	/* 25Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_25GB \
+		UINT32_C(0x4)
+	/* 40Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_40GB \
+		UINT32_C(0x8)
+	/* 50Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_50GB \
+		UINT32_C(0x10)
+	/* 100Gb link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_100GB \
+		UINT32_C(0x20)
+	/* 50Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_50GB_PAM4_56 \
+		UINT32_C(0x40)
+	/* 100Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_100GB_PAM4_56 \
+		UINT32_C(0x80)
+	/* 200Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_200GB_PAM4_56 \
+		UINT32_C(0x100)
+	/* 400Gb (PAM4-56: 50G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_400GB_PAM4_56 \
+		UINT32_C(0x200)
+	/* 100Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_100GB_PAM4_112 \
+		UINT32_C(0x400)
+	/* 200Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_200GB_PAM4_112 \
+		UINT32_C(0x800)
+	/* 400Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_400GB_PAM4_112 \
+		UINT32_C(0x1000)
+	/* 800Gb (PAM4-112: 100G per lane) link speed */
+	#define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_800GB_PAM4_112 \
+		UINT32_C(0x2000)
+	uint8_t	unused_1[3];
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -38132,6 +38981,9 @@ struct hwrm_vnic_qcaps_output {
 	/* When this bit is '1' FW supports VNIC hash mode. */
 	#define HWRM_VNIC_QCAPS_OUTPUT_FLAGS_VNIC_RSS_HASH_MODE_CAP \
 		UINT32_C(0x10000000)
+	/* When this bit is set to '1', hardware supports tunnel TPA. */
+	#define HWRM_VNIC_QCAPS_OUTPUT_FLAGS_HW_TUNNEL_TPA_CAP \
+		UINT32_C(0x20000000)
 	/*
 	 * This field advertises the maximum concurrent TPA aggregations
 	 * supported by the VNIC on new devices that support TPA v2 or v3.
@@ -38154,7 +39006,7 @@ struct hwrm_vnic_qcaps_output {
  *********************/
 
 
-/* hwrm_vnic_tpa_cfg_input (size:320b/40B) */
+/* hwrm_vnic_tpa_cfg_input (size:384b/48B) */
 struct hwrm_vnic_tpa_cfg_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -38276,6 +39128,12 @@ struct hwrm_vnic_tpa_cfg_input {
 	#define HWRM_VNIC_TPA_CFG_INPUT_ENABLES_MAX_AGG_TIMER     UINT32_C(0x4)
 	/* deprecated bit.  Do not use!!! */
 	#define HWRM_VNIC_TPA_CFG_INPUT_ENABLES_MIN_AGG_LEN       UINT32_C(0x8)
+	/*
+	 * This bit must be '1' for the tnl_tpa_en_bitmap field to be
+	 * configured.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_ENABLES_TNL_TPA_EN \
+		UINT32_C(0x10)
 	/* Logical vnic ID */
 	uint16_t	vnic_id;
 	/*
@@ -38332,6 +39190,117 @@ struct hwrm_vnic_tpa_cfg_input {
 	 * and can be queried using hwrm_vnic_tpa_qcfg.
 	 */
 	uint32_t	min_agg_len;
+	/*
+	 * If the device supports hardware tunnel TPA feature, as indicated by
+	 * the HWRM_VNIC_QCAPS command, this field is used to configure the
+	 * tunnel types to be enabled. Each bit corresponds to a specific
+	 * tunnel type. If a bit is set to '1', then the associated tunnel
+	 * type is enabled; otherwise, it is disabled.
+	 */
+	uint32_t	tnl_tpa_en_bitmap;
+	/*
+	 * When this bit is '1', enable VXLAN encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN \
+		UINT32_C(0x1)
+	/*
+	 * When this bit is '1', enable GENEVE encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GENEVE \
+		UINT32_C(0x2)
+	/*
+	 * When this bit is '1', enable NVGRE encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_NVGRE \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is '1', enable GRE encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GRE \
+		UINT32_C(0x8)
+	/*
+	 * When this bit is '1', enable IPV4 encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_IPV4 \
+		UINT32_C(0x10)
+	/*
+	 * When this bit is '1', enable IPV6 encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_IPV6 \
+		UINT32_C(0x20)
+	/*
+	 * When this bit is '1', enable VXLAN_GPE encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN_GPE \
+		UINT32_C(0x40)
+	/*
+	 * When this bit is '1', enable VXLAN_CUSTOMER1 encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN_CUST1 \
+		UINT32_C(0x80)
+	/*
+	 * When this bit is '1', enable GRE_CUSTOMER1 encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GRE_CUST1 \
+		UINT32_C(0x100)
+	/*
+	 * When this bit is '1', enable UPAR1 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR1 \
+		UINT32_C(0x200)
+	/*
+	 * When this bit is '1', enable UPAR2 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR2 \
+		UINT32_C(0x400)
+	/*
+	 * When this bit is '1', enable UPAR3 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR3 \
+		UINT32_C(0x800)
+	/*
+	 * When this bit is '1', enable UPAR4 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR4 \
+		UINT32_C(0x1000)
+	/*
+	 * When this bit is '1', enable UPAR5 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR5 \
+		UINT32_C(0x2000)
+	/*
+	 * When this bit is '1', enable UPAR6 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR6 \
+		UINT32_C(0x4000)
+	/*
+	 * When this bit is '1', enable UPAR7 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR7 \
+		UINT32_C(0x8000)
+	/*
+	 * When this bit is '1', enable UPAR8 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR8 \
+		UINT32_C(0x10000)
+	uint8_t	unused_1[4];
 } __rte_packed;
 
 /* hwrm_vnic_tpa_cfg_output (size:128b/16B) */
@@ -38355,6 +39324,288 @@ struct hwrm_vnic_tpa_cfg_output {
 	uint8_t	valid;
 } __rte_packed;
 
+/**********************
+ * hwrm_vnic_tpa_qcfg *
+ **********************/
+
+
+/* hwrm_vnic_tpa_qcfg_input (size:192b/24B) */
+struct hwrm_vnic_tpa_qcfg_input {
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/*
+	 * The completion ring to send the completion event on. This should
+	 * be the NQ ID returned from the `nq_alloc` HWRM command.
+	 */
+	uint16_t	cmpl_ring;
+	/*
+	 * The sequence ID is used by the driver for tracking multiple
+	 * commands. This ID is treated as opaque data by the firmware and
+	 * the value is returned in the `hwrm_resp_hdr` upon completion.
+	 */
+	uint16_t	seq_id;
+	/*
+	 * The target ID of the command:
+	 * * 0x0-0xFFF8 - The function ID
+	 * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors
+	 * * 0xFFFD - Reserved for user-space HWRM interface
+	 * * 0xFFFF - HWRM
+	 */
+	uint16_t	target_id;
+	/*
+	 * A physical address pointer pointing to a host buffer to which the
+	 * command's response data will be written. This can be either a host
+	 * physical address (HPA) or a guest physical address (GPA) and must
+	 * point to a physically contiguous block of memory.
+	 */
+	uint64_t	resp_addr;
+	/* Logical vnic ID */
+	uint16_t	vnic_id;
+	uint8_t	unused_0[6];
+} __rte_packed;
+
+/* hwrm_vnic_tpa_qcfg_output (size:256b/32B) */
+struct hwrm_vnic_tpa_qcfg_output {
+	/* The specific error status for the command. */
+	uint16_t	error_code;
+	/* The HWRM command request type. */
+	uint16_t	req_type;
+	/* The sequence ID from the original command. */
+	uint16_t	seq_id;
+	/* The length of the response data in number of bytes. */
+	uint16_t	resp_len;
+	uint32_t	flags;
+	/*
+	 * When this bit is '1', the VNIC is configured to
+	 * perform transparent packet aggregation (TPA) of
+	 * non-tunneled TCP packets.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_TPA \
+		UINT32_C(0x1)
+	/*
+	 * When this bit is '1', the VNIC is configured to
+	 * perform transparent packet aggregation (TPA) of
+	 * tunneled TCP packets.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_ENCAP_TPA \
+		UINT32_C(0x2)
+	/*
+	 * When this bit is '1', the VNIC is configured to
+	 * perform transparent packet aggregation (TPA) according
+	 * to Windows Receive Segment Coalescing (RSC) rules.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_RSC_WND_UPDATE \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is '1', the VNIC is configured to
+	 * perform transparent packet aggregation (TPA) according
+	 * to Linux Generic Receive Offload (GRO) rules.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_GRO \
+		UINT32_C(0x8)
+	/*
+	 * When this bit is '1', the VNIC is configured to
+	 * perform transparent packet aggregation (TPA) for TCP
+	 * packets with IP ECN set to non-zero.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_AGG_WITH_ECN \
+		UINT32_C(0x10)
+	/*
+	 * When this bit is '1', the VNIC is configured to
+	 * perform transparent packet aggregation (TPA) for
+	 * GRE tunneled TCP packets only if all packets have the
+	 * same GRE sequence.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_AGG_WITH_SAME_GRE_SEQ \
+		UINT32_C(0x20)
+	/*
+	 * When this bit is '1' and the GRO mode is enabled,
+	 * the VNIC is configured to
+	 * perform transparent packet aggregation (TPA) for
+	 * TCP/IPv4 packets with consecutively increasing IPIDs.
+	 * In other words, the last packet that is being
+	 * aggregated to an already existing aggregation context
+	 * shall have IPID 1 more than the IPID of the last packet
+	 * that was aggregated in that aggregation context.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_GRO_IPID_CHECK \
+		UINT32_C(0x40)
+	/*
+	 * When this bit is '1' and the GRO mode is enabled,
+	 * the VNIC is configured to
+	 * perform transparent packet aggregation (TPA) for
+	 * TCP packets with the same TTL (IPv4) or Hop limit (IPv6)
+	 * value.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_GRO_TTL_CHECK \
+		UINT32_C(0x80)
+	/*
+	 * This is the maximum number of TCP segments that can
+	 * be aggregated (unit is Log2). Max value is 31.
+	 */
+	uint16_t	max_agg_segs;
+	/* 1 segment */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_1   UINT32_C(0x0)
+	/* 2 segments */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_2   UINT32_C(0x1)
+	/* 4 segments */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_4   UINT32_C(0x2)
+	/* 8 segments */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_8   UINT32_C(0x3)
+	/* Any segment size larger than this is not valid */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_MAX UINT32_C(0x1f)
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_LAST \
+		HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_MAX
+	/*
+	 * This is the maximum number of aggregations this VNIC is
+	 * allowed (unit is Log2). Max value is 7.
+	 */
+	uint16_t	max_aggs;
+	/* 1 aggregation */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_1   UINT32_C(0x0)
+	/* 2 aggregations */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_2   UINT32_C(0x1)
+	/* 4 aggregations */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_4   UINT32_C(0x2)
+	/* 8 aggregations */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_8   UINT32_C(0x3)
+	/* 16 aggregations */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_16  UINT32_C(0x4)
+	/* Any aggregation size larger than this is not valid */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_MAX UINT32_C(0x7)
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_LAST \
+		HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_MAX
+	/*
+	 * This is the maximum amount of time allowed for
+	 * an aggregation context to complete after it was initiated.
+	 */
+	uint32_t	max_agg_timer;
+	/*
+	 * This is the minimum amount of payload length required to
+	 * start an aggregation context.
+	 */
+	uint32_t	min_agg_len;
+	/*
+	 * If the device supports hardware tunnel TPA feature, as indicated by
+	 * the HWRM_VNIC_QCAPS command, this field conveys the bitmap of the
+	 * tunnel types that have been configured. Each bit corresponds to a
+	 * specific tunnel type. If a bit is set to '1', then the associated
+	 * tunnel type is enabled; otherwise, it is disabled.
+	 */
+	uint32_t	tnl_tpa_en_bitmap;
+	/*
+	 * When this bit is '1', enable VXLAN encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_VXLAN \
+		UINT32_C(0x1)
+	/*
+	 * When this bit is set to '1', enable GENEVE encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_GENEVE \
+		UINT32_C(0x2)
+	/*
+	 * When this bit is set to '1', enable NVGRE encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_NVGRE \
+		UINT32_C(0x4)
+	/*
+	 * When this bit is set to '1', enable GRE encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_GRE \
+		UINT32_C(0x8)
+	/*
+	 * When this bit is set to '1', enable IPV4 encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_IPV4 \
+		UINT32_C(0x10)
+	/*
+	 * When this bit is set to '1', enable IPV6 encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_IPV6 \
+		UINT32_C(0x20)
+	/*
+	 * When this bit is '1', enable VXLAN_GPE encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_VXLAN_GPE \
+		UINT32_C(0x40)
+	/*
+	 * When this bit is '1', enable VXLAN_CUSTOMER1 encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_VXLAN_CUST1 \
+		UINT32_C(0x80)
+	/*
+	 * When this bit is '1', enable GRE_CUSTOMER1 encapsulated packets
+	 * for aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_GRE_CUST1 \
+		UINT32_C(0x100)
+	/*
+	 * When this bit is '1', enable UPAR1 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR1 \
+		UINT32_C(0x200)
+	/*
+	 * When this bit is '1', enable UPAR2 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR2 \
+		UINT32_C(0x400)
+	/*
+	 * When this bit is '1', enable UPAR3 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR3 \
+		UINT32_C(0x800)
+	/*
+	 * When this bit is '1', enable UPAR4 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR4 \
+		UINT32_C(0x1000)
+	/*
+	 * When this bit is '1', enable UPAR5 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR5 \
+		UINT32_C(0x2000)
+	/*
+	 * When this bit is '1', enable UPAR6 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR6 \
+		UINT32_C(0x4000)
+	/*
+	 * When this bit is '1', enable UPAR7 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR7 \
+		UINT32_C(0x8000)
+	/*
+	 * When this bit is '1', enable UPAR8 encapsulated packets for
+	 * aggregation.
+	 */
+	#define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR8 \
+		UINT32_C(0x10000)
+	uint8_t	unused_0[3];
+	/*
+	 * This field is used in Output records to indicate that the output
+	 * is completely written to RAM.  This field should be read as '1'
+	 * to indicate that the output has been completely written.
+	 * When writing a command completion or response to an internal processor,
+	 * the order of writes has to be such that this field is written last.
+	 */
+	uint8_t	valid;
+} __rte_packed;
+
 /*********************
  * hwrm_vnic_rss_cfg *
  *********************/
@@ -38572,6 +39823,12 @@ struct hwrm_vnic_rss_cfg_input {
 	 */
 	#define HWRM_VNIC_RSS_CFG_INPUT_FLAGS_HASH_TYPE_EXCLUDE \
 		UINT32_C(0x2)
+	/*
+	 * When this bit is '1', it indicates that the host drivers
+	 * support setting ipsec hash_types.
+	 */
+	#define HWRM_VNIC_RSS_CFG_INPUT_FLAGS_IPSEC_HASH_TYPE_CFG_SUPPORT \
+		UINT32_C(0x4)
 	uint8_t	ring_select_mode;
 	/*
 	 * In this mode, HW uses Toeplitz algorithm and provided Toeplitz
@@ -39439,6 +40696,12 @@ struct hwrm_ring_alloc_input {
 	 */
 	#define HWRM_RING_ALLOC_INPUT_ENABLES_MPC_CHNLS_TYPE \
 		UINT32_C(0x400)
+	/*
+	 * This bit must be '1' for the steering_tag field to be
+	 * configured.
+	 */
+	#define HWRM_RING_ALLOC_INPUT_ENABLES_STEERING_TAG_VALID \
+		UINT32_C(0x800)
 	/* Ring Type. */
 	uint8_t	ring_type;
 	/* L2 Completion Ring (CR) */
@@ -39664,7 +40927,8 @@ struct hwrm_ring_alloc_input {
 	#define HWRM_RING_ALLOC_INPUT_RING_ARB_CFG_ARB_POLICY_PARAM_MASK \
 		UINT32_C(0xff00)
 	#define HWRM_RING_ALLOC_INPUT_RING_ARB_CFG_ARB_POLICY_PARAM_SFT 8
-	uint16_t	unused_3;
+	/* Steering tag to use for memory transactions. */
+	uint16_t	steering_tag;
 	/*
 	 * This field is reserved for the future use.
 	 * It shall be set to 0.
@@ -43871,7 +45135,10 @@ struct hwrm_cfa_ntuple_filter_alloc_input {
 	 * Setting of this flag indicates that the dst_id field contains RFS
 	 * ring table index. If this is not set it indicates dst_id is VNIC
 	 * or VPORT or function ID.  Note dest_fid and dest_rfs_ring_idx
-	 * can’t be set at the same time.
+	 * can't be set at the same time.  Updated drivers should pass ring
+	 * idx in the rfs_ring_tbl_idx field if the firmware indicates
+	 * support for the new field in the HWRM_CFA_ADV_FLOW_MGMT_QCAPS
+	 * response.
 	 */
 	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DEST_RFS_RING_IDX \
 		UINT32_C(0x20)
@@ -43986,10 +45253,7 @@ struct hwrm_cfa_ntuple_filter_alloc_input {
 	 */
 	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_ID \
 		UINT32_C(0x10000)
-	/*
-	 * This bit must be '1' for the mirror_vnic_id field to be
-	 * configured.
-	 */
+	/* This flag is deprecated. */
 	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_MIRROR_VNIC_ID \
 		UINT32_C(0x20000)
 	/*
@@ -43998,7 +45262,10 @@ struct hwrm_cfa_ntuple_filter_alloc_input {
 	 */
 	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_MACADDR \
 		UINT32_C(0x40000)
-	/* This flag is deprecated. */
+	/*
+	 * This bit must be '1' for the rfs_ring_tbl_idx field to
+	 * be configured.
+	 */
 	#define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_RFS_RING_TBL_IDX \
 		UINT32_C(0x80000)
 	/*
@@ -44069,10 +45336,12 @@ struct hwrm_cfa_ntuple_filter_alloc_input {
 	 */
 	uint16_t	dst_id;
 	/*
-	 * Logical VNIC ID of the VNIC where traffic is
-	 * mirrored.
+	 * If set, this value shall represent the ring table
+	 * index for receive flow steering. Note that this offset
+	 * was formerly used for the mirror_vnic_id field, which
+	 * is no longer supported.
 	 */
-	uint16_t	mirror_vnic_id;
+	uint16_t	rfs_ring_tbl_idx;
 	/*
 	 * This value indicates the tunnel type for this filter.
 	 * If this field is not specified, then the filter shall
@@ -50258,6 +51527,13 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_output {
 	 */
 	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_EXT_IP_PROTO_SUPPORTED \
 		UINT32_C(0x100000)
+	/*
+	 * Value of 1 indicates that firmware supports setting of
+	 * rfs_ring_tbl_idx (new offset) in HWRM_CFA_NTUPLE_ALLOC command.
+	 * Value of 0 indicates ring tbl idx should be passed using dst_id.
+	 */
+	#define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V3_SUPPORTED \
+		UINT32_C(0x200000)
 	uint8_t	unused_0[3];
 	/*
 	 * This field is used in Output records to indicate that the output
@@ -56744,9 +58020,17 @@ struct hwrm_tunnel_dst_port_query_input {
 	/* Generic Protocol Extension for VXLAN (VXLAN-GPE) */
 	#define HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_VXLAN_GPE \
 		UINT32_C(0x10)
+	/* Generic Routing Encapsulation */
+	#define HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_GRE \
+		UINT32_C(0x11)
 	#define HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_LAST \
-		HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_VXLAN_GPE
-	uint8_t	unused_0[7];
+		HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_GRE
+	/*
+	 * This field is used to specify the next protocol value defined in the
+	 * corresponding RFC spec for the applicable tunnel type.
+	 */
+	uint8_t	tunnel_next_proto;
+	uint8_t	unused_0[6];
 } __rte_packed;
 
 /* hwrm_tunnel_dst_port_query_output (size:128b/16B) */
@@ -56808,7 +58092,21 @@ struct hwrm_tunnel_dst_port_query_output {
 	/* This bit will be '1' when UPAR7 is IN_USE */
 	#define HWRM_TUNNEL_DST_PORT_QUERY_OUTPUT_UPAR_IN_USE_UPAR7 \
 		UINT32_C(0x80)
-	uint8_t	unused_0[2];
+	/*
+	 * This field is used to convey the status of non-UDP port based
+	 * tunnel parsing at the chip level and at the function level.
+	 */
+	uint8_t	status;
+	/* This bit will be '1' when tunnel parsing is enabled globally. */
+	#define HWRM_TUNNEL_DST_PORT_QUERY_OUTPUT_STATUS_CHIP_LEVEL \
+		UINT32_C(0x1)
+	/*
+	 * This bit will be '1' when tunnel parsing is enabled
+	 * on the corresponding function.
+	 */
+	#define HWRM_TUNNEL_DST_PORT_QUERY_OUTPUT_STATUS_FUNC_LEVEL \
+		UINT32_C(0x2)
+	uint8_t	unused_0;
 	/*
 	 * This field is used in Output records to indicate that the output
 	 * is completely written to RAM.  This field should be read as '1'
@@ -56886,9 +58184,16 @@ struct hwrm_tunnel_dst_port_alloc_input {
 	/* Generic Protocol Extension for VXLAN (VXLAN-GPE) */
 	#define HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN_GPE \
 		UINT32_C(0x10)
+	/* Generic Routing Encapsulation */
+	#define HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_GRE \
+		UINT32_C(0x11)
 	#define HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_LAST \
-		HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN_GPE
-	uint8_t	unused_0;
+		HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_GRE
+	/*
+	 * This field is used to specify the next protocol value defined in the
+	 * corresponding RFC spec for the applicable tunnel type.
+	 */
+	uint8_t	tunnel_next_proto;
 	/*
 	 * This field represents the value of L4 destination port used
 	 * for the given tunnel type. This field is valid for
@@ -56900,7 +58205,7 @@ struct hwrm_tunnel_dst_port_alloc_input {
 	 * A value of 0 shall fail the command.
 	 */
 	uint16_t	tunnel_dst_port_val;
-	uint8_t	unused_1[4];
+	uint8_t	unused_0[4];
 } __rte_packed;
 
 /* hwrm_tunnel_dst_port_alloc_output (size:128b/16B) */
@@ -56929,8 +58234,11 @@ struct hwrm_tunnel_dst_port_alloc_output {
 	/* Out of resources error */
 	#define HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_NO_RESOURCE \
 		UINT32_C(0x2)
+	/* Tunnel type is already enabled */
+	#define HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_ENABLED \
+		UINT32_C(0x3)
 	#define HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_LAST \
-		HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_NO_RESOURCE
+		HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_ENABLED
 	/*
 	 * This field represents the UPAR usage status.
 	 * Available UPARs on wh+ are UPAR0 and UPAR1
@@ -57040,15 +58348,22 @@ struct hwrm_tunnel_dst_port_free_input {
 	/* Generic Protocol Extension for VXLAN (VXLAN-GPE) */
 	#define HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN_GPE \
 		UINT32_C(0x10)
+	/* Generic Routing Encapsulation */
+	#define HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_GRE \
+		UINT32_C(0x11)
 	#define HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_LAST \
-		HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN_GPE
-	uint8_t	unused_0;
+		HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_GRE
+	/*
+	 * This field is used to specify the next protocol value defined in the
+	 * corresponding RFC spec for the applicable tunnel type.
+	 */
+	uint8_t	tunnel_next_proto;
 	/*
 	 * Identifier of a tunnel L4 destination port value. Only applies to tunnel
 	 * types that has l4 destination port parameters.
 	 */
 	uint16_t	tunnel_dst_port_id;
-	uint8_t	unused_1[4];
+	uint8_t	unused_0[4];
 } __rte_packed;
 
 /* hwrm_tunnel_dst_port_free_output (size:128b/16B) */
@@ -57234,7 +58549,7 @@ struct ctx_eng_stats {
  ***********************/
 
 
-/* hwrm_stat_ctx_alloc_input (size:256b/32B) */
+/* hwrm_stat_ctx_alloc_input (size:320b/40B) */
 struct hwrm_stat_ctx_alloc_input {
 	/* The HWRM command request type. */
 	uint16_t	req_type;
@@ -57305,6 +58620,18 @@ struct hwrm_stat_ctx_alloc_input {
 	 * for the periodic DMA updates.
 	 */
 	uint16_t	stats_dma_length;
+	uint16_t	flags;
+	/* This stats context uses the steering tag specified in the command. */
+	#define HWRM_STAT_CTX_ALLOC_INPUT_FLAGS_STEERING_TAG_VALID \
+		UINT32_C(0x1)
+	/*
+	 * Steering tag to use for memory transactions from the periodic DMA
+	 * updates. 'steering_tag_valid' should be set and 'steering_tag'
+	 * should be specified, when the 'steering_tag_supported' bit is set
+	 * under the 'flags_ext2' field of the hwrm_func_qcaps_output.
+	 */
+	uint16_t	steering_tag;
+	uint32_t	unused_1;
 } __rte_packed;
 
 /* hwrm_stat_ctx_alloc_output (size:128b/16B) */
-- 
2.39.2 (Apple Git-143)


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4218 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 03/14] net/bnxt: log a message when multicast promisc mode changes
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 01/14] net/bnxt: refactor epoch setting Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 02/14] net/bnxt: update HWRM API Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10 17:56   ` Stephen Hemminger
  2023-12-10  1:24 ` [PATCH v2 04/14] net/bnxt: use the correct COS queue for Tx Ajit Khaparde
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev; +Cc: Kalesh AP, Somnath Kotur

[-- Attachment #1: Type: text/plain, Size: 1586 bytes --]

From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>

When the user tries to add more multicast MAC addresses than the port
supports, the driver puts the port into multicast promiscuous mode.
It may be useful for the user to know that multicast promiscuous mode
has been turned on, so log a message when that happens.

Similarly, log a message when multicast promiscuous mode is turned off.
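The toggle the patch adds can be sketched in isolation. This is a minimal model, not the driver's actual code: the constants and the `log_off` out-parameter stand in for `BNXT_MAX_MC_ADDRS`, `BNXT_VNIC_INFO_ALLMULTI` and the `PMD_DRV_LOG()` calls.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for BNXT_MAX_MC_ADDRS and BNXT_VNIC_INFO_ALLMULTI. */
#define MAX_MC_ADDRS	16
#define FLAG_ALLMULTI	0x1u

/*
 * Force allmulti when the multicast list overflows the port's capacity;
 * otherwise clear it, reporting (via *log_off) only an actual on->off
 * transition, which is where the patch emits its "Turning off" log.
 */
static unsigned int update_mc_mode(unsigned int flags, int nb_mc_addr,
				   bool *log_off)
{
	*log_off = false;
	if (nb_mc_addr > MAX_MC_ADDRS)
		return flags | FLAG_ALLMULTI;
	if (flags & FLAG_ALLMULTI)
		*log_off = true;
	return flags & ~FLAG_ALLMULTI;
}
```

Guarding the clear on the current flag state is what avoids a misleading "turning off" message on every list update.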

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index acf7e6e46e..999e4f1398 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -2931,12 +2931,18 @@ bnxt_dev_set_mc_addr_list_op(struct rte_eth_dev *eth_dev,
 	bp->nb_mc_addr = nb_mc_addr;
 
 	if (nb_mc_addr > BNXT_MAX_MC_ADDRS) {
+		PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%d) exceeded Max supported (%d)\n",
+			    nb_mc_addr, BNXT_MAX_MC_ADDRS);
+		PMD_DRV_LOG(INFO, "Turning on Mcast promiscuous mode\n");
 		vnic->flags |= BNXT_VNIC_INFO_ALLMULTI;
 		goto allmulti;
 	}
 
 	/* TODO Check for Duplicate mcast addresses */
-	vnic->flags &= ~BNXT_VNIC_INFO_ALLMULTI;
+	if (vnic->flags & BNXT_VNIC_INFO_ALLMULTI) {
+		PMD_DRV_LOG(INFO, "Turning off Mcast promiscuous mode\n");
+		vnic->flags &= ~BNXT_VNIC_INFO_ALLMULTI;
+	}
 	for (i = 0; i < nb_mc_addr; i++)
 		rte_ether_addr_copy(&mc_addr_set[i], &bp->mcast_addr_list[i]);
 
-- 
2.39.2 (Apple Git-143)


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4218 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 04/14] net/bnxt: use the correct COS queue for Tx
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (2 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 03/14] net/bnxt: log a message when multicast promisc mode changes Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 05/14] net/bnxt: refactor mem zone allocation Ajit Khaparde
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur

[-- Attachment #1: Type: text/plain, Size: 5315 bytes --]

Earlier the firmware configured a single lossy COS profile for Tx.
Now more than one profile is possible.
Identify the profile a NIC driver should use based on the profile type
hint provided in queue_cfg_info.
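The selection order — prefer a lossy profile of type NIC when the firmware advertises profile types, otherwise fall back to the first lossy profile — can be sketched as below. This is a simplified model flattened to one loop; the struct and constants are hypothetical stand-ins, not the driver's.

```c
#include <stdbool.h>

#define COS_QUEUE_COUNT		8
#define PROFILE_LOSSY		1
#define PROFILE_TYPE_NIC	3

struct cos_queue { int id; int profile; int profile_type; };

/*
 * Return the COS queue id to use for Tx, or -1 if no suitable lossy
 * profile was found (the driver then falls back to the first valid
 * profile). When the firmware sets the "use profile type" hint, a lossy
 * profile must also be of type NIC to be selected.
 */
static int pick_tx_cosq(const struct cos_queue *q, bool use_prof_type)
{
	for (int i = 0; i < COS_QUEUE_COUNT; i++) {
		if (q[i].profile != PROFILE_LOSSY)
			continue;
		if (!use_prof_type || q[i].profile_type == PROFILE_TYPE_NIC)
			return q[i].id;
	}
	return -1;
}
```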

If the firmware does not set the bit to use profile type,
then we will use the older method to pick the COS queue for Tx.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |  1 +
 drivers/net/bnxt/bnxt_hwrm.c | 56 ++++++++++++++++++++++++++++++++++--
 drivers/net/bnxt/bnxt_hwrm.h |  7 +++++
 3 files changed, 62 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 0e01b1d4ba..542ef13f7c 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -311,6 +311,7 @@ struct bnxt_link_info {
 struct bnxt_cos_queue_info {
 	uint8_t	id;
 	uint8_t	profile;
+	uint8_t	profile_type;
 };
 
 struct rte_flow {
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 0a31b984e6..fe9e629892 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1544,7 +1544,7 @@ int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp)
 	return 0;
 }
 
-static bool bnxt_find_lossy_profile(struct bnxt *bp)
+static bool _bnxt_find_lossy_profile(struct bnxt *bp)
 {
 	int i = 0;
 
@@ -1558,6 +1558,41 @@ static bool bnxt_find_lossy_profile(struct bnxt *bp)
 	return false;
 }
 
+static bool _bnxt_find_lossy_nic_profile(struct bnxt *bp)
+{
+	int i = 0, j = 0;
+
+	for (i = 0; i < BNXT_COS_QUEUE_COUNT; i++) {
+		for (j = 0; j < BNXT_COS_QUEUE_COUNT; j++) {
+			if (bp->tx_cos_queue[i].profile ==
+			    HWRM_QUEUE_SERVICE_PROFILE_LOSSY &&
+			    bp->tx_cos_queue[j].profile_type ==
+			    HWRM_QUEUE_SERVICE_PROFILE_TYPE_NIC) {
+				bp->tx_cosq_id[0] = bp->tx_cos_queue[i].id;
+				return true;
+			}
+		}
+	}
+	return false;
+}
+
+static bool bnxt_find_lossy_profile(struct bnxt *bp, bool use_prof_type)
+{
+	int i;
+
+	for (i = 0; i < BNXT_COS_QUEUE_COUNT; i++) {
+		PMD_DRV_LOG(DEBUG, "profile %d, profile_id %d, type %d\n",
+			    bp->tx_cos_queue[i].profile,
+			    bp->tx_cos_queue[i].id,
+			    bp->tx_cos_queue[i].profile_type);
+	}
+
+	if (use_prof_type)
+		return _bnxt_find_lossy_nic_profile(bp);
+	else
+		return _bnxt_find_lossy_profile(bp);
+}
+
 static void bnxt_find_first_valid_profile(struct bnxt *bp)
 {
 	int i = 0;
@@ -1579,6 +1614,7 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	struct hwrm_queue_qportcfg_input req = {.req_type = 0 };
 	struct hwrm_queue_qportcfg_output *resp = bp->hwrm_cmd_resp_addr;
 	uint32_t dir = HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_TX;
+	bool use_prof_type = false;
 	int i;
 
 get_rx_info:
@@ -1590,10 +1626,15 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 	    !(bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY))
 		req.drv_qmap_cap =
 			HWRM_QUEUE_QPORTCFG_INPUT_DRV_QMAP_CAP_ENABLED;
+
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
 	HWRM_CHECK_RESULT();
 
+	if (resp->queue_cfg_info &
+	    HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_CFG_INFO_USE_PROFILE_TYPE)
+		use_prof_type = true;
+
 	if (dir == HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_TX) {
 		GET_TX_QUEUE_INFO(0);
 		GET_TX_QUEUE_INFO(1);
@@ -1603,6 +1644,16 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 		GET_TX_QUEUE_INFO(5);
 		GET_TX_QUEUE_INFO(6);
 		GET_TX_QUEUE_INFO(7);
+		if (use_prof_type) {
+			GET_TX_QUEUE_TYPE_INFO(0);
+			GET_TX_QUEUE_TYPE_INFO(1);
+			GET_TX_QUEUE_TYPE_INFO(2);
+			GET_TX_QUEUE_TYPE_INFO(3);
+			GET_TX_QUEUE_TYPE_INFO(4);
+			GET_TX_QUEUE_TYPE_INFO(5);
+			GET_TX_QUEUE_TYPE_INFO(6);
+			GET_TX_QUEUE_TYPE_INFO(7);
+		}
 	} else  {
 		GET_RX_QUEUE_INFO(0);
 		GET_RX_QUEUE_INFO(1);
@@ -1636,11 +1687,12 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp)
 			 * operations, ideally we should look to use LOSSY.
 			 * If not found, fallback to the first valid profile
 			 */
-			if (!bnxt_find_lossy_profile(bp))
+			if (!bnxt_find_lossy_profile(bp, use_prof_type))
 				bnxt_find_first_valid_profile(bp);
 
 		}
 	}
+	PMD_DRV_LOG(DEBUG, "Tx COS Queue ID %d\n", bp->tx_cosq_id[0]);
 
 	bp->max_tc = resp->max_configurable_queues;
 	bp->max_lltc = resp->max_configurable_lossless_queues;
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 68384bc757..f9fa6cf73a 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -46,6 +46,9 @@ struct hwrm_func_qstats_output;
 #define HWRM_QUEUE_SERVICE_PROFILE_UNKNOWN \
 	HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_ID0_SERVICE_PROFILE_UNKNOWN
 
+#define HWRM_QUEUE_SERVICE_PROFILE_TYPE_NIC \
+	HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_ID0_SERVICE_PROFILE_TYPE_NIC
+
 #define HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC \
 	HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESERVATION_STRATEGY_MINIMAL_STATIC
 #define HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MAXIMAL \
@@ -74,6 +77,10 @@ struct hwrm_func_qstats_output;
 	bp->tx_cos_queue[x].profile =	\
 		resp->queue_id##x##_service_profile
 
+#define GET_TX_QUEUE_TYPE_INFO(x) \
+	bp->tx_cos_queue[x].profile_type =	\
+		resp->queue_id##x##_service_profile_type
+
 #define GET_RX_QUEUE_INFO(x) \
 	bp->rx_cos_queue[x].id = resp->queue_id##x; \
 	bp->rx_cos_queue[x].profile =	\
-- 
2.39.2 (Apple Git-143)


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4218 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 05/14] net/bnxt: refactor mem zone allocation
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (3 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 04/14] net/bnxt: use the correct COS queue for Tx Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 06/14] net/bnxt: add support for p7 device family Ajit Khaparde
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur, Kalesh AP

[-- Attachment #1: Type: text/plain, Size: 4019 bytes --]

Currently we allocate a memzone for VNIC attributes per VNIC.
In cases where the firmware supports a higher VNIC count, this could
lead to a higher number of memzone segments than supported.

Allocate the memzone for VNIC attributes per function instead of per
VNIC, and divide the memzone among the VNICs as needed.
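The carve-up is plain offset arithmetic: one contiguous reservation of entry_length * max_vnics bytes, with VNIC i taking the slice at offset i * entry_length for both its virtual and its DMA address. A minimal sketch, with a hypothetical struct standing in for the rte_memzone fields the patch uses:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the rte_memzone fields used by the patch. */
struct mz_view {
	char		*addr;	/* host virtual base of the reservation */
	uint64_t	iova;	/* IO virtual (DMA) base of the reservation */
};

/*
 * Per-VNIC slice of the single per-function reservation, mirroring the
 * "mz->addr + offset" / "mz->iova + offset" split in the patch. The same
 * offset is applied to both bases, which is valid because the memzone is
 * reserved IOVA-contiguous.
 */
static void vnic_slice(const struct mz_view *mz, size_t entry_length,
		       unsigned int vnic_idx, char **vaddr, uint64_t *dma)
{
	size_t offset = entry_length * vnic_idx;

	*vaddr = mz->addr + offset;
	*dma = mz->iova + offset;
}
```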

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      |  1 +
 drivers/net/bnxt/bnxt_vnic.c | 52 +++++++++++++++++++-----------------
 drivers/net/bnxt/bnxt_vnic.h |  1 -
 3 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 542ef13f7c..6af668e92f 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -772,6 +772,7 @@ struct bnxt {
 
 	struct bnxt_vnic_info	*vnic_info;
 	STAILQ_HEAD(, bnxt_vnic_info)	free_vnic_list;
+	const struct rte_memzone *vnic_rss_mz;
 
 	struct bnxt_filter_info	*filter_info;
 	STAILQ_HEAD(, bnxt_filter_info)	free_filter_list;
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index f86d27fd79..d40daf631e 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -123,13 +123,11 @@ void bnxt_free_vnic_attributes(struct bnxt *bp)
 
 	for (i = 0; i < bp->max_vnics; i++) {
 		vnic = &bp->vnic_info[i];
-		if (vnic->rss_mz != NULL) {
-			rte_memzone_free(vnic->rss_mz);
-			vnic->rss_mz = NULL;
-			vnic->rss_hash_key = NULL;
-			vnic->rss_table = NULL;
-		}
+		vnic->rss_hash_key = NULL;
+		vnic->rss_table = NULL;
 	}
+	rte_memzone_free(bp->vnic_rss_mz);
+	bp->vnic_rss_mz = NULL;
 }
 
 int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig)
@@ -153,31 +151,35 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig)
 
 	entry_length = RTE_CACHE_LINE_ROUNDUP(entry_length + rss_table_size);
 
-	for (i = 0; i < bp->max_vnics; i++) {
-		vnic = &bp->vnic_info[i];
-
-		snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
-			 "bnxt_" PCI_PRI_FMT "_vnicattr_%d", pdev->addr.domain,
-			 pdev->addr.bus, pdev->addr.devid, pdev->addr.function, i);
-		mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
-		mz = rte_memzone_lookup(mz_name);
-		if (mz == NULL) {
-			mz = rte_memzone_reserve(mz_name,
-						 entry_length,
+	snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
+		 "bnxt_" PCI_PRI_FMT "_vnicattr", pdev->addr.domain,
+		 pdev->addr.bus, pdev->addr.devid, pdev->addr.function);
+	mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
+	mz = rte_memzone_lookup(mz_name);
+	if (mz == NULL) {
+		mz = rte_memzone_reserve_aligned(mz_name,
+						 entry_length * bp->max_vnics,
 						 bp->eth_dev->device->numa_node,
 						 RTE_MEMZONE_2MB |
 						 RTE_MEMZONE_SIZE_HINT_ONLY |
-						 RTE_MEMZONE_IOVA_CONTIG);
-			if (mz == NULL) {
-				PMD_DRV_LOG(ERR, "Cannot allocate bnxt vnic_attributes memory\n");
-				return -ENOMEM;
-			}
+						 RTE_MEMZONE_IOVA_CONTIG,
+						 BNXT_PAGE_SIZE);
+		if (mz == NULL) {
+			PMD_DRV_LOG(ERR,
+				    "Cannot allocate vnic_attributes memory\n");
+			return -ENOMEM;
 		}
-		vnic->rss_mz = mz;
-		mz_phys_addr = mz->iova;
+	}
+	bp->vnic_rss_mz = mz;
+	for (i = 0; i < bp->max_vnics; i++) {
+		uint32_t offset = entry_length * i;
+
+		vnic = &bp->vnic_info[i];
+
+		mz_phys_addr = mz->iova + offset;
 
 		/* Allocate rss table and hash key */
-		vnic->rss_table = (void *)((char *)mz->addr);
+		vnic->rss_table = (void *)((char *)mz->addr + offset);
 		vnic->rss_table_dma_addr = mz_phys_addr;
 		memset(vnic->rss_table, -1, entry_length);
 
diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h
index 4396d95bda..7a6a0aa739 100644
--- a/drivers/net/bnxt/bnxt_vnic.h
+++ b/drivers/net/bnxt/bnxt_vnic.h
@@ -47,7 +47,6 @@ struct bnxt_vnic_info {
 	uint16_t	hash_type;
 	uint8_t		hash_mode;
 	uint8_t		prev_hash_mode;
-	const struct rte_memzone *rss_mz;
 	rte_iova_t	rss_table_dma_addr;
 	uint16_t	*rss_table;
 	rte_iova_t	rss_hash_key_dma_addr;
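
The net effect of the hunks above is that the per-VNIC memzones are replaced by one aligned memzone carved into equal, cache-line-rounded slices addressed at `entry_length * i`. A minimal sketch of that offset arithmetic, with plain pointers standing in for the rte_memzone API (all names here are invented for illustration, not the driver's):

```c
#include <stdint.h>
#include <stddef.h>

#define CACHE_LINE 64

/* Round len up to a cache-line multiple, as RTE_CACHE_LINE_ROUNDUP does. */
static size_t cache_line_roundup(size_t len)
{
	return (len + CACHE_LINE - 1) / CACHE_LINE * CACHE_LINE;
}

/* One backing buffer shared by all VNIC slots; each VNIC's RSS table and
 * hash key live at slot * entry_length, mirroring mz->addr + offset. */
static void *vnic_slot(void *base, size_t entry_length, unsigned int idx)
{
	return (uint8_t *)base + entry_length * idx;
}
```

A single `max_vnics`-sized reservation trades many small memzones for one lookup/free path and avoids exhausting the memzone namespace on devices with many VNICs.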
-- 
2.39.2 (Apple Git-143)


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4218 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 06/14] net/bnxt: add support for p7 device family
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (4 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 05/14] net/bnxt: refactor mem zone allocation Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 07/14] net/bnxt: refactor code to support P7 devices Ajit Khaparde
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev

[-- Attachment #1: Type: text/plain, Size: 4298 bytes --]

Add support for the P7 device family.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        | 14 ++++++++++++--
 drivers/net/bnxt/bnxt_ethdev.c | 25 +++++++++++++++++++++++++
 2 files changed, 37 insertions(+), 2 deletions(-)
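
The diff below follows a common probe-time pattern: a switch over PCI device IDs sets a chip-generation flag bit once, and the rest of the driver tests the cached bit through macros. A standalone sketch of that pattern (the `fake_bp` struct, `CHIP_*` names, and the reduced ID list are stand-ins, not the driver's actual definitions):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flag bits and a subset of the P7 device IDs added below. */
#define FLAG_CHIP_P5	(1u << 29)
#define FLAG_CHIP_P7	(1u << 30)
#define DEV_ID_57608	0x1760
#define DEV_ID_57604	0x1761
#define DEV_ID_5760X_VF	0x1819

struct fake_bp {
	uint32_t flags;
};

#define CHIP_P5(bp)	((bp)->flags & FLAG_CHIP_P5)
#define CHIP_P7(bp)	((bp)->flags & FLAG_CHIP_P7)
#define CHIP_P5_P7(bp)	(CHIP_P5(bp) || CHIP_P7(bp))

/* Mirrors the shape of bnxt_p7_device(): true for P7-family IDs. */
static bool is_p7_device(uint16_t device_id)
{
	switch (device_id) {
	case DEV_ID_57608:
	case DEV_ID_57604:
	case DEV_ID_5760X_VF:
		return true;
	default:
		return false;
	}
}

/* Probe-time: translate the device ID into a generation flag once, so
 * hot-path code only tests a cached bit instead of re-matching IDs. */
static void set_chip_flags(struct fake_bp *bp, uint16_t device_id)
{
	if (is_p7_device(device_id))
		bp->flags |= FLAG_CHIP_P7;
}
```

Composing `CHIP_P5_P7()` out of the two single-generation checks is what lets later patches in the series extend existing P5-only paths to P7 by swapping one macro.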

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 6af668e92f..3a1d8a6ff6 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -72,6 +72,11 @@
 #define BROADCOM_DEV_ID_58814		0xd814
 #define BROADCOM_DEV_ID_58818		0xd818
 #define BROADCOM_DEV_ID_58818_VF	0xd82e
+#define BROADCOM_DEV_ID_57608		0x1760
+#define BROADCOM_DEV_ID_57604		0x1761
+#define BROADCOM_DEV_ID_57602		0x1762
+#define BROADCOM_DEV_ID_57601		0x1763
+#define BROADCOM_DEV_ID_5760X_VF	0x1819
 
 #define BROADCOM_DEV_957508_N2100	0x5208
 #define BROADCOM_DEV_957414_N225	0x4145
@@ -685,6 +690,7 @@ struct bnxt {
 #define BNXT_FLAG_FLOW_XSTATS_EN		BIT(25)
 #define BNXT_FLAG_DFLT_MAC_SET			BIT(26)
 #define BNXT_FLAG_GFID_ENABLE			BIT(27)
+#define BNXT_FLAG_CHIP_P7			BIT(30)
 #define BNXT_PF(bp)		(!((bp)->flags & BNXT_FLAG_VF))
 #define BNXT_VF(bp)		((bp)->flags & BNXT_FLAG_VF)
 #define BNXT_NPAR(bp)		((bp)->flags & BNXT_FLAG_NPAR_PF)
@@ -694,12 +700,16 @@ struct bnxt {
 #define BNXT_USE_KONG(bp)	((bp)->flags & BNXT_FLAG_KONG_MB_EN)
 #define BNXT_VF_IS_TRUSTED(bp)	((bp)->flags & BNXT_FLAG_TRUSTED_VF_EN)
 #define BNXT_CHIP_P5(bp)	((bp)->flags & BNXT_FLAG_CHIP_P5)
+#define BNXT_CHIP_P7(bp)	((bp)->flags & BNXT_FLAG_CHIP_P7)
+#define BNXT_CHIP_P5_P7(bp)	(BNXT_CHIP_P5(bp) || BNXT_CHIP_P7(bp))
 #define BNXT_STINGRAY(bp)	((bp)->flags & BNXT_FLAG_STINGRAY)
-#define BNXT_HAS_NQ(bp)		BNXT_CHIP_P5(bp)
-#define BNXT_HAS_RING_GRPS(bp)	(!BNXT_CHIP_P5(bp))
+#define BNXT_HAS_NQ(bp)		BNXT_CHIP_P5_P7(bp)
+#define BNXT_HAS_RING_GRPS(bp)	(!BNXT_CHIP_P5_P7(bp))
 #define BNXT_FLOW_XSTATS_EN(bp)	((bp)->flags & BNXT_FLAG_FLOW_XSTATS_EN)
 #define BNXT_HAS_DFLT_MAC_SET(bp)	((bp)->flags & BNXT_FLAG_DFLT_MAC_SET)
 #define BNXT_GFID_ENABLED(bp)	((bp)->flags & BNXT_FLAG_GFID_ENABLE)
+#define BNXT_P7_MAX_NQ_RING_CNT	512
+#define BNXT_P7_CQ_MAX_L2_ENT	8192
 
 	uint32_t			flags2;
 #define BNXT_FLAGS2_PTP_TIMESYNC_ENABLED	BIT(0)
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 999e4f1398..1e4182071a 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -84,6 +84,11 @@ static const struct rte_pci_id bnxt_pci_id_map[] = {
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58814) },
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58818) },
 	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58818_VF) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57608) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57604) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57602) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57601) },
+	{ RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_5760X_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
 };
 
@@ -4681,6 +4686,7 @@ static bool bnxt_vf_pciid(uint16_t device_id)
 	case BROADCOM_DEV_ID_57500_VF1:
 	case BROADCOM_DEV_ID_57500_VF2:
 	case BROADCOM_DEV_ID_58818_VF:
+	case BROADCOM_DEV_ID_5760X_VF:
 		/* FALLTHROUGH */
 		return true;
 	default:
@@ -4706,7 +4712,23 @@ static bool bnxt_p5_device(uint16_t device_id)
 	case BROADCOM_DEV_ID_58812:
 	case BROADCOM_DEV_ID_58814:
 	case BROADCOM_DEV_ID_58818:
+		/* FALLTHROUGH */
+		return true;
+	default:
+		return false;
+	}
+}
+
+/* Phase 7 device */
+static bool bnxt_p7_device(uint16_t device_id)
+{
+	switch (device_id) {
 	case BROADCOM_DEV_ID_58818_VF:
+	case BROADCOM_DEV_ID_57608:
+	case BROADCOM_DEV_ID_57604:
+	case BROADCOM_DEV_ID_57602:
+	case BROADCOM_DEV_ID_57601:
+	case BROADCOM_DEV_ID_5760X_VF:
 		/* FALLTHROUGH */
 		return true;
 	default:
@@ -5874,6 +5896,9 @@ static int bnxt_drv_init(struct rte_eth_dev *eth_dev)
 	if (bnxt_p5_device(pci_dev->id.device_id))
 		bp->flags |= BNXT_FLAG_CHIP_P5;
 
+	if (bnxt_p7_device(pci_dev->id.device_id))
+		bp->flags |= BNXT_FLAG_CHIP_P7;
+
 	if (pci_dev->id.device_id == BROADCOM_DEV_ID_58802 ||
 	    pci_dev->id.device_id == BROADCOM_DEV_ID_58804 ||
 	    pci_dev->id.device_id == BROADCOM_DEV_ID_58808 ||
-- 
2.39.2 (Apple Git-143)



* [PATCH v2 07/14] net/bnxt: refactor code to support P7 devices
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (5 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 06/14] net/bnxt: add support for p7 device family Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 08/14] net/bnxt: fix array overflow Ajit Khaparde
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev

[-- Attachment #1: Type: text/plain, Size: 13212 bytes --]

Refactor code to support the P7 device family.
The changes include support for RSS, VNIC allocation and TPA.
Also remove the unnecessary check that disabled vector mode
support for some device families.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  6 +++---
 drivers/net/bnxt/bnxt_ethdev.c | 27 ++++++++-------------------
 drivers/net/bnxt/bnxt_flow.c   |  2 +-
 drivers/net/bnxt/bnxt_hwrm.c   | 26 ++++++++++++++------------
 drivers/net/bnxt/bnxt_ring.c   |  6 +++---
 drivers/net/bnxt/bnxt_rxq.c    |  2 +-
 drivers/net/bnxt/bnxt_rxr.c    |  6 +++---
 drivers/net/bnxt/bnxt_vnic.c   |  6 +++---
 8 files changed, 36 insertions(+), 45 deletions(-)
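
One of the P5/P7 paths touched below, `bnxt_rss_ctxts()`, computes how many RSS contexts are needed by ceiling-dividing the RSS ring count by the entries-per-context constant (`RTE_ALIGN_MUL_CEIL(n, m) / m`). A hedged sketch of that arithmetic, assuming 64 entries per context in line with `BNXT_RSS_ENTRIES_PER_CTX_P5` (the function name and constant here are illustrative stand-ins):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed value mirroring BNXT_RSS_ENTRIES_PER_CTX_P5. */
#define RSS_ENTRIES_PER_CTX 64

/* Ceiling division: round the ring count up to whole contexts.
 * Equivalent to RTE_ALIGN_MUL_CEIL(n, RSS_ENTRIES_PER_CTX) / RSS_ENTRIES_PER_CTX. */
static uint16_t rss_ctxts(unsigned int num_rss_rings, bool chip_p5_p7)
{
	if (!chip_p5_p7)
		return 1;	/* older generations use a single RSS context */
	return (num_rss_rings + RSS_ENTRIES_PER_CTX - 1) / RSS_ENTRIES_PER_CTX;
}
```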

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 3a1d8a6ff6..7439ecf4fa 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -107,11 +107,11 @@
 #define TPA_MAX_SEGS		5 /* 32 segments in log2 units */
 
 #define BNXT_TPA_MAX_AGGS(bp) \
-	(BNXT_CHIP_P5(bp) ? TPA_MAX_AGGS_TH : \
+	(BNXT_CHIP_P5_P7(bp) ? TPA_MAX_AGGS_TH : \
 			     TPA_MAX_AGGS)
 
 #define BNXT_TPA_MAX_SEGS(bp) \
-	(BNXT_CHIP_P5(bp) ? TPA_MAX_SEGS_TH : \
+	(BNXT_CHIP_P5_P7(bp) ? TPA_MAX_SEGS_TH : \
 			      TPA_MAX_SEGS)
 
 /*
@@ -938,7 +938,7 @@ inline uint16_t bnxt_max_rings(struct bnxt *bp)
 	 * RSS table size in P5 is 512.
 	 * Cap max Rx rings to the same value for RSS.
 	 */
-	if (BNXT_CHIP_P5(bp))
+	if (BNXT_CHIP_P5_P7(bp))
 		max_rx_rings = RTE_MIN(max_rx_rings, BNXT_RSS_TBL_SIZE_P5);
 
 	max_tx_rings = RTE_MIN(max_tx_rings, max_rx_rings);
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 1e4182071a..2a41fafa02 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -212,7 +212,7 @@ uint16_t bnxt_rss_ctxts(const struct bnxt *bp)
 	unsigned int num_rss_rings = RTE_MIN(bp->rx_nr_rings,
 					     BNXT_RSS_TBL_SIZE_P5);
 
-	if (!BNXT_CHIP_P5(bp))
+	if (!BNXT_CHIP_P5_P7(bp))
 		return 1;
 
 	return RTE_ALIGN_MUL_CEIL(num_rss_rings,
@@ -222,7 +222,7 @@ uint16_t bnxt_rss_ctxts(const struct bnxt *bp)
 
 uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp)
 {
-	if (!BNXT_CHIP_P5(bp))
+	if (!BNXT_CHIP_P5_P7(bp))
 		return HW_HASH_INDEX_SIZE;
 
 	return bnxt_rss_ctxts(bp) * BNXT_RSS_ENTRIES_PER_CTX_P5;
@@ -765,7 +765,7 @@ static int bnxt_start_nic(struct bnxt *bp)
 	/* P5 does not support ring groups.
 	 * But we will use the array to save RSS context IDs.
 	 */
-	if (BNXT_CHIP_P5(bp))
+	if (BNXT_CHIP_P5_P7(bp))
 		bp->max_ring_grps = BNXT_MAX_RSS_CTXTS_P5;
 
 	rc = bnxt_vnic_queue_db_init(bp);
@@ -1247,12 +1247,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 {
 	struct bnxt *bp = eth_dev->data->dev_private;
 
-	/* Disable vector mode RX for Stingray2 for now */
-	if (BNXT_CHIP_SR2(bp)) {
-		bp->flags &= ~BNXT_FLAG_RX_VECTOR_PKT_MODE;
-		return bnxt_recv_pkts;
-	}
-
 #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
 	/* Vector mode receive cannot be enabled if scattered rx is in use. */
 	if (eth_dev->data->scattered_rx)
@@ -1319,14 +1313,9 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 static eth_tx_burst_t
 bnxt_transmit_function(struct rte_eth_dev *eth_dev)
 {
-	struct bnxt *bp = eth_dev->data->dev_private;
-
-	/* Disable vector mode TX for Stingray2 for now */
-	if (BNXT_CHIP_SR2(bp))
-		return bnxt_xmit_pkts;
-
 #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
 	uint64_t offloads = eth_dev->data->dev_conf.txmode.offloads;
+	struct bnxt *bp = eth_dev->data->dev_private;
 
 	/*
 	 * Vector mode transmit can be enabled only if not using scatter rx
@@ -2091,7 +2080,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 			continue;
 
 		rxq = bnxt_qid_to_rxq(bp, reta_conf[idx].reta[sft]);
-		if (BNXT_CHIP_P5(bp)) {
+		if (BNXT_CHIP_P5_P7(bp)) {
 			vnic->rss_table[i * 2] =
 				rxq->rx_ring->rx_ring_struct->fw_ring_id;
 			vnic->rss_table[i * 2 + 1] =
@@ -2138,7 +2127,7 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
 		if (reta_conf[idx].mask & (1ULL << sft)) {
 			uint16_t qid;
 
-			if (BNXT_CHIP_P5(bp))
+			if (BNXT_CHIP_P5_P7(bp))
 				qid = bnxt_rss_to_qid(bp,
 						      vnic->rss_table[i * 2]);
 			else
@@ -3224,7 +3213,7 @@ bnxt_rx_queue_count_op(void *rx_queue)
 			break;
 
 		case CMPL_BASE_TYPE_RX_TPA_END:
-			if (BNXT_CHIP_P5(rxq->bp)) {
+			if (BNXT_CHIP_P5_P7(rxq->bp)) {
 				struct rx_tpa_v2_end_cmpl_hi *p5_tpa_end;
 
 				p5_tpa_end = (void *)rxcmp;
@@ -3335,7 +3324,7 @@ bnxt_rx_descriptor_status_op(void *rx_queue, uint16_t offset)
 			if (desc == offset)
 				return RTE_ETH_RX_DESC_DONE;
 
-			if (BNXT_CHIP_P5(rxq->bp)) {
+			if (BNXT_CHIP_P5_P7(rxq->bp)) {
 				struct rx_tpa_v2_end_cmpl_hi *p5_tpa_end;
 
 				p5_tpa_end = (void *)rxcmp;
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 28dd5ae6cb..15f0e1b308 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -1199,7 +1199,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp,
 		if (i == bp->rx_cp_nr_rings)
 			return 0;
 
-		if (BNXT_CHIP_P5(bp)) {
+		if (BNXT_CHIP_P5_P7(bp)) {
 			rxq = bp->rx_queues[idx];
 			vnic->rss_table[rss_idx * 2] =
 				rxq->rx_ring->rx_ring_struct->fw_ring_id;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index fe9e629892..2d0a7a2731 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -853,7 +853,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	bp->first_vf_id = rte_le_to_cpu_16(resp->first_vf_id);
 	bp->max_rx_em_flows = rte_le_to_cpu_16(resp->max_rx_em_flows);
 	bp->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs);
-	if (!BNXT_CHIP_P5(bp) && !bp->pdev->max_vfs)
+	if (!BNXT_CHIP_P5_P7(bp) && !bp->pdev->max_vfs)
 		bp->max_l2_ctx += bp->max_rx_em_flows;
 	if (bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY)
 		bp->max_vnics = rte_le_to_cpu_16(BNXT_MAX_VNICS_COS_CLASSIFY);
@@ -1187,7 +1187,7 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp)
 	 * So use the value provided by func_qcaps.
 	 */
 	bp->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs);
-	if (!BNXT_CHIP_P5(bp) && !bp->pdev->max_vfs)
+	if (!BNXT_CHIP_P5_P7(bp) && !bp->pdev->max_vfs)
 		bp->max_l2_ctx += bp->max_rx_em_flows;
 	if (bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY)
 		bp->max_vnics = rte_le_to_cpu_16(BNXT_MAX_VNICS_COS_CLASSIFY);
@@ -1744,7 +1744,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp,
 		req.ring_type = ring_type;
 		req.cmpl_ring_id = rte_cpu_to_le_16(cmpl_ring_id);
 		req.stat_ctx_id = rte_cpu_to_le_32(stats_ctx_id);
-		if (BNXT_CHIP_P5(bp)) {
+		if (BNXT_CHIP_P5_P7(bp)) {
 			mb_pool = bp->rx_queues[0]->mb_pool;
 			rx_buf_size = rte_pktmbuf_data_room_size(mb_pool) -
 				      RTE_PKTMBUF_HEADROOM;
@@ -2118,7 +2118,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 
 	HWRM_PREP(&req, HWRM_VNIC_CFG, BNXT_USE_CHIMP_MB);
 
-	if (BNXT_CHIP_P5(bp)) {
+	if (BNXT_CHIP_P5_P7(bp)) {
 		int dflt_rxq = vnic->start_grp_id;
 		struct bnxt_rx_ring_info *rxr;
 		struct bnxt_cp_ring_info *cpr;
@@ -2304,7 +2304,7 @@ int bnxt_hwrm_vnic_ctx_free(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 {
 	int rc = 0;
 
-	if (BNXT_CHIP_P5(bp)) {
+	if (BNXT_CHIP_P5_P7(bp)) {
 		int j;
 
 		for (j = 0; j < vnic->num_lb_ctxts; j++) {
@@ -2556,7 +2556,7 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 	struct hwrm_vnic_tpa_cfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_tpa_cfg_output *resp = bp->hwrm_cmd_resp_addr;
 
-	if (BNXT_CHIP_P5(bp) && !bp->max_tpa_v2) {
+	if (BNXT_CHIP_P5_P7(bp) && !bp->max_tpa_v2) {
 		if (enable)
 			PMD_DRV_LOG(ERR, "No HW support for LRO\n");
 		return -ENOTSUP;
@@ -2584,6 +2584,9 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 		req.max_aggs = rte_cpu_to_le_16(BNXT_TPA_MAX_AGGS(bp));
 		req.max_agg_segs = rte_cpu_to_le_16(BNXT_TPA_MAX_SEGS(bp));
 		req.min_agg_len = rte_cpu_to_le_32(512);
+
+		if (BNXT_CHIP_P5_P7(bp))
+			req.max_aggs = rte_cpu_to_le_16(bp->max_tpa_v2);
 	}
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
 
@@ -2836,7 +2839,7 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	ring = rxr ? rxr->ag_ring_struct : NULL;
 	if (ring != NULL && cpr != NULL) {
 		bnxt_hwrm_ring_free(bp, ring,
-				    BNXT_CHIP_P5(bp) ?
+				    BNXT_CHIP_P5_P7(bp) ?
 				    HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG :
 				    HWRM_RING_FREE_INPUT_RING_TYPE_RX,
 				    cpr->cp_ring_struct->fw_ring_id);
@@ -3356,8 +3359,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
 
 	/* Get user requested autoneg setting */
 	autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
-
-	if (BNXT_CHIP_P5(bp) &&
+	if (BNXT_CHIP_P5_P7(bp) &&
 	    dev_conf->link_speeds & RTE_ETH_LINK_SPEED_40G) {
 		/* 40G is not supported as part of media auto detect.
 		 * The speed should be forced and autoneg disabled
@@ -5348,7 +5350,7 @@ int bnxt_vnic_rss_configure(struct bnxt *bp, struct bnxt_vnic_info *vnic)
 	if (!(vnic->rss_table && vnic->hash_type))
 		return 0;
 
-	if (BNXT_CHIP_P5(bp))
+	if (BNXT_CHIP_P5_P7(bp))
 		return bnxt_vnic_rss_configure_p5(bp, vnic);
 
 	/*
@@ -5440,7 +5442,7 @@ int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
 	int rc;
 
 	/* Set ring coalesce parameters only for 100G NICs */
-	if (BNXT_CHIP_P5(bp)) {
+	if (BNXT_CHIP_P5_P7(bp)) {
 		if (bnxt_hwrm_set_coal_params_p5(bp, &req))
 			return -1;
 	} else if (bnxt_stratus_device(bp)) {
@@ -5470,7 +5472,7 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 	int total_alloc_len;
 	int rc, i, tqm_rings;
 
-	if (!BNXT_CHIP_P5(bp) ||
+	if (!BNXT_CHIP_P5_P7(bp) ||
 	    bp->hwrm_spec_code < HWRM_VERSION_1_9_2 ||
 	    BNXT_VF(bp) ||
 	    bp->ctx)
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 6dacb1b37f..90cad6c9c6 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -57,7 +57,7 @@ int bnxt_alloc_ring_grps(struct bnxt *bp)
 	/* P5 does not support ring groups.
 	 * But we will use the array to save RSS context IDs.
 	 */
-	if (BNXT_CHIP_P5(bp)) {
+	if (BNXT_CHIP_P5_P7(bp)) {
 		bp->max_ring_grps = BNXT_MAX_RSS_CTXTS_P5;
 	} else if (bp->max_ring_grps < bp->rx_cp_nr_rings) {
 		/* 1 ring is for default completion ring */
@@ -354,7 +354,7 @@ static void bnxt_set_db(struct bnxt *bp,
 			uint32_t fid,
 			uint32_t ring_mask)
 {
-	if (BNXT_CHIP_P5(bp)) {
+	if (BNXT_CHIP_P5_P7(bp)) {
 		int db_offset = DB_PF_OFFSET;
 		switch (ring_type) {
 		case HWRM_RING_ALLOC_INPUT_RING_TYPE_TX:
@@ -559,7 +559,7 @@ static int bnxt_alloc_rx_agg_ring(struct bnxt *bp, int queue_index)
 
 	ring->fw_rx_ring_id = rxr->rx_ring_struct->fw_ring_id;
 
-	if (BNXT_CHIP_P5(bp)) {
+	if (BNXT_CHIP_P5_P7(bp)) {
 		ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_RX_AGG;
 		hw_stats_ctx_id = cpr->hw_stats_ctx_id;
 	} else {
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 0d0b5e28e4..575e7f193f 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -600,7 +600,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 			if (bp->rx_queues[i]->rx_started)
 				active_queue_cnt++;
 
-		if (BNXT_CHIP_P5(bp)) {
+		if (BNXT_CHIP_P5_P7(bp)) {
 			/*
 			 * For P5, we need to ensure that the VNIC default
 			 * receive ring corresponds to an active receive queue.
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 0cabfb583c..9d45065f28 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -334,7 +334,7 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq,
 	uint16_t cp_cons, ag_cons;
 	struct rx_pkt_cmpl *rxcmp;
 	struct rte_mbuf *last = mbuf;
-	bool is_p5_tpa = tpa_info && BNXT_CHIP_P5(rxq->bp);
+	bool is_p5_tpa = tpa_info && BNXT_CHIP_P5_P7(rxq->bp);
 
 	for (i = 0; i < agg_buf; i++) {
 		struct rte_mbuf **ag_buf;
@@ -395,7 +395,7 @@ static int bnxt_discard_rx(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
 	} else if (cmp_type == RX_TPA_END_CMPL_TYPE_RX_TPA_END) {
 		struct rx_tpa_end_cmpl *tpa_end = cmp;
 
-		if (BNXT_CHIP_P5(bp))
+		if (BNXT_CHIP_P5_P7(bp))
 			return 0;
 
 		agg_bufs = BNXT_TPA_END_AGG_BUFS(tpa_end);
@@ -430,7 +430,7 @@ static inline struct rte_mbuf *bnxt_tpa_end(
 		return NULL;
 	}
 
-	if (BNXT_CHIP_P5(rxq->bp)) {
+	if (BNXT_CHIP_P5_P7(rxq->bp)) {
 		struct rx_tpa_v2_end_cmpl *th_tpa_end;
 		struct rx_tpa_v2_end_cmpl_hi *th_tpa_end1;
 
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index d40daf631e..bf93120d28 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -143,7 +143,7 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig)
 
 	entry_length = HW_HASH_KEY_SIZE;
 
-	if (BNXT_CHIP_P5(bp))
+	if (BNXT_CHIP_P5_P7(bp))
 		rss_table_size = BNXT_RSS_TBL_SIZE_P5 *
 				 2 * sizeof(*vnic->rss_table);
 	else
@@ -418,8 +418,8 @@ static
 int32_t bnxt_vnic_populate_rss_table(struct bnxt *bp,
 				     struct bnxt_vnic_info *vnic)
 {
-	/* RSS table population is different for p4 and p5 platforms */
-	if (BNXT_CHIP_P5(bp))
+	/* RSS table population differs between P4 and P5/P7 platforms */
+	if (BNXT_CHIP_P5_P7(bp))
 		return bnxt_vnic_populate_rss_table_p5(bp, vnic);
 
 	return bnxt_vnic_populate_rss_table_p4(bp, vnic);
-- 
2.39.2 (Apple Git-143)



* [PATCH v2 08/14] net/bnxt: fix array overflow
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (6 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 07/14] net/bnxt: refactor code to support P7 devices Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 09/14] net/bnxt: add support for backing store v2 Ajit Khaparde
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev; +Cc: stable, Damodharam Ammepalli

[-- Attachment #1: Type: text/plain, Size: 4601 bytes --]

In some cases the number of elements in the context memory array
can exceed MAX_CTX_PAGES, which causes the statically sized members
ctx_pg_arr and ctx_dma_arr to overflow.
Allocate the arrays dynamically to prevent this overflow.

Cc: stable@dpdk.org
Fixes: f8168ca0e690 ("net/bnxt: support thor controller")
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  4 ++--
 drivers/net/bnxt/bnxt_ethdev.c | 42 +++++++++++++++++++++++++++-------
 2 files changed, 36 insertions(+), 10 deletions(-)
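
The fix below replaces fixed `MAX_CTX_PAGES`-sized arrays with pointers allocated to exactly `nr_pages` entries. A minimal sketch of the same structure change, with `calloc`/`free` standing in for `rte_zmalloc`/`rte_free` (the `ctx_pg_info` and `iova_t` names are simplified stand-ins for the driver's types):

```c
#include <stdlib.h>
#include <stdint.h>

typedef uint64_t iova_t;	/* stand-in for rte_iova_t */

struct ctx_pg_info {
	uint32_t entries;
	void **ctx_pg_arr;	/* now pointers, sized at runtime */
	iova_t *ctx_dma_arr;
};

/* Size both page arrays for exactly nr_pages entries instead of a fixed
 * compile-time bound, so a large context type can no longer overflow. */
static int ctx_pg_alloc(struct ctx_pg_info *pg, int nr_pages)
{
	pg->ctx_pg_arr = calloc(nr_pages, sizeof(*pg->ctx_pg_arr));
	if (pg->ctx_pg_arr == NULL)
		return -1;
	pg->ctx_dma_arr = calloc(nr_pages, sizeof(*pg->ctx_dma_arr));
	if (pg->ctx_dma_arr == NULL) {
		free(pg->ctx_pg_arr);	/* don't leak the first array */
		pg->ctx_pg_arr = NULL;
		return -1;
	}
	return 0;
}

static void ctx_pg_free(struct ctx_pg_info *pg)
{
	free(pg->ctx_pg_arr);
	free(pg->ctx_dma_arr);
	pg->ctx_pg_arr = NULL;
	pg->ctx_dma_arr = NULL;
}
```

Using `sizeof(*pg->ctx_dma_arr)` rather than spelling out the element type also sidesteps element-size mistakes such as writing `sizeof(rte_iova_t *)` where `sizeof(rte_iova_t)` is meant.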

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 7439ecf4fa..3fbdf1ddcc 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -455,8 +455,8 @@ struct bnxt_ring_mem_info {
 
 struct bnxt_ctx_pg_info {
 	uint32_t	entries;
-	void		*ctx_pg_arr[MAX_CTX_PAGES];
-	rte_iova_t	ctx_dma_arr[MAX_CTX_PAGES];
+	void		**ctx_pg_arr;
+	rte_iova_t	*ctx_dma_arr;
 	struct bnxt_ring_mem_info ring_mem;
 };
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 2a41fafa02..c585373ba3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4767,7 +4767,7 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 {
 	struct bnxt_ring_mem_info *rmem = &ctx_pg->ring_mem;
 	const struct rte_memzone *mz = NULL;
-	char mz_name[RTE_MEMZONE_NAMESIZE];
+	char name[RTE_MEMZONE_NAMESIZE];
 	rte_iova_t mz_phys_addr;
 	uint64_t valid_bits = 0;
 	uint32_t sz;
@@ -4779,6 +4779,19 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 	rmem->nr_pages = RTE_ALIGN_MUL_CEIL(mem_size, BNXT_PAGE_SIZE) /
 			 BNXT_PAGE_SIZE;
 	rmem->page_size = BNXT_PAGE_SIZE;
+
+	snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_pg_arr%s_%x_%d",
+		 suffix, idx, bp->eth_dev->data->port_id);
+	ctx_pg->ctx_pg_arr = rte_zmalloc(name, sizeof(void *) * rmem->nr_pages, 0);
+	if (ctx_pg->ctx_pg_arr == NULL)
+		return -ENOMEM;
+
+	snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_dma_arr%s_%x_%d",
+		 suffix, idx, bp->eth_dev->data->port_id);
+	ctx_pg->ctx_dma_arr = rte_zmalloc(name, sizeof(rte_iova_t) * rmem->nr_pages, 0);
+	if (ctx_pg->ctx_dma_arr == NULL)
+		return -ENOMEM;
+
 	rmem->pg_arr = ctx_pg->ctx_pg_arr;
 	rmem->dma_arr = ctx_pg->ctx_dma_arr;
 	rmem->flags = BNXT_RMEM_VALID_PTE_FLAG;
@@ -4786,13 +4799,13 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 	valid_bits = PTU_PTE_VALID;
 
 	if (rmem->nr_pages > 1) {
-		snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
+		snprintf(name, RTE_MEMZONE_NAMESIZE,
 			 "bnxt_ctx_pg_tbl%s_%x_%d",
 			 suffix, idx, bp->eth_dev->data->port_id);
-		mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0;
-		mz = rte_memzone_lookup(mz_name);
+		name[RTE_MEMZONE_NAMESIZE - 1] = 0;
+		mz = rte_memzone_lookup(name);
 		if (!mz) {
-			mz = rte_memzone_reserve_aligned(mz_name,
+			mz = rte_memzone_reserve_aligned(name,
 						rmem->nr_pages * 8,
 						bp->eth_dev->device->numa_node,
 						RTE_MEMZONE_2MB |
@@ -4811,11 +4824,11 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 		rmem->pg_tbl_mz = mz;
 	}
 
-	snprintf(mz_name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_%s_%x_%d",
+	snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_%s_%x_%d",
 		 suffix, idx, bp->eth_dev->data->port_id);
-	mz = rte_memzone_lookup(mz_name);
+	mz = rte_memzone_lookup(name);
 	if (!mz) {
-		mz = rte_memzone_reserve_aligned(mz_name,
+		mz = rte_memzone_reserve_aligned(name,
 						 mem_size,
 						 bp->eth_dev->device->numa_node,
 						 RTE_MEMZONE_1GB |
@@ -4861,6 +4874,17 @@ static void bnxt_free_ctx_mem(struct bnxt *bp)
 		return;
 
 	bp->ctx->flags &= ~BNXT_CTX_FLAG_INITED;
+	rte_free(bp->ctx->qp_mem.ctx_pg_arr);
+	rte_free(bp->ctx->srq_mem.ctx_pg_arr);
+	rte_free(bp->ctx->cq_mem.ctx_pg_arr);
+	rte_free(bp->ctx->vnic_mem.ctx_pg_arr);
+	rte_free(bp->ctx->stat_mem.ctx_pg_arr);
+	rte_free(bp->ctx->qp_mem.ctx_dma_arr);
+	rte_free(bp->ctx->srq_mem.ctx_dma_arr);
+	rte_free(bp->ctx->cq_mem.ctx_dma_arr);
+	rte_free(bp->ctx->vnic_mem.ctx_dma_arr);
+	rte_free(bp->ctx->stat_mem.ctx_dma_arr);
+
 	rte_memzone_free(bp->ctx->qp_mem.ring_mem.mz);
 	rte_memzone_free(bp->ctx->srq_mem.ring_mem.mz);
 	rte_memzone_free(bp->ctx->cq_mem.ring_mem.mz);
@@ -4873,6 +4897,10 @@ static void bnxt_free_ctx_mem(struct bnxt *bp)
 	rte_memzone_free(bp->ctx->stat_mem.ring_mem.pg_tbl_mz);
 
 	for (i = 0; i < bp->ctx->tqm_fp_rings_count + 1; i++) {
+		if (bp->ctx->tqm_mem[i] == NULL)
+			continue;
+		rte_free(bp->ctx->tqm_mem[i]->ctx_pg_arr);
+		rte_free(bp->ctx->tqm_mem[i]->ctx_dma_arr);
 		if (bp->ctx->tqm_mem[i])
 			rte_memzone_free(bp->ctx->tqm_mem[i]->ring_mem.mz);
 	}
-- 
2.39.2 (Apple Git-143)



* [PATCH v2 09/14] net/bnxt: add support for backing store v2
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (7 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 08/14] net/bnxt: fix array overflow Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 10/14] net/bnxt: refactor the ulp initialization Ajit Khaparde
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev

[-- Attachment #1: Type: text/plain, Size: 27573 bytes --]

Add backing store v2 changes.
The firmware supports the new backing store scheme for P7
and newer devices.

To support this, the driver queries the context types the firmware
supports and allocates appropriately sized memory for the firmware
and hardware to use. The memory is freed again during cleanup.

The older P5 device family continues to use version 1 of the
backing store, while the P4 device family does not need any
backing store memory.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  69 ++++++-
 drivers/net/bnxt/bnxt_ethdev.c | 177 ++++++++++++++++--
 drivers/net/bnxt/bnxt_hwrm.c   | 321 +++++++++++++++++++++++++++++++--
 drivers/net/bnxt/bnxt_hwrm.h   |   8 +
 drivers/net/bnxt/bnxt_util.c   |  10 +
 drivers/net/bnxt/bnxt_util.h   |   1 +
 6 files changed, 547 insertions(+), 39 deletions(-)
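
One self-contained piece of the patch below is `bnxt_init_ctxm_mem()`: depending on what the firmware reports, a context region is either filled wholesale with an init value, or that value is poked at a fixed offset inside every fixed-size entry. The same logic extracted into a standalone sketch (struct and constant names are simplified stand-ins for the driver's):

```c
#include <string.h>
#include <stdint.h>

/* Mirrors BNXT_CTX_INIT_INVALID_OFFSET: "no per-entry offset, fill all". */
#define CTX_INIT_INVALID_OFFSET 0xffff

struct ctx_mem {
	uint16_t entry_size;	/* bytes per context entry */
	uint16_t init_offset;	/* where inside each entry to write */
	uint8_t init_value;	/* firmware-specified init byte */
};

/* Either memset the whole region, or write the init byte at init_offset
 * within each entry_size-sized entry, as bnxt_init_ctxm_mem() does. */
static void init_ctxm_mem(const struct ctx_mem *ctxm, void *p, int len)
{
	uint8_t *bytes = p;
	int i;

	if (!ctxm->init_value)
		return;
	if (ctxm->init_offset == CTX_INIT_INVALID_OFFSET) {
		memset(p, ctxm->init_value, len);
		return;
	}
	for (i = 0; i < len; i += ctxm->entry_size)
		bytes[i + ctxm->init_offset] = ctxm->init_value;
}
```

The per-entry variant matters because some context formats only need one marker byte per entry set (e.g. a validity indicator), and initializing only that byte is much cheaper than filling large backing-store regions.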

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 3fbdf1ddcc..68c4778dc3 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -81,6 +81,11 @@
 #define BROADCOM_DEV_957508_N2100	0x5208
 #define BROADCOM_DEV_957414_N225	0x4145
 
+#define HWRM_SPEC_CODE_1_8_3		0x10803
+#define HWRM_VERSION_1_9_1		0x10901
+#define HWRM_VERSION_1_9_2		0x10903
+#define HWRM_VERSION_1_10_2_13		0x10a020d
+
 #define BNXT_MAX_MTU		9574
 #define BNXT_NUM_VLANS		2
 #define BNXT_MAX_PKT_LEN	(BNXT_MAX_MTU + RTE_ETHER_HDR_LEN +\
@@ -430,16 +435,26 @@ struct bnxt_coal {
 #define BNXT_PAGE_SIZE (1 << BNXT_PAGE_SHFT)
 #define MAX_CTX_PAGES  (BNXT_PAGE_SIZE / 8)
 
+#define BNXT_RTE_MEMZONE_FLAG  (RTE_MEMZONE_1GB | RTE_MEMZONE_IOVA_CONTIG)
+
 #define PTU_PTE_VALID             0x1UL
 #define PTU_PTE_LAST              0x2UL
 #define PTU_PTE_NEXT_TO_LAST      0x4UL
 
+#define BNXT_CTX_MIN		1
+#define BNXT_CTX_INV		0xffff
+
+#define BNXT_CTX_INIT_VALID(flags)	\
+	((flags) &			\
+	 HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_ENABLE_CTX_KIND_INIT)
+
 struct bnxt_ring_mem_info {
 	int				nr_pages;
 	int				page_size;
 	uint32_t			flags;
 #define BNXT_RMEM_VALID_PTE_FLAG	1
 #define BNXT_RMEM_RING_PTE_FLAG		2
+#define BNXT_RMEM_USE_FULL_PAGE_FLAG	4
 
 	void				**pg_arr;
 	rte_iova_t			*dma_arr;
@@ -460,7 +475,50 @@ struct bnxt_ctx_pg_info {
 	struct bnxt_ring_mem_info ring_mem;
 };
 
+struct bnxt_ctx_mem {
+	uint16_t	type;
+	uint16_t	entry_size;
+	uint32_t	flags;
+#define BNXT_CTX_MEM_TYPE_VALID \
+	HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_TYPE_VALID
+	uint32_t	instance_bmap;
+	uint8_t		init_value;
+	uint8_t		entry_multiple;
+	uint16_t	init_offset;
+#define	BNXT_CTX_INIT_INVALID_OFFSET	0xffff
+	uint32_t	max_entries;
+	uint32_t	min_entries;
+	uint8_t		last:1;
+	uint8_t		split_entry_cnt;
+#define BNXT_MAX_SPLIT_ENTRY	4
+	union {
+		struct {
+			uint32_t	qp_l2_entries;
+			uint32_t	qp_qp1_entries;
+			uint32_t	qp_fast_qpmd_entries;
+		};
+		uint32_t	srq_l2_entries;
+		uint32_t	cq_l2_entries;
+		uint32_t	vnic_entries;
+		struct {
+			uint32_t	mrav_av_entries;
+			uint32_t	mrav_num_entries_units;
+		};
+		uint32_t	split[BNXT_MAX_SPLIT_ENTRY];
+	};
+	struct bnxt_ctx_pg_info	*pg_info;
+};
+
+#define BNXT_CTX_FLAG_INITED    0x01
+
 struct bnxt_ctx_mem_info {
+	struct bnxt_ctx_mem	*ctx_arr;
+	uint32_t	supported_types;
+	uint32_t	flags;
+	uint16_t	types;
+	uint8_t		tqm_fp_rings_count;
+
+	/* The following are used for V1 */
 	uint32_t        qp_max_entries;
 	uint16_t        qp_min_qp1_entries;
 	uint16_t        qp_max_l2_entries;
@@ -484,10 +542,6 @@ struct bnxt_ctx_mem_info {
 	uint16_t        tim_entry_size;
 	uint32_t        tim_max_entries;
 	uint8_t         tqm_entries_multiple;
-	uint8_t         tqm_fp_rings_count;
-
-	uint32_t        flags;
-#define BNXT_CTX_FLAG_INITED    0x01
 
 	struct bnxt_ctx_pg_info qp_mem;
 	struct bnxt_ctx_pg_info srq_mem;
@@ -739,6 +793,13 @@ struct bnxt {
 #define BNXT_FW_CAP_TRUFLOW_EN		BIT(8)
 #define BNXT_FW_CAP_VLAN_TX_INSERT	BIT(9)
 #define BNXT_FW_CAP_RX_ALL_PKT_TS	BIT(10)
+#define BNXT_FW_CAP_BACKING_STORE_V2	BIT(12)
+#define BNXT_FW_BACKING_STORE_V2_EN(bp)	\
+	((bp)->fw_cap & BNXT_FW_CAP_BACKING_STORE_V2)
+#define BNXT_FW_BACKING_STORE_V1_EN(bp)	\
+	(BNXT_CHIP_P5_P7((bp)) && \
+	 (bp)->hwrm_spec_code >= HWRM_VERSION_1_9_2 && \
+	 !BNXT_VF((bp)))
 #define BNXT_TRUFLOW_EN(bp)	((bp)->fw_cap & BNXT_FW_CAP_TRUFLOW_EN &&\
 				 (bp)->app_id != 0xFF)
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index c585373ba3..5810e0a2a9 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4759,8 +4759,26 @@ static int bnxt_map_pci_bars(struct rte_eth_dev *eth_dev)
 	return 0;
 }
 
+static void bnxt_init_ctxm_mem(struct bnxt_ctx_mem *ctxm, void *p, int len)
+{
+	uint8_t init_val = ctxm->init_value;
+	uint16_t offset = ctxm->init_offset;
+	uint8_t *p2 = p;
+	int i;
+
+	if (!init_val)
+		return;
+	if (offset == BNXT_CTX_INIT_INVALID_OFFSET) {
+		memset(p, init_val, len);
+		return;
+	}
+	for (i = 0; i < len; i += ctxm->entry_size)
+		*(p2 + i + offset) = init_val;
+}
+
 static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 				  struct bnxt_ctx_pg_info *ctx_pg,
+				  struct bnxt_ctx_mem *ctxm,
 				  uint32_t mem_size,
 				  const char *suffix,
 				  uint16_t idx)
@@ -4776,8 +4794,8 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 	if (!mem_size)
 		return 0;
 
-	rmem->nr_pages = RTE_ALIGN_MUL_CEIL(mem_size, BNXT_PAGE_SIZE) /
-			 BNXT_PAGE_SIZE;
+	rmem->nr_pages =
+		RTE_ALIGN_MUL_CEIL(mem_size, BNXT_PAGE_SIZE) / BNXT_PAGE_SIZE;
 	rmem->page_size = BNXT_PAGE_SIZE;
 
 	snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_pg_arr%s_%x_%d",
@@ -4794,13 +4812,13 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 
 	rmem->pg_arr = ctx_pg->ctx_pg_arr;
 	rmem->dma_arr = ctx_pg->ctx_dma_arr;
-	rmem->flags = BNXT_RMEM_VALID_PTE_FLAG;
+	rmem->flags = BNXT_RMEM_VALID_PTE_FLAG | BNXT_RMEM_USE_FULL_PAGE_FLAG;
 
 	valid_bits = PTU_PTE_VALID;
 
 	if (rmem->nr_pages > 1) {
 		snprintf(name, RTE_MEMZONE_NAMESIZE,
-			 "bnxt_ctx_pg_tbl%s_%x_%d",
+			 "bnxt_ctxpgtbl%s_%x_%d",
 			 suffix, idx, bp->eth_dev->data->port_id);
 		name[RTE_MEMZONE_NAMESIZE - 1] = 0;
 		mz = rte_memzone_lookup(name);
@@ -4816,9 +4834,11 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 				return -ENOMEM;
 		}
 
-		memset(mz->addr, 0, mz->len);
+		memset(mz->addr, 0xff, mz->len);
 		mz_phys_addr = mz->iova;
 
+		if (ctxm != NULL)
+			bnxt_init_ctxm_mem(ctxm, mz->addr, mz->len);
 		rmem->pg_tbl = mz->addr;
 		rmem->pg_tbl_map = mz_phys_addr;
 		rmem->pg_tbl_mz = mz;
@@ -4839,9 +4859,11 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 			return -ENOMEM;
 	}
 
-	memset(mz->addr, 0, mz->len);
+	memset(mz->addr, 0xff, mz->len);
 	mz_phys_addr = mz->iova;
 
+	if (ctxm != NULL)
+		bnxt_init_ctxm_mem(ctxm, mz->addr, mz->len);
 	for (sz = 0, i = 0; sz < mem_size; sz += BNXT_PAGE_SIZE, i++) {
 		rmem->pg_arr[i] = ((char *)mz->addr) + sz;
 		rmem->dma_arr[i] = mz_phys_addr + sz;
@@ -4866,6 +4888,34 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp,
 	return 0;
 }
 
+static void bnxt_free_ctx_mem_v2(struct bnxt *bp)
+{
+	uint16_t type;
+
+	for (type = 0; type < bp->ctx->types; type++) {
+		struct bnxt_ctx_mem *ctxm = &bp->ctx->ctx_arr[type];
+		struct bnxt_ctx_pg_info *ctx_pg = ctxm->pg_info;
+		int i, n = 1;
+
+		if (!ctx_pg)
+			continue;
+		if (ctxm->instance_bmap)
+			n = hweight32(ctxm->instance_bmap);
+
+		for (i = 0; i < n; i++) {
+			rte_free(ctx_pg[i].ctx_pg_arr);
+			rte_free(ctx_pg[i].ctx_dma_arr);
+			rte_memzone_free(ctx_pg[i].ring_mem.mz);
+			rte_memzone_free(ctx_pg[i].ring_mem.pg_tbl_mz);
+		}
+
+		rte_free(ctx_pg);
+		ctxm->pg_info = NULL;
+	}
+	rte_free(bp->ctx->ctx_arr);
+	bp->ctx->ctx_arr = NULL;
+}
+
 static void bnxt_free_ctx_mem(struct bnxt *bp)
 {
 	int i;
@@ -4874,6 +4924,12 @@ static void bnxt_free_ctx_mem(struct bnxt *bp)
 		return;
 
 	bp->ctx->flags &= ~BNXT_CTX_FLAG_INITED;
+
+	if (BNXT_FW_BACKING_STORE_V2_EN(bp)) {
+		bnxt_free_ctx_mem_v2(bp);
+		goto free_ctx;
+	}
+
 	rte_free(bp->ctx->qp_mem.ctx_pg_arr);
 	rte_free(bp->ctx->srq_mem.ctx_pg_arr);
 	rte_free(bp->ctx->cq_mem.ctx_pg_arr);
@@ -4903,6 +4959,7 @@ static void bnxt_free_ctx_mem(struct bnxt *bp)
 			rte_memzone_free(bp->ctx->tqm_mem[i]->ring_mem.mz);
 	}
 
+free_ctx:
 	rte_free(bp->ctx);
 	bp->ctx = NULL;
 }
@@ -4921,28 +4978,113 @@ static void bnxt_free_ctx_mem(struct bnxt *bp)
 
 #define clamp_t(type, _x, min, max)     min_t(type, max_t(type, _x, min), max)
 
+int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp)
+{
+	struct bnxt_ctx_mem_info *ctx = bp->ctx;
+	struct bnxt_ctx_mem *ctx2;
+	uint16_t type;
+	int rc = 0;
+
+	ctx2 = &ctx->ctx_arr[0];
+	for (type = 0; type < ctx->types && rc == 0; type++) {
+		struct bnxt_ctx_mem *ctxm = &ctx->ctx_arr[type];
+		struct bnxt_ctx_pg_info *ctx_pg;
+		uint32_t entries, mem_size;
+		int w = 1;
+		int i;
+
+		if (ctxm->entry_size == 0)
+			continue;
+
+		ctx_pg = ctxm->pg_info;
+
+		if (ctxm->instance_bmap)
+			w = hweight32(ctxm->instance_bmap);
+
+		for (i = 0; i < w && rc == 0; i++) {
+			char name[RTE_MEMZONE_NAMESIZE] = {0};
+
+			sprintf(name, "_%d_%d", i, type);
+
+			if (ctxm->entry_multiple)
+				entries = bnxt_roundup(ctxm->max_entries,
+						       ctxm->entry_multiple);
+			else
+				entries = ctxm->max_entries;
+
+			if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_CQ)
+				entries = ctxm->cq_l2_entries;
+			else if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_QP)
+				entries = ctxm->qp_l2_entries;
+			else if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_MRAV)
+				entries = ctxm->mrav_av_entries;
+			else if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_TIM)
+				entries = ctx2->qp_l2_entries;
+			entries = clamp_t(uint32_t, entries, ctxm->min_entries,
+					  ctxm->max_entries);
+			ctx_pg[i].entries = entries;
+			mem_size = ctxm->entry_size * entries;
+			PMD_DRV_LOG(DEBUG,
+				    "Type:0x%x instance:%d entries:%d size:%d\n",
+				    ctxm->type, i, ctx_pg[i].entries, mem_size);
+			rc = bnxt_alloc_ctx_mem_blk(bp, &ctx_pg[i],
+						    ctxm->init_value ? ctxm : NULL,
+						    mem_size, name, i);
+		}
+	}
+
+	return rc;
+}
+
 int bnxt_alloc_ctx_mem(struct bnxt *bp)
 {
 	struct bnxt_ctx_pg_info *ctx_pg;
 	struct bnxt_ctx_mem_info *ctx;
 	uint32_t mem_size, ena, entries;
+	int types = BNXT_CTX_MIN;
 	uint32_t entries_sp, min;
-	int i, rc;
+	int i, rc = 0;
+
+	if (!BNXT_FW_BACKING_STORE_V1_EN(bp) &&
+	    !BNXT_FW_BACKING_STORE_V2_EN(bp))
+		return rc;
+
+	if (BNXT_FW_BACKING_STORE_V2_EN(bp)) {
+		types = bnxt_hwrm_func_backing_store_types_count(bp);
+		if (types <= 0)
+			return types;
+	}
+
+	rc = bnxt_hwrm_func_backing_store_ctx_alloc(bp, types);
+	if (rc != 0)
+		return rc;
+
+	if (bp->ctx->flags & BNXT_CTX_FLAG_INITED)
+		return 0;
+
+	ctx = bp->ctx;
+	if (BNXT_FW_BACKING_STORE_V2_EN(bp)) {
+		rc = bnxt_hwrm_func_backing_store_qcaps_v2(bp);
+
+		for (i = 0 ; i < bp->ctx->types && rc == 0; i++) {
+			struct bnxt_ctx_mem *ctxm = &ctx->ctx_arr[i];
+
+			rc = bnxt_hwrm_func_backing_store_cfg_v2(bp, ctxm);
+		}
+		goto done;
+	}
 
 	rc = bnxt_hwrm_func_backing_store_qcaps(bp);
 	if (rc) {
 		PMD_DRV_LOG(ERR, "Query context mem capability failed\n");
 		return rc;
 	}
-	ctx = bp->ctx;
-	if (!ctx || (ctx->flags & BNXT_CTX_FLAG_INITED))
-		return 0;
 
 	ctx_pg = &ctx->qp_mem;
 	ctx_pg->entries = ctx->qp_min_qp1_entries + ctx->qp_max_l2_entries;
 	if (ctx->qp_entry_size) {
 		mem_size = ctx->qp_entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "qp_mem", 0);
+		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "qp_mem", 0);
 		if (rc)
 			return rc;
 	}
@@ -4951,7 +5093,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	ctx_pg->entries = ctx->srq_max_l2_entries;
 	if (ctx->srq_entry_size) {
 		mem_size = ctx->srq_entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "srq_mem", 0);
+		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "srq_mem", 0);
 		if (rc)
 			return rc;
 	}
@@ -4960,7 +5102,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	ctx_pg->entries = ctx->cq_max_l2_entries;
 	if (ctx->cq_entry_size) {
 		mem_size = ctx->cq_entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "cq_mem", 0);
+		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "cq_mem", 0);
 		if (rc)
 			return rc;
 	}
@@ -4970,7 +5112,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp)
 		ctx->vnic_max_ring_table_entries;
 	if (ctx->vnic_entry_size) {
 		mem_size = ctx->vnic_entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "vnic_mem", 0);
+		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "vnic_mem", 0);
 		if (rc)
 			return rc;
 	}
@@ -4979,7 +5121,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	ctx_pg->entries = ctx->stat_max_entries;
 	if (ctx->stat_entry_size) {
 		mem_size = ctx->stat_entry_size * ctx_pg->entries;
-		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "stat_mem", 0);
+		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "stat_mem", 0);
 		if (rc)
 			return rc;
 	}
@@ -5003,8 +5145,8 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp)
 		ctx_pg->entries = i ? entries : entries_sp;
 		if (ctx->tqm_entry_size) {
 			mem_size = ctx->tqm_entry_size * ctx_pg->entries;
-			rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size,
-						    "tqm_mem", i);
+			rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL,
+						    mem_size, "tqm_mem", i);
 			if (rc)
 				return rc;
 		}
@@ -5016,6 +5158,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp)
 
 	ena |= FUNC_BACKING_STORE_CFG_INPUT_DFLT_ENABLES;
 	rc = bnxt_hwrm_func_backing_store_cfg(bp, ena);
+done:
 	if (rc)
 		PMD_DRV_LOG(ERR,
 			    "Failed to configure context mem: rc = %d\n", rc);
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 2d0a7a2731..dda3d3a6ac 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -24,10 +24,6 @@
 #include "bnxt_vnic.h"
 #include "hsi_struct_def_dpdk.h"
 
-#define HWRM_SPEC_CODE_1_8_3		0x10803
-#define HWRM_VERSION_1_9_1		0x10901
-#define HWRM_VERSION_1_9_2		0x10903
-#define HWRM_VERSION_1_10_2_13		0x10a020d
 struct bnxt_plcmodes_cfg {
 	uint32_t	flags;
 	uint16_t	jumbo_thresh;
@@ -35,6 +31,43 @@ struct bnxt_plcmodes_cfg {
 	uint16_t	hds_threshold;
 };
 
+const char *bnxt_backing_store_types[] = {
+	"Queue pair",
+	"Shared receive queue",
+	"Completion queue",
+	"Virtual NIC",
+	"Statistic context",
+	"Slow-path TQM ring",
+	"Fast-path TQM ring",
+	"Unused",
+	"Unused",
+	"Unused",
+	"Unused",
+	"Unused",
+	"Unused",
+	"Unused",
+	"MR and MAV Context",
+	"TIM",
+	"Unused",
+	"Unused",
+	"Unused",
+	"Tx key context",
+	"Rx key context",
+	"Mid-path TQM ring",
+	"SQ Doorbell shadow region",
+	"RQ Doorbell shadow region",
+	"SRQ Doorbell shadow region",
+	"CQ Doorbell shadow region",
+	"QUIC Tx key context",
+	"QUIC Rx key context",
+	"Invalid type",
+	"Invalid type",
+	"Invalid type",
+	"Invalid type",
+	"Invalid type",
+	"Invalid type"
+};
+
 static int page_getenum(size_t size)
 {
 	if (size <= 1 << 4)
@@ -894,6 +927,11 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp)
 	if (flags & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_LINK_ADMIN_STATUS_SUPPORTED)
 		bp->fw_cap |= BNXT_FW_CAP_LINK_ADMIN;
 
+	if (flags & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_BS_V2_SUPPORTED) {
+		PMD_DRV_LOG(DEBUG, "Backing store v2 supported\n");
+		if (BNXT_CHIP_P7(bp))
+			bp->fw_cap |= BNXT_FW_CAP_BACKING_STORE_V2;
+	}
 	if (!(flags & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_VLAN_ACCELERATION_TX_DISABLED)) {
 		bp->fw_cap |= BNXT_FW_CAP_VLAN_TX_INSERT;
 		PMD_DRV_LOG(DEBUG, "VLAN acceleration for TX is enabled\n");
@@ -5461,7 +5499,196 @@ int bnxt_hwrm_set_ring_coal(struct bnxt *bp,
 	return 0;
 }
 
-#define BNXT_RTE_MEMZONE_FLAG  (RTE_MEMZONE_1GB | RTE_MEMZONE_IOVA_CONTIG)
+static void bnxt_init_ctx_initializer(struct bnxt_ctx_mem *ctxm,
+				      uint8_t init_val,
+				      uint8_t init_offset,
+				      bool init_mask_set)
+{
+	ctxm->init_value = init_val;
+	ctxm->init_offset = BNXT_CTX_INIT_INVALID_OFFSET;
+	if (init_mask_set)
+		ctxm->init_offset = init_offset * 4;
+	else
+		ctxm->init_value = 0;
+}
+
+static int bnxt_alloc_all_ctx_pg_info(struct bnxt *bp)
+{
+	struct bnxt_ctx_mem_info *ctx = bp->ctx;
+	char name[RTE_MEMZONE_NAMESIZE];
+	uint16_t type;
+
+	for (type = 0; type < ctx->types; type++) {
+		struct bnxt_ctx_mem *ctxm = &ctx->ctx_arr[type];
+		int n = 1;
+
+		if (!ctxm->max_entries || ctxm->pg_info)
+			continue;
+
+		if (ctxm->instance_bmap)
+			n = hweight32(ctxm->instance_bmap);
+
+		sprintf(name, "bnxt_ctx_pgmem_%d_%d",
+			bp->eth_dev->data->port_id, type);
+		ctxm->pg_info = rte_malloc(name, sizeof(*ctxm->pg_info) * n,
+					   RTE_CACHE_LINE_SIZE);
+		if (!ctxm->pg_info)
+			return -ENOMEM;
+	}
+	return 0;
+}
+
+static void bnxt_init_ctx_v2_driver_managed(struct bnxt *bp __rte_unused,
+					    struct bnxt_ctx_mem *ctxm)
+{
+	switch (ctxm->type) {
+	case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_SQ_DB_SHADOW:
+	case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_RQ_DB_SHADOW:
+	case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_SRQ_DB_SHADOW:
+	case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_CQ_DB_SHADOW:
+		/* FALLTHROUGH */
+		ctxm->entry_size = 0;
+		ctxm->min_entries = 1;
+		ctxm->max_entries = 1;
+		break;
+	}
+}
+
+int bnxt_hwrm_func_backing_store_qcaps_v2(struct bnxt *bp)
+{
+	struct hwrm_func_backing_store_qcaps_v2_input req = {0};
+	struct hwrm_func_backing_store_qcaps_v2_output *resp =
+		bp->hwrm_cmd_resp_addr;
+	struct bnxt_ctx_mem_info *ctx = bp->ctx;
+	uint16_t last_valid_type = BNXT_CTX_INV;
+	uint16_t last_valid_idx = 0;
+	uint16_t types, type;
+	int rc;
+
+	for (types = 0, type = 0; types < bp->ctx->types && type != BNXT_CTX_INV; types++) {
+		struct bnxt_ctx_mem *ctxm = &bp->ctx->ctx_arr[types];
+		uint8_t init_val, init_off, i;
+		uint32_t *p;
+		uint32_t flags;
+
+		HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS_V2, BNXT_USE_CHIMP_MB);
+		req.type = rte_cpu_to_le_16(type);
+		rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+		HWRM_CHECK_RESULT();
+
+		flags = rte_le_to_cpu_32(resp->flags);
+		type = rte_le_to_cpu_16(resp->next_valid_type);
+		if (!(flags & HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_TYPE_VALID))
+			goto next;
+
+		ctxm->type = rte_le_to_cpu_16(resp->type);
+
+		ctxm->flags = flags;
+		if (flags &
+		    HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_DRIVER_MANAGED_MEMORY) {
+			bnxt_init_ctx_v2_driver_managed(bp, ctxm);
+			goto next;
+		}
+		ctxm->entry_size = rte_le_to_cpu_16(resp->entry_size);
+
+		if (ctxm->entry_size == 0)
+			goto next;
+
+		ctxm->instance_bmap = rte_le_to_cpu_32(resp->instance_bit_map);
+		ctxm->entry_multiple = resp->entry_multiple;
+		ctxm->max_entries = rte_le_to_cpu_32(resp->max_num_entries);
+		ctxm->min_entries = rte_le_to_cpu_32(resp->min_num_entries);
+		init_val = resp->ctx_init_value;
+		init_off = resp->ctx_init_offset;
+		bnxt_init_ctx_initializer(ctxm, init_val, init_off,
+					  BNXT_CTX_INIT_VALID(flags));
+		ctxm->split_entry_cnt = RTE_MIN(resp->subtype_valid_cnt,
+						BNXT_MAX_SPLIT_ENTRY);
+		for (i = 0, p = &resp->split_entry_0; i < ctxm->split_entry_cnt;
+		     i++, p++)
+			ctxm->split[i] = rte_le_to_cpu_32(*p);
+
+		PMD_DRV_LOG(DEBUG,
+			    "type:%s size:%d multiple:%d max:%d min:%d split:%d init_val:%d init_off:%d init:%d bmap:0x%x\n",
+			    bnxt_backing_store_types[ctxm->type], ctxm->entry_size,
+			    ctxm->entry_multiple, ctxm->max_entries, ctxm->min_entries,
+			    ctxm->split_entry_cnt, init_val, init_off,
+			    BNXT_CTX_INIT_VALID(flags), ctxm->instance_bmap);
+		last_valid_type = ctxm->type;
+		last_valid_idx = types;
+next:
+		HWRM_UNLOCK();
+	}
+	ctx->ctx_arr[last_valid_idx].last = true;
+	PMD_DRV_LOG(DEBUG, "Last valid type 0x%x\n", last_valid_type);
+
+	rc = bnxt_alloc_all_ctx_pg_info(bp);
+	if (rc == 0)
+		rc = bnxt_alloc_ctx_pg_tbls(bp);
+	return rc;
+}
+
+int bnxt_hwrm_func_backing_store_types_count(struct bnxt *bp)
+{
+	struct hwrm_func_backing_store_qcaps_v2_input req = {0};
+	struct hwrm_func_backing_store_qcaps_v2_output *resp =
+		bp->hwrm_cmd_resp_addr;
+	uint16_t type = 0;
+	int types = 0;
+	int rc;
+
+	/* Calculate number of valid context types */
+	do {
+		uint32_t flags;
+
+		HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS_V2, BNXT_USE_CHIMP_MB);
+		req.type = rte_cpu_to_le_16(type);
+		rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+		HWRM_CHECK_RESULT();
+		if (rc != 0)
+			return rc;
+
+		flags = rte_le_to_cpu_32(resp->flags);
+		type = rte_le_to_cpu_16(resp->next_valid_type);
+		HWRM_UNLOCK();
+
+		if (flags & HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_TYPE_VALID) {
+			PMD_DRV_LOG(DEBUG, "Valid types 0x%x - %s\n",
+				    req.type, bnxt_backing_store_types[req.type]);
+			types++;
+		}
+	} while (type != HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_INVALID);
+	PMD_DRV_LOG(DEBUG, "Number of valid types %d\n", types);
+
+	return types;
+}
+
+int bnxt_hwrm_func_backing_store_ctx_alloc(struct bnxt *bp, uint16_t types)
+{
+	int alloc_len = sizeof(struct bnxt_ctx_mem_info);
+
+	if (!BNXT_CHIP_P5_P7(bp) ||
+	    bp->hwrm_spec_code < HWRM_VERSION_1_9_2 ||
+	    BNXT_VF(bp) ||
+	    bp->ctx)
+		return 0;
+
+	bp->ctx = rte_zmalloc("bnxt_ctx_mem", alloc_len,
+			      RTE_CACHE_LINE_SIZE);
+	if (bp->ctx == NULL)
+		return -ENOMEM;
+
+	alloc_len = sizeof(struct bnxt_ctx_mem) * types;
+	bp->ctx->ctx_arr = rte_zmalloc("bnxt_ctx_mem_arr",
+				       alloc_len,
+				       RTE_CACHE_LINE_SIZE);
+	if (bp->ctx->ctx_arr == NULL)
+		return -ENOMEM;
+
+	bp->ctx->types = types;
+	return 0;
+}
+
 int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 {
 	struct hwrm_func_backing_store_qcaps_input req = {0};
@@ -5469,27 +5696,19 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 		bp->hwrm_cmd_resp_addr;
 	struct bnxt_ctx_pg_info *ctx_pg;
 	struct bnxt_ctx_mem_info *ctx;
-	int total_alloc_len;
 	int rc, i, tqm_rings;
 
 	if (!BNXT_CHIP_P5_P7(bp) ||
 	    bp->hwrm_spec_code < HWRM_VERSION_1_9_2 ||
 	    BNXT_VF(bp) ||
-	    bp->ctx)
+	    bp->ctx->flags & BNXT_CTX_FLAG_INITED)
 		return 0;
 
+	ctx = bp->ctx;
 	HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB);
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 	HWRM_CHECK_RESULT_SILENT();
 
-	total_alloc_len = sizeof(*ctx);
-	ctx = rte_zmalloc("bnxt_ctx_mem", total_alloc_len,
-			  RTE_CACHE_LINE_SIZE);
-	if (!ctx) {
-		rc = -ENOMEM;
-		goto ctx_err;
-	}
-
 	ctx->qp_max_entries = rte_le_to_cpu_32(resp->qp_max_entries);
 	ctx->qp_min_qp1_entries =
 		rte_le_to_cpu_16(resp->qp_min_qp1_entries);
@@ -5500,8 +5719,13 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 		rte_le_to_cpu_16(resp->srq_max_l2_entries);
 	ctx->srq_max_entries = rte_le_to_cpu_32(resp->srq_max_entries);
 	ctx->srq_entry_size = rte_le_to_cpu_16(resp->srq_entry_size);
-	ctx->cq_max_l2_entries =
-		rte_le_to_cpu_16(resp->cq_max_l2_entries);
+	if (BNXT_CHIP_P7(bp))
+		ctx->cq_max_l2_entries =
+			RTE_MIN(BNXT_P7_CQ_MAX_L2_ENT,
+				rte_le_to_cpu_16(resp->cq_max_l2_entries));
+	else
+		ctx->cq_max_l2_entries =
+			rte_le_to_cpu_16(resp->cq_max_l2_entries);
 	ctx->cq_max_entries = rte_le_to_cpu_32(resp->cq_max_entries);
 	ctx->cq_entry_size = rte_le_to_cpu_16(resp->cq_entry_size);
 	ctx->vnic_max_vnic_entries =
@@ -5555,12 +5779,73 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp)
 	for (i = 0; i < tqm_rings; i++, ctx_pg++)
 		ctx->tqm_mem[i] = ctx_pg;
 
-	bp->ctx = ctx;
 ctx_err:
 	HWRM_UNLOCK();
 	return rc;
 }
 
+int bnxt_hwrm_func_backing_store_cfg_v2(struct bnxt *bp,
+					struct bnxt_ctx_mem *ctxm)
+{
+	struct hwrm_func_backing_store_cfg_v2_input req = {0};
+	struct hwrm_func_backing_store_cfg_v2_output *resp =
+		bp->hwrm_cmd_resp_addr;
+	struct bnxt_ctx_pg_info *ctx_pg;
+	int i, j, k;
+	uint32_t *p;
+	int rc = 0;
+	int w = 1;
+	int b = 1;
+
+	if (!BNXT_PF(bp)) {
+		PMD_DRV_LOG(INFO,
+			    "Backing store config V2 can be issued on PF only\n");
+		return 0;
+	}
+
+	if (!(ctxm->flags & BNXT_CTX_MEM_TYPE_VALID) || !ctxm->pg_info)
+		return 0;
+
+	if (ctxm->instance_bmap)
+		b = ctxm->instance_bmap;
+
+	w = hweight32(b);
+
+	for (i = 0, j = 0; i < w && rc == 0; i++) {
+		if (!(b & (1 << i)))
+			continue;
+
+		HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_CFG_V2, BNXT_USE_CHIMP_MB);
+		req.type = rte_cpu_to_le_16(ctxm->type);
+		req.entry_size = rte_cpu_to_le_16(ctxm->entry_size);
+		req.subtype_valid_cnt = ctxm->split_entry_cnt;
+		for (k = 0, p = &req.split_entry_0; k < ctxm->split_entry_cnt; k++)
+			p[k] = rte_cpu_to_le_32(ctxm->split[k]);
+
+		req.instance = rte_cpu_to_le_16(i);
+		ctx_pg = &ctxm->pg_info[j++];
+		if (!ctx_pg->entries)
+			goto unlock;
+
+		req.num_entries = rte_cpu_to_le_32(ctx_pg->entries);
+		bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem,
+				      &req.page_size_pbl_level,
+				      &req.page_dir);
+		PMD_DRV_LOG(DEBUG,
+			    "Backing store config V2 type:%s last %d, instance %d, hw %d\n",
+			    bnxt_backing_store_types[req.type], ctxm->last, j, w);
+		if (ctxm->last && i == (w - 1))
+			req.flags =
+			rte_cpu_to_le_32(BACKING_STORE_CFG_V2_IN_FLG_CFG_ALL_DONE);
+
+		rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
+		HWRM_CHECK_RESULT();
+unlock:
+		HWRM_UNLOCK();
+	}
+	return rc;
+}
+
 int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, uint32_t enables)
 {
 	struct hwrm_func_backing_store_cfg_input req = {0};
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index f9fa6cf73a..3d5194257b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -60,6 +60,8 @@ struct hwrm_func_qstats_output;
 	HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_PAM4_LINK_SPEED_MASK
 #define HWRM_PORT_PHY_CFG_IN_EN_AUTO_LINK_SPEED_MASK \
 	HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_LINK_SPEED_MASK
+#define BACKING_STORE_CFG_V2_IN_FLG_CFG_ALL_DONE \
+	HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_FLAGS_BS_CFG_ALL_DONE
 
 #define HWRM_SPEC_CODE_1_8_4		0x10804
 #define HWRM_SPEC_CODE_1_9_0		0x10900
@@ -355,4 +357,10 @@ void bnxt_free_hwrm_tx_ring(struct bnxt *bp, int queue_index);
 int bnxt_alloc_hwrm_tx_ring(struct bnxt *bp, int queue_index);
 int bnxt_hwrm_config_host_mtu(struct bnxt *bp);
 int bnxt_vnic_rss_clear_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic);
+int bnxt_hwrm_func_backing_store_qcaps_v2(struct bnxt *bp);
+int bnxt_hwrm_func_backing_store_cfg_v2(struct bnxt *bp,
+					struct bnxt_ctx_mem *ctxm);
+int bnxt_hwrm_func_backing_store_types_count(struct bnxt *bp);
+int bnxt_hwrm_func_backing_store_ctx_alloc(struct bnxt *bp, uint16_t types);
+int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp);
 #endif
diff --git a/drivers/net/bnxt/bnxt_util.c b/drivers/net/bnxt/bnxt_util.c
index 47dd5fa6ff..aa184496c2 100644
--- a/drivers/net/bnxt/bnxt_util.c
+++ b/drivers/net/bnxt/bnxt_util.c
@@ -27,3 +27,13 @@ void bnxt_eth_hw_addr_random(uint8_t *mac_addr)
 	mac_addr[1] = 0x0a;
 	mac_addr[2] = 0xf7;
 }
+
+uint8_t hweight32(uint32_t word32)
+{
+	uint32_t res = word32 - ((word32 >> 1) & 0x55555555);
+
+	res = (res & 0x33333333) + ((res >> 2) & 0x33333333);
+	res = (res + (res >> 4)) & 0x0F0F0F0F;
+	res = res + (res >> 8);
+	return (res + (res >> 16)) & 0x000000FF;
+}
diff --git a/drivers/net/bnxt/bnxt_util.h b/drivers/net/bnxt/bnxt_util.h
index 7f5b4c160e..b265f5841b 100644
--- a/drivers/net/bnxt/bnxt_util.h
+++ b/drivers/net/bnxt/bnxt_util.h
@@ -17,4 +17,5 @@
 
 int bnxt_check_zero_bytes(const uint8_t *bytes, int len);
 void bnxt_eth_hw_addr_random(uint8_t *mac_addr);
+uint8_t hweight32(uint32_t word32);
 #endif /* _BNXT_UTIL_H_ */
-- 
2.39.2 (Apple Git-143)


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4218 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 10/14] net/bnxt: refactor the ulp initialization
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (8 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 09/14] net/bnxt: add support for backing store v2 Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 11/14] net/bnxt: modify sending new HWRM commands to firmware Ajit Khaparde
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Mike Baucom

[-- Attachment #1: Type: text/plain, Size: 2573 bytes --]

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Add a new method that checks all the conditions that must
be met before the ULP can be initialized.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 5810e0a2a9..6282f16a7d 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -190,6 +190,7 @@ static void bnxt_dev_recover(void *arg);
 static void bnxt_free_error_recovery_info(struct bnxt *bp);
 static void bnxt_free_rep_info(struct bnxt *bp);
 static int bnxt_check_fw_ready(struct bnxt *bp);
+static bool bnxt_enable_ulp(struct bnxt *bp);
 
 int is_bnxt_in_error(struct bnxt *bp)
 {
@@ -1520,7 +1521,8 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev)
 		return ret;
 
 	/* delete the bnxt ULP port details */
-	bnxt_ulp_port_deinit(bp);
+	if (bnxt_enable_ulp(bp))
+		bnxt_ulp_port_deinit(bp);
 
 	bnxt_cancel_fw_health_check(bp);
 
@@ -1641,9 +1643,11 @@ int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 		goto error;
 
 	/* Initialize bnxt ULP port details */
-	rc = bnxt_ulp_port_init(bp);
-	if (rc)
-		goto error;
+	if (bnxt_enable_ulp(bp)) {
+		rc = bnxt_ulp_port_init(bp);
+		if (rc)
+			goto error;
+	}
 
 	eth_dev->rx_pkt_burst = bnxt_receive_function(eth_dev);
 	eth_dev->tx_pkt_burst = bnxt_transmit_function(eth_dev);
@@ -3426,7 +3430,7 @@ bnxt_flow_ops_get_op(struct rte_eth_dev *dev,
 	 */
 	dev->data->dev_flags |= RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE;
 
-	if (BNXT_TRUFLOW_EN(bp))
+	if (bnxt_enable_ulp(bp))
 		*ops = &bnxt_ulp_rte_flow_ops;
 	else
 		*ops = &bnxt_flow_ops;
@@ -6666,6 +6670,20 @@ struct tf *bnxt_get_tfp_session(struct bnxt *bp, enum bnxt_session_type type)
 		&bp->tfp[BNXT_SESSION_TYPE_REGULAR] : &bp->tfp[type];
 }
 
+/* check if ULP should be enabled or not */
+static bool bnxt_enable_ulp(struct bnxt *bp)
+{
+	/* truflow and MPC should be enabled */
+	/* not enabling ulp for cli and no truflow apps */
+	if (BNXT_TRUFLOW_EN(bp) && bp->app_id != 254 &&
+	    bp->app_id != 255) {
+		if (BNXT_CHIP_P7(bp))
+			return false;
+		return true;
+	}
+	return false;
+}
+
 RTE_LOG_REGISTER_SUFFIX(bnxt_logtype_driver, driver, NOTICE);
 RTE_PMD_REGISTER_PCI(net_bnxt, bnxt_rte_pmd);
 RTE_PMD_REGISTER_PCI_TABLE(net_bnxt, bnxt_pci_id_map);
-- 
2.39.2 (Apple Git-143)


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4218 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 11/14] net/bnxt: modify sending new HWRM commands to firmware
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (9 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 10/14] net/bnxt: refactor the ulp initialization Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 12/14] net/bnxt: retry HWRM ver get if the command fails Ajit Khaparde
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev; +Cc: Damodharam Ammepalli

[-- Attachment #1: Type: text/plain, Size: 1787 bytes --]

If the firmware fails to respond to a HWRM command within a certain time,
it may be because the firmware is in a bad state.
Do not send any new HWRM commands in that case.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
---
 drivers/net/bnxt/bnxt.h      | 1 +
 drivers/net/bnxt/bnxt_hwrm.c | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 68c4778dc3..f7a60eb9a1 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -745,6 +745,7 @@ struct bnxt {
 #define BNXT_FLAG_DFLT_MAC_SET			BIT(26)
 #define BNXT_FLAG_GFID_ENABLE			BIT(27)
 #define BNXT_FLAG_CHIP_P7			BIT(30)
+#define BNXT_FLAG_FW_TIMEDOUT			BIT(31)
 #define BNXT_PF(bp)		(!((bp)->flags & BNXT_FLAG_VF))
 #define BNXT_VF(bp)		((bp)->flags & BNXT_FLAG_VF)
 #define BNXT_NPAR(bp)		((bp)->flags & BNXT_FLAG_NPAR_PF)
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index dda3d3a6ac..2835d48a0e 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -215,6 +215,10 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
 	if (bp->flags & BNXT_FLAG_FATAL_ERROR)
 		return 0;
 
+	/* If the previous HWRM command timed out, do not send a new HWRM command */
+	if (bp->flags & BNXT_FLAG_FW_TIMEDOUT)
+		return 0;
+
 	timeout = bp->hwrm_cmd_timeout;
 
 	/* Update the message length for backing store config for new FW. */
@@ -315,6 +319,7 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg,
 		PMD_DRV_LOG(ERR,
 			    "Error(timeout) sending msg 0x%04x, seq_id %d\n",
 			    req->req_type, req->seq_id);
+		bp->flags |= BNXT_FLAG_FW_TIMEDOUT;
 		return -ETIMEDOUT;
 	}
 	return 0;
-- 
2.39.2 (Apple Git-143)


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4218 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 12/14] net/bnxt: retry HWRM ver get if the command fails
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (10 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 11/14] net/bnxt: modify sending new HWRM commands to firmware Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 13/14] net/bnxt: cap ring resources for P7 devices Ajit Khaparde
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev; +Cc: Kalesh AP, Somnath Kotur

[-- Attachment #1: Type: text/plain, Size: 2019 bytes --]

Retry HWRM ver get if the command times out because of a PCI FLR.
When the PCI driver issues an FLR during device initialization,
the firmware may have to block the PXP target traffic till the FLR
is complete.

HWRM_VER_GET command issued during that window may time out.
So retry the command again in such a scenario.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  1 +
 drivers/net/bnxt/bnxt_ethdev.c | 12 +++++++++++-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index f7a60eb9a1..7aed4c3da3 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -879,6 +879,7 @@ struct bnxt {
 
 	 /* default command timeout value of 500ms */
 #define DFLT_HWRM_CMD_TIMEOUT		500000
+#define PCI_FUNC_RESET_WAIT_TIMEOUT	1500000
 	 /* short command timeout value of 50ms */
 #define SHORT_HWRM_CMD_TIMEOUT		50000
 	/* default HWRM request timeout value */
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 6282f16a7d..8aca3c6fba 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -5441,6 +5441,7 @@ static int bnxt_map_hcomm_fw_status_reg(struct bnxt *bp)
 static int bnxt_get_config(struct bnxt *bp)
 {
 	uint16_t mtu;
+	int timeout;
 	int rc = 0;
 
 	bp->fw_cap = 0;
@@ -5449,8 +5450,17 @@ static int bnxt_get_config(struct bnxt *bp)
 	if (rc)
 		return rc;
 
-	rc = bnxt_hwrm_ver_get(bp, DFLT_HWRM_CMD_TIMEOUT);
+	timeout = BNXT_CHIP_P7(bp) ?
+		  PCI_FUNC_RESET_WAIT_TIMEOUT :
+		  DFLT_HWRM_CMD_TIMEOUT;
+try_again:
+	rc = bnxt_hwrm_ver_get(bp, timeout);
 	if (rc) {
+		if (rc == -ETIMEDOUT && timeout == PCI_FUNC_RESET_WAIT_TIMEOUT) {
+			bp->flags &= ~BNXT_FLAG_FW_TIMEDOUT;
+			timeout = DFLT_HWRM_CMD_TIMEOUT;
+			goto try_again;
+		}
 		bnxt_check_fw_status(bp);
 		return rc;
 	}
-- 
2.39.2 (Apple Git-143)


[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4218 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 13/14] net/bnxt: cap ring resources for P7 devices
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (11 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 12/14] net/bnxt: retry HWRM ver get if the command fails Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-10  1:24 ` [PATCH v2 14/14] net/bnxt: add support for v3 Rx completion Ajit Khaparde
  2023-12-13  5:33 ` [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev; +Cc: Kalesh AP, Damodharam Ammepalli

[-- Attachment #1: Type: text/plain, Size: 1156 bytes --]

Cap the NQ ring count for P7 devices.
The driver does not need a high NQ ring count since it operates in
poll mode.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
---
 drivers/net/bnxt/bnxt_hwrm.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
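
A minimal sketch of the change, assuming an illustrative cap value (the real constant is BNXT_P7_MAX_NQ_RING_CNT, defined elsewhere in the driver):

```c
#include <stdint.h>

#define MAX_NQ_RING_CNT_CAP 48	/* illustrative; the driver uses
				 * BNXT_P7_MAX_NQ_RING_CNT */

/* For P7 devices, ignore the firmware-reported MSI-X maximum and use a
 * fixed cap, since a polling driver does not need one NQ per vector. */
static uint16_t cap_nq_rings(int is_p7, uint16_t fw_max_msix)
{
	return is_p7 ? MAX_NQ_RING_CNT_CAP : fw_max_msix;
}
```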

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 2835d48a0e..c6d774bd14 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1237,7 +1237,10 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp)
 	else
 		bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics);
 	bp->max_stat_ctx = rte_le_to_cpu_16(resp->max_stat_ctx);
-	bp->max_nq_rings = rte_le_to_cpu_16(resp->max_msix);
+	if (BNXT_CHIP_P7(bp))
+		bp->max_nq_rings = BNXT_P7_MAX_NQ_RING_CNT;
+	else
+		bp->max_nq_rings = rte_le_to_cpu_16(resp->max_msix);
 	bp->vf_resv_strategy = rte_le_to_cpu_16(resp->vf_reservation_strategy);
 	if (bp->vf_resv_strategy >
 	    HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC)
-- 
2.39.2 (Apple Git-143)


* [PATCH v2 14/14] net/bnxt: add support for v3 Rx completion
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (12 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 13/14] net/bnxt: cap ring resources for P7 devices Ajit Khaparde
@ 2023-12-10  1:24 ` Ajit Khaparde
  2023-12-13  5:33 ` [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
  14 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10  1:24 UTC (permalink / raw)
  To: dev

P7 devices support a newer Rx completion record.
Though similar to the previous generation's, this completion
provides additional information for flow offload scenarios
beyond the usual fields.
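
The ptype-table index the new parser builds can be sketched with illustrative bit positions; the real code derives them from the RX_PKT_V3_CMPL_* and BNXT_PTYPE_TBL_* shift constants:

```c
#include <stdint.h>

/* Illustrative bit layout matching the comment in bnxt_parse_pkt_type_v3:
 * bit 0 tunnel, bit 1 IPv6, bit 2 VLAN, bits 3-6 hardware itype. */
enum {
	IDX_TUNNEL = 1u << 0,
	IDX_IPV6   = 1u << 1,
	IDX_VLAN   = 1u << 2,
};

static uint8_t build_ptype_index(int tunnel, int ipv6, int vlan,
				 uint8_t itype)
{
	uint8_t idx = 0;

	if (tunnel)
		idx |= IDX_TUNNEL;
	if (ipv6)
		idx |= IDX_IPV6;
	if (vlan)
		idx |= IDX_VLAN;
	idx |= (uint8_t)((itype & 0xf) << 3);	/* 4-bit hw packet type */
	return idx;
}
```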

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxr.c | 87 ++++++++++++++++++++++++++++++++++-
 drivers/net/bnxt/bnxt_rxr.h | 92 +++++++++++++++++++++++++++++++++++++
 2 files changed, 177 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 9d45065f28..59ea0121de 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -553,6 +553,41 @@ bnxt_parse_pkt_type(struct rx_pkt_cmpl *rxcmp, struct rx_pkt_cmpl_hi *rxcmp1)
 	return bnxt_ptype_table[index];
 }
 
+static void
+bnxt_parse_pkt_type_v3(struct rte_mbuf *mbuf,
+		       struct rx_pkt_cmpl *rxcmp_v1,
+		       struct rx_pkt_cmpl_hi *rxcmp1_v1)
+{
+	uint32_t flags_type, flags2, meta;
+	struct rx_pkt_v3_cmpl_hi *rxcmp1;
+	struct rx_pkt_v3_cmpl *rxcmp;
+	uint8_t index;
+
+	rxcmp = (void *)rxcmp_v1;
+	rxcmp1 = (void *)rxcmp1_v1;
+
+	flags_type = rte_le_to_cpu_16(rxcmp->flags_type);
+	flags2 = rte_le_to_cpu_32(rxcmp1->flags2);
+	meta = rte_le_to_cpu_32(rxcmp->metadata1_payload_offset);
+
+	/* TODO */
+	/* Validate ptype table indexing at build time. */
+	/* bnxt_check_ptype_constants_v3(); */
+
+	/*
+	 * Index format:
+	 *     bit 0: Set if IP tunnel encapsulated packet.
+	 *     bit 1: Set if IPv6 packet, clear if IPv4.
+	 *     bit 2: Set if VLAN tag present.
+	 *     bits 3-6: Four-bit hardware packet type field.
+	 */
+	index = BNXT_CMPL_V3_ITYPE_TO_IDX(flags_type) |
+		BNXT_CMPL_V3_VLAN_TO_IDX(meta) |
+		BNXT_CMPL_V3_IP_VER_TO_IDX(flags2);
+
+	mbuf->packet_type = bnxt_ptype_table[index];
+}
+
 static void __rte_cold
 bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 {
@@ -716,6 +751,43 @@ bnxt_get_rx_ts_p5(struct bnxt *bp, uint32_t rx_ts_cmpl)
 	ptp->rx_timestamp = pkt_time;
 }
 
+static uint32_t
+bnxt_ulp_set_mark_in_mbuf_v3(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
+			     struct rte_mbuf *mbuf, uint32_t *vfr_flag)
+{
+	struct rx_pkt_v3_cmpl_hi *rxcmp1_v3 = (void *)rxcmp1;
+	uint32_t flags2, meta, mark_id = 0;
+	/* revisit the usage of gfid/lfid if mark action is supported.
+	 * for now, only VFR is using mark and the metadata is the SVIF
+	 * (a small number)
+	 */
+	bool gfid = false;
+	int rc = 0;
+
+	flags2 = rte_le_to_cpu_32(rxcmp1_v3->flags2);
+
+	switch (flags2 & RX_PKT_V3_CMPL_HI_FLAGS2_META_FORMAT_MASK) {
+	case RX_PKT_V3_CMPL_HI_FLAGS2_META_FORMAT_CHDR_DATA:
+		/* Only supporting Metadata for ulp now */
+		meta = rxcmp1_v3->metadata2;
+		break;
+	default:
+		goto skip_mark;
+	}
+
+	rc = ulp_mark_db_mark_get(bp->ulp_ctx, gfid, meta, vfr_flag, &mark_id);
+	if (!rc) {
+		/* Only supporting VFR for now, no Mark actions */
+		if (vfr_flag && *vfr_flag)
+			return mark_id;
+	}
+
+skip_mark:
+	mbuf->hash.fdir.hi = 0;
+
+	return 0;
+}
+
 static uint32_t
 bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 			  struct rte_mbuf *mbuf, uint32_t *vfr_flag)
@@ -892,7 +964,8 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		*rx_pkt = mbuf;
 		goto next_rx;
 	} else if ((cmp_type != CMPL_BASE_TYPE_RX_L2) &&
-		   (cmp_type != CMPL_BASE_TYPE_RX_L2_V2)) {
+		   (cmp_type != CMPL_BASE_TYPE_RX_L2_V2) &&
+		   (cmp_type != CMPL_BASE_TYPE_RX_L2_V3)) {
 		rc = -EINVAL;
 		goto next_rx;
 	}
@@ -929,6 +1002,16 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		      bp->ptp_all_rx_tstamp)
 		bnxt_get_rx_ts_p5(rxq->bp, rxcmp1->reorder);
 
+	if (cmp_type == CMPL_BASE_TYPE_RX_L2_V3) {
+		bnxt_parse_csum_v3(mbuf, rxcmp1);
+		bnxt_parse_pkt_type_v3(mbuf, rxcmp, rxcmp1);
+		bnxt_rx_vlan_v3(mbuf, rxcmp, rxcmp1);
+		if (BNXT_TRUFLOW_EN(bp))
+			mark_id = bnxt_ulp_set_mark_in_mbuf_v3(rxq->bp, rxcmp1,
+							       mbuf, &vfr_flag);
+		goto reuse_rx_mbuf;
+	}
+
 	if (cmp_type == CMPL_BASE_TYPE_RX_L2_V2) {
 		bnxt_parse_csum_v2(mbuf, rxcmp1);
 		bnxt_parse_pkt_type_v2(mbuf, rxcmp, rxcmp1);
@@ -1066,7 +1149,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		if (CMP_TYPE(rxcmp) == CMPL_BASE_TYPE_HWRM_DONE) {
 			PMD_DRV_LOG(ERR, "Rx flush done\n");
 		} else if ((CMP_TYPE(rxcmp) >= CMPL_BASE_TYPE_RX_TPA_START_V2) &&
-		     (CMP_TYPE(rxcmp) <= RX_TPA_V2_ABUF_CMPL_TYPE_RX_TPA_AGG)) {
+			   (CMP_TYPE(rxcmp) <= CMPL_BASE_TYPE_RX_TPA_START_V3)) {
 			rc = bnxt_rx_pkt(&rx_pkts[nb_rx_pkts], rxq, &raw_cons);
 			if (!rc)
 				nb_rx_pkts++;
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index af53bc0c25..439d29a07f 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -386,4 +386,96 @@ bnxt_parse_pkt_type_v2(struct rte_mbuf *mbuf,
 
 	mbuf->packet_type = pkt_type;
 }
+
+/* Thor2 specific code for RX completion parsing */
+#define RX_PKT_V3_CMPL_FLAGS2_IP_TYPE_SFT	8
+#define RX_PKT_V3_CMPL_METADATA1_VALID_SFT	15
+
+#define BNXT_CMPL_V3_ITYPE_TO_IDX(ft) \
+	(((ft) & RX_PKT_V3_CMPL_FLAGS_ITYPE_MASK) >> \
+	 (RX_PKT_V3_CMPL_FLAGS_ITYPE_SFT - BNXT_PTYPE_TBL_TYPE_SFT))
+
+#define BNXT_CMPL_V3_VLAN_TO_IDX(meta) \
+	(((meta) & (1 << RX_PKT_V3_CMPL_METADATA1_VALID_SFT)) >> \
+	 (RX_PKT_V3_CMPL_METADATA1_VALID_SFT - BNXT_PTYPE_TBL_VLAN_SFT))
+
+#define BNXT_CMPL_V3_IP_VER_TO_IDX(f2) \
+	(((f2) & RX_PKT_V3_CMPL_HI_FLAGS2_IP_TYPE) >> \
+	 (RX_PKT_V3_CMPL_FLAGS2_IP_TYPE_SFT - BNXT_PTYPE_TBL_IP_VER_SFT))
+
+#define RX_CMP_V3_VLAN_VALID(rxcmp)        \
+	(((struct rx_pkt_v3_cmpl *)rxcmp)->metadata1_payload_offset &	\
+	 RX_PKT_V3_CMPL_METADATA1_VALID)
+
+#define RX_CMP_V3_METADATA0_VID(rxcmp1)				\
+	((((struct rx_pkt_v3_cmpl_hi *)rxcmp1)->metadata0) &	\
+	 (RX_PKT_V3_CMPL_HI_METADATA0_VID_MASK |		\
+	  RX_PKT_V3_CMPL_HI_METADATA0_DE  |			\
+	  RX_PKT_V3_CMPL_HI_METADATA0_PRI_MASK))
+
+static inline void bnxt_rx_vlan_v3(struct rte_mbuf *mbuf,
+	struct rx_pkt_cmpl *rxcmp,
+	struct rx_pkt_cmpl_hi *rxcmp1)
+{
+	if (RX_CMP_V3_VLAN_VALID(rxcmp)) {
+		mbuf->vlan_tci = RX_CMP_V3_METADATA0_VID(rxcmp1);
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
+	}
+}
+
+#define RX_CMP_V3_L4_CS_ERR(err)		\
+	(((err) & RX_PKT_CMPL_ERRORS_MASK)	\
+	 & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR))
+#define RX_CMP_V3_L3_CS_ERR(err)		\
+	(((err) & RX_PKT_CMPL_ERRORS_MASK)	\
+	 & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR))
+#define RX_CMP_V3_T_IP_CS_ERR(err)		\
+	(((err) & RX_PKT_CMPL_ERRORS_MASK)	\
+	 & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR))
+#define RX_CMP_V3_T_L4_CS_ERR(err)		\
+	(((err) & RX_PKT_CMPL_ERRORS_MASK)	\
+	 & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR))
+#define RX_PKT_CMPL_CALC			\
+	(RX_PKT_CMPL_FLAGS2_IP_CS_CALC |	\
+	 RX_PKT_CMPL_FLAGS2_L4_CS_CALC |	\
+	 RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC |	\
+	 RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC)
+
+static inline uint64_t
+bnxt_parse_csum_fields_v3(uint32_t flags2, uint32_t error_v2)
+{
+	uint64_t ol_flags = 0;
+
+	if (flags2 & RX_PKT_CMPL_CALC) {
+		if (unlikely(RX_CMP_V3_L4_CS_ERR(error_v2)))
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+		else
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+		if (unlikely(RX_CMP_V3_L3_CS_ERR(error_v2)))
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+		if (unlikely(RX_CMP_V3_T_L4_CS_ERR(error_v2)))
+			ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+		else
+			ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+		if (unlikely(RX_CMP_V3_T_IP_CS_ERR(error_v2)))
+			ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+		if (!(ol_flags & (RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD)))
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+	} else {
+		/* Unknown is defined as 0 for all packet types, hence used below for all */
+		ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+	}
+	return ol_flags;
+}
+
+static inline void
+bnxt_parse_csum_v3(struct rte_mbuf *mbuf, struct rx_pkt_cmpl_hi *rxcmp1)
+{
+	struct rx_pkt_v3_cmpl_hi *v3_cmp =
+		(struct rx_pkt_v3_cmpl_hi *)(rxcmp1);
+	uint16_t error_v2 = rte_le_to_cpu_16(v3_cmp->errors_v2);
+	uint32_t flags2 = rte_le_to_cpu_32(v3_cmp->flags2);
+
+	mbuf->ol_flags |= bnxt_parse_csum_fields_v3(flags2, error_v2);
+}
 #endif /*  _BNXT_RXR_H_ */
-- 
2.39.2 (Apple Git-143)


* Re: [PATCH v2 03/14] net/bnxt: log a message when multicast promisc mode changes
  2023-12-10  1:24 ` [PATCH v2 03/14] net/bnxt: log a message when multicast promisc mode changes Ajit Khaparde
@ 2023-12-10 17:56   ` Stephen Hemminger
  2023-12-10 22:58     ` Ajit Khaparde
  0 siblings, 1 reply; 21+ messages in thread
From: Stephen Hemminger @ 2023-12-10 17:56 UTC (permalink / raw)
  To: Ajit Khaparde; +Cc: dev, Kalesh AP, Somnath Kotur

On Sat,  9 Dec 2023 17:24:44 -0800
Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:

> +		PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%d) exceeded Max supported (%d)\n",
> +			    nb_mc_addr, BNXT_MAX_MC_ADDRS);

Use %u for unsigned variables.

> +		PMD_DRV_LOG(INFO, "Turning on Mcast promiscuous mode\n");

Do you really need two log lines.

* Re: [PATCH v2 03/14] net/bnxt: log a message when multicast promisc mode changes
  2023-12-10 17:56   ` Stephen Hemminger
@ 2023-12-10 22:58     ` Ajit Khaparde
  0 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-10 22:58 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, Kalesh AP, Somnath Kotur

On Sun, Dec 10, 2023 at 9:56 AM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Sat,  9 Dec 2023 17:24:44 -0800
> Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:
>
> > +             PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%d) exceeded Max supported (%d)\n",
> > +                         nb_mc_addr, BNXT_MAX_MC_ADDRS);
>
> Use %u for unsigned variables.
Ok. Sure. We will update it in v3.

>
>
> > +             PMD_DRV_LOG(INFO, "Turning on Mcast promiscuous mode\n");
>
> Do you really need two log lines.
For the dev team, even one is enough.
The field team finds two clearer.

* Re: [PATCH v2 00/14] support new 5760X P7 devices
  2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
                   ` (13 preceding siblings ...)
  2023-12-10  1:24 ` [PATCH v2 14/14] net/bnxt: add support for v3 Rx completion Ajit Khaparde
@ 2023-12-13  5:33 ` Ajit Khaparde
  2023-12-13  7:57   ` David Marchand
  14 siblings, 1 reply; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-13  5:33 UTC (permalink / raw)
  To: dev

On Sat, Dec 9, 2023 at 5:31 PM Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:
>
> While some of the patches refactor and improve existing code,
> this series adds support for the new 5760X P7 device family.
> Follow-on patches will incrementally add more functionality.
>
> v1->v2:
> - Fixed unused variable error
> - Fixed some spellings
> - Code refactoring and fixes in backing store v2

Patchset applied to dpdk-next-net-brcm for-next-net branch.
Thanks

>
> Ajit Khaparde (12):
>   net/bnxt: refactor epoch setting
>   net/bnxt: update HWRM API
>   net/bnxt: use the correct COS queue for Tx
>   net/bnxt: refactor mem zone allocation
>   net/bnxt: add support for p7 device family
>   net/bnxt: refactor code to support P7 devices
>   net/bnxt: fix array overflow
>   net/bnxt: add support for backing store v2
>   net/bnxt: modify sending new HWRM commands to firmware
>   net/bnxt: retry HWRM ver get if the command fails
>   net/bnxt: cap ring resources for P7 devices
>   net/bnxt: add support for v3 Rx completion
>
> Kalesh AP (1):
>   net/bnxt: log a message when multicast promisc mode changes
>
> Kishore Padmanabha (1):
>   net/bnxt: refactor the ulp initialization
>
>  drivers/net/bnxt/bnxt.h                |   97 +-
>  drivers/net/bnxt/bnxt_cpr.h            |    5 +-
>  drivers/net/bnxt/bnxt_ethdev.c         |  319 ++++-
>  drivers/net/bnxt/bnxt_flow.c           |    2 +-
>  drivers/net/bnxt/bnxt_hwrm.c           |  416 ++++++-
>  drivers/net/bnxt/bnxt_hwrm.h           |   15 +
>  drivers/net/bnxt/bnxt_ring.c           |   15 +-
>  drivers/net/bnxt/bnxt_rxq.c            |    2 +-
>  drivers/net/bnxt/bnxt_rxr.c            |   93 +-
>  drivers/net/bnxt/bnxt_rxr.h            |   92 ++
>  drivers/net/bnxt/bnxt_util.c           |   10 +
>  drivers/net/bnxt/bnxt_util.h           |    1 +
>  drivers/net/bnxt/bnxt_vnic.c           |   58 +-
>  drivers/net/bnxt/bnxt_vnic.h           |    1 -
>  drivers/net/bnxt/hsi_struct_def_dpdk.h | 1531 ++++++++++++++++++++++--
>  15 files changed, 2408 insertions(+), 249 deletions(-)
>
> --
> 2.39.2 (Apple Git-143)
>

* Re: [PATCH v2 00/14] support new 5760X P7 devices
  2023-12-13  5:33 ` [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
@ 2023-12-13  7:57   ` David Marchand
  2023-12-13 14:49     ` Ajit Khaparde
  0 siblings, 1 reply; 21+ messages in thread
From: David Marchand @ 2023-12-13  7:57 UTC (permalink / raw)
  To: Ajit Khaparde; +Cc: dev, Ferruh Yigit, Thomas Monjalon

On Wed, Dec 13, 2023 at 6:34 AM Ajit Khaparde
<ajit.khaparde@broadcom.com> wrote:
>
> On Sat, Dec 9, 2023 at 5:31 PM Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:
> >
> > While some of the patches refactor and improve existing code,
> > this series adds support for the new 5760X P7 device family.
> > Follow-on patches will incrementally add more functionality.
> >
> > v1->v2:
> > - Fixed unused variable error
> > - Fixed some spellings
> > - Code refactoring and fixes in backing store v2
>
> Patchset applied to dpdk-next-net-brcm for-next-net branch.
> Thanks

In case you did not read my mail about mirroring in github, this
for-next-net branch has been mirrored (cool).
And now GHA runs on this branch, but it failed (not cool).
https://github.com/DPDK/dpdk/actions/runs/7191182897/job/19585464602


Looking at the error, I think you applied the v2 (series 30499) and
not the v3 (series 30511) of this work.

$ git diff ovsrobot/series_30499..ovsrobot/series_30511
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 8aca3c6fba..75e968394f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1312,7 +1312,7 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 }

 static eth_tx_burst_t
-bnxt_transmit_function(struct rte_eth_dev *eth_dev)
+bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev)
 {
 #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
        uint64_t offloads = eth_dev->data->dev_conf.txmode.offloads;
@@ -2929,7 +2929,7 @@ bnxt_dev_set_mc_addr_list_op(struct rte_eth_dev *eth_dev,
        bp->nb_mc_addr = nb_mc_addr;

        if (nb_mc_addr > BNXT_MAX_MC_ADDRS) {
-               PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%d)
exceeded Max supported (%d)\n",
+               PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%u)
exceeded Max supported (%u)\n",
                            nb_mc_addr, BNXT_MAX_MC_ADDRS);
                PMD_DRV_LOG(INFO, "Turning on Mcast promiscuous mode\n");
                vnic->flags |= BNXT_VNIC_INFO_ALLMULTI;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index c6d774bd14..e56f7693af 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -5653,8 +5653,6 @@ int
bnxt_hwrm_func_backing_store_types_count(struct bnxt *bp)
                req.type = rte_cpu_to_le_16(type);
                rc = bnxt_hwrm_send_message(bp, &req, sizeof(req),
BNXT_USE_CHIMP_MB);
                HWRM_CHECK_RESULT();
-               if (rc != 0)
-                       return rc;

                flags = rte_le_to_cpu_32(resp->flags);
                type = rte_le_to_cpu_16(resp->next_valid_type);


-- 
David Marchand


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 00/14] support new 5760X P7 devices
  2023-12-13  7:57   ` David Marchand
@ 2023-12-13 14:49     ` Ajit Khaparde
  2023-12-13 19:09       ` Ajit Khaparde
  0 siblings, 1 reply; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-13 14:49 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Ferruh Yigit, Thomas Monjalon

On Tue, Dec 12, 2023 at 11:57 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> On Wed, Dec 13, 2023 at 6:34 AM Ajit Khaparde
> <ajit.khaparde@broadcom.com> wrote:
> >
> > On Sat, Dec 9, 2023 at 5:31 PM Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:
> > >
> > > While some of the patches refactor and improve existing code,
> > > this series adds support for the new 5760X P7 device family.
> > > Follow-on patches will incrementally add more functionality.
> > >
> > > v1->v2:
> > > - Fixed unused variable error
> > > - Fixed some spellings
> > > - Code refactoring and fixes in backing store v2
> >
> > Patchset applied to dpdk-next-net-brcm for-next-net branch.
> > Thanks
>
> In case you did not read my mail about mirroring in github, this
> for-next-net branch has been mirrored (cool).
> And now GHA runs on this branch, but it failed (not cool).
> https://github.com/DPDK/dpdk/actions/runs/7191182897/job/19585464602
Hmm. It tested ok on my setup. Let me take a look.

>
>
> Looking at the error, I think you applied the v2 (series 30499) and
> not the v3 (series 30511) of this work.
Let me check.

>
> $ git diff ovsrobot/series_30499..ovsrobot/series_30511
> diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> index 8aca3c6fba..75e968394f 100644
> --- a/drivers/net/bnxt/bnxt_ethdev.c
> +++ b/drivers/net/bnxt/bnxt_ethdev.c
> @@ -1312,7 +1312,7 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
>  }
>
>  static eth_tx_burst_t
> -bnxt_transmit_function(struct rte_eth_dev *eth_dev)
> +bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev)
>  {
>  #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
>         uint64_t offloads = eth_dev->data->dev_conf.txmode.offloads;
> @@ -2929,7 +2929,7 @@ bnxt_dev_set_mc_addr_list_op(struct rte_eth_dev *eth_dev,
>         bp->nb_mc_addr = nb_mc_addr;
>
>         if (nb_mc_addr > BNXT_MAX_MC_ADDRS) {
> -               PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%d)
> exceeded Max supported (%d)\n",
> +               PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%u)
> exceeded Max supported (%u)\n",
>                             nb_mc_addr, BNXT_MAX_MC_ADDRS);
>                 PMD_DRV_LOG(INFO, "Turning on Mcast promiscuous mode\n");
>                 vnic->flags |= BNXT_VNIC_INFO_ALLMULTI;
> diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
> index c6d774bd14..e56f7693af 100644
> --- a/drivers/net/bnxt/bnxt_hwrm.c
> +++ b/drivers/net/bnxt/bnxt_hwrm.c
> @@ -5653,8 +5653,6 @@ int
> bnxt_hwrm_func_backing_store_types_count(struct bnxt *bp)
>                 req.type = rte_cpu_to_le_16(type);
>                 rc = bnxt_hwrm_send_message(bp, &req, sizeof(req),
> BNXT_USE_CHIMP_MB);
>                 HWRM_CHECK_RESULT();
> -               if (rc != 0)
> -                       return rc;
>
>                 flags = rte_le_to_cpu_32(resp->flags);
>                 type = rte_le_to_cpu_16(resp->next_valid_type);
>
>
> --
> David Marchand
>

* Re: [PATCH v2 00/14] support new 5760X P7 devices
  2023-12-13 14:49     ` Ajit Khaparde
@ 2023-12-13 19:09       ` Ajit Khaparde
  0 siblings, 0 replies; 21+ messages in thread
From: Ajit Khaparde @ 2023-12-13 19:09 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Ferruh Yigit, Thomas Monjalon

On Wed, Dec 13, 2023 at 6:49 AM Ajit Khaparde
<ajit.khaparde@broadcom.com> wrote:
>
> On Tue, Dec 12, 2023 at 11:57 PM David Marchand
> <david.marchand@redhat.com> wrote:
> >
> > On Wed, Dec 13, 2023 at 6:34 AM Ajit Khaparde
> > <ajit.khaparde@broadcom.com> wrote:
> > >
> > > On Sat, Dec 9, 2023 at 5:31 PM Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:
> > > >
> > > > While some of the patches refactor and improve existing code,
> > > > this series adds support for the new 5760X P7 device family.
> > > > Follow-on patches will incrementally add more functionality.
> > > >
> > > > v1->v2:
> > > > - Fixed unused variable error
> > > > - Fixed some spellings
> > > > - Code refactoring and fixes in backing store v2
> > >
> > > Patchset applied to dpdk-next-net-brcm for-next-net branch.
> > > Thanks
> >
> > In case you did not read my mail about mirroring in github, this
> > for-next-net branch has been mirrored (cool).
> > And now GHA runs on this branch, but it failed (not cool).
> > https://github.com/DPDK/dpdk/actions/runs/7191182897/job/19585464602
> Hmm. It tested ok on my setup. Let me take a look.
>
> >
> >
> > Looking at the error, I think you applied the v2 (series 30499) and
> > not the v3 (series 30511) of this work.
> Let me check.

I spent some time trying to dig this up.
It turns out that I had not updated my staging branch
after I mailed the v3 patchset.
And that's how v2 ended up getting merged.
Thanks for catching it and pointing it out.

I pushed the v3 to the dpdk-next-net-brcm for-next-net
branch and the GHA completed successfully.
https://github.com/DPDK/dpdk/actions/runs/7199579031

Please pick this version from the subtree.

----
While some of the patches refactor and improve existing code,
this series adds support for the new 5760X P7 device family.
Follow-on patches will incrementally add more functionality.

v1->v2:
- Fixed unused variable error
- Fixed some spellings
- Code refactoring and fixes in backing store v2

v2->v3:
- Addressed review comments
- Fixed unused arg error


>
> >
> > $ git diff ovsrobot/series_30499..ovsrobot/series_30511
> > diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
> > index 8aca3c6fba..75e968394f 100644
> > --- a/drivers/net/bnxt/bnxt_ethdev.c
> > +++ b/drivers/net/bnxt/bnxt_ethdev.c
> > @@ -1312,7 +1312,7 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
> >  }
> >
> >  static eth_tx_burst_t
> > -bnxt_transmit_function(struct rte_eth_dev *eth_dev)
> > +bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev)
> >  {
> >  #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
> >         uint64_t offloads = eth_dev->data->dev_conf.txmode.offloads;
> > @@ -2929,7 +2929,7 @@ bnxt_dev_set_mc_addr_list_op(struct rte_eth_dev *eth_dev,
> >         bp->nb_mc_addr = nb_mc_addr;
> >
> >         if (nb_mc_addr > BNXT_MAX_MC_ADDRS) {
> > -               PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%d)
> > exceeded Max supported (%d)\n",
> > +               PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%u)
> > exceeded Max supported (%u)\n",
> >                             nb_mc_addr, BNXT_MAX_MC_ADDRS);
> >                 PMD_DRV_LOG(INFO, "Turning on Mcast promiscuous mode\n");
> >                 vnic->flags |= BNXT_VNIC_INFO_ALLMULTI;
> > diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
> > index c6d774bd14..e56f7693af 100644
> > --- a/drivers/net/bnxt/bnxt_hwrm.c
> > +++ b/drivers/net/bnxt/bnxt_hwrm.c
> > @@ -5653,8 +5653,6 @@ int
> > bnxt_hwrm_func_backing_store_types_count(struct bnxt *bp)
> >                 req.type = rte_cpu_to_le_16(type);
> >                 rc = bnxt_hwrm_send_message(bp, &req, sizeof(req),
> > BNXT_USE_CHIMP_MB);
> >                 HWRM_CHECK_RESULT();
> > -               if (rc != 0)
> > -                       return rc;
> >
> >                 flags = rte_le_to_cpu_32(resp->flags);
> >                 type = rte_le_to_cpu_16(resp->next_valid_type);
> >
> >
> > --
> > David Marchand
> >

end of thread, other threads:[~2023-12-13 19:09 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-12-10  1:24 [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 01/14] net/bnxt: refactor epoch setting Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 02/14] net/bnxt: update HWRM API Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 03/14] net/bnxt: log a message when multicast promisc mode changes Ajit Khaparde
2023-12-10 17:56   ` Stephen Hemminger
2023-12-10 22:58     ` Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 04/14] net/bnxt: use the correct COS queue for Tx Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 05/14] net/bnxt: refactor mem zone allocation Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 06/14] net/bnxt: add support for p7 device family Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 07/14] net/bnxt: refactor code to support P7 devices Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 08/14] net/bnxt: fix array overflow Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 09/14] net/bnxt: add support for backing store v2 Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 10/14] net/bnxt: refactor the ulp initialization Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 11/14] net/bnxt: modify sending new HWRM commands to firmware Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 12/14] net/bnxt: retry HWRM ver get if the command fails Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 13/14] net/bnxt: cap ring resources for P7 devices Ajit Khaparde
2023-12-10  1:24 ` [PATCH v2 14/14] net/bnxt: add support for v3 Rx completion Ajit Khaparde
2023-12-13  5:33 ` [PATCH v2 00/14] support new 5760X P7 devices Ajit Khaparde
2023-12-13  7:57   ` David Marchand
2023-12-13 14:49     ` Ajit Khaparde
2023-12-13 19:09       ` Ajit Khaparde
