DPDK patches and discussions
* [PATCH 00/13] patchset for bnxt PMD
@ 2024-10-25 17:57 Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 01/13] net/bnxt: fix TCP and UDP checksum flags Ajit Khaparde
                   ` (12 more replies)
  0 siblings, 13 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev

This patchset contains changes to the BNXT PMD.
Some of them are fixes.
Please accept and apply.

Ajit Khaparde (6):
  net/bnxt: fix TCP and UDP checksum flags
  net/bnxt: free and account a bad Tx mbuf
  net/bnxt: fix LRO offload capability
  net/bnxt: remove some unnecessary logs
  net/bnxt: add support for buffer split Rx offload
  net/bnxt: remove unnecessary ifdef

Kalesh AP (2):
  net/bnxt: add check to validate TSO segment size
  net/bnxt: add check for invalid mbuf passed by application

Kishore Padmanabha (2):
  net/bnxt: disable VLAN filter when TF is enabled
  net/bnxt: remove the VNIC async event handler

Manish Kurup (1):
  net/bnxt: register for and handle RSS change event

Peter Spreadborough (1):
  net/bnxt: fix bad action offset in Tx bd

Somnath Kotur (1):
  net/bnxt: add check for number of segs

 drivers/net/bnxt/bnxt.h        |   5 ++
 drivers/net/bnxt/bnxt_cpr.c    |  54 ++-----------
 drivers/net/bnxt/bnxt_ethdev.c |  11 ++-
 drivers/net/bnxt/bnxt_hwrm.c   |  55 +++++++++----
 drivers/net/bnxt/bnxt_hwrm.h   |   2 +
 drivers/net/bnxt/bnxt_reps.c   |   4 +-
 drivers/net/bnxt/bnxt_rxq.c    |  50 ++++++++++--
 drivers/net/bnxt/bnxt_rxq.h    |   4 +
 drivers/net/bnxt/bnxt_rxr.c    |  24 ++----
 drivers/net/bnxt/bnxt_stats.c  |   7 ++
 drivers/net/bnxt/bnxt_txq.h    |   1 +
 drivers/net/bnxt/bnxt_txr.c    | 138 ++++++++++++++++++++++++++-------
 drivers/net/bnxt/bnxt_vnic.h   |   1 +
 13 files changed, 235 insertions(+), 121 deletions(-)

-- 
2.39.5 (Apple Git-154)



* [PATCH 01/13] net/bnxt: fix TCP and UDP checksum flags
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 02/13] net/bnxt: fix bad action offset in Tx bd Ajit Khaparde
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev; +Cc: stable, Kalesh AP, Damodharam Ammepalli

Set the TCP and UDP checksum flags explicitly for LSO-capable packets.
In some older chip variants, this will enable the hardware to compute
the checksum correctly for tunnel and non-tunnel packets.

Fixes: 1d76c878b21d ("net/bnxt: support updating IPID")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 12e4faa8fa..38f858f27f 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -319,7 +319,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 
 			/* TSO */
 			txbd1->lflags |= TX_BD_LONG_LFLAGS_LSO |
-					 TX_BD_LONG_LFLAGS_T_IPID;
+					 TX_BD_LONG_LFLAGS_T_IPID |
+					 TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM |
+					 TX_BD_LONG_LFLAGS_T_IP_CHKSUM;
 			hdr_size = tx_pkt->l2_len + tx_pkt->l3_len +
 					tx_pkt->l4_len;
 			hdr_size += (tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
-- 
2.39.5 (Apple Git-154)



* [PATCH 02/13] net/bnxt: fix bad action offset in Tx bd
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 01/13] net/bnxt: fix TCP and UDP checksum flags Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 03/13] net/bnxt: add check to validate TSO segment size Ajit Khaparde
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev; +Cc: Peter Spreadborough, stable, Kishore Padmanabha

From: Peter Spreadborough <peter.spreadborough@broadcom.com>

This change ensures that the high part of an action table entry
offset stored in the Tx BD is set correctly. A bad value will
cause the PDCU to abort a fetch and may stall the pipeline.

Fixes: 527b10089cc5 ("net/bnxt: optimize Tx completion handling")
Cc: stable@dpdk.org

Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 38f858f27f..c82b11e733 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -308,10 +308,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		 */
 		txbd1->kid_or_ts_high_mss = 0;
 
-		if (txq->vfr_tx_cfa_action)
-			txbd1->cfa_action = txq->vfr_tx_cfa_action;
-		else
-			txbd1->cfa_action = txq->bp->tx_cfa_action;
+		if (txq->vfr_tx_cfa_action) {
+			txbd1->cfa_action = txq->vfr_tx_cfa_action & 0xffff;
+			txbd1->cfa_action_high = (txq->vfr_tx_cfa_action >> 16) &
+				TX_BD_LONG_CFA_ACTION_HIGH_MASK;
+		} else {
+			txbd1->cfa_action = txq->bp->tx_cfa_action & 0xffff;
+			txbd1->cfa_action_high = (txq->bp->tx_cfa_action >> 16) &
+				TX_BD_LONG_CFA_ACTION_HIGH_MASK;
+		}
 
 		if (tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG ||
 		    tx_pkt->ol_flags & RTE_MBUF_F_TX_UDP_SEG) {
-- 
2.39.5 (Apple Git-154)



* [PATCH 03/13] net/bnxt: add check to validate TSO segment size
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 01/13] net/bnxt: fix TCP and UDP checksum flags Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 02/13] net/bnxt: fix bad action offset in Tx bd Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 04/13] net/bnxt: add check for number of segs Ajit Khaparde
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev; +Cc: Kalesh AP, Somnath Kotur

From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>

Currently the driver only checks that the TSO seg_size is not 0,
which detects corrupted packets. But the user can set any value as
the TSO seg_size, so add a check to validate the minimum TSO
seg_size in the driver.

The driver will drop a packet whose TSO seg_size is less than 4 when
TSO is requested in the mbuf offload flags.
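
For reference, a minimal application-side sketch of how TSO is
requested on an mbuf, which is what the new check validates (not part
of this patch; the helper name and MSS value are illustrative):

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>

/* Hypothetical helper: request TSO the way an application would.
 * The driver now drops packets whose tso_segsz is below 4.
 */
static void request_tso(struct rte_mbuf *m)
{
	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
		       RTE_MBUF_F_TX_IP_CKSUM;
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);
	m->tso_segsz = 1448;	/* MSS; values below 4 are now rejected */
}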

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index c82b11e733..6d7e9962ce 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -129,23 +129,26 @@ bnxt_xmit_need_long_bd(struct rte_mbuf *tx_pkt, struct bnxt_tx_queue *txq)
  * segments or fragments in those cases.
  */
 static bool
-bnxt_zero_data_len_tso_segsz(struct rte_mbuf *tx_pkt, uint8_t data_len_chk)
+bnxt_zero_data_len_tso_segsz(struct rte_mbuf *tx_pkt, bool data_len_chk, bool tso_segsz_check)
 {
-	const char *type_str = "Data len";
-	uint16_t len_to_check = tx_pkt->data_len;
+	const char *type_str;
 
-	if (data_len_chk == 0) {
-		type_str = "TSO Seg size";
-		len_to_check = tx_pkt->tso_segsz;
+	/* Minimum TSO seg_size should be 4 */
+	if (tso_segsz_check && tx_pkt->tso_segsz < 4) {
+		type_str = "Unsupported TSO Seg size";
+		goto dump_pkt;
 	}
 
-	if (len_to_check == 0) {
-		PMD_DRV_LOG_LINE(ERR, "Error! Tx pkt %s == 0", type_str);
-		rte_pktmbuf_dump(stdout, tx_pkt, 64);
-		rte_dump_stack();
-		return true;
+	if (data_len_chk && tx_pkt->data_len == 0) {
+		type_str = "Data len == 0";
+		goto dump_pkt;
 	}
 	return false;
+dump_pkt:
+	PMD_DRV_LOG_LINE(ERR, "Error! Tx pkt %s == 0", type_str);
+	rte_pktmbuf_dump(stdout, tx_pkt, 64);
+	rte_dump_stack();
+	return true;
 }
 
 static bool
@@ -248,7 +251,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	}
 
 	/* Check non zero data_len */
-	if (unlikely(bnxt_zero_data_len_tso_segsz(tx_pkt, 1)))
+	if (unlikely(bnxt_zero_data_len_tso_segsz(tx_pkt, true, false)))
 		return -EIO;
 
 	if (unlikely(txq->bp->ptp_cfg != NULL && txq->bp->ptp_all_rx_tstamp == 1))
@@ -338,7 +341,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			 */
 			txbd1->kid_or_ts_low_hdr_size = hdr_size >> 1;
 			txbd1->kid_or_ts_high_mss = tx_pkt->tso_segsz;
-			if (unlikely(bnxt_zero_data_len_tso_segsz(tx_pkt, 0)))
+			if (unlikely(bnxt_zero_data_len_tso_segsz(tx_pkt, false, true)))
 				return -EIO;
 
 		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_TCP_UDP_CKSUM) ==
@@ -413,7 +416,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	m_seg = tx_pkt->next;
 	while (m_seg) {
 		/* Check non zero data_len */
-		if (unlikely(bnxt_zero_data_len_tso_segsz(m_seg, 1)))
+		if (unlikely(bnxt_zero_data_len_tso_segsz(m_seg, true, false)))
 			return -EIO;
 		txr->tx_raw_prod = RING_NEXT(txr->tx_raw_prod);
 
-- 
2.39.5 (Apple Git-154)



* [PATCH 04/13] net/bnxt: add check for number of segs
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
                   ` (2 preceding siblings ...)
  2024-10-25 17:57 ` [PATCH 03/13] net/bnxt: add check to validate TSO segment size Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 05/13] net/bnxt: add check for invalid mbuf passed by application Ajit Khaparde
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev; +Cc: Somnath Kotur

From: Somnath Kotur <somnath.kotur@broadcom.com>

If the application passes an incorrect number of segments for a Tx
packet, e.g. sets nb_segs to 5 while actually sending down only a
single mbuf, this could escape all the existing driver checks and the
driver could end up sending garbage Tx BDs to the HW. This in turn
could lead to a Tx pipeline stall.
Fix it by walking the mbuf chain and validating that the number of
segments actually present matches the nb_segs value set by the
application.
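
For reference, a minimal sketch of how an application keeps nb_segs
consistent with the actual chain, which is the property the new check
enforces (not part of this patch; the helper name is illustrative):

#include <rte_mbuf.h>

/* Hypothetical helper: rte_pktmbuf_chain() updates head->nb_segs to
 * match the real chain length, so the driver-side validation passes.
 */
static int append_segment(struct rte_mbuf *head, struct rte_mbuf *tail)
{
	if (rte_pktmbuf_chain(head, tail) != 0)
		return -1;	/* chain would exceed the mbuf limit */
	return 0;
}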

Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 6d7e9962ce..51d3689e9c 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -194,6 +194,21 @@ bnxt_check_pkt_needs_ts(struct rte_mbuf *m)
 	return false;
 }
 
+static bool
+bnxt_invalid_nb_segs(struct rte_mbuf *tx_pkt)
+{
+	uint16_t nb_segs = 1;
+	struct rte_mbuf *m_seg;
+
+	m_seg = tx_pkt->next;
+	while (m_seg) {
+		nb_segs++;
+		m_seg = m_seg->next;
+	}
+
+	return (nb_segs != tx_pkt->nb_segs);
+}
+
 static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				struct bnxt_tx_queue *txq,
 				uint16_t *coal_pkts,
@@ -221,6 +236,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (unlikely(is_bnxt_in_error(txq->bp)))
 		return -EIO;
 
+	if (unlikely(bnxt_invalid_nb_segs(tx_pkt)))
+		return -EINVAL;
+
 	long_bd = bnxt_xmit_need_long_bd(tx_pkt, txq);
 	nr_bds = long_bd + tx_pkt->nb_segs;
 
-- 
2.39.5 (Apple Git-154)



* [PATCH 05/13] net/bnxt: add check for invalid mbuf passed by application
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
                   ` (3 preceding siblings ...)
  2024-10-25 17:57 ` [PATCH 04/13] net/bnxt: add check for number of segs Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 06/13] net/bnxt: free and account a bad Tx mbuf Ajit Khaparde
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev; +Cc: Kalesh AP, Somnath Kotur

From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>

If the application passes an invalid mbuf for a Tx packet, this could
escape all the existing driver checks and the driver could end up
sending invalid Tx BDs to the HW. This in turn could lead to a FW
reset. Fix it by validating the "mbuf->buf_iova" or "mbuf->buf_addr"
of the Tx packet passed by the application.
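
For reference, applications can run a similar sanity check themselves
with rte_mbuf_check() before posting packets for transmit; a minimal
sketch (not part of this patch; the helper name is illustrative):

#include <stdio.h>
#include <rte_mbuf.h>

/* Hypothetical helper: returns 1 if the mbuf header looks sane. */
static int tx_mbuf_sane(const struct rte_mbuf *m)
{
	const char *reason;

	if (rte_mbuf_check(m, 1, &reason) != 0) {
		printf("bad mbuf: %s\n", reason);
		return 0;
	}
	return 1;
}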

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
---
 drivers/net/bnxt/bnxt_txr.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 51d3689e9c..4e9e377d5b 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -209,6 +209,25 @@ bnxt_invalid_nb_segs(struct rte_mbuf *tx_pkt)
 	return (nb_segs != tx_pkt->nb_segs);
 }
 
+static int bnxt_invalid_mbuf(struct rte_mbuf *mbuf)
+{
+	uint32_t mbuf_size = sizeof(struct rte_mbuf) + mbuf->priv_size;
+	const char *reason;
+
+	if (unlikely(rte_eal_iova_mode() != RTE_IOVA_VA &&
+		     rte_eal_iova_mode() != RTE_IOVA_PA))
+		return 0;
+
+	if (unlikely(rte_mbuf_check(mbuf, 1, &reason)))
+		return -EINVAL;
+
+	if (unlikely(mbuf->buf_iova < mbuf_size ||
+		     (mbuf->buf_iova != rte_mempool_virt2iova(mbuf) + mbuf_size)))
+		return -EINVAL;
+
+	return 0;
+}
+
 static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				struct bnxt_tx_queue *txq,
 				uint16_t *coal_pkts,
@@ -236,6 +255,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	if (unlikely(is_bnxt_in_error(txq->bp)))
 		return -EIO;
 
+	if (unlikely(bnxt_invalid_mbuf(tx_pkt)))
+		return -EINVAL;
+
 	if (unlikely(bnxt_invalid_nb_segs(tx_pkt)))
 		return -EINVAL;
 
-- 
2.39.5 (Apple Git-154)



* [PATCH 06/13] net/bnxt: free and account a bad Tx mbuf
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
                   ` (4 preceding siblings ...)
  2024-10-25 17:57 ` [PATCH 05/13] net/bnxt: add check for invalid mbuf passed by application Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 07/13] net/bnxt: register for and handle RSS change event Ajit Khaparde
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev; +Cc: Kalesh AP

Currently, when the PMD gets a bad Tx mbuf from the application, it
does not free it; the PMD depends on the application to do it, but in
most cases the application may not know this.

Instead, the Tx burst function now frees the mbuf and updates the
oerrors counter to indicate that the PMD encountered a bad mbuf
during transmit.
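
For reference, the dropped packets are visible through the standard
ethdev stats; a minimal sketch (not part of this patch; the helper
name is illustrative):

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Hypothetical helper: oerrors now also counts mbufs the PMD dropped. */
static void print_tx_errors(uint16_t port_id)
{
	struct rte_eth_stats stats;

	if (rte_eth_stats_get(port_id, &stats) == 0)
		printf("port %u oerrors: %" PRIu64 "\n",
		       port_id, stats.oerrors);
}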

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
---
 drivers/net/bnxt/bnxt_stats.c |  7 ++++
 drivers/net/bnxt/bnxt_txq.h   |  1 +
 drivers/net/bnxt/bnxt_txr.c   | 64 +++++++++++++++++++++++++----------
 3 files changed, 54 insertions(+), 18 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_stats.c b/drivers/net/bnxt/bnxt_stats.c
index 5e59afe79f..ccd28f19b3 100644
--- a/drivers/net/bnxt/bnxt_stats.c
+++ b/drivers/net/bnxt/bnxt_stats.c
@@ -746,6 +746,7 @@ int bnxt_stats_get_op(struct rte_eth_dev *eth_dev,
 			return rc;
 
 		bnxt_fill_rte_eth_stats(bnxt_stats, &ring_stats, i, false);
+		bnxt_stats->oerrors += rte_atomic64_read(&txq->tx_mbuf_drop);
 	}
 
 	return rc;
@@ -792,6 +793,12 @@ int bnxt_stats_reset_op(struct rte_eth_dev *eth_dev)
 		rxq->rx_mbuf_alloc_fail = 0;
 	}
 
+	for (i = 0; i < bp->tx_cp_nr_rings; i++) {
+		struct bnxt_tx_queue *txq = bp->tx_queues[i];
+
+		rte_atomic64_clear(&txq->tx_mbuf_drop);
+	}
+
 	bnxt_clear_prev_stat(bp);
 
 	return ret;
diff --git a/drivers/net/bnxt/bnxt_txq.h b/drivers/net/bnxt/bnxt_txq.h
index 9e54985c4c..44a672a401 100644
--- a/drivers/net/bnxt/bnxt_txq.h
+++ b/drivers/net/bnxt/bnxt_txq.h
@@ -34,6 +34,7 @@ struct bnxt_tx_queue {
 	const struct rte_memzone *mz;
 	struct rte_mbuf **free;
 	uint64_t offloads;
+	rte_atomic64_t          tx_mbuf_drop;
 };
 
 void bnxt_free_txq_stats(struct bnxt_tx_queue *txq);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 4e9e377d5b..e961fed9b5 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -228,7 +228,7 @@ static int bnxt_invalid_mbuf(struct rte_mbuf *mbuf)
 	return 0;
 }
 
-static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
+static int bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 				struct bnxt_tx_queue *txq,
 				uint16_t *coal_pkts,
 				struct tx_bd_long **last_txbd)
@@ -251,27 +251,37 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 		TX_BD_LONG_FLAGS_LHINT_LT2K,
 		TX_BD_LONG_FLAGS_LHINT_LT2K
 	};
+	int rc = 0;
 
-	if (unlikely(is_bnxt_in_error(txq->bp)))
-		return -EIO;
+	if (unlikely(is_bnxt_in_error(txq->bp))) {
+		rc = -EIO;
+		goto ret;
+	}
 
-	if (unlikely(bnxt_invalid_mbuf(tx_pkt)))
-		return -EINVAL;
+	if (unlikely(bnxt_invalid_mbuf(tx_pkt))) {
+		rc = -EINVAL;
+		goto drop;
+	}
 
-	if (unlikely(bnxt_invalid_nb_segs(tx_pkt)))
-		return -EINVAL;
+	if (unlikely(bnxt_invalid_nb_segs(tx_pkt))) {
+		rc = -EINVAL;
+		goto drop;
+	}
 
 	long_bd = bnxt_xmit_need_long_bd(tx_pkt, txq);
 	nr_bds = long_bd + tx_pkt->nb_segs;
 
-	if (unlikely(bnxt_tx_avail(txq) < nr_bds))
-		return -ENOMEM;
+	if (unlikely(bnxt_tx_avail(txq) < nr_bds)) {
+		rc = -ENOMEM;
+		goto ret;
+	}
 
 	/* Check if number of Tx descriptors is above HW limit */
 	if (unlikely(nr_bds > BNXT_MAX_TSO_SEGS)) {
 		PMD_DRV_LOG_LINE(ERR,
 			    "Num descriptors %d exceeds HW limit", nr_bds);
-		return -ENOSPC;
+		rc = -EINVAL;
+		goto drop;
 	}
 
 	/* If packet length is less than minimum packet size, pad it */
@@ -283,7 +293,8 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			PMD_DRV_LOG_LINE(ERR,
 				    "Failed to pad mbuf by %d bytes",
 				    pad);
-			return -ENOMEM;
+			rc = -ENOMEM;
+			goto ret;
 		}
 
 		/* Note: data_len, pkt len are updated in rte_pktmbuf_append */
@@ -291,8 +302,10 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	}
 
 	/* Check non zero data_len */
-	if (unlikely(bnxt_zero_data_len_tso_segsz(tx_pkt, true, false)))
-		return -EIO;
+	if (unlikely(bnxt_zero_data_len_tso_segsz(tx_pkt, true, false))) {
+		rc = -EINVAL;
+		goto drop;
+	}
 
 	if (unlikely(txq->bp->ptp_cfg != NULL && txq->bp->ptp_all_rx_tstamp == 1))
 		pkt_needs_ts = bnxt_check_pkt_needs_ts(tx_pkt);
@@ -381,8 +394,10 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 			 */
 			txbd1->kid_or_ts_low_hdr_size = hdr_size >> 1;
 			txbd1->kid_or_ts_high_mss = tx_pkt->tso_segsz;
-			if (unlikely(bnxt_zero_data_len_tso_segsz(tx_pkt, false, true)))
-				return -EIO;
+			if (unlikely(bnxt_zero_data_len_tso_segsz(tx_pkt, false, true))) {
+				rc = -EINVAL;
+				goto drop;
+			}
 
 		} else if ((tx_pkt->ol_flags & PKT_TX_OIP_IIP_TCP_UDP_CKSUM) ==
 			   PKT_TX_OIP_IIP_TCP_UDP_CKSUM) {
@@ -456,8 +471,10 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	m_seg = tx_pkt->next;
 	while (m_seg) {
 		/* Check non zero data_len */
-		if (unlikely(bnxt_zero_data_len_tso_segsz(m_seg, true, false)))
-			return -EIO;
+		if (unlikely(bnxt_zero_data_len_tso_segsz(m_seg, true, false))) {
+			rc = -EINVAL;
+			goto drop;
+		}
 		txr->tx_raw_prod = RING_NEXT(txr->tx_raw_prod);
 
 		prod = RING_IDX(ring, txr->tx_raw_prod);
@@ -477,6 +494,10 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 	txr->tx_raw_prod = RING_NEXT(txr->tx_raw_prod);
 
 	return 0;
+drop:
+	rte_pktmbuf_free(tx_pkt);
+ret:
+	return rc;
 }
 
 /*
@@ -644,6 +665,7 @@ uint16_t _bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t coal_pkts = 0;
 	struct bnxt_tx_queue *txq = tx_queue;
 	struct tx_bd_long *last_txbd = NULL;
+	uint8_t dropped = 0;
 
 	/* Handle TX completions */
 	bnxt_handle_tx_cp(txq);
@@ -660,8 +682,13 @@ uint16_t _bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		rc = bnxt_start_xmit(tx_pkts[nb_tx_pkts], txq,
 				     &coal_pkts, &last_txbd);
 
-		if (unlikely(rc))
+		if (unlikely(rc)) {
+			if (rc == -EINVAL) {
+				rte_atomic64_inc(&txq->tx_mbuf_drop);
+				dropped++;
+			}
 			break;
+		}
 	}
 
 	if (likely(nb_tx_pkts)) {
@@ -670,6 +697,7 @@ uint16_t _bnxt_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		bnxt_db_write(&txq->tx_ring->tx_db, txq->tx_ring->tx_raw_prod);
 	}
 
+	nb_tx_pkts += dropped;
 	return nb_tx_pkts;
 }
 
-- 
2.39.5 (Apple Git-154)



* [PATCH 07/13] net/bnxt: register for and handle RSS change event
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
                   ` (5 preceding siblings ...)
  2024-10-25 17:57 ` [PATCH 06/13] net/bnxt: free and account a bad Tx mbuf Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 08/13] net/bnxt: fix LRO offload capability Ajit Khaparde
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev; +Cc: Manish Kurup, Kalesh AP

From: Manish Kurup <manish.kurup@broadcom.com>

1. Register for RSS change events. When an RSS change occurs
   (especially for custom parsed tunnels), the driver needs to update
   the RSS flags in the VNIC QCAPS so that upstream drivers don't
   send down the now unsupported bits ("config port all rss all"
   command), which would cause the firmware to fail the HWRM command.
   The driver therefore registers for said events and re-reads the
   VNIC QCAPS for that bp.
2. Add a call to update the QCAPS upon the corresponding async
   notification.
3. Fix a bug in the PMD QCAPS update code. The PMD QCAPS function
   only sets the new QCAPS flags but does not clear flags that were
   cleared due to such events. Fix this by clearing the flags first,
   so that the new ones are set correctly (for that bp).

Signed-off-by: Manish Kurup <manish.kurup@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_cpr.c  | 6 ++++++
 drivers/net/bnxt/bnxt_hwrm.c | 5 ++++-
 drivers/net/bnxt/bnxt_hwrm.h | 2 ++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index 455240a09d..4ffba6f594 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -294,6 +294,12 @@ void bnxt_handle_async_event(struct bnxt *bp,
 	case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_VF_FLR:
 		bnxt_process_vf_flr(bp, data1);
 		break;
+	case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_RSS_CHANGE:
+		/* RSS change notification, re-read QCAPS */
+		PMD_DRV_LOG_LINE(INFO, "Async event: RSS change event [%#x, %#x]",
+				 data1, data2);
+		bnxt_hwrm_vnic_qcaps(bp);
+		break;
 	default:
 		PMD_DRV_LOG_LINE(DEBUG, "handle_async_event id = 0x%x", event_id);
 		break;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 1ac4b8cd58..80f7c1a6a1 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1307,6 +1307,8 @@ int bnxt_hwrm_vnic_qcaps(struct bnxt *bp)
 
 	HWRM_CHECK_RESULT();
 
+	bp->vnic_cap_flags = 0;
+
 	flags = rte_le_to_cpu_32(resp->flags);
 
 	if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_COS_ASSIGNMENT_CAP) {
@@ -1444,7 +1446,8 @@ int bnxt_hwrm_func_driver_register(struct bnxt *bp)
 
 	req.async_event_fwd[2] |=
 		rte_cpu_to_le_32(ASYNC_CMPL_EVENT_ID_ECHO_REQUEST |
-				 ASYNC_CMPL_EVENT_ID_ERROR_REPORT);
+				 ASYNC_CMPL_EVENT_ID_ERROR_REPORT |
+				 ASYNC_CMPL_EVENT_ID_RSS_CHANGE);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 2346ae637d..ecb6335b3d 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -41,6 +41,8 @@ struct hwrm_func_qstats_output;
 	(1 << (HWRM_ASYNC_EVENT_CMPL_EVENT_ID_ECHO_REQUEST - 64))
 #define	ASYNC_CMPL_EVENT_ID_ERROR_REPORT	\
 	(1 << (HWRM_ASYNC_EVENT_CMPL_EVENT_ID_ERROR_REPORT - 64))
+#define	ASYNC_CMPL_EVENT_ID_RSS_CHANGE	\
+	(1 << (HWRM_ASYNC_EVENT_CMPL_EVENT_ID_RSS_CHANGE - 64))
 
 #define HWRM_QUEUE_SERVICE_PROFILE_LOSSY \
 	HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_ID0_SERVICE_PROFILE_LOSSY
-- 
2.39.5 (Apple Git-154)



* [PATCH 08/13] net/bnxt: fix LRO offload capability
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
                   ` (6 preceding siblings ...)
  2024-10-25 17:57 ` [PATCH 07/13] net/bnxt: register for and handle RSS change event Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 09/13] net/bnxt: disable VLAN filter when TF is enabled Ajit Khaparde
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev; +Cc: stable, Vasuthevan Maheswaran

Fix the LRO offload capability for P7 devices.
Export the capability to the application only if compressed
Rx CQE mode is not enabled.

LRO, aka TPA, is not supported when compressed CQE mode is set.
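
For reference, an application should check the advertised capability
before enabling LRO; a minimal sketch (not part of this patch; the
helper name is illustrative):

#include <rte_ethdev.h>

/* Hypothetical helper: enable LRO only when the PMD reports it, which
 * it no longer does when compressed Rx CQE mode is enabled.
 */
static void maybe_enable_lro(uint16_t port_id, struct rte_eth_conf *conf)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return;
	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
}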

Fixes: 3b56c3ffc182 ("net/bnxt: refactor code to support P7 devices")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Vasuthevan Maheswaran <vasuthevan.maheswaran@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxq.c | 7 ++++++-
 drivers/net/bnxt/bnxt_rxr.c | 3 +++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 1c25c57ca6..249fe7f6e5 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -30,10 +30,12 @@ uint64_t bnxt_get_rx_port_offloads(struct bnxt *bp)
 			  RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
 			  RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
 			  RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
-			  RTE_ETH_RX_OFFLOAD_TCP_LRO |
 			  RTE_ETH_RX_OFFLOAD_SCATTER |
 			  RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
+	if ((BNXT_CHIP_P7(bp) && !bnxt_compressed_rx_cqe_mode_enabled(bp)) ||
+	    BNXT_CHIP_P5(bp))
+		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 	if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
 		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	if (bp->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP)
@@ -244,6 +246,9 @@ void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
 		}
 	}
 
+	if (bnxt_compressed_rx_cqe_mode_enabled(rxq->bp))
+		return;
+
 	/* Free up mbufs in TPA */
 	tpa_info = rxq->rx_ring->tpa_info;
 	if (tpa_info) {
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 0f3fd5326e..dc0bf6032b 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -1671,6 +1671,9 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 	}
 	PMD_DRV_LOG_LINE(DEBUG, "AGG Done!");
 
+	if (bnxt_compressed_rx_cqe_mode_enabled(rxq->bp))
+		return 0;
+
 	if (rxr->tpa_info) {
 		unsigned int max_aggs = BNXT_TPA_MAX_AGGS(rxq->bp);
 
-- 
2.39.5 (Apple Git-154)



* [PATCH 09/13] net/bnxt: disable VLAN filter when TF is enabled
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
                   ` (7 preceding siblings ...)
  2024-10-25 17:57 ` [PATCH 08/13] net/bnxt: fix LRO offload capability Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 10/13] net/bnxt: remove the VNIC async event handler Ajit Khaparde
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Mike Baucom

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

On the P7 platform, the VLAN filter and VLAN strip offloads are
disabled if TruFlow is enabled on the platform.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c |  6 +++++-
 drivers/net/bnxt/bnxt_rxq.c    | 17 +++++++++++------
 2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 890c9f8b45..d3ea4ed539 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -2964,7 +2964,7 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
 {
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 	struct bnxt *bp = dev->data->dev_private;
-	int rc;
+	int rc = 0;
 
 	rc = is_bnxt_in_error(bp);
 	if (rc)
@@ -2974,6 +2974,10 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
 	if (!dev->data->dev_started)
 		return 0;
 
+	/* For P7 platform, cannot support if truflow is enabled */
+	if (BNXT_TRUFLOW_EN(bp) && BNXT_CHIP_P7(bp))
+		return rc;
+
 	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering */
 		rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 249fe7f6e5..8b8bc6584a 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -28,18 +28,23 @@ uint64_t bnxt_get_rx_port_offloads(struct bnxt *bp)
 			  RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
 			  RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
 			  RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
-			  RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
-			  RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
 			  RTE_ETH_RX_OFFLOAD_SCATTER |
 			  RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
-	if ((BNXT_CHIP_P7(bp) && !bnxt_compressed_rx_cqe_mode_enabled(bp)) ||
-	    BNXT_CHIP_P5(bp))
+	/* In P7 platform if truflow is enabled then vlan offload is disabled*/
+	if (!(BNXT_TRUFLOW_EN(bp) && BNXT_CHIP_P7(bp)))
+		rx_offload_capa |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				    RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
+
+
+	if (!bnxt_compressed_rx_cqe_mode_enabled(bp))
 		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 	if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
 		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
-	if (bp->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP)
-		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+	if (bp->vnic_cap_flags & BNXT_VNIC_CAP_VLAN_RX_STRIP) {
+		if (!(BNXT_TRUFLOW_EN(bp) && BNXT_CHIP_P7(bp)))
+			rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+	}
 
 	if (BNXT_TUNNELED_OFFLOADS_CAP_ALL_EN(bp))
 		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-- 
2.39.5 (Apple Git-154)



* [PATCH 10/13] net/bnxt: remove the VNIC async event handler
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
                   ` (8 preceding siblings ...)
  2024-10-25 17:57 ` [PATCH 09/13] net/bnxt: disable VLAN filter when TF is enabled Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 11/13] net/bnxt: remove some unnecessary logs Ajit Khaparde
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev; +Cc: Kishore Padmanabha, Shahaji Bhosle

From: Kishore Padmanabha <kishore.padmanabha@broadcom.com>

Remove the VNIC async event handler. It is no longer required if,
during port initialization, the SVIF is used instead of the VNIC ID,
which could be invalid for a representor port when the representor's
VF port link is down.

Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
---
 drivers/net/bnxt/bnxt_cpr.c  | 48 ------------------------------------
 drivers/net/bnxt/bnxt_hwrm.c | 21 +++++++++++++---
 drivers/net/bnxt/bnxt_reps.c |  4 +--
 3 files changed, 19 insertions(+), 54 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.c b/drivers/net/bnxt/bnxt_cpr.c
index 4ffba6f594..ba0d7f4bf7 100644
--- a/drivers/net/bnxt/bnxt_cpr.c
+++ b/drivers/net/bnxt/bnxt_cpr.c
@@ -47,51 +47,6 @@ void bnxt_wait_for_device_shutdown(struct bnxt *bp)
 	} while (timeout);
 }
 
-static void
-bnxt_process_default_vnic_change(struct bnxt *bp,
-				 struct hwrm_async_event_cmpl *async_cmp)
-{
-	uint16_t vnic_state, vf_fid, vf_id;
-	struct bnxt_representor *vf_rep_bp;
-	struct rte_eth_dev *eth_dev;
-	bool vfr_found = false;
-	uint32_t event_data;
-
-	if (!BNXT_TRUFLOW_EN(bp))
-		return;
-
-	PMD_DRV_LOG_LINE(INFO, "Default vnic change async event received");
-	event_data = rte_le_to_cpu_32(async_cmp->event_data1);
-
-	vnic_state = (event_data & BNXT_DEFAULT_VNIC_STATE_MASK) >>
-			BNXT_DEFAULT_VNIC_STATE_SFT;
-	if (vnic_state != BNXT_DEFAULT_VNIC_ALLOC)
-		return;
-
-	if (!bp->rep_info)
-		return;
-
-	vf_fid = (event_data & BNXT_DEFAULT_VNIC_CHANGE_VF_ID_MASK) >>
-			BNXT_DEFAULT_VNIC_CHANGE_VF_ID_SFT;
-	PMD_DRV_LOG_LINE(INFO, "async event received vf_id 0x%x", vf_fid);
-
-	for (vf_id = 0; vf_id < BNXT_MAX_VF_REPS(bp); vf_id++) {
-		eth_dev = bp->rep_info[vf_id].vfr_eth_dev;
-		if (!eth_dev)
-			continue;
-		vf_rep_bp = eth_dev->data->dev_private;
-		if (vf_rep_bp &&
-		    vf_rep_bp->fw_fid == vf_fid) {
-			vfr_found = true;
-			break;
-		}
-	}
-	if (!vfr_found)
-		return;
-
-	bnxt_rep_dev_start_op(eth_dev);
-}
-
 static void bnxt_handle_event_error_report(struct bnxt *bp,
 					   uint32_t data1,
 					   uint32_t data2)
@@ -278,9 +233,6 @@ void bnxt_handle_async_event(struct bnxt *bp,
 		PMD_DRV_LOG_LINE(INFO, "Port: %u DNC event: data1 %#x data2 %#x",
 			    port_id, data1, data2);
 		break;
-	case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_DEFAULT_VNIC_CHANGE:
-		bnxt_process_default_vnic_change(bp, async_cmp);
-		break;
 	case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_ECHO_REQUEST:
 		PMD_DRV_LOG_LINE(INFO,
 			    "Port %u: Received fw echo request: data1 %#x data2 %#x",
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 80f7c1a6a1..8dea446e60 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -4336,12 +4336,25 @@ int bnxt_hwrm_get_dflt_vnic_svif(struct bnxt *bp, uint16_t fid,
 
 	HWRM_CHECK_RESULT();
 
-	if (vnic_id)
-		*vnic_id = rte_le_to_cpu_16(resp->dflt_vnic_id);
-
 	svif_info = rte_le_to_cpu_16(resp->svif_info);
-	if (svif && (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID))
+	if (svif && (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)) {
 		*svif = svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_MASK;
+		/* When the VF corresponding to the VFR is down at the time of
+		 * VFR conduit creation, the VFR rule will be programmed with
+		 * invalid vnic id because FW will return default vnic id as
+		 * INVALID when queried through FUNC_QCFG. As a result, when
+		 * the VF is brought up, VF won't receive packets because
+		 * INVALID vnic id is already programmed.
+		 *
+		 * Hence, use svif value as vnic id during VFR conduit creation
+		 * as both svif and default vnic id values are same and will
+		 * never change.
+		 */
+		if (vnic_id)
+			*vnic_id = *svif;
+	} else {
+		rc = -EINVAL;
+	}
 
 	HWRM_UNLOCK();
 
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index 6c431c7dd8..6f5c3f80eb 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -540,12 +540,12 @@ static int bnxt_vfr_free(struct bnxt_representor *vfr)
 		return -ENOMEM;
 	}
 
-	parent_bp = vfr->parent_dev->data->dev_private;
-	if (!parent_bp) {
+	if (!bnxt_rep_check_parent(vfr)) {
 		PMD_DRV_LOG_LINE(DEBUG, "BNXT Port:%d VFR already freed",
 			    vfr->dpdk_port_id);
 		return 0;
 	}
+	parent_bp = vfr->parent_dev->data->dev_private;
 
 	/* Check if representor has been already freed in FW */
 	if (!vfr->vfr_tx_cfa_action)
-- 
2.39.5 (Apple Git-154)



* [PATCH 11/13] net/bnxt: remove some unnecessary logs
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
                   ` (9 preceding siblings ...)
  2024-10-25 17:57 ` [PATCH 10/13] net/bnxt: remove the VNIC async event handler Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 12/13] net/bnxt: add support for buffer split Rx offload Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 13/13] net/bnxt: remove unnecessary ifdef Ajit Khaparde
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev

Remove some unnecessary log messages emitted when buffer allocation
fails. We already have stats to indicate such failures.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxr.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index dc0bf6032b..b8637ff57c 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -316,11 +316,8 @@ static int bnxt_prod_ag_mbuf(struct bnxt_rx_queue *rxq)
 
 	/* TODO batch allocation for better performance */
 	while (rte_bitmap_get(rxr->ag_bitmap, bmap_next)) {
-		if (unlikely(bnxt_alloc_ag_data(rxq, rxr, raw_next))) {
-			PMD_DRV_LOG_LINE(ERR, "agg mbuf alloc failed: prod=0x%x",
-				    raw_next);
+		if (unlikely(bnxt_alloc_ag_data(rxq, rxr, raw_next)))
 			break;
-		}
 		rte_bitmap_clear(rxr->ag_bitmap, bmap_next);
 		rxr->ag_raw_prod = raw_next;
 		raw_next = RING_NEXT(raw_next);
@@ -1092,8 +1089,6 @@ static int bnxt_crx_pkt(struct rte_mbuf **rx_pkt,
 	bnxt_set_vlan_crx(rxcmp, mbuf);
 
 	if (bnxt_alloc_rx_data(rxq, rxr, raw_prod)) {
-		PMD_DRV_LOG_LINE(ERR, "mbuf alloc failed with prod=0x%x",
-			    raw_prod);
 		rc = -ENOMEM;
 		goto rx;
 	}
@@ -1271,8 +1266,6 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	 */
 	raw_prod = RING_NEXT(raw_prod);
 	if (bnxt_alloc_rx_data(rxq, rxr, raw_prod)) {
-		PMD_DRV_LOG_LINE(ERR, "mbuf alloc failed with prod=0x%x",
-			    raw_prod);
 		rc = -ENOMEM;
 		goto rx;
 	}
-- 
2.39.5 (Apple Git-154)



* [PATCH 12/13] net/bnxt: add support for buffer split Rx offload
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
                   ` (10 preceding siblings ...)
  2024-10-25 17:57 ` [PATCH 11/13] net/bnxt: remove some unnecessary logs Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  2024-10-25 17:57 ` [PATCH 13/13] net/bnxt: remove unnecessary ifdef Ajit Khaparde
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev

Add header and data split Rx offload support if the hardware supports
it. The packet will be split at a fixed offset for IPv4 or IPv6
packets.
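
For reference, a minimal application-side sketch of configuring a
two-segment buffer-split Rx queue (not part of this patch; the helper
name, descriptor count, and pool arguments are illustrative):

#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper: one pool for headers, one for payload.
 * With buffer split, the per-segment pools are passed via rx_seg and
 * the final mb_pool argument is NULL.
 */
static int setup_split_rxq(uint16_t port_id, uint16_t queue_id,
			   struct rte_mempool *hdr_pool,
			   struct rte_mempool *pay_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	union rte_eth_rxseg segs[2];

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return -1;

	memset(segs, 0, sizeof(segs));
	segs[0].split.mp = hdr_pool;	/* first segment: headers */
	segs[1].split.mp = pay_pool;	/* second segment: payload */

	rxconf = dev_info.default_rxconf;
	rxconf.offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
	rxconf.rx_seg = segs;
	rxconf.rx_nseg = 2;

	return rte_eth_rx_queue_setup(port_id, queue_id, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rxconf, NULL);
}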

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt.h        |  5 +++++
 drivers/net/bnxt/bnxt_ethdev.c |  5 +++++
 drivers/net/bnxt/bnxt_hwrm.c   | 29 ++++++++++++++++++++---------
 drivers/net/bnxt/bnxt_rxq.c    | 30 +++++++++++++++++++++++++++---
 drivers/net/bnxt/bnxt_rxq.h    |  4 ++++
 drivers/net/bnxt/bnxt_rxr.c    |  4 ++--
 drivers/net/bnxt/bnxt_vnic.h   |  1 +
 7 files changed, 64 insertions(+), 14 deletions(-)

diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 3502481056..771349de6c 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -770,6 +770,11 @@ enum bnxt_session_type {
 	BNXT_SESSION_TYPE_LAST
 };
 
+#define BNXT_MAX_BUFFER_SPLIT_SEGS		2
+#define BNXT_MULTI_POOL_BUF_SPLIT_CAP		1
+#define BNXT_BUF_SPLIT_OFFSET_CAP		1
+#define BNXT_BUF_SPLIT_ALIGN_CAP		0
+
 struct bnxt {
 	void				*bar0;
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index d3ea4ed539..09ee39b64d 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -1268,6 +1268,11 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	dev_info->vmdq_pool_base = 0;
 	dev_info->vmdq_queue_base = 0;
 
+	dev_info->rx_seg_capa.max_nseg = BNXT_MAX_BUFFER_SPLIT_SEGS;
+	dev_info->rx_seg_capa.multi_pools = BNXT_MULTI_POOL_BUF_SPLIT_CAP;
+	dev_info->rx_seg_capa.offset_allowed = BNXT_BUF_SPLIT_OFFSET_CAP;
+	dev_info->rx_seg_capa.offset_align_log2 = BNXT_BUF_SPLIT_ALIGN_CAP;
+
 	dev_info->err_handle_mode = RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE;
 
 	return 0;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 8dea446e60..351effb28f 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -3041,10 +3041,14 @@ int bnxt_hwrm_vnic_rss_cfg(struct bnxt *bp,
 int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 			struct bnxt_vnic_info *vnic)
 {
-	int rc = 0;
-	struct hwrm_vnic_plcmodes_cfg_input req = {.req_type = 0 };
 	struct hwrm_vnic_plcmodes_cfg_output *resp = bp->hwrm_cmd_resp_addr;
+	struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
+	struct hwrm_vnic_plcmodes_cfg_input req = {.req_type = 0 };
+	uint64_t rx_offloads = dev_conf->rxmode.offloads;
+	uint8_t rs = !!(rx_offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT);
+	uint32_t flags, enables;
 	uint16_t size;
+	int rc = 0;
 
 	if (vnic->fw_vnic_id == INVALID_HW_RING_ID) {
 		PMD_DRV_LOG_LINE(DEBUG, "VNIC ID %x", vnic->fw_vnic_id);
@@ -3052,19 +3056,26 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 	}
 
 	HWRM_PREP(&req, HWRM_VNIC_PLCMODES_CFG, BNXT_USE_CHIMP_MB);
-
-	req.flags = rte_cpu_to_le_32(
-			HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_JUMBO_PLACEMENT);
-
-	req.enables = rte_cpu_to_le_32(
-		HWRM_VNIC_PLCMODES_CFG_INPUT_ENABLES_JUMBO_THRESH_VALID);
+	flags = HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_JUMBO_PLACEMENT;
+	enables = HWRM_VNIC_PLCMODES_CFG_INPUT_ENABLES_JUMBO_THRESH_VALID;
 
 	size = rte_pktmbuf_data_room_size(bp->rx_queues[0]->mb_pool);
 	size -= RTE_PKTMBUF_HEADROOM;
 	size = RTE_MIN(BNXT_MAX_PKT_LEN, size);
-
 	req.jumbo_thresh = rte_cpu_to_le_16(size);
+
+	if (rs & vnic->hds_threshold) {
+		flags |=
+			HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_HDS_IPV4 |
+			HWRM_VNIC_PLCMODES_CFG_INPUT_FLAGS_HDS_IPV6;
+		req.hds_threshold = rte_cpu_to_le_16(vnic->hds_threshold);
+		enables |=
+		HWRM_VNIC_PLCMODES_CFG_INPUT_ENABLES_HDS_THRESHOLD_VALID;
+	}
+
 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
+	req.flags = rte_cpu_to_le_32(flags);
+	req.enables = rte_cpu_to_le_32(enables);
 
 	rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 8b8bc6584a..41e1aa2a23 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -29,7 +29,8 @@ uint64_t bnxt_get_rx_port_offloads(struct bnxt *bp)
 			  RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
 			  RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
 			  RTE_ETH_RX_OFFLOAD_SCATTER |
-			  RTE_ETH_RX_OFFLOAD_RSS_HASH;
+			  RTE_ETH_RX_OFFLOAD_RSS_HASH |
+			  RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 
 	/* In P7 platform if truflow is enabled then vlan offload is disabled*/
 	if (!(BNXT_TRUFLOW_EN(bp) && BNXT_CHIP_P7(bp)))
@@ -332,8 +333,12 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 			       const struct rte_eth_rxconf *rx_conf,
 			       struct rte_mempool *mp)
 {
-	struct bnxt *bp = eth_dev->data->dev_private;
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
+	uint8_t rs = !!(rx_offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT);
+	struct bnxt *bp = eth_dev->data->dev_private;
+	struct rte_eth_rxseg_split *rx_seg =
+			(struct rte_eth_rxseg_split *)rx_conf->rx_seg;
+	uint16_t n_seg = rx_conf->rx_nseg;
 	struct bnxt_rx_queue *rxq;
 	int rc = 0;
 
@@ -341,6 +346,17 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	if (rc)
 		return rc;
 
+	if (n_seg > 1 && !rs) {
+		PMD_DRV_LOG_LINE(ERR, "n_seg %d does not match buffer split %d setting",
+				n_seg, rs);
+		return -EINVAL;
+	}
+
+	if (n_seg > BNXT_MAX_BUFFER_SPLIT_SEGS) {
+		PMD_DRV_LOG_LINE(ERR, "n_seg %d not supported", n_seg);
+		return -EINVAL;
+	}
+
 	if (queue_idx >= bnxt_max_rings(bp)) {
 		PMD_DRV_LOG_LINE(ERR,
 			"Cannot create Rx ring %d. Only %d rings available",
@@ -365,7 +381,14 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 		return -ENOMEM;
 	}
 	rxq->bp = bp;
-	rxq->mb_pool = mp;
+	if (n_seg > 1) {
+		rxq->mb_pool = rx_seg[BNXT_MEM_POOL_IDX_0].mp;
+		rxq->agg_mb_pool = rx_seg[BNXT_MEM_POOL_IDX_1].mp;
+	} else {
+		rxq->mb_pool = mp;
+		rxq->agg_mb_pool = mp;
+	}
+
 	rxq->nb_rx_desc = nb_desc;
 	rxq->rx_free_thresh =
 		RTE_MIN(rte_align32pow2(nb_desc) / 4, RTE_BNXT_MAX_RX_BURST);
@@ -411,6 +434,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 
 	rxq->rx_started = rxq->rx_deferred_start ? false : true;
 	rxq->vnic = bnxt_get_default_vnic(bp);
+	rxq->vnic->hds_threshold = n_seg ? rxq->vnic->hds_threshold : 0;
 
 	return 0;
 err:
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index 36e0ac34dd..0b411a941a 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -12,11 +12,15 @@
 /* Drop by default when receive desc is not available. */
 #define BNXT_DEFAULT_RX_DROP_EN		1
 
+#define BNXT_MEM_POOL_IDX_0		0
+#define BNXT_MEM_POOL_IDX_1		1
+
 struct bnxt;
 struct bnxt_rx_ring_info;
 struct bnxt_cp_ring_info;
 struct bnxt_rx_queue {
 	struct rte_mempool	*mb_pool; /* mbuf pool for RX ring */
+	struct rte_mempool	*agg_mb_pool; /* mbuf pool for AGG ring */
 	uint64_t		mbuf_initializer; /* val to init mbuf */
 	uint16_t		nb_rx_desc; /* num of RX desc */
 	uint16_t		rx_free_thresh; /* max free RX desc to hold */
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index b8637ff57c..8f0a1b9cfd 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -84,7 +84,7 @@ static inline int bnxt_alloc_ag_data(struct bnxt_rx_queue *rxq,
 		return -EINVAL;
 	}
 
-	mbuf = __bnxt_alloc_rx_data(rxq->mb_pool);
+	mbuf = __bnxt_alloc_rx_data(rxq->agg_mb_pool);
 	if (!mbuf) {
 		rte_atomic_fetch_add_explicit(&rxq->rx_mbuf_alloc_fail, 1,
 				rte_memory_order_relaxed);
@@ -1673,7 +1673,7 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 		for (i = 0; i < max_aggs; i++) {
 			if (unlikely(!rxr->tpa_info[i].mbuf)) {
 				rxr->tpa_info[i].mbuf =
-					__bnxt_alloc_rx_data(rxq->mb_pool);
+					__bnxt_alloc_rx_data(rxq->agg_mb_pool);
 				if (!rxr->tpa_info[i].mbuf) {
 					rte_atomic_fetch_add_explicit(&rxq->rx_mbuf_alloc_fail, 1,
 							rte_memory_order_relaxed);
diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h
index c4a7c5257c..5a4fd4ecb7 100644
--- a/drivers/net/bnxt/bnxt_vnic.h
+++ b/drivers/net/bnxt/bnxt_vnic.h
@@ -84,6 +84,7 @@ struct bnxt_vnic_info {
 	enum rte_eth_hash_function hash_f;
 	enum rte_eth_hash_function hash_f_local;
 	uint64_t	rss_types_local;
+	uint16_t	hds_threshold;
 	uint8_t         metadata_format;
 	uint8_t         state;
 };
-- 
2.39.5 (Apple Git-154)



* [PATCH 13/13] net/bnxt: remove unnecessary ifdef
  2024-10-25 17:57 [PATCH 00/13] patchset for bnxt PMD Ajit Khaparde
                   ` (11 preceding siblings ...)
  2024-10-25 17:57 ` [PATCH 12/13] net/bnxt: add support for buffer split Rx offload Ajit Khaparde
@ 2024-10-25 17:57 ` Ajit Khaparde
  12 siblings, 0 replies; 14+ messages in thread
From: Ajit Khaparde @ 2024-10-25 17:57 UTC (permalink / raw)
  To: dev

Remove the unnecessary and useless compile-time option for IEEE 1588.

Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_rxr.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 8f0a1b9cfd..5b43bcbea6 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -955,10 +955,6 @@ bnxt_set_ol_flags_crx(struct bnxt_rx_ring_info *rxr,
 		ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
 	}
 
-#ifdef RTE_LIBRTE_IEEE1588
-	/* TODO: TIMESTAMP flags need to be parsed and set. */
-#endif
-
 	mbuf->ol_flags = ol_flags;
 }
 
@@ -1080,10 +1076,6 @@ static int bnxt_crx_pkt(struct rte_mbuf **rx_pkt,
 	mbuf->data_len = mbuf->pkt_len;
 	mbuf->port = rxq->port_id;
 
-#ifdef RTE_LIBRTE_IEEE1588
-	/* TODO: Add timestamp support. */
-#endif
-
 	bnxt_set_ol_flags_crx(rxr, rxcmp, mbuf);
 	mbuf->packet_type = bnxt_parse_pkt_type_crx(rxcmp);
 	bnxt_set_vlan_crx(rxcmp, mbuf);
-- 
2.39.5 (Apple Git-154)


