patches for DPDK stable branches
 help / color / mirror / Atom feed
* [PATCH 02/15] net/enetfec: fix restart issue
       [not found] <20220928052516.1279442-1-g.singh@nxp.com>
@ 2022-09-28  5:25 ` Gagandeep Singh
  2022-09-28  5:25 ` [PATCH 03/15] net/enetfec: fix buffer leak issue Gagandeep Singh
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-09-28  5:25 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Apeksha Gupta, stable, Sachin Saxena

From: Apeksha Gupta <apeksha.gupta@nxp.com>

Queue reset is missing in the restart sequence, because of
which IO cannot work after a device restart.

This patch fixes the issue by resetting the queues on
device restart.
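
For reference, the fix amounts to re-arming each Rx descriptor ring once
the MAC has been re-enabled. A minimal sketch of the resulting restart
ordering, using only names from the diff below:

	/* enetfec_restart(), simplified:
	 *  1. clear pending events (ENETFEC_EIR)
	 *  2. re-enable the MAC (ENETFEC_ECR / ENETFEC_ETHEREN)
	 *  3. re-arm every Rx ring so DMA resumes after stop/start
	 */
	for (i = 0; i < fep->max_rx_queues; i++)
		rte_write32(0, fep->rx_queues[i]->bd.active_reg_desc);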

Fixes: b84fdd39638b ("net/enetfec: support UIO")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
---
 drivers/net/enetfec/enet_ethdev.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index c938e58204..898aad1c37 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -54,6 +54,7 @@ enetfec_restart(struct rte_eth_dev *dev)
 	uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
 	uint32_t ecntl = ENETFEC_ETHEREN;
 	uint32_t val;
+	int i;
 
 	/* Clear any outstanding interrupt. */
 	writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
@@ -149,6 +150,9 @@ enetfec_restart(struct rte_eth_dev *dev)
 	/* And last, enable the transmit and receive processing */
 	rte_write32(rte_cpu_to_le_32(ecntl),
 		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+
+	for (i = 0; i < fep->max_rx_queues; i++)
+		rte_write32(0, fep->rx_queues[i]->bd.active_reg_desc);
 	rte_delay_us(10);
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 03/15] net/enetfec: fix buffer leak issue
       [not found] <20220928052516.1279442-1-g.singh@nxp.com>
  2022-09-28  5:25 ` [PATCH 02/15] net/enetfec: fix restart issue Gagandeep Singh
@ 2022-09-28  5:25 ` Gagandeep Singh
  2022-09-28  5:25 ` [PATCH 04/15] net/dpaa2: fix dpdmux configuration for error behaviour Gagandeep Singh
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-09-28  5:25 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Apeksha Gupta, stable, Sachin Saxena

From: Apeksha Gupta <apeksha.gupta@nxp.com>

Driver has no proper handling to free unused
allocated mbufs in case of error or when the Rx
processing completes, because of which the mempool
can become empty after some time.

This patch fixes the issue by moving the buffer
allocation code to the right place in the driver.
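
In short, the Rx loop used to allocate a replacement mbuf before
validating the received descriptor, so every errored frame leaked one
mbuf from the pool. The fixed ordering, sketched from the diff below,
validates first and allocates only for frames that will be delivered:

	/* errored frame: recycle the BD; nothing was allocated yet,
	 * so nothing can leak
	 */
	if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
		      RX_BD_CR | RX_BD_OV | RX_BD_TR))
		goto rx_processing_done;

	/* good frame: now take a replacement buffer from the pool */
	new_mbuf = rte_pktmbuf_alloc(pool);
	if (unlikely(new_mbuf == NULL)) {
		stats->rx_nombuf++;
		break;
	}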

Fixes: ecae71571b0d ("net/enetfec: support Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
---
 drivers/net/enetfec/enet_rxtx.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index 49b326315d..0aea8b240d 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -39,11 +39,6 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
 		if (pkt_received >= nb_pkts)
 			break;
 
-		new_mbuf = rte_pktmbuf_alloc(pool);
-		if (unlikely(new_mbuf == NULL)) {
-			stats->rx_nombuf++;
-			break;
-		}
 		/* Check for errors. */
 		status ^= RX_BD_LAST;
 		if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
@@ -72,6 +67,12 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
 			goto rx_processing_done;
 		}
 
+		new_mbuf = rte_pktmbuf_alloc(pool);
+		if (unlikely(new_mbuf == NULL)) {
+			stats->rx_nombuf++;
+			break;
+		}
+
 		/* Process the incoming frame. */
 		stats->ipackets++;
 		pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
@@ -193,7 +194,16 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			tx_st = 0;
 			break;
 		}
+
+		mbuf = *(tx_pkts);
+		if (mbuf->nb_segs > 1) {
+			ENETFEC_DP_LOG(DEBUG, "SG not supported");
+			return pkt_transmitted;
+		}
+
+		tx_pkts++;
 		bdp = txq->bd.cur;
+
 		/* First clean the ring */
 		index = enet_get_bd_index(bdp, &txq->bd);
 		status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
@@ -207,9 +217,6 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			txq->tx_mbuf[index] = NULL;
 		}
 
-		mbuf = *(tx_pkts);
-		tx_pkts++;
-
 		/* Fill in a Tx ring entry */
 		last_bdp = bdp;
 		status &= ~TX_BD_STATS;
@@ -219,10 +226,6 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		stats->opackets++;
 		stats->obytes += buflen;
 
-		if (mbuf->nb_segs > 1) {
-			ENETFEC_DP_LOG(DEBUG, "SG not supported");
-			return -1;
-		}
 		status |= (TX_BD_LAST);
 		data = rte_pktmbuf_mtod(mbuf, void *);
 		for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
@@ -268,5 +271,5 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 */
 		txq->bd.cur = bdp;
 	}
-	return nb_pkts;
+	return pkt_transmitted;
 }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 04/15] net/dpaa2: fix dpdmux configuration for error behaviour
       [not found] <20220928052516.1279442-1-g.singh@nxp.com>
  2022-09-28  5:25 ` [PATCH 02/15] net/enetfec: fix restart issue Gagandeep Singh
  2022-09-28  5:25 ` [PATCH 03/15] net/enetfec: fix buffer leak issue Gagandeep Singh
@ 2022-09-28  5:25 ` Gagandeep Singh
  2022-09-28  5:25 ` [PATCH 05/15] net/dpaa2: check free enqueue descriptors before Tx Gagandeep Singh
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-09-28  5:25 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Vanshika Shukla, stable

From: Vanshika Shukla <vanshika.shukla@nxp.com>

Driver is passing the wrong interface ID while setting the
error behaviour.

This patch fixes the issue by passing the correct MAC interface
index value to the API.
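
The API takes the index of the dpdmux interface to configure, and the
DPMAC-facing interface is index 0; the fix pins that down with a named
constant instead of reusing the unrelated dpdmux object ID (sketch from
the diff below):

	#define DPAA2_DPDMUX_DPMAC_IDX 0	/* DPDMUX index for DPMAC */

	ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
			CMD_PRI_LOW, dpdmux_dev->token,
			DPAA2_DPDMUX_DPMAC_IDX, &mux_err_cfg);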

Fixes: 3d43972b1b42 ("net/dpaa2: do not drop parse error packets by dpdmux")
Cc: stable@dpdk.org

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h | 3 +++
 drivers/net/dpaa2/dpaa2_mux.c    | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index f69df95253..32ae762e4a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -66,6 +66,9 @@
 /* Tx confirmation enabled */
 #define DPAA2_TX_CONF_ENABLE	0x06
 
+/* DPDMUX index for DPMAC */
+#define DPAA2_DPDMUX_DPMAC_IDX 0
+
 /* HW loopback the egress traffic to self ingress*/
 #define DPAA2_TX_MAC_LOOPBACK_MODE 0x20
 
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 3289f388e1..7456f43f42 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -336,7 +336,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
-				dpdmux_dev->token, dpdmux_id,
+				dpdmux_dev->token, DPAA2_DPDMUX_DPMAC_IDX,
 				&mux_err_cfg);
 		if (ret) {
 			DPAA2_PMD_ERR("dpdmux_if_set_errors_behavior %s err %d",
-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 05/15] net/dpaa2: check free enqueue descriptors before Tx
       [not found] <20220928052516.1279442-1-g.singh@nxp.com>
                   ` (2 preceding siblings ...)
  2022-09-28  5:25 ` [PATCH 04/15] net/dpaa2: fix dpdmux configuration for error behaviour Gagandeep Singh
@ 2022-09-28  5:25 ` Gagandeep Singh
  2022-10-05 14:30   ` Ferruh Yigit
  2022-09-28  5:25 ` [PATCH 08/15] net/dpaa2: fix buffer free on transmit SG packets Gagandeep Singh
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 22+ messages in thread
From: Gagandeep Singh @ 2022-09-28  5:25 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: brick, stable, Rohit Raj

From: brick <brick.yang@nxp.com>

Check whether free enqueue descriptors are available before enqueuing
a Tx packet, and try to reclaim enqueue descriptors in case none are
free.
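
The gist, sketched from the diff below: with strict (non-loose)
ordering every ORP frame consumes an enqueue descriptor, so the Tx
loop first tries to reclaim used descriptors and, when none can be
freed, stops early and sends whatever has been prepared so far:

	if (!priv->en_loose_ordered &&
	    (*dpaa2_seqn(*bufs) & DPAA2_ENQUEUE_FLAG_ORP)) {
		if (!num_free_eq_desc) {
			num_free_eq_desc = dpaa2_free_eq_descriptors();
			if (!num_free_eq_desc)
				goto send_frames; /* enqueue what we have */
		}
		num_free_eq_desc--;
	}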

Fixes: ed1cdbed6a15 ("net/dpaa2: support multiple Tx queues enqueue for ordered")
Cc: stable@dpdk.org

Signed-off-by: brick <brick.yang@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/event/dpaa2/dpaa2_eventdev.c |  8 ++---
 drivers/net/dpaa2/dpaa2_rxtx.c       | 50 +++++++++++++++++++---------
 2 files changed, 38 insertions(+), 20 deletions(-)

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 1001297cda..d09c5b8778 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017,2019-2021 NXP
+ * Copyright 2017,2019-2022 NXP
  */
 
 #include <assert.h>
@@ -175,7 +175,7 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 				if (retry_count > DPAA2_EV_TX_RETRY_COUNT) {
 					num_tx += loop;
 					nb_events -= loop;
-					return num_tx + loop;
+					return num_tx;
 				}
 			} else {
 				loop += ret;
@@ -1015,9 +1015,7 @@ dpaa2_eventdev_txa_enqueue(void *port,
 		txq[i] = rte_eth_devices[m[i]->port].data->tx_queues[qid];
 	}
 
-	dpaa2_dev_tx_multi_txq_ordered(txq, m, nb_events);
-
-	return nb_events;
+	return dpaa2_dev_tx_multi_txq_ordered(txq, m, nb_events);
 }
 
 static struct eventdev_ops dpaa2_eventdev_ops = {
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 9436a95ac8..bc0e49b0d4 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1525,7 +1525,7 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 	uint32_t loop, retry_count;
 	int32_t ret;
 	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
-	uint32_t frames_to_send;
+	uint32_t frames_to_send, num_free_eq_desc = 0;
 	struct rte_mempool *mp;
 	struct qbman_eq_desc eqdesc[MAX_TX_RING_SLOTS];
 	struct dpaa2_queue *dpaa2_q[MAX_TX_RING_SLOTS];
@@ -1547,16 +1547,44 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	for (loop = 0; loop < nb_pkts; loop++) {
+	frames_to_send = (nb_pkts > dpaa2_eqcr_size) ?
+		dpaa2_eqcr_size : nb_pkts;
+
+	for (loop = 0; loop < frames_to_send; loop++) {
 		dpaa2_q[loop] = (struct dpaa2_queue *)queue[loop];
 		eth_data = dpaa2_q[loop]->eth_data;
 		priv = eth_data->dev_private;
+		if (!priv->en_loose_ordered) {
+			if (*dpaa2_seqn(*bufs) & DPAA2_ENQUEUE_FLAG_ORP) {
+				if (!num_free_eq_desc) {
+					num_free_eq_desc = dpaa2_free_eq_descriptors();
+					if (!num_free_eq_desc)
+						goto send_frames;
+				}
+				num_free_eq_desc--;
+			}
+		}
+
+		DPAA2_PMD_DP_DEBUG("===> eth_data =%p, fqid =%d\n",
+				   eth_data, dpaa2_q[loop]->fqid);
+
+		/*Check if the queue is congested*/
+		retry_count = 0;
+		while (qbman_result_SCN_state(dpaa2_q[loop]->cscn)) {
+			retry_count++;
+			/* Retry for some time before giving up */
+			if (retry_count > CONG_RETRY_COUNT)
+				goto send_frames;
+		}
+
+		/*Prepare enqueue descriptor*/
 		qbman_eq_desc_clear(&eqdesc[loop]);
+
 		if (*dpaa2_seqn(*bufs) && priv->en_ordered) {
 			order_sendq = (struct dpaa2_queue *)priv->tx_vq[0];
 			dpaa2_set_enqueue_descriptor(order_sendq,
-							     (*bufs),
-							     &eqdesc[loop]);
+						     (*bufs),
+						     &eqdesc[loop]);
 		} else {
 			qbman_eq_desc_set_no_orp(&eqdesc[loop],
 							 DPAA2_EQ_RESP_ERR_FQ);
@@ -1564,14 +1592,6 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 						     dpaa2_q[loop]->fqid);
 		}
 
-		retry_count = 0;
-		while (qbman_result_SCN_state(dpaa2_q[loop]->cscn)) {
-			retry_count++;
-			/* Retry for some time before giving up */
-			if (retry_count > CONG_RETRY_COUNT)
-				goto send_frames;
-		}
-
 		if (likely(RTE_MBUF_DIRECT(*bufs))) {
 			mp = (*bufs)->pool;
 			/* Check the basic scenario and set
@@ -1591,7 +1611,6 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 					&fd_arr[loop],
 					mempool_to_bpid(mp));
 				bufs++;
-				dpaa2_q[loop]++;
 				continue;
 			}
 		} else {
@@ -1637,18 +1656,19 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 		}
 
 		bufs++;
-		dpaa2_q[loop]++;
 	}
 
 send_frames:
 	frames_to_send = loop;
 	loop = 0;
+	retry_count = 0;
 	while (loop < frames_to_send) {
 		ret = qbman_swp_enqueue_multiple_desc(swp, &eqdesc[loop],
 				&fd_arr[loop],
 				frames_to_send - loop);
 		if (likely(ret > 0)) {
 			loop += ret;
+			retry_count = 0;
 		} else {
 			retry_count++;
 			if (retry_count > DPAA2_MAX_TX_RETRY_COUNT)
@@ -1834,7 +1854,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		retry_count = 0;
 		while (i < loop) {
 			ret = qbman_swp_enqueue_multiple_desc(swp,
-				       &eqdesc[loop], &fd_arr[i], loop - i);
+				       &eqdesc[i], &fd_arr[i], loop - i);
 			if (unlikely(ret < 0)) {
 				retry_count++;
 				if (retry_count > DPAA2_MAX_TX_RETRY_COUNT)
-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 08/15] net/dpaa2: fix buffer free on transmit SG packets
       [not found] <20220928052516.1279442-1-g.singh@nxp.com>
                   ` (3 preceding siblings ...)
  2022-09-28  5:25 ` [PATCH 05/15] net/dpaa2: check free enqueue descriptors before Tx Gagandeep Singh
@ 2022-09-28  5:25 ` Gagandeep Singh
  2022-10-06  7:48   ` Ferruh Yigit
  2022-09-28  5:25 ` [PATCH 10/15] net/dpaa: fix Jumbo packet Rx in case of VSP Gagandeep Singh
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 22+ messages in thread
From: Gagandeep Singh @ 2022-09-28  5:25 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Gagandeep Singh, stable

When using SG list to TX with external and direct buffers,
HW free the direct buffers and driver free the external buffers.

Software scans the complete SG mbuf list to find the external
buffers to free, but this is wrong as hardware can free the
direct buffers if any present in the list and same can be
re-allocated for other purpose in multi thread or high spead
running traffic environment with new data in it. So the software
which is scanning the SG mbuf list, if that list has any direct
buffer present then that direct buffer's next pointor can give
wrong pointer value, if already freed by hardware which
can do the mempool corruption or memory leak.

In this patch instead of relying on user given SG mbuf list
we are storing the buffers in an internal list which will
be scanned by driver after transmit to free non-direct
buffers.

This patch also fixes 2 more memory leak issues.

Driver is freeing complete SG list by checking external buffer
flag in first segment only, but external buffer can be attached
to any of the segment. Because of which driver either can double
free buffers or there can be memory leak.

In case of indirect buffers, driver is modifying the original
buffer list to free the indirect buffers but this orginal buffer
list is being used even after transmit packets for software
buffer cleanup. This can cause the buffer leak issue.
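
The mechanism, sketched from the diff below: while frame descriptors
are built, every external or indirect segment is recorded in a
driver-local array together with the index of the packet it belongs
to, and only after the hardware reports how many frames it accepted
does the driver free the recorded segments of transmitted packets:

	struct sw_buf_free buf_to_free[DPAA2_MAX_SGS * dpaa2_dqrr_size];
	uint32_t free_count = 0;

	/* while building the FD for packet 'loop' ... */
	buf_to_free[free_count].seg = cur_seg;
	buf_to_free[free_count].pkt_id = loop;
	free_count++;

	/* ... after enqueue, with num_tx frames accepted by HW: */
	for (loop = 0; loop < free_count; loop++) {
		if (buf_to_free[loop].pkt_id < num_tx)
			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
	}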

Fixes: 6bfbafe18d15 ("net/dpaa2: support external buffers in Tx")
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |   9 +++
 drivers/net/dpaa2/dpaa2_rxtx.c   | 111 +++++++++++++++++++++++--------
 2 files changed, 92 insertions(+), 28 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 872dced517..c88c8146dc 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -129,6 +129,15 @@ extern struct rte_mempool *dpaa2_tx_sg_pool;
 #define DPAA2_POOL_SIZE 2048
 /* SG pool cache size */
 #define DPAA2_POOL_CACHE_SIZE 256
+/* structure to free external and indirect
+ * buffers.
+ */
+struct sw_buf_free {
+	/* To which packet this segment belongs */
+	uint16_t pkt_id;
+	/* The actual segment */
+	struct rte_mbuf *seg;
+};
 
 /* enable timestamp in mbuf*/
 extern bool dpaa2_enable_ts[];
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index dcd86c4056..94815485b8 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -403,9 +403,12 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 static int __rte_noinline __rte_hot
 eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 		  struct qbman_fd *fd,
+		  struct sw_buf_free *free_buf,
+		  uint32_t *free_count,
+		  uint32_t pkt_id,
 		  uint16_t bpid)
 {
-	struct rte_mbuf *cur_seg = mbuf, *prev_seg, *mi, *temp;
+	struct rte_mbuf *cur_seg = mbuf, *mi, *temp;
 	struct qbman_sge *sgt, *sge = NULL;
 	int i, offset = 0;
 
@@ -486,10 +489,11 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 #endif
 				}
 			}
-			cur_seg = cur_seg->next;
 		} else if (RTE_MBUF_HAS_EXTBUF(cur_seg)) {
+			free_buf[*free_count].seg = cur_seg;
+			free_buf[*free_count].pkt_id = pkt_id;
+			++*free_count;
 			DPAA2_SET_FLE_IVP(sge);
-			cur_seg = cur_seg->next;
 		} else {
 			/* Get owner MBUF from indirect buffer */
 			mi = rte_mbuf_from_indirect(cur_seg);
@@ -503,11 +507,11 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 						   mempool_to_bpid(mi->pool));
 				rte_mbuf_refcnt_update(mi, 1);
 			}
-			prev_seg = cur_seg;
-			cur_seg = cur_seg->next;
-			prev_seg->next = NULL;
-			rte_pktmbuf_free(prev_seg);
+			free_buf[*free_count].seg = cur_seg;
+			free_buf[*free_count].pkt_id = pkt_id;
+			++*free_count;
 		}
+		cur_seg = cur_seg->next;
 	}
 	DPAA2_SG_SET_FINAL(sge, true);
 	return 0;
@@ -515,11 +519,19 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 
 static void
 eth_mbuf_to_fd(struct rte_mbuf *mbuf,
-	       struct qbman_fd *fd, uint16_t bpid) __rte_unused;
+	       struct qbman_fd *fd,
+	       struct sw_buf_free *buf_to_free,
+	       uint32_t *free_count,
+	       uint32_t pkt_id,
+	       uint16_t bpid) __rte_unused;
 
 static void __rte_noinline __rte_hot
 eth_mbuf_to_fd(struct rte_mbuf *mbuf,
-	       struct qbman_fd *fd, uint16_t bpid)
+	       struct qbman_fd *fd,
+	       struct sw_buf_free *buf_to_free,
+	       uint32_t *free_count,
+	       uint32_t pkt_id,
+	       uint16_t bpid)
 {
 	DPAA2_MBUF_TO_CONTIG_FD(mbuf, fd, bpid);
 
@@ -540,6 +552,9 @@ eth_mbuf_to_fd(struct rte_mbuf *mbuf,
 				(void **)&mbuf, 1, 0);
 #endif
 	} else if (RTE_MBUF_HAS_EXTBUF(mbuf)) {
+		buf_to_free[*free_count].seg = mbuf;
+		buf_to_free[*free_count].pkt_id = pkt_id;
+		++*free_count;
 		DPAA2_SET_FD_IVP(fd);
 	} else {
 		struct rte_mbuf *mi;
@@ -549,7 +564,10 @@ eth_mbuf_to_fd(struct rte_mbuf *mbuf,
 			DPAA2_SET_FD_IVP(fd);
 		else
 			rte_mbuf_refcnt_update(mi, 1);
-		rte_pktmbuf_free(mbuf);
+
+		buf_to_free[*free_count].seg = mbuf;
+		buf_to_free[*free_count].pkt_id = pkt_id;
+		++*free_count;
 	}
 }
 
@@ -1226,7 +1244,8 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
 	struct dpaa2_dev_priv *priv = eth_data->dev_private;
 	uint32_t flags[MAX_TX_RING_SLOTS] = {0};
-	struct rte_mbuf **orig_bufs = bufs;
+	struct sw_buf_free buf_to_free[DPAA2_MAX_SGS * dpaa2_dqrr_size];
+	uint32_t free_count = 0;
 
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
@@ -1324,11 +1343,17 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 					mp = (*bufs)->pool;
 					if (eth_mbuf_to_sg_fd(*bufs,
 							      &fd_arr[loop],
+							      buf_to_free,
+							      &free_count,
+							      loop,
 							      mempool_to_bpid(mp)))
 						goto send_n_return;
 				} else {
 					eth_mbuf_to_fd(*bufs,
-						       &fd_arr[loop], 0);
+							&fd_arr[loop],
+							buf_to_free,
+							&free_count,
+							loop, 0);
 				}
 				bufs++;
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1373,11 +1398,17 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				if (unlikely((*bufs)->nb_segs > 1)) {
 					if (eth_mbuf_to_sg_fd(*bufs,
 							&fd_arr[loop],
+							buf_to_free,
+							&free_count,
+							loop,
 							bpid))
 						goto send_n_return;
 				} else {
 					eth_mbuf_to_fd(*bufs,
-						       &fd_arr[loop], bpid);
+							&fd_arr[loop],
+							buf_to_free,
+							&free_count,
+							loop, bpid);
 				}
 			}
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1410,12 +1441,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}
 	dpaa2_q->tx_pkts += num_tx;
 
-	loop = 0;
-	while (loop < num_tx) {
-		if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs)))
-			rte_pktmbuf_free(*orig_bufs);
-		orig_bufs++;
-		loop++;
+	for (loop = 0; loop < free_count; loop++) {
+		if (buf_to_free[loop].pkt_id < num_tx)
+			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
 	return num_tx;
@@ -1445,12 +1473,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 skip_tx:
 	dpaa2_q->tx_pkts += num_tx;
 
-	loop = 0;
-	while (loop < num_tx) {
-		if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs)))
-			rte_pktmbuf_free(*orig_bufs);
-		orig_bufs++;
-		loop++;
+	for (loop = 0; loop < free_count; loop++) {
+		if (buf_to_free[loop].pkt_id < num_tx)
+			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
 	return num_tx;
@@ -1523,7 +1548,7 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 		struct rte_mbuf **bufs, uint16_t nb_pkts)
 {
 	/* Function to transmit the frames to multiple queues respectively.*/
-	uint32_t loop, retry_count;
+	uint32_t loop, i, retry_count;
 	int32_t ret;
 	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
 	uint32_t frames_to_send, num_free_eq_desc = 0;
@@ -1536,6 +1561,8 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 	struct rte_eth_dev_data *eth_data;
 	struct dpaa2_dev_priv *priv;
 	struct dpaa2_queue *order_sendq;
+	struct sw_buf_free buf_to_free[DPAA2_MAX_SGS * dpaa2_dqrr_size];
+	uint32_t free_count = 0;
 
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
@@ -1647,11 +1674,17 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 			if (unlikely((*bufs)->nb_segs > 1)) {
 				if (eth_mbuf_to_sg_fd(*bufs,
 						      &fd_arr[loop],
+						      buf_to_free,
+						      &free_count,
+						      loop,
 						      bpid))
 					goto send_frames;
 			} else {
 				eth_mbuf_to_fd(*bufs,
-					       &fd_arr[loop], bpid);
+						&fd_arr[loop],
+						buf_to_free,
+						&free_count,
+						loop, bpid);
 			}
 		}
 
@@ -1676,6 +1709,10 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 		}
 	}
 
+	for (i = 0; i < free_count; i++) {
+		if (buf_to_free[i].pkt_id < loop)
+			rte_pktmbuf_free_seg(buf_to_free[i].seg);
+	}
 	return loop;
 }
 
@@ -1698,6 +1735,8 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int32_t ret;
 	uint16_t num_tx = 0;
 	uint16_t bpid;
+	struct sw_buf_free buf_to_free[DPAA2_MAX_SGS * dpaa2_dqrr_size];
+	uint32_t free_count = 0;
 
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
@@ -1810,11 +1849,17 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				if (unlikely((*bufs)->nb_segs > 1)) {
 					if (eth_mbuf_to_sg_fd(*bufs,
 							      &fd_arr[loop],
+							      buf_to_free,
+							      &free_count,
+							      loop,
 							      bpid))
 						goto send_n_return;
 				} else {
 					eth_mbuf_to_fd(*bufs,
-						       &fd_arr[loop], bpid);
+							&fd_arr[loop],
+							buf_to_free,
+							&free_count,
+							loop, bpid);
 				}
 			}
 			bufs++;
@@ -1843,6 +1888,11 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		nb_pkts -= loop;
 	}
 	dpaa2_q->tx_pkts += num_tx;
+	for (loop = 0; loop < free_count; loop++) {
+		if (buf_to_free[loop].pkt_id < num_tx)
+			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
+	}
+
 	return num_tx;
 
 send_n_return:
@@ -1867,6 +1917,11 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}
 skip_tx:
 	dpaa2_q->tx_pkts += num_tx;
+	for (loop = 0; loop < free_count; loop++) {
+		if (buf_to_free[loop].pkt_id < num_tx)
+			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
+	}
+
 	return num_tx;
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 10/15] net/dpaa: fix Jumbo packet Rx in case of VSP
       [not found] <20220928052516.1279442-1-g.singh@nxp.com>
                   ` (4 preceding siblings ...)
  2022-09-28  5:25 ` [PATCH 08/15] net/dpaa2: fix buffer free on transmit SG packets Gagandeep Singh
@ 2022-09-28  5:25 ` Gagandeep Singh
  2022-09-28  5:25 ` [PATCH 14/15] net/dpaa: fix buffer free on transmit SG packets Gagandeep Singh
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-09-28  5:25 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rohit Raj, stable

From: Rohit Raj <rohit.raj@nxp.com>

For packets larger than 2K bytes, segmented packets were being
received in DPDK even if the mbuf size was greater than the packet
length. This is due to the buffer size configured in the VSP.

This patch fixes the issue by configuring the VSP according to the
mbuf size configured during mempool creation.
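
Concretely, the VSP's external buffer pool size was hard-coded to
RTE_MBUF_DEFAULT_BUF_SIZE regardless of the mempool in use; the fix
threads the real buffer size down to the VSP configuration (sketch
from the diff below, where buffsz is the Rx buffer size derived from
the mempool's data room):

	ret = dpaa_port_vsp_update(dpaa_intf, fmc_q, vsp_id,
			DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid,
			fif, buffsz + RTE_PKTMBUF_HEADROOM);

	/* ... which ends up in dpaa_port_vsp_configure(): */
	vsp_params.ext_buf_pools.ext_buf_pool[0].size = mbuf_data_room_size;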

Fixes: e4abd4ff183c ("net/dpaa: support virtual storage profile")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c |  5 ++---
 drivers/net/dpaa/dpaa_flow.c   | 13 ++++++-------
 drivers/net/dpaa/dpaa_flow.h   |  5 +++--
 3 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index abcb1bc9ec..9b281ba8cb 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -988,8 +988,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	} else {
 		DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
 		     " larger than a single mbuf (%u) and scattered"
-		     " mode has not been requested",
-		     max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
+		     " mode has not been requested", max_rx_pktlen, buffsz);
 	}
 
 	dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
@@ -1004,7 +1003,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		if (vsp_id >= 0) {
 			ret = dpaa_port_vsp_update(dpaa_intf, fmc_q, vsp_id,
 					DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid,
-					fif);
+					fif, buffsz + RTE_PKTMBUF_HEADROOM);
 			if (ret) {
 				DPAA_PMD_ERR("dpaa_port_vsp_update failed");
 				return ret;
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 1ccd036027..690ba6bcb3 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -939,7 +939,7 @@ int dpaa_fm_term(void)
 
 static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
 		uint8_t vsp_id, t_handle fman_handle,
-		struct fman_if *fif)
+		struct fman_if *fif, u32 mbuf_data_room_size)
 {
 	t_fm_vsp_params vsp_params;
 	t_fm_buffer_prefix_content buf_prefix_cont;
@@ -976,10 +976,8 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
 		return -1;
 	}
 	vsp_params.ext_buf_pools.num_of_pools_used = 1;
-	vsp_params.ext_buf_pools.ext_buf_pool[0].id =
-		dpaa_intf->vsp_bpid[vsp_id];
-	vsp_params.ext_buf_pools.ext_buf_pool[0].size =
-		RTE_MBUF_DEFAULT_BUF_SIZE;
+	vsp_params.ext_buf_pools.ext_buf_pool[0].id = dpaa_intf->vsp_bpid[vsp_id];
+	vsp_params.ext_buf_pools.ext_buf_pool[0].size = mbuf_data_room_size;
 
 	dpaa_intf->vsp_handle[vsp_id] = fm_vsp_config(&vsp_params);
 	if (!dpaa_intf->vsp_handle[vsp_id]) {
@@ -1023,7 +1021,7 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
 
 int dpaa_port_vsp_update(struct dpaa_if *dpaa_intf,
 		bool fmc_mode, uint8_t vsp_id, uint32_t bpid,
-		struct fman_if *fif)
+		struct fman_if *fif, u32 mbuf_data_room_size)
 {
 	int ret = 0;
 	t_handle fman_handle;
@@ -1054,7 +1052,8 @@ int dpaa_port_vsp_update(struct dpaa_if *dpaa_intf,
 
 	dpaa_intf->vsp_bpid[vsp_id] = bpid;
 
-	return dpaa_port_vsp_configure(dpaa_intf, vsp_id, fman_handle, fif);
+	return dpaa_port_vsp_configure(dpaa_intf, vsp_id, fman_handle, fif,
+				       mbuf_data_room_size);
 }
 
 int dpaa_port_vsp_cleanup(struct dpaa_if *dpaa_intf, struct fman_if *fif)
diff --git a/drivers/net/dpaa/dpaa_flow.h b/drivers/net/dpaa/dpaa_flow.h
index f5e131acfa..4742b8dd0a 100644
--- a/drivers/net/dpaa/dpaa_flow.h
+++ b/drivers/net/dpaa/dpaa_flow.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017,2019 NXP
+ * Copyright 2017,2019,2022 NXP
  */
 
 #ifndef __DPAA_FLOW_H__
@@ -11,7 +11,8 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set);
 int dpaa_fm_deconfig(struct dpaa_if *dpaa_intf, struct fman_if *fif);
 void dpaa_write_fm_config_to_file(void);
 int dpaa_port_vsp_update(struct dpaa_if *dpaa_intf,
-	bool fmc_mode, uint8_t vsp_id, uint32_t bpid, struct fman_if *fif);
+	bool fmc_mode, uint8_t vsp_id, uint32_t bpid, struct fman_if *fif,
+	u32 mbuf_data_room_size);
 int dpaa_port_vsp_cleanup(struct dpaa_if *dpaa_intf, struct fman_if *fif);
 int dpaa_port_fmc_init(struct fman_if *fif,
 		       uint32_t *fqids, int8_t *vspids, int max_nb_rxq);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 14/15] net/dpaa: fix buffer free on transmit SG packets
       [not found] <20220928052516.1279442-1-g.singh@nxp.com>
                   ` (5 preceding siblings ...)
  2022-09-28  5:25 ` [PATCH 10/15] net/dpaa: fix Jumbo packet Rx in case of VSP Gagandeep Singh
@ 2022-09-28  5:25 ` Gagandeep Singh
  2022-09-28  5:25 ` [PATCH 15/15] net/dpaa: fix buffer free in slow path Gagandeep Singh
       [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
  8 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-09-28  5:25 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Gagandeep Singh, stable

When using SG list to TX with external and direct buffers,
HW free direct buffers and driver free external buffers.

Software scans the complete SG mbuf list to find the external
buffers to free, but this is wrong as hardware can free the
direct buffers if any present in the list and same can be
re-allocated for other purpose in multi thread or high spead
running traffic environment with new data in it. So the software
which is scanning the SG mbuf list, if that list has any direct
buffer present then that direct buffer's next pointor can give
wrong pointer value, if already freed by hardware which
can do the mempool corruption or memory leak.

In this patch instead of relying on user given SG mbuf list
we are storing the buffers in an internal list which will
be scanned by driver after transmit to free non-direct
buffers.

This patch also fixes below issues.

Driver is freeing complete SG list by checking external buffer
flag in first segment only, but external buffer can be attached
to any of the segment. Because of this, driver either can double
free buffers or there can be memory leak.

In case of indirect buffers, driver is modifying the original
buffer list to free the indirect buffers but this orginal buffer
list is being used by driver even after transmit packets for
non-direct buffer cleanup. This can cause the buffer leak issue.
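
To make the last issue concrete: the old SG walk unlinked and freed
indirect segments while still iterating the caller's chain, so the
post-Tx cleanup later walked a mutilated list. The fix records the
segment and defers the free (sketch from the diff below):

	/* before: free in place, cutting the caller's mbuf chain
	 *	prev_seg = cur_seg;
	 *	cur_seg = cur_seg->next;
	 *	prev_seg->next = NULL;
	 *	rte_pktmbuf_free(prev_seg);
	 *
	 * after: record into the dpaa_sw_buf_free side list and free
	 * after transmit, leaving the caller's list untouched:
	 */
	free_buf[*free_count].seg = cur_seg;
	free_buf[*free_count].pkt_id = pkt_id;
	++*free_count;
	cur_seg = cur_seg->next;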

Fixes: f191d5abda54 ("net/dpaa: support external buffers in Tx")
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.h | 10 ++++++
 drivers/net/dpaa/dpaa_rxtx.c   | 61 ++++++++++++++++++++++------------
 2 files changed, 49 insertions(+), 22 deletions(-)

diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index f9c0554530..502c1c88b8 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -112,6 +112,16 @@
 
 extern struct rte_mempool *dpaa_tx_sg_pool;
 
+/* structure to free external and indirect
+ * buffers.
+ */
+struct dpaa_sw_buf_free {
+	/* To which packet this segment belongs */
+	uint16_t pkt_id;
+	/* The actual segment */
+	struct rte_mbuf *seg;
+};
+
 /* Each network interface is represented by one of these */
 struct dpaa_if {
 	int valid;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index e23206bf5c..4d285b4f38 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -803,9 +803,12 @@ uint16_t dpaa_eth_queue_rx(void *q,
 
 static int
 dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
-		struct qm_fd *fd)
+		struct qm_fd *fd,
+		struct dpaa_sw_buf_free *free_buf,
+		uint32_t *free_count,
+		uint32_t pkt_id)
 {
-	struct rte_mbuf *cur_seg = mbuf, *prev_seg = NULL;
+	struct rte_mbuf *cur_seg = mbuf;
 	struct rte_mbuf *temp, *mi;
 	struct qm_sg_entry *sg_temp, *sgt;
 	int i = 0;
@@ -869,10 +872,11 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 				sg_temp->bpid =
 					DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
 			}
-			cur_seg = cur_seg->next;
 		} else if (RTE_MBUF_HAS_EXTBUF(cur_seg)) {
+			free_buf[*free_count].seg = cur_seg;
+			free_buf[*free_count].pkt_id = pkt_id;
+			++*free_count;
 			sg_temp->bpid = 0xff;
-			cur_seg = cur_seg->next;
 		} else {
 			/* Get owner MBUF from indirect buffer */
 			mi = rte_mbuf_from_indirect(cur_seg);
@@ -885,11 +889,11 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 				sg_temp->bpid = DPAA_MEMPOOL_TO_BPID(mi->pool);
 				rte_mbuf_refcnt_update(mi, 1);
 			}
-			prev_seg = cur_seg;
-			cur_seg = cur_seg->next;
-			prev_seg->next = NULL;
-			rte_pktmbuf_free(prev_seg);
+			free_buf[*free_count].seg = cur_seg;
+			free_buf[*free_count].pkt_id = pkt_id;
+			++*free_count;
 		}
+		cur_seg = cur_seg->next;
 		if (cur_seg == NULL) {
 			sg_temp->final = 1;
 			cpu_to_hw_sg(sg_temp);
@@ -904,7 +908,10 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 static inline void
 tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 			    struct dpaa_bp_info *bp_info,
-			    struct qm_fd *fd_arr)
+			    struct qm_fd *fd_arr,
+			    struct dpaa_sw_buf_free *buf_to_free,
+			    uint32_t *free_count,
+			    uint32_t pkt_id)
 {
 	struct rte_mbuf *mi = NULL;
 
@@ -923,6 +930,9 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
 		}
 	} else if (RTE_MBUF_HAS_EXTBUF(mbuf)) {
+		buf_to_free[*free_count].seg = mbuf;
+		buf_to_free[*free_count].pkt_id = pkt_id;
+		++*free_count;
 		DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr,
 				bp_info ? bp_info->bpid : 0xff);
 	} else {
@@ -946,7 +956,9 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr,
 						bp_info ? bp_info->bpid : 0xff);
 		}
-		rte_pktmbuf_free(mbuf);
+		buf_to_free[*free_count].seg = mbuf;
+		buf_to_free[*free_count].pkt_id = pkt_id;
+		++*free_count;
 	}
 
 	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK)
@@ -957,16 +969,21 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 static inline uint16_t
 tx_on_dpaa_pool(struct rte_mbuf *mbuf,
 		struct dpaa_bp_info *bp_info,
-		struct qm_fd *fd_arr)
+		struct qm_fd *fd_arr,
+		struct dpaa_sw_buf_free *buf_to_free,
+		uint32_t *free_count,
+		uint32_t pkt_id)
 {
 	DPAA_DP_LOG(DEBUG, "BMAN offloaded buffer, mbuf: %p", mbuf);
 
 	if (mbuf->nb_segs == 1) {
 		/* Case for non-segmented buffers */
-		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr);
+		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr,
+				buf_to_free, free_count, pkt_id);
 	} else if (mbuf->nb_segs > 1 &&
 		   mbuf->nb_segs <= DPAA_SGT_MAX_ENTRIES) {
-		if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr)) {
+		if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr, buf_to_free,
+					   free_count, pkt_id)) {
 			DPAA_PMD_DEBUG("Unable to create Scatter Gather FD");
 			return 1;
 		}
@@ -1070,7 +1087,8 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	uint16_t state;
 	int ret, realloc_mbuf = 0;
 	uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0};
-	struct rte_mbuf **orig_bufs = bufs;
+	struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
+	uint32_t free_count = 0;
 
 	if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
 		ret = rte_dpaa_portal_init((void *)0);
@@ -1153,7 +1171,10 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			}
 indirect_buf:
 			state = tx_on_dpaa_pool(mbuf, bp_info,
-						&fd_arr[loop]);
+						&fd_arr[loop],
+						buf_to_free,
+						&free_count,
+						loop);
 			if (unlikely(state)) {
 				/* Set frames_to_send & nb_bufs so
 				 * that packets are transmitted till
@@ -1178,13 +1199,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 
 	DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
 
-
-	loop = 0;
-	while (loop < sent) {
-		if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs)))
-			rte_pktmbuf_free(*orig_bufs);
-		orig_bufs++;
-		loop++;
+	for (loop = 0; loop < free_count; loop++) {
+		if (buf_to_free[loop].pkt_id < sent)
+			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
 	return sent;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 15/15] net/dpaa: fix buffer free in slow path
       [not found] <20220928052516.1279442-1-g.singh@nxp.com>
                   ` (6 preceding siblings ...)
  2022-09-28  5:25 ` [PATCH 14/15] net/dpaa: fix buffer free on transmit SG packets Gagandeep Singh
@ 2022-09-28  5:25 ` Gagandeep Singh
  2022-10-05 14:21   ` Ferruh Yigit
       [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
  8 siblings, 1 reply; 22+ messages in thread
From: Gagandeep Singh @ 2022-09-28  5:25 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Gagandeep Singh, stable

Adding a check in slow path to free those buffers
which are not external.

Fixes: 9124e65dd3eb ("net/dpaa: enable Tx queue taildrop")
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/net/dpaa/dpaa_rxtx.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 4d285b4f38..ce4f3d6c85 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -455,7 +455,7 @@ dpaa_free_mbuf(const struct qm_fd *fd)
 	bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
 	format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
 	if (unlikely(format == qm_fd_sg)) {
-		struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+		struct rte_mbuf *first_seg, *cur_seg;
 		struct qm_sg_entry *sgt, *sg_temp;
 		void *vaddr, *sg_vaddr;
 		int i = 0;
@@ -469,32 +469,25 @@ dpaa_free_mbuf(const struct qm_fd *fd)
 		sgt = vaddr + fd_offset;
 		sg_temp = &sgt[i++];
 		hw_sg_to_cpu(sg_temp);
-		temp = (struct rte_mbuf *)
-			((char *)vaddr - bp_info->meta_data_size);
 		sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
 						qm_sg_entry_get64(sg_temp));
-
 		first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
 						bp_info->meta_data_size);
 		first_seg->nb_segs = 1;
-		prev_seg = first_seg;
 		while (i < DPAA_SGT_MAX_ENTRIES) {
 			sg_temp = &sgt[i++];
 			hw_sg_to_cpu(sg_temp);
-			sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+			if (sg_temp->bpid != 0xFF) {
+				bp_info = DPAA_BPID_TO_POOL_INFO(sg_temp->bpid);
+				sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
 						qm_sg_entry_get64(sg_temp));
-			cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+				cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
 						      bp_info->meta_data_size);
-			first_seg->nb_segs += 1;
-			prev_seg->next = cur_seg;
-			if (sg_temp->final) {
-				cur_seg->next = NULL;
-				break;
+				rte_pktmbuf_free_seg(cur_seg);
 			}
-			prev_seg = cur_seg;
+			if (sg_temp->final)
+				break;
 		}
-
-		rte_pktmbuf_free_seg(temp);
 		rte_pktmbuf_free_seg(first_seg);
 		return 0;
 	}
-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 15/15] net/dpaa: fix buffer free in slow path
  2022-09-28  5:25 ` [PATCH 15/15] net/dpaa: fix buffer free in slow path Gagandeep Singh
@ 2022-10-05 14:21   ` Ferruh Yigit
  2022-10-06  8:51     ` Gagandeep Singh
  0 siblings, 1 reply; 22+ messages in thread
From: Ferruh Yigit @ 2022-10-05 14:21 UTC (permalink / raw)
  To: Gagandeep Singh, dev; +Cc: stable

On 9/28/2022 6:25 AM, Gagandeep Singh wrote:
> Adding a check in slow path to free those buffers
> which are not external.
> 

Can you please explain what was the error before fix, what was happening 
when you try to free all mbufs?

Also it seems previous logic was different, with 'prev_seg' etc, can you 
explain what/why changed there?

> Fixes: 9124e65dd3eb ("net/dpaa: enable Tx queue taildrop")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> ---
>   drivers/net/dpaa/dpaa_rxtx.c | 23 ++++++++---------------
>   1 file changed, 8 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
> index 4d285b4f38..ce4f3d6c85 100644
> --- a/drivers/net/dpaa/dpaa_rxtx.c
> +++ b/drivers/net/dpaa/dpaa_rxtx.c
> @@ -455,7 +455,7 @@ dpaa_free_mbuf(const struct qm_fd *fd)
>   	bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
>   	format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
>   	if (unlikely(format == qm_fd_sg)) {
> -		struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
> +		struct rte_mbuf *first_seg, *cur_seg;
>   		struct qm_sg_entry *sgt, *sg_temp;
>   		void *vaddr, *sg_vaddr;
>   		int i = 0;
> @@ -469,32 +469,25 @@ dpaa_free_mbuf(const struct qm_fd *fd)
>   		sgt = vaddr + fd_offset;
>   		sg_temp = &sgt[i++];
>   		hw_sg_to_cpu(sg_temp);
> -		temp = (struct rte_mbuf *)
> -			((char *)vaddr - bp_info->meta_data_size);
>   		sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
>   						qm_sg_entry_get64(sg_temp));
> -
>   		first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
>   						bp_info->meta_data_size);
>   		first_seg->nb_segs = 1;
> -		prev_seg = first_seg;
>   		while (i < DPAA_SGT_MAX_ENTRIES) {
>   			sg_temp = &sgt[i++];
>   			hw_sg_to_cpu(sg_temp);
> -			sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
> +			if (sg_temp->bpid != 0xFF) {
> +				bp_info = DPAA_BPID_TO_POOL_INFO(sg_temp->bpid);
> +				sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
>   						qm_sg_entry_get64(sg_temp));
> -			cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
> +				cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
>   						      bp_info->meta_data_size);
> -			first_seg->nb_segs += 1;
> -			prev_seg->next = cur_seg;
> -			if (sg_temp->final) {
> -				cur_seg->next = NULL;
> -				break;
> +				rte_pktmbuf_free_seg(cur_seg);
>   			}
> -			prev_seg = cur_seg;
> +			if (sg_temp->final)
> +				break;
>   		}
> -
> -		rte_pktmbuf_free_seg(temp);
>   		rte_pktmbuf_free_seg(first_seg);
>   		return 0;
>   	}


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 05/15] net/dpaa2: check free enqueue descriptors before Tx
  2022-09-28  5:25 ` [PATCH 05/15] net/dpaa2: check free enqueue descriptors before Tx Gagandeep Singh
@ 2022-10-05 14:30   ` Ferruh Yigit
  0 siblings, 0 replies; 22+ messages in thread
From: Ferruh Yigit @ 2022-10-05 14:30 UTC (permalink / raw)
  To: Gagandeep Singh, dev; +Cc: brick, stable, Rohit Raj

On 9/28/2022 6:25 AM, Gagandeep Singh wrote:
> From: brick <brick.yang@nxp.com>
> 
> Check whether free enqueue descriptors are available before enqueuing
> a Tx packet, and try to reclaim enqueue descriptors in case none are
> free.
> 
> Fixes: ed1cdbed6a15 ("net/dpaa2: support multiple Tx queues enqueue for ordered")
> Cc: stable@dpdk.org
> 
> Signed-off-by: brick <brick.yang@nxp.com>

Can you please use name tag as "Name Surname <email lower case>", like
  Signed-off-by: Brick Yang <brick.yang@nxp.com>

<...>

> +		DPAA2_PMD_DP_DEBUG("===> eth_data =%p, fqid =%d\n",
> +				   eth_data, dpaa2_q[loop]->fqid);
> +
> +		/*Check if the queue is congested*/

syntax, more common to put space before/after '/* ' & ' */'

> +		retry_count = 0;
> +		while (qbman_result_SCN_state(dpaa2_q[loop]->cscn)) {
> +			retry_count++;
> +			/* Retry for some time before giving up */
> +			if (retry_count > CONG_RETRY_COUNT)
> +				goto send_frames;
> +		}
> +
> +		/*Prepare enqueue descriptor*/

ditto

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 08/15] net/dpaa2: fix buffer free on transmit SG packets
  2022-09-28  5:25 ` [PATCH 08/15] net/dpaa2: fix buffer free on transmit SG packets Gagandeep Singh
@ 2022-10-06  7:48   ` Ferruh Yigit
  0 siblings, 0 replies; 22+ messages in thread
From: Ferruh Yigit @ 2022-10-06  7:48 UTC (permalink / raw)
  To: Gagandeep Singh, dev; +Cc: stable

On 9/28/2022 6:25 AM, Gagandeep Singh wrote:
> When using SG list to TX with external and direct buffers,
> HW free the direct buffers and driver free the external buffers.
> 
> Software scans the complete SG mbuf list to find the external
> buffers to free, but this is wrong as hardware can free the
> direct buffers if any present in the list and same can be
> re-allocated for other purpose in multi thread or high spead

s/spead/speed/

> running traffic environment with new data in it. So the software
> which is scanning the SG mbuf list, if that list has any direct
> buffer present then that direct buffer's next pointor can give

s/pointor/pointer/

> wrong pointer value, if already freed by hardware which
> can do the mempool corruption or memory leak.
> 
> In this patch instead of relying on user given SG mbuf list
> we are storing the buffers in an internal list which will
> be scanned by driver after transmit to free non-direct
> buffers.
> 
> This patch also fixes 2 more memory leak issues.
> 
> Driver is freeing complete SG list by checking external buffer
> flag in first segment only, but external buffer can be attached
> to any of the segment. Because of which driver either can double
> free buffers or there can be memory leak.
> 
> In case of indirect buffers, driver is modifying the original
> buffer list to free the indirect buffers but this orginal buffer

s/orginal/original/

same fixes needed for dpaa version of this patch, 14/15.

> list is being used even after transmit packets for software
> buffer cleanup. This can cause the buffer leak issue.
> 
> Fixes: 6bfbafe18d15 ("net/dpaa2: support external buffers in Tx")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>

<...>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH 15/15] net/dpaa: fix buffer free in slow path
  2022-10-05 14:21   ` Ferruh Yigit
@ 2022-10-06  8:51     ` Gagandeep Singh
  2022-10-06  9:42       ` Ferruh Yigit
  0 siblings, 1 reply; 22+ messages in thread
From: Gagandeep Singh @ 2022-10-06  8:51 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: stable

Hi,

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Wednesday, October 5, 2022 7:52 PM
> To: Gagandeep Singh <G.Singh@nxp.com>; dev@dpdk.org
> Cc: stable@dpdk.org
> Subject: Re: [PATCH 15/15] net/dpaa: fix buffer free in slow path
> 
> On 9/28/2022 6:25 AM, Gagandeep Singh wrote:
> > Adding a check in slow path to free those buffers which are not
> > external.
> >
> 
> Can you please explain what was the error before fix, what was happening
> when you try to free all mbufs?
> 
> Also it seems previous logic was different, with 'prev_seg' etc, can you
> explain what/why changed there?
> 
Actually, there were two issues, this function was converting all the segments present in HW frame
descriptor to mbuf SG list by doing while on segments in FD (HW descriptor) and in the end
it frees only one segment by calling the API rte_pktmbuf_free_seg(), so for other segments
memory will be leaked.

Now in this change, doing the loop on each segment in FD and if the segment has a valid
buffer pool id (HW pool id), freeing that segment in the loop itself without converting to a mbuf list.
if we free all the buffers even those with invalid HW bpid (which will only be the external buffer case),
then there can be double free because all the external buffer free handling is being done by the
Xmit function.
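
To put the resulting loop in one readable piece (reconstructed from the
patch below):

	while (i < DPAA_SGT_MAX_ENTRIES) {
		sg_temp = &sgt[i++];
		hw_sg_to_cpu(sg_temp);
		if (sg_temp->bpid != 0xFF) {
			/* valid HW bpid: driver owns this segment */
			bp_info = DPAA_BPID_TO_POOL_INFO(sg_temp->bpid);
			sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
					qm_sg_entry_get64(sg_temp));
			cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
					bp_info->meta_data_size);
			rte_pktmbuf_free_seg(cur_seg);
		}
		/* bpid == 0xFF: external buffer, already freed in Xmit */
		if (sg_temp->final)
			break;
	}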

> > Fixes: 9124e65dd3eb ("net/dpaa: enable Tx queue taildrop")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> > ---
> >   drivers/net/dpaa/dpaa_rxtx.c | 23 ++++++++---------------
> >   1 file changed, 8 insertions(+), 15 deletions(-)
> >
> > diff --git a/drivers/net/dpaa/dpaa_rxtx.c
> > b/drivers/net/dpaa/dpaa_rxtx.c index 4d285b4f38..ce4f3d6c85 100644
> > --- a/drivers/net/dpaa/dpaa_rxtx.c
> > +++ b/drivers/net/dpaa/dpaa_rxtx.c
> > @@ -455,7 +455,7 @@ dpaa_free_mbuf(const struct qm_fd *fd)
> >   	bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
> >   	format = (fd->opaque & DPAA_FD_FORMAT_MASK) >>
> DPAA_FD_FORMAT_SHIFT;
> >   	if (unlikely(format == qm_fd_sg)) {
> > -		struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
> > +		struct rte_mbuf *first_seg, *cur_seg;
> >   		struct qm_sg_entry *sgt, *sg_temp;
> >   		void *vaddr, *sg_vaddr;
> >   		int i = 0;
> > @@ -469,32 +469,25 @@ dpaa_free_mbuf(const struct qm_fd *fd)
> >   		sgt = vaddr + fd_offset;
> >   		sg_temp = &sgt[i++];
> >   		hw_sg_to_cpu(sg_temp);
> > -		temp = (struct rte_mbuf *)
> > -			((char *)vaddr - bp_info->meta_data_size);
> >   		sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
> >
> 	qm_sg_entry_get64(sg_temp));
> > -
> >   		first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
> >   						bp_info->meta_data_size);
> >   		first_seg->nb_segs = 1;
> > -		prev_seg = first_seg;
> >   		while (i < DPAA_SGT_MAX_ENTRIES) {
> >   			sg_temp = &sgt[i++];
> >   			hw_sg_to_cpu(sg_temp);
> > -			sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
> > +			if (sg_temp->bpid != 0xFF) {
> > +				bp_info =
> DPAA_BPID_TO_POOL_INFO(sg_temp->bpid);
> > +				sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
> >
> 	qm_sg_entry_get64(sg_temp));
> > -			cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
> > +				cur_seg = (struct rte_mbuf *)((char
> *)sg_vaddr -
> >   						      bp_info-
> >meta_data_size);
> > -			first_seg->nb_segs += 1;
> > -			prev_seg->next = cur_seg;
> > -			if (sg_temp->final) {
> > -				cur_seg->next = NULL;
> > -				break;
> > +				rte_pktmbuf_free_seg(cur_seg);
> >   			}
> > -			prev_seg = cur_seg;
> > +			if (sg_temp->final)
> > +				break;
> >   		}
> > -
> > -		rte_pktmbuf_free_seg(temp);
> >   		rte_pktmbuf_free_seg(first_seg);
> >   		return 0;
> >   	}


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 15/15] net/dpaa: fix buffer free in slow path
  2022-10-06  8:51     ` Gagandeep Singh
@ 2022-10-06  9:42       ` Ferruh Yigit
  2022-10-06 11:19         ` Gagandeep Singh
  0 siblings, 1 reply; 22+ messages in thread
From: Ferruh Yigit @ 2022-10-06  9:42 UTC (permalink / raw)
  To: Gagandeep Singh, dev; +Cc: stable

On 10/6/2022 9:51 AM, Gagandeep Singh wrote:
> Hi,
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Wednesday, October 5, 2022 7:52 PM
>> To: Gagandeep Singh <G.Singh@nxp.com>; dev@dpdk.org
>> Cc: stable@dpdk.org
>> Subject: Re: [PATCH 15/15] net/dpaa: fix buffer free in slow path
>>
>> On 9/28/2022 6:25 AM, Gagandeep Singh wrote:
>>> Adding a check in slow path to free those buffers which are not
>>> external.
>>>
>>
>> Can you please explain what was the error before fix, what was happening
>> when you try to free all mbufs?
>>
>> Also it seems previous logic was different, with 'prev_seg' etc, can you
>> explain what/why changed there?
>>
> Actually, there were two issues, this function was converting all the segments present in HW frame
> descriptor to mbuf SG list by doing while on segments in FD (HW descriptor) and in the end
> it frees only one segment by calling the API rte_pktmbuf_free_seg(), so for other segments
> memory will be leaked.
> 

ack

> Now in this change, doing the loop on each segment in FD and if the segment has a valid
> buffer pool id (HW pool id), freeing that segment in the loop itself without converting to a mbuf list.
> if we free all the buffers even those with invalid HW bpid (which will only be the external buffer case),
> then there can be double free because all the external buffer free handling is being done by the
> Xmit function.
> 

Got it, can you please give more information in the commit log as above, 
and can you please elaborate impact of possible double free, will it 
crash etc?

>>> Fixes: 9124e65dd3eb ("net/dpaa: enable Tx queue taildrop")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
>>> ---
>>>    drivers/net/dpaa/dpaa_rxtx.c | 23 ++++++++---------------
>>>    1 file changed, 8 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/drivers/net/dpaa/dpaa_rxtx.c
>>> b/drivers/net/dpaa/dpaa_rxtx.c index 4d285b4f38..ce4f3d6c85 100644
>>> --- a/drivers/net/dpaa/dpaa_rxtx.c
>>> +++ b/drivers/net/dpaa/dpaa_rxtx.c
>>> @@ -455,7 +455,7 @@ dpaa_free_mbuf(const struct qm_fd *fd)
>>>    	bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
>>>    	format = (fd->opaque & DPAA_FD_FORMAT_MASK) >>
>> DPAA_FD_FORMAT_SHIFT;
>>>    	if (unlikely(format == qm_fd_sg)) {
>>> -		struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
>>> +		struct rte_mbuf *first_seg, *cur_seg;
>>>    		struct qm_sg_entry *sgt, *sg_temp;
>>>    		void *vaddr, *sg_vaddr;
>>>    		int i = 0;
>>> @@ -469,32 +469,25 @@ dpaa_free_mbuf(const struct qm_fd *fd)
>>>    		sgt = vaddr + fd_offset;
>>>    		sg_temp = &sgt[i++];
>>>    		hw_sg_to_cpu(sg_temp);
>>> -		temp = (struct rte_mbuf *)
>>> -			((char *)vaddr - bp_info->meta_data_size);
>>>    		sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
>>>
>> 	qm_sg_entry_get64(sg_temp));
>>> -
>>>    		first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
>>>    						bp_info->meta_data_size);
>>>    		first_seg->nb_segs = 1;
>>> -		prev_seg = first_seg;
>>>    		while (i < DPAA_SGT_MAX_ENTRIES) {
>>>    			sg_temp = &sgt[i++];
>>>    			hw_sg_to_cpu(sg_temp);
>>> -			sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
>>> +			if (sg_temp->bpid != 0xFF) {
>>> +				bp_info =
>> DPAA_BPID_TO_POOL_INFO(sg_temp->bpid);
>>> +				sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
>>>
>> 	qm_sg_entry_get64(sg_temp));
>>> -			cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
>>> +				cur_seg = (struct rte_mbuf *)((char
>> *)sg_vaddr -
>>>    						      bp_info-
>>> meta_data_size);
>>> -			first_seg->nb_segs += 1;
>>> -			prev_seg->next = cur_seg;
>>> -			if (sg_temp->final) {
>>> -				cur_seg->next = NULL;
>>> -				break;
>>> +				rte_pktmbuf_free_seg(cur_seg);
>>>    			}
>>> -			prev_seg = cur_seg;
>>> +			if (sg_temp->final)
>>> +				break;
>>>    		}
>>> -
>>> -		rte_pktmbuf_free_seg(temp);
>>>    		rte_pktmbuf_free_seg(first_seg);
>>>    		return 0;
>>>    	}
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* RE: [PATCH 15/15] net/dpaa: fix buffer free in slow path
  2022-10-06  9:42       ` Ferruh Yigit
@ 2022-10-06 11:19         ` Gagandeep Singh
  0 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-10-06 11:19 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: stable



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Thursday, October 6, 2022 3:12 PM
> To: Gagandeep Singh <G.Singh@nxp.com>; dev@dpdk.org
> Cc: stable@dpdk.org
> Subject: Re: [PATCH 15/15] net/dpaa: fix buffer free in slow path
> 
> On 10/6/2022 9:51 AM, Gagandeep Singh wrote:
> > Hi,
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@amd.com>
> >> Sent: Wednesday, October 5, 2022 7:52 PM
> >> To: Gagandeep Singh <G.Singh@nxp.com>; dev@dpdk.org
> >> Cc: stable@dpdk.org
> >> Subject: Re: [PATCH 15/15] net/dpaa: fix buffer free in slow path
> >>
> >> On 9/28/2022 6:25 AM, Gagandeep Singh wrote:
> >>> Adding a check in slow path to free those buffers which are not
> >>> external.
> >>>
> >>
> >> Can you please explain what was the error before fix, what was
> >> happening when you try to free all mbufs?
> >>
> >> Also it seems previous logic was different, with 'prev_seg' etc, can
> >> you explain what/why changed there?
> >>
> > Actually, there were two issues, this function was converting all the
> > segments present in HW frame descriptor to mbuf SG list by doing while
> > on segments in FD (HW descriptor) and in the end it frees only one
> > segment by calling the API rte_pktmbuf_free_seg(), so for other segments
> memory will be leaked.
> >
> 
> ack
> 
> > Now in this change, doing the loop on each segment in FD and if the
> > segment has a valid buffer pool id (HW pool id), freeing that segment in the
> loop itself without converting to a mbuf list.
> > if we free all the buffers even those with invalid HW bpid (which will
> > only be the external buffer case), then there can be double free
> > because all the external buffer free handling is being done by the Xmit
> function.
> >
> 
> Got it, can you please give more information in the commit log as above, and
> can you please elaborate impact of possible double free, will it crash etc?
> 
Ok. I will update the commit message.

> >>> Fixes: 9124e65dd3eb ("net/dpaa: enable Tx queue taildrop")
> >>> Cc: stable@dpdk.org
> >>>
> >>> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> >>> ---
> >>>    drivers/net/dpaa/dpaa_rxtx.c | 23 ++++++++---------------
> >>>    1 file changed, 8 insertions(+), 15 deletions(-)
> >>>
> >>> diff --git a/drivers/net/dpaa/dpaa_rxtx.c
> >>> b/drivers/net/dpaa/dpaa_rxtx.c index 4d285b4f38..ce4f3d6c85 100644
> >>> --- a/drivers/net/dpaa/dpaa_rxtx.c
> >>> +++ b/drivers/net/dpaa/dpaa_rxtx.c
> >>> @@ -455,7 +455,7 @@ dpaa_free_mbuf(const struct qm_fd *fd)
> >>>    	bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
> >>>    	format = (fd->opaque & DPAA_FD_FORMAT_MASK) >>
> >> DPAA_FD_FORMAT_SHIFT;
> >>>    	if (unlikely(format == qm_fd_sg)) {
> >>> -		struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
> >>> +		struct rte_mbuf *first_seg, *cur_seg;
> >>>    		struct qm_sg_entry *sgt, *sg_temp;
> >>>    		void *vaddr, *sg_vaddr;
> >>>    		int i = 0;
> >>> @@ -469,32 +469,25 @@ dpaa_free_mbuf(const struct qm_fd *fd)
> >>>    		sgt = vaddr + fd_offset;
> >>>    		sg_temp = &sgt[i++];
> >>>    		hw_sg_to_cpu(sg_temp);
> >>> -		temp = (struct rte_mbuf *)
> >>> -			((char *)vaddr - bp_info->meta_data_size);
> >>>    		sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
> >>>  						qm_sg_entry_get64(sg_temp));
> >>> -
> >>>    		first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
> >>>    						bp_info->meta_data_size);
> >>>    		first_seg->nb_segs = 1;
> >>> -		prev_seg = first_seg;
> >>>    		while (i < DPAA_SGT_MAX_ENTRIES) {
> >>>    			sg_temp = &sgt[i++];
> >>>    			hw_sg_to_cpu(sg_temp);
> >>> -			sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
> >>> +			if (sg_temp->bpid != 0xFF) {
> >>> +				bp_info = DPAA_BPID_TO_POOL_INFO(sg_temp->bpid);
> >>> +				sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
> >>>  						qm_sg_entry_get64(sg_temp));
> >>> -			cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
> >>> +				cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
> >>>    						      bp_info->meta_data_size);
> >>> -			first_seg->nb_segs += 1;
> >>> -			prev_seg->next = cur_seg;
> >>> -			if (sg_temp->final) {
> >>> -				cur_seg->next = NULL;
> >>> -				break;
> >>> +				rte_pktmbuf_free_seg(cur_seg);
> >>>    			}
> >>> -			prev_seg = cur_seg;
> >>> +			if (sg_temp->final)
> >>> +				break;
> >>>    		}
> >>> -
> >>> -		rte_pktmbuf_free_seg(temp);
> >>>    		rte_pktmbuf_free_seg(first_seg);
> >>>    		return 0;
> >>>    	}
> >


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 02/16] net/enetfec: fix restart issue
       [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
@ 2022-10-07  3:27   ` Gagandeep Singh
  2022-10-07  3:27   ` [PATCH v2 03/16] net/enetfec: fix buffer leak issue Gagandeep Singh
                     ` (6 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-10-07  3:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Apeksha Gupta, stable, Sachin Saxena, Hemant Agrawal

From: Apeksha Gupta <apeksha.gupta@nxp.com>

The queue reset is missing from the restart sequence,
because of which IO cannot work after a device restart.

This patch fixes the issue by resetting the Rx queues on
device restart.

Fixes: b84fdd39638b ("net/enetfec: support UIO")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/enetfec/enet_ethdev.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index c938e58204..898aad1c37 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -54,6 +54,7 @@ enetfec_restart(struct rte_eth_dev *dev)
 	uint32_t rcntl = OPT_FRAME_SIZE | 0x04;
 	uint32_t ecntl = ENETFEC_ETHEREN;
 	uint32_t val;
+	int i;
 
 	/* Clear any outstanding interrupt. */
 	writel(0xffffffff, (uint8_t *)fep->hw_baseaddr_v + ENETFEC_EIR);
@@ -149,6 +150,9 @@ enetfec_restart(struct rte_eth_dev *dev)
 	/* And last, enable the transmit and receive processing */
 	rte_write32(rte_cpu_to_le_32(ecntl),
 		(uint8_t *)fep->hw_baseaddr_v + ENETFEC_ECR);
+
+	for (i = 0; i < fep->max_rx_queues; i++)
+		rte_write32(0, fep->rx_queues[i]->bd.active_reg_desc);
 	rte_delay_us(10);
 }
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 03/16] net/enetfec: fix buffer leak issue
       [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
  2022-10-07  3:27   ` [PATCH v2 02/16] net/enetfec: fix restart issue Gagandeep Singh
@ 2022-10-07  3:27   ` Gagandeep Singh
  2022-10-07  3:27   ` [PATCH v2 04/16] net/dpaa2: fix dpdmux configuration for error behaviour Gagandeep Singh
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-10-07  3:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Apeksha Gupta, stable, Sachin Saxena, Hemant Agrawal

From: Apeksha Gupta <apeksha.gupta@nxp.com>

The driver has no proper handling to free unused
allocated mbufs in case of an error, or when the Rx
processing completes, because of which the mempool
can become empty after some time.

This patch fixes the issue by moving the buffer
allocation code to the right place in the driver: a
replacement mbuf is now allocated only after the
received frame has passed its error checks.

Fixes: ecae71571b0d ("net/enetfec: support Rx/Tx")
Cc: stable@dpdk.org

Signed-off-by: Apeksha Gupta <apeksha.gupta@nxp.com>
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/enetfec/enet_rxtx.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/drivers/net/enetfec/enet_rxtx.c b/drivers/net/enetfec/enet_rxtx.c
index 49b326315d..0aea8b240d 100644
--- a/drivers/net/enetfec/enet_rxtx.c
+++ b/drivers/net/enetfec/enet_rxtx.c
@@ -39,11 +39,6 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
 		if (pkt_received >= nb_pkts)
 			break;
 
-		new_mbuf = rte_pktmbuf_alloc(pool);
-		if (unlikely(new_mbuf == NULL)) {
-			stats->rx_nombuf++;
-			break;
-		}
 		/* Check for errors. */
 		status ^= RX_BD_LAST;
 		if (status & (RX_BD_LG | RX_BD_SH | RX_BD_NO |
@@ -72,6 +67,12 @@ enetfec_recv_pkts(void *rxq1, struct rte_mbuf **rx_pkts,
 			goto rx_processing_done;
 		}
 
+		new_mbuf = rte_pktmbuf_alloc(pool);
+		if (unlikely(new_mbuf == NULL)) {
+			stats->rx_nombuf++;
+			break;
+		}
+
 		/* Process the incoming frame. */
 		stats->ipackets++;
 		pkt_len = rte_le_to_cpu_16(rte_read16(&bdp->bd_datlen));
@@ -193,7 +194,16 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			tx_st = 0;
 			break;
 		}
+
+		mbuf = *(tx_pkts);
+		if (mbuf->nb_segs > 1) {
+			ENETFEC_DP_LOG(DEBUG, "SG not supported");
+			return pkt_transmitted;
+		}
+
+		tx_pkts++;
 		bdp = txq->bd.cur;
+
 		/* First clean the ring */
 		index = enet_get_bd_index(bdp, &txq->bd);
 		status = rte_le_to_cpu_16(rte_read16(&bdp->bd_sc));
@@ -207,9 +217,6 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			txq->tx_mbuf[index] = NULL;
 		}
 
-		mbuf = *(tx_pkts);
-		tx_pkts++;
-
 		/* Fill in a Tx ring entry */
 		last_bdp = bdp;
 		status &= ~TX_BD_STATS;
@@ -219,10 +226,6 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		stats->opackets++;
 		stats->obytes += buflen;
 
-		if (mbuf->nb_segs > 1) {
-			ENETFEC_DP_LOG(DEBUG, "SG not supported");
-			return -1;
-		}
 		status |= (TX_BD_LAST);
 		data = rte_pktmbuf_mtod(mbuf, void *);
 		for (i = 0; i <= buflen; i += RTE_CACHE_LINE_SIZE)
@@ -268,5 +271,5 @@ enetfec_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 */
 		txq->bd.cur = bdp;
 	}
-	return nb_pkts;
+	return pkt_transmitted;
 }
-- 
2.25.1
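
The Rx part of this fix follows a general rule for burst receive
functions: take a replacement buffer from the mempool only after the
received frame has passed its error checks, otherwise every errored
frame costs one allocation that is never returned. Below is a minimal,
self-contained C sketch of that control flow; the types and helpers
(pool_alloc(), desc_has_error()) are simplified stand-ins for the real
enetfec structures, not driver code:

	#include <stdbool.h>
	#include <stddef.h>

	struct mbuf { int data; };

	/* Stand-ins for the real mempool and descriptor operations. */
	static struct mbuf *pool_alloc(void) { static struct mbuf m; return &m; }
	static bool desc_has_error(int idx) { return (idx % 3) == 0; }

	static int rx_burst(struct mbuf **rx_pkts, int nb_pkts)
	{
		int received = 0;

		for (int idx = 0; received < nb_pkts; idx++) {
			if (desc_has_error(idx))
				continue;	/* bad frame: recycle descriptor, no alloc */

			/* Frame is good: only now take a replacement buffer. */
			struct mbuf *new_mbuf = pool_alloc();
			if (new_mbuf == NULL)
				break;		/* count as rx_nombuf and stop */
			rx_pkts[received++] = new_mbuf;
		}
		return received;
	}

	int main(void)
	{
		struct mbuf *pkts[4];

		return rx_burst(pkts, 4) == 4 ? 0 : 1;
	}

The Tx side applies the same early-validation idea: the SG check is
moved before the ring is touched, and the function now returns the
number of packets actually transmitted rather than nb_pkts or -1.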


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 04/16] net/dpaa2: fix dpdmux configuration for error behaviour
       [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
  2022-10-07  3:27   ` [PATCH v2 02/16] net/enetfec: fix restart issue Gagandeep Singh
  2022-10-07  3:27   ` [PATCH v2 03/16] net/enetfec: fix buffer leak issue Gagandeep Singh
@ 2022-10-07  3:27   ` Gagandeep Singh
  2022-10-07  3:27   ` [PATCH v2 05/16] net/dpaa2: check free enqueue descriptors before Tx Gagandeep Singh
                     ` (4 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-10-07  3:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Vanshika Shukla, stable, Hemant Agrawal

From: Vanshika Shukla <vanshika.shukla@nxp.com>

The driver is passing the wrong interface ID while setting the
error behaviour.

This patch fixes the issue by passing the correct MAC interface
index value to the API.

Fixes: 3d43972b1b42 ("net/dpaa2: do not drop parse error packets by dpdmux")
Cc: stable@dpdk.org

Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h | 3 +++
 drivers/net/dpaa2/dpaa2_mux.c    | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index f69df95253..32ae762e4a 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -66,6 +66,9 @@
 /* Tx confirmation enabled */
 #define DPAA2_TX_CONF_ENABLE	0x06
 
+/* DPDMUX index for DPMAC */
+#define DPAA2_DPDMUX_DPMAC_IDX 0
+
 /* HW loopback the egress traffic to self ingress*/
 #define DPAA2_TX_MAC_LOOPBACK_MODE 0x20
 
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 3289f388e1..7456f43f42 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -336,7 +336,7 @@ dpaa2_create_dpdmux_device(int vdev_fd __rte_unused,
 
 		ret = dpdmux_if_set_errors_behavior(&dpdmux_dev->dpdmux,
 				CMD_PRI_LOW,
-				dpdmux_dev->token, dpdmux_id,
+				dpdmux_dev->token, DPAA2_DPDMUX_DPMAC_IDX,
 				&mux_err_cfg);
 		if (ret) {
 			DPAA2_PMD_ERR("dpdmux_if_set_errors_behavior %s err %d",
-- 
2.25.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 05/16] net/dpaa2: check free enqueue descriptors before Tx
       [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
                     ` (2 preceding siblings ...)
  2022-10-07  3:27   ` [PATCH v2 04/16] net/dpaa2: fix dpdmux configuration for error behaviour Gagandeep Singh
@ 2022-10-07  3:27   ` Gagandeep Singh
  2022-10-07  3:27   ` [PATCH v2 08/16] net/dpaa2: fix buffer free on transmit SG packets Gagandeep Singh
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-10-07  3:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Brick Yang, stable, Rohit Raj, Hemant Agrawal

From: Brick Yang <brick.yang@nxp.com>

Check whether free enqueue descriptors exist before enqueuing a Tx
packet. Also try to reclaim enqueue descriptors in case none are
free.

Fixes: ed1cdbed6a15 ("net/dpaa2: support multiple Tx queues enqueue for ordered")
Cc: stable@dpdk.org

Signed-off-by: Brick Yang <brick.yang@nxp.com>
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/event/dpaa2/dpaa2_eventdev.c |  8 ++---
 drivers/net/dpaa2/dpaa2_rxtx.c       | 50 +++++++++++++++++++---------
 2 files changed, 38 insertions(+), 20 deletions(-)

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index f499d0d015..fa1a1ade80 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017,2019-2021 NXP
+ * Copyright 2017,2019-2022 NXP
  */
 
 #include <assert.h>
@@ -176,7 +176,7 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 				if (retry_count > DPAA2_EV_TX_RETRY_COUNT) {
 					num_tx += loop;
 					nb_events -= loop;
-					return num_tx + loop;
+					return num_tx;
 				}
 			} else {
 				loop += ret;
@@ -1016,9 +1016,7 @@ dpaa2_eventdev_txa_enqueue(void *port,
 		txq[i] = rte_eth_devices[m[i]->port].data->tx_queues[qid];
 	}
 
-	dpaa2_dev_tx_multi_txq_ordered(txq, m, nb_events);
-
-	return nb_events;
+	return dpaa2_dev_tx_multi_txq_ordered(txq, m, nb_events);
 }
 
 static struct eventdev_ops dpaa2_eventdev_ops = {
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 9436a95ac8..571ea6d16d 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1525,7 +1525,7 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 	uint32_t loop, retry_count;
 	int32_t ret;
 	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
-	uint32_t frames_to_send;
+	uint32_t frames_to_send, num_free_eq_desc = 0;
 	struct rte_mempool *mp;
 	struct qbman_eq_desc eqdesc[MAX_TX_RING_SLOTS];
 	struct dpaa2_queue *dpaa2_q[MAX_TX_RING_SLOTS];
@@ -1547,16 +1547,44 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	for (loop = 0; loop < nb_pkts; loop++) {
+	frames_to_send = (nb_pkts > dpaa2_eqcr_size) ?
+		dpaa2_eqcr_size : nb_pkts;
+
+	for (loop = 0; loop < frames_to_send; loop++) {
 		dpaa2_q[loop] = (struct dpaa2_queue *)queue[loop];
 		eth_data = dpaa2_q[loop]->eth_data;
 		priv = eth_data->dev_private;
+		if (!priv->en_loose_ordered) {
+			if (*dpaa2_seqn(*bufs) & DPAA2_ENQUEUE_FLAG_ORP) {
+				if (!num_free_eq_desc) {
+					num_free_eq_desc = dpaa2_free_eq_descriptors();
+					if (!num_free_eq_desc)
+						goto send_frames;
+				}
+				num_free_eq_desc--;
+			}
+		}
+
+		DPAA2_PMD_DP_DEBUG("===> eth_data =%p, fqid =%d\n",
+				   eth_data, dpaa2_q[loop]->fqid);
+
+		/* Check if the queue is congested */
+		retry_count = 0;
+		while (qbman_result_SCN_state(dpaa2_q[loop]->cscn)) {
+			retry_count++;
+			/* Retry for some time before giving up */
+			if (retry_count > CONG_RETRY_COUNT)
+				goto send_frames;
+		}
+
+		/* Prepare enqueue descriptor */
 		qbman_eq_desc_clear(&eqdesc[loop]);
+
 		if (*dpaa2_seqn(*bufs) && priv->en_ordered) {
 			order_sendq = (struct dpaa2_queue *)priv->tx_vq[0];
 			dpaa2_set_enqueue_descriptor(order_sendq,
-							     (*bufs),
-							     &eqdesc[loop]);
+						     (*bufs),
+						     &eqdesc[loop]);
 		} else {
 			qbman_eq_desc_set_no_orp(&eqdesc[loop],
 							 DPAA2_EQ_RESP_ERR_FQ);
@@ -1564,14 +1592,6 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 						     dpaa2_q[loop]->fqid);
 		}
 
-		retry_count = 0;
-		while (qbman_result_SCN_state(dpaa2_q[loop]->cscn)) {
-			retry_count++;
-			/* Retry for some time before giving up */
-			if (retry_count > CONG_RETRY_COUNT)
-				goto send_frames;
-		}
-
 		if (likely(RTE_MBUF_DIRECT(*bufs))) {
 			mp = (*bufs)->pool;
 			/* Check the basic scenario and set
@@ -1591,7 +1611,6 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 					&fd_arr[loop],
 					mempool_to_bpid(mp));
 				bufs++;
-				dpaa2_q[loop]++;
 				continue;
 			}
 		} else {
@@ -1637,18 +1656,19 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 		}
 
 		bufs++;
-		dpaa2_q[loop]++;
 	}
 
 send_frames:
 	frames_to_send = loop;
 	loop = 0;
+	retry_count = 0;
 	while (loop < frames_to_send) {
 		ret = qbman_swp_enqueue_multiple_desc(swp, &eqdesc[loop],
 				&fd_arr[loop],
 				frames_to_send - loop);
 		if (likely(ret > 0)) {
 			loop += ret;
+			retry_count = 0;
 		} else {
 			retry_count++;
 			if (retry_count > DPAA2_MAX_TX_RETRY_COUNT)
@@ -1834,7 +1854,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		retry_count = 0;
 		while (i < loop) {
 			ret = qbman_swp_enqueue_multiple_desc(swp,
-				       &eqdesc[loop], &fd_arr[i], loop - i);
+				       &eqdesc[i], &fd_arr[i], loop - i);
 			if (unlikely(ret < 0)) {
 				retry_count++;
 				if (retry_count > DPAA2_MAX_TX_RETRY_COUNT)
-- 
2.25.1
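
Two patterns recur in this fix and can be isolated: every wait on a
hardware condition is bounded by a retry counter, and when a resource
check fails mid-burst, the function falls through to sending only the
frames already prepared and returns that count. A hedged,
self-contained sketch; queue_congested() and free_eq_descriptors()
are stand-ins for the QBMAN queries, not the real API:

	#include <stdbool.h>

	#define CONG_RETRY_COUNT 512	/* illustrative bound */

	static bool queue_congested(void) { return false; }
	static int free_eq_descriptors(void) { return 8; }

	static int tx_burst(int nb_pkts, bool strict_order)
	{
		int loop, retry_count, num_free_eq_desc = 0;

		for (loop = 0; loop < nb_pkts; loop++) {
			if (strict_order) {
				/* Reserve an enqueue descriptor up front. */
				if (!num_free_eq_desc) {
					num_free_eq_desc = free_eq_descriptors();
					if (!num_free_eq_desc)
						goto send_frames;
				}
				num_free_eq_desc--;
			}

			/* Bounded wait on congestion, never an endless spin. */
			retry_count = 0;
			while (queue_congested()) {
				if (++retry_count > CONG_RETRY_COUNT)
					goto send_frames;
			}
			/* ... prepare the enqueue descriptor and FD here ... */
		}

	send_frames:
		/* Enqueue only the 'loop' frames actually prepared and
		 * return that count, so callers see the true Tx count. */
		return loop;
	}

	int main(void)
	{
		return tx_burst(8, true) == 8 ? 0 : 1;
	}

The eventdev change follows from the same principle: returning
num_tx + loop on the retry-exhausted path double-counted the frames
of the last iteration, so the function now returns num_tx alone.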


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 08/16] net/dpaa2: fix buffer free on transmit SG packets
       [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
                     ` (3 preceding siblings ...)
  2022-10-07  3:27   ` [PATCH v2 05/16] net/dpaa2: check free enqueue descriptors before Tx Gagandeep Singh
@ 2022-10-07  3:27   ` Gagandeep Singh
  2022-10-07  3:27   ` [PATCH v2 10/16] net/dpaa: fix Jumbo packet Rx in case of VSP Gagandeep Singh
                     ` (2 subsequent siblings)
  7 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-10-07  3:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Gagandeep Singh, stable, Hemant Agrawal

When using an SG list to Tx with external and direct buffers,
HW frees the direct buffers and the driver frees the external
buffers.

Software scans the complete SG mbuf list to find the external
buffers to free, but this is wrong, as the hardware can free
the direct buffers present in the list, and the same buffers
can be re-allocated for other purposes, in a multi-threaded or
high-speed traffic environment, with new data in them. So when
software scans the SG mbuf list, a direct buffer in that list
that was already freed by the hardware can yield a wrong next
pointer value, which can cause mempool corruption or a memory
leak.

In this patch, instead of relying on the user-given SG mbuf
list, we store the buffers in an internal list, which the
driver scans after transmit to free the non-direct buffers.

This patch also fixes two more memory leak issues.

The driver frees the complete SG list by checking the external
buffer flag in the first segment only, but an external buffer
can be attached to any of the segments. Because of this, the
driver can either double-free buffers or leak memory.

In the case of indirect buffers, the driver modifies the
original buffer list to free the indirect buffers, but this
original buffer list is still being used after the packets are
transmitted, for the software buffer cleanup. This can cause a
buffer leak.

Fixes: 6bfbafe18d15 ("net/dpaa2: support external buffers in Tx")
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa2/dpaa2_ethdev.h |   9 +++
 drivers/net/dpaa2/dpaa2_rxtx.c   | 111 +++++++++++++++++++++++--------
 2 files changed, 92 insertions(+), 28 deletions(-)

diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 872dced517..c88c8146dc 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -129,6 +129,15 @@ extern struct rte_mempool *dpaa2_tx_sg_pool;
 #define DPAA2_POOL_SIZE 2048
 /* SG pool cache size */
 #define DPAA2_POOL_CACHE_SIZE 256
+/* structure to free external and indirect
+ * buffers.
+ */
+struct sw_buf_free {
+	/* To which packet this segment belongs */
+	uint16_t pkt_id;
+	/* The actual segment */
+	struct rte_mbuf *seg;
+};
 
 /* enable timestamp in mbuf*/
 extern bool dpaa2_enable_ts[];
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 53e06b3884..b0ee58fc9f 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -403,9 +403,12 @@ eth_fd_to_mbuf(const struct qbman_fd *fd,
 static int __rte_noinline __rte_hot
 eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 		  struct qbman_fd *fd,
+		  struct sw_buf_free *free_buf,
+		  uint32_t *free_count,
+		  uint32_t pkt_id,
 		  uint16_t bpid)
 {
-	struct rte_mbuf *cur_seg = mbuf, *prev_seg, *mi, *temp;
+	struct rte_mbuf *cur_seg = mbuf, *mi, *temp;
 	struct qbman_sge *sgt, *sge = NULL;
 	int i, offset = 0;
 
@@ -486,10 +489,11 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 #endif
 				}
 			}
-			cur_seg = cur_seg->next;
 		} else if (RTE_MBUF_HAS_EXTBUF(cur_seg)) {
+			free_buf[*free_count].seg = cur_seg;
+			free_buf[*free_count].pkt_id = pkt_id;
+			++*free_count;
 			DPAA2_SET_FLE_IVP(sge);
-			cur_seg = cur_seg->next;
 		} else {
 			/* Get owner MBUF from indirect buffer */
 			mi = rte_mbuf_from_indirect(cur_seg);
@@ -503,11 +507,11 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 						   mempool_to_bpid(mi->pool));
 				rte_mbuf_refcnt_update(mi, 1);
 			}
-			prev_seg = cur_seg;
-			cur_seg = cur_seg->next;
-			prev_seg->next = NULL;
-			rte_pktmbuf_free(prev_seg);
+			free_buf[*free_count].seg = cur_seg;
+			free_buf[*free_count].pkt_id = pkt_id;
+			++*free_count;
 		}
+		cur_seg = cur_seg->next;
 	}
 	DPAA2_SG_SET_FINAL(sge, true);
 	return 0;
@@ -515,11 +519,19 @@ eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 
 static void
 eth_mbuf_to_fd(struct rte_mbuf *mbuf,
-	       struct qbman_fd *fd, uint16_t bpid) __rte_unused;
+	       struct qbman_fd *fd,
+	       struct sw_buf_free *buf_to_free,
+	       uint32_t *free_count,
+	       uint32_t pkt_id,
+	       uint16_t bpid) __rte_unused;
 
 static void __rte_noinline __rte_hot
 eth_mbuf_to_fd(struct rte_mbuf *mbuf,
-	       struct qbman_fd *fd, uint16_t bpid)
+	       struct qbman_fd *fd,
+	       struct sw_buf_free *buf_to_free,
+	       uint32_t *free_count,
+	       uint32_t pkt_id,
+	       uint16_t bpid)
 {
 	DPAA2_MBUF_TO_CONTIG_FD(mbuf, fd, bpid);
 
@@ -540,6 +552,9 @@ eth_mbuf_to_fd(struct rte_mbuf *mbuf,
 				(void **)&mbuf, 1, 0);
 #endif
 	} else if (RTE_MBUF_HAS_EXTBUF(mbuf)) {
+		buf_to_free[*free_count].seg = mbuf;
+		buf_to_free[*free_count].pkt_id = pkt_id;
+		++*free_count;
 		DPAA2_SET_FD_IVP(fd);
 	} else {
 		struct rte_mbuf *mi;
@@ -549,7 +564,10 @@ eth_mbuf_to_fd(struct rte_mbuf *mbuf,
 			DPAA2_SET_FD_IVP(fd);
 		else
 			rte_mbuf_refcnt_update(mi, 1);
-		rte_pktmbuf_free(mbuf);
+
+		buf_to_free[*free_count].seg = mbuf;
+		buf_to_free[*free_count].pkt_id = pkt_id;
+		++*free_count;
 	}
 }
 
@@ -1226,7 +1244,8 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct rte_eth_dev_data *eth_data = dpaa2_q->eth_data;
 	struct dpaa2_dev_priv *priv = eth_data->dev_private;
 	uint32_t flags[MAX_TX_RING_SLOTS] = {0};
-	struct rte_mbuf **orig_bufs = bufs;
+	struct sw_buf_free buf_to_free[DPAA2_MAX_SGS * dpaa2_dqrr_size];
+	uint32_t free_count = 0;
 
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
@@ -1324,11 +1343,17 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 					mp = (*bufs)->pool;
 					if (eth_mbuf_to_sg_fd(*bufs,
 							      &fd_arr[loop],
+							      buf_to_free,
+							      &free_count,
+							      loop,
 							      mempool_to_bpid(mp)))
 						goto send_n_return;
 				} else {
 					eth_mbuf_to_fd(*bufs,
-						       &fd_arr[loop], 0);
+							&fd_arr[loop],
+							buf_to_free,
+							&free_count,
+							loop, 0);
 				}
 				bufs++;
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1373,11 +1398,17 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				if (unlikely((*bufs)->nb_segs > 1)) {
 					if (eth_mbuf_to_sg_fd(*bufs,
 							&fd_arr[loop],
+							buf_to_free,
+							&free_count,
+							loop,
 							bpid))
 						goto send_n_return;
 				} else {
 					eth_mbuf_to_fd(*bufs,
-						       &fd_arr[loop], bpid);
+							&fd_arr[loop],
+							buf_to_free,
+							&free_count,
+							loop, bpid);
 				}
 			}
 #ifdef RTE_LIBRTE_IEEE1588
@@ -1410,12 +1441,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}
 	dpaa2_q->tx_pkts += num_tx;
 
-	loop = 0;
-	while (loop < num_tx) {
-		if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs)))
-			rte_pktmbuf_free(*orig_bufs);
-		orig_bufs++;
-		loop++;
+	for (loop = 0; loop < free_count; loop++) {
+		if (buf_to_free[loop].pkt_id < num_tx)
+			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
 	return num_tx;
@@ -1445,12 +1473,9 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 skip_tx:
 	dpaa2_q->tx_pkts += num_tx;
 
-	loop = 0;
-	while (loop < num_tx) {
-		if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs)))
-			rte_pktmbuf_free(*orig_bufs);
-		orig_bufs++;
-		loop++;
+	for (loop = 0; loop < free_count; loop++) {
+		if (buf_to_free[loop].pkt_id < num_tx)
+			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
 	return num_tx;
@@ -1523,7 +1548,7 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 		struct rte_mbuf **bufs, uint16_t nb_pkts)
 {
 	/* Function to transmit the frames to multiple queues respectively.*/
-	uint32_t loop, retry_count;
+	uint32_t loop, i, retry_count;
 	int32_t ret;
 	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
 	uint32_t frames_to_send, num_free_eq_desc = 0;
@@ -1536,6 +1561,8 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 	struct rte_eth_dev_data *eth_data;
 	struct dpaa2_dev_priv *priv;
 	struct dpaa2_queue *order_sendq;
+	struct sw_buf_free buf_to_free[DPAA2_MAX_SGS * dpaa2_dqrr_size];
+	uint32_t free_count = 0;
 
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
@@ -1647,11 +1674,17 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 			if (unlikely((*bufs)->nb_segs > 1)) {
 				if (eth_mbuf_to_sg_fd(*bufs,
 						      &fd_arr[loop],
+						      buf_to_free,
+						      &free_count,
+						      loop,
 						      bpid))
 					goto send_frames;
 			} else {
 				eth_mbuf_to_fd(*bufs,
-					       &fd_arr[loop], bpid);
+						&fd_arr[loop],
+						buf_to_free,
+						&free_count,
+						loop, bpid);
 			}
 		}
 
@@ -1676,6 +1709,10 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 		}
 	}
 
+	for (i = 0; i < free_count; i++) {
+		if (buf_to_free[i].pkt_id < loop)
+			rte_pktmbuf_free_seg(buf_to_free[i].seg);
+	}
 	return loop;
 }
 
@@ -1698,6 +1735,8 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	int32_t ret;
 	uint16_t num_tx = 0;
 	uint16_t bpid;
+	struct sw_buf_free buf_to_free[DPAA2_MAX_SGS * dpaa2_dqrr_size];
+	uint32_t free_count = 0;
 
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
@@ -1810,11 +1849,17 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 				if (unlikely((*bufs)->nb_segs > 1)) {
 					if (eth_mbuf_to_sg_fd(*bufs,
 							      &fd_arr[loop],
+							      buf_to_free,
+							      &free_count,
+							      loop,
 							      bpid))
 						goto send_n_return;
 				} else {
 					eth_mbuf_to_fd(*bufs,
-						       &fd_arr[loop], bpid);
+							&fd_arr[loop],
+							buf_to_free,
+							&free_count,
+							loop, bpid);
 				}
 			}
 			bufs++;
@@ -1843,6 +1888,11 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		nb_pkts -= loop;
 	}
 	dpaa2_q->tx_pkts += num_tx;
+	for (loop = 0; loop < free_count; loop++) {
+		if (buf_to_free[loop].pkt_id < num_tx)
+			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
+	}
+
 	return num_tx;
 
 send_n_return:
@@ -1867,6 +1917,11 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}
 skip_tx:
 	dpaa2_q->tx_pkts += num_tx;
+	for (loop = 0; loop < free_count; loop++) {
+		if (buf_to_free[loop].pkt_id < num_tx)
+			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
+	}
+
 	return num_tx;
 }
 
-- 
2.25.1
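
The bookkeeping this patch introduces can be shown in isolation: each
segment that software must free is recorded together with the index
of the packet it belongs to, and after the enqueue only the entries
whose packet was actually accepted by hardware are released. A
simplified, runnable sketch; struct mbuf and free_seg() stand in for
rte_mbuf and rte_pktmbuf_free_seg():

	#include <stdint.h>
	#include <stdio.h>

	struct mbuf { int id; };

	struct sw_buf_free {
		uint16_t pkt_id;	/* which packet this segment belongs to */
		struct mbuf *seg;	/* the segment software must free */
	};

	static void free_seg(struct mbuf *m) { printf("freeing seg %d\n", m->id); }

	/* 'num_tx' packets were accepted by hardware: free only the
	 * recorded segments of those packets. Segments of unsent packets
	 * stay with the caller, which still owns those mbufs. */
	static void flush_sw_frees(const struct sw_buf_free *tbl, uint32_t n,
				   uint16_t num_tx)
	{
		for (uint32_t i = 0; i < n; i++)
			if (tbl[i].pkt_id < num_tx)
				free_seg(tbl[i].seg);
	}

	int main(void)
	{
		struct mbuf a = { 0 }, b = { 1 };
		struct sw_buf_free tbl[] = { { 0, &a }, { 1, &b } };

		flush_sw_frees(tbl, 2, 1);	/* only packet 0 sent: frees seg 0 */
		return 0;
	}

Sizing the table as DPAA2_MAX_SGS * dpaa2_dqrr_size, as the patch
does, covers the worst case of every segment of every frame in the
burst needing a software free.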


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 10/16] net/dpaa: fix Jumbo packet Rx in case of VSP
       [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
                     ` (4 preceding siblings ...)
  2022-10-07  3:27   ` [PATCH v2 08/16] net/dpaa2: fix buffer free on transmit SG packets Gagandeep Singh
@ 2022-10-07  3:27   ` Gagandeep Singh
  2022-10-07  3:27   ` [PATCH v2 15/16] net/dpaa: fix buffer free on transmit SG packets Gagandeep Singh
  2022-10-07  3:27   ` [PATCH v2 16/16] net/dpaa: fix buffer free in slow path Gagandeep Singh
  7 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-10-07  3:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Rohit Raj, stable, Hemant Agrawal

From: Rohit Raj <rohit.raj@nxp.com>

For packets larger than 2K bytes, segmented packets were being
received in DPDK even if the mbuf size was greater than the
packet length. This is due to the VSP configuration.

This patch fixes the issue by configuring the VSP according to
the mbuf size configured during mempool creation.

Fixes: e4abd4ff183c ("net/dpaa: support virtual storage profile")
Cc: stable@dpdk.org

Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.c |  5 ++---
 drivers/net/dpaa/dpaa_flow.c   | 13 ++++++-------
 drivers/net/dpaa/dpaa_flow.h   |  5 +++--
 3 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index c4aac424b4..3b4d6575c9 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -989,8 +989,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	} else {
 		DPAA_PMD_WARN("The requested maximum Rx packet size (%u) is"
 		     " larger than a single mbuf (%u) and scattered"
-		     " mode has not been requested",
-		     max_rx_pktlen, buffsz - RTE_PKTMBUF_HEADROOM);
+		     " mode has not been requested", max_rx_pktlen, buffsz);
 	}
 
 	dpaa_intf->bp_info = DPAA_MEMPOOL_TO_POOL_INFO(mp);
@@ -1005,7 +1004,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		if (vsp_id >= 0) {
 			ret = dpaa_port_vsp_update(dpaa_intf, fmc_q, vsp_id,
 					DPAA_MEMPOOL_TO_POOL_INFO(mp)->bpid,
-					fif);
+					fif, buffsz + RTE_PKTMBUF_HEADROOM);
 			if (ret) {
 				DPAA_PMD_ERR("dpaa_port_vsp_update failed");
 				return ret;
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index 1ccd036027..690ba6bcb3 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -939,7 +939,7 @@ int dpaa_fm_term(void)
 
 static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
 		uint8_t vsp_id, t_handle fman_handle,
-		struct fman_if *fif)
+		struct fman_if *fif, u32 mbuf_data_room_size)
 {
 	t_fm_vsp_params vsp_params;
 	t_fm_buffer_prefix_content buf_prefix_cont;
@@ -976,10 +976,8 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
 		return -1;
 	}
 	vsp_params.ext_buf_pools.num_of_pools_used = 1;
-	vsp_params.ext_buf_pools.ext_buf_pool[0].id =
-		dpaa_intf->vsp_bpid[vsp_id];
-	vsp_params.ext_buf_pools.ext_buf_pool[0].size =
-		RTE_MBUF_DEFAULT_BUF_SIZE;
+	vsp_params.ext_buf_pools.ext_buf_pool[0].id = dpaa_intf->vsp_bpid[vsp_id];
+	vsp_params.ext_buf_pools.ext_buf_pool[0].size = mbuf_data_room_size;
 
 	dpaa_intf->vsp_handle[vsp_id] = fm_vsp_config(&vsp_params);
 	if (!dpaa_intf->vsp_handle[vsp_id]) {
@@ -1023,7 +1021,7 @@ static int dpaa_port_vsp_configure(struct dpaa_if *dpaa_intf,
 
 int dpaa_port_vsp_update(struct dpaa_if *dpaa_intf,
 		bool fmc_mode, uint8_t vsp_id, uint32_t bpid,
-		struct fman_if *fif)
+		struct fman_if *fif, u32 mbuf_data_room_size)
 {
 	int ret = 0;
 	t_handle fman_handle;
@@ -1054,7 +1052,8 @@ int dpaa_port_vsp_update(struct dpaa_if *dpaa_intf,
 
 	dpaa_intf->vsp_bpid[vsp_id] = bpid;
 
-	return dpaa_port_vsp_configure(dpaa_intf, vsp_id, fman_handle, fif);
+	return dpaa_port_vsp_configure(dpaa_intf, vsp_id, fman_handle, fif,
+				       mbuf_data_room_size);
 }
 
 int dpaa_port_vsp_cleanup(struct dpaa_if *dpaa_intf, struct fman_if *fif)
diff --git a/drivers/net/dpaa/dpaa_flow.h b/drivers/net/dpaa/dpaa_flow.h
index f5e131acfa..4742b8dd0a 100644
--- a/drivers/net/dpaa/dpaa_flow.h
+++ b/drivers/net/dpaa/dpaa_flow.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017,2019 NXP
+ * Copyright 2017,2019,2022 NXP
  */
 
 #ifndef __DPAA_FLOW_H__
@@ -11,7 +11,8 @@ int dpaa_fm_config(struct rte_eth_dev *dev, uint64_t req_dist_set);
 int dpaa_fm_deconfig(struct dpaa_if *dpaa_intf, struct fman_if *fif);
 void dpaa_write_fm_config_to_file(void);
 int dpaa_port_vsp_update(struct dpaa_if *dpaa_intf,
-	bool fmc_mode, uint8_t vsp_id, uint32_t bpid, struct fman_if *fif);
+	bool fmc_mode, uint8_t vsp_id, uint32_t bpid, struct fman_if *fif,
+	u32 mbuf_data_room_size);
 int dpaa_port_vsp_cleanup(struct dpaa_if *dpaa_intf, struct fman_if *fif);
 int dpaa_port_fmc_init(struct fman_if *fif,
 		       uint32_t *fqids, int8_t *vspids, int max_nb_rxq);
-- 
2.25.1
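
The size relationship behind the fix, as a sketch. Assuming buffsz is
derived in the Rx queue setup as the mbuf data room minus the
headroom, the VSP external pool must be sized to the full buffer,
headroom included, rather than to the compile-time
RTE_MBUF_DEFAULT_BUF_SIZE:

	#include <stdint.h>

	#define PKTMBUF_HEADROOM 128	/* DPDK's default RTE_PKTMBUF_HEADROOM */

	/* Stand-in for rte_pktmbuf_data_room_size(mp). */
	static uint16_t data_room_size(void) { return 2176; }

	int main(void)
	{
		uint16_t buffsz = data_room_size() - PKTMBUF_HEADROOM;
		uint32_t vsp_pool_size = buffsz + PKTMBUF_HEADROOM;

		/* The VSP pool size again covers the whole buffer, so frames
		 * up to buffsz bytes arrive unsegmented regardless of the
		 * mempool's element size. */
		return vsp_pool_size == data_room_size() ? 0 : 1;
	}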


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 15/16] net/dpaa: fix buffer free on transmit SG packets
       [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
                     ` (5 preceding siblings ...)
  2022-10-07  3:27   ` [PATCH v2 10/16] net/dpaa: fix Jumbo packet Rx in case of VSP Gagandeep Singh
@ 2022-10-07  3:27   ` Gagandeep Singh
  2022-10-07  3:27   ` [PATCH v2 16/16] net/dpaa: fix buffer free in slow path Gagandeep Singh
  7 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-10-07  3:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Gagandeep Singh, stable, Hemant Agrawal

When using an SG list to Tx with external and direct buffers,
HW frees the direct buffers and the driver frees the external
buffers.

Software scans the complete SG mbuf list to find the external
buffers to free, but this is wrong, as the hardware can free
the direct buffers present in the list, and the same buffers
can be re-allocated for other purposes, in a multi-threaded or
high-speed traffic environment, with new data in them. So when
software scans the SG mbuf list, a direct buffer in that list
that was already freed by the hardware can yield a wrong next
pointer value, which can cause mempool corruption or a memory
leak.

In this patch, instead of relying on the user-given SG mbuf
list, we store the buffers in an internal list, which the
driver scans after transmit to free the non-direct buffers.

This patch also fixes the following issues.

The driver frees the complete SG list by checking the external
buffer flag in the first segment only, but an external buffer
can be attached to any of the segments. Because of this, the
driver can either double-free buffers or leak memory.

In the case of indirect buffers, the driver modifies the
original buffer list to free the indirect buffers, but this
original buffer list is still being used by the driver, even
after the packets are transmitted, for the non-direct buffer
cleanup. This can cause a buffer leak.

Fixes: f191d5abda54 ("net/dpaa: support external buffers in Tx")
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa/dpaa_ethdev.h | 10 ++++++
 drivers/net/dpaa/dpaa_rxtx.c   | 61 ++++++++++++++++++++++------------
 2 files changed, 49 insertions(+), 22 deletions(-)

diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index f9c0554530..502c1c88b8 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -112,6 +112,16 @@
 
 extern struct rte_mempool *dpaa_tx_sg_pool;
 
+/* structure to free external and indirect
+ * buffers.
+ */
+struct dpaa_sw_buf_free {
+	/* To which packet this segment belongs */
+	uint16_t pkt_id;
+	/* The actual segment */
+	struct rte_mbuf *seg;
+};
+
 /* Each network interface is represented by one of these */
 struct dpaa_if {
 	int valid;
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index e23206bf5c..4d285b4f38 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -803,9 +803,12 @@ uint16_t dpaa_eth_queue_rx(void *q,
 
 static int
 dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
-		struct qm_fd *fd)
+		struct qm_fd *fd,
+		struct dpaa_sw_buf_free *free_buf,
+		uint32_t *free_count,
+		uint32_t pkt_id)
 {
-	struct rte_mbuf *cur_seg = mbuf, *prev_seg = NULL;
+	struct rte_mbuf *cur_seg = mbuf;
 	struct rte_mbuf *temp, *mi;
 	struct qm_sg_entry *sg_temp, *sgt;
 	int i = 0;
@@ -869,10 +872,11 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 				sg_temp->bpid =
 					DPAA_MEMPOOL_TO_BPID(cur_seg->pool);
 			}
-			cur_seg = cur_seg->next;
 		} else if (RTE_MBUF_HAS_EXTBUF(cur_seg)) {
+			free_buf[*free_count].seg = cur_seg;
+			free_buf[*free_count].pkt_id = pkt_id;
+			++*free_count;
 			sg_temp->bpid = 0xff;
-			cur_seg = cur_seg->next;
 		} else {
 			/* Get owner MBUF from indirect buffer */
 			mi = rte_mbuf_from_indirect(cur_seg);
@@ -885,11 +889,11 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 				sg_temp->bpid = DPAA_MEMPOOL_TO_BPID(mi->pool);
 				rte_mbuf_refcnt_update(mi, 1);
 			}
-			prev_seg = cur_seg;
-			cur_seg = cur_seg->next;
-			prev_seg->next = NULL;
-			rte_pktmbuf_free(prev_seg);
+			free_buf[*free_count].seg = cur_seg;
+			free_buf[*free_count].pkt_id = pkt_id;
+			++*free_count;
 		}
+		cur_seg = cur_seg->next;
 		if (cur_seg == NULL) {
 			sg_temp->final = 1;
 			cpu_to_hw_sg(sg_temp);
@@ -904,7 +908,10 @@ dpaa_eth_mbuf_to_sg_fd(struct rte_mbuf *mbuf,
 static inline void
 tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 			    struct dpaa_bp_info *bp_info,
-			    struct qm_fd *fd_arr)
+			    struct qm_fd *fd_arr,
+			    struct dpaa_sw_buf_free *buf_to_free,
+			    uint32_t *free_count,
+			    uint32_t pkt_id)
 {
 	struct rte_mbuf *mi = NULL;
 
@@ -923,6 +930,9 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr, bp_info->bpid);
 		}
 	} else if (RTE_MBUF_HAS_EXTBUF(mbuf)) {
+		buf_to_free[*free_count].seg = mbuf;
+		buf_to_free[*free_count].pkt_id = pkt_id;
+		++*free_count;
 		DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr,
 				bp_info ? bp_info->bpid : 0xff);
 	} else {
@@ -946,7 +956,9 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 			DPAA_MBUF_TO_CONTIG_FD(mbuf, fd_arr,
 						bp_info ? bp_info->bpid : 0xff);
 		}
-		rte_pktmbuf_free(mbuf);
+		buf_to_free[*free_count].seg = mbuf;
+		buf_to_free[*free_count].pkt_id = pkt_id;
+		++*free_count;
 	}
 
 	if (mbuf->ol_flags & DPAA_TX_CKSUM_OFFLOAD_MASK)
@@ -957,16 +969,21 @@ tx_on_dpaa_pool_unsegmented(struct rte_mbuf *mbuf,
 static inline uint16_t
 tx_on_dpaa_pool(struct rte_mbuf *mbuf,
 		struct dpaa_bp_info *bp_info,
-		struct qm_fd *fd_arr)
+		struct qm_fd *fd_arr,
+		struct dpaa_sw_buf_free *buf_to_free,
+		uint32_t *free_count,
+		uint32_t pkt_id)
 {
 	DPAA_DP_LOG(DEBUG, "BMAN offloaded buffer, mbuf: %p", mbuf);
 
 	if (mbuf->nb_segs == 1) {
 		/* Case for non-segmented buffers */
-		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr);
+		tx_on_dpaa_pool_unsegmented(mbuf, bp_info, fd_arr,
+				buf_to_free, free_count, pkt_id);
 	} else if (mbuf->nb_segs > 1 &&
 		   mbuf->nb_segs <= DPAA_SGT_MAX_ENTRIES) {
-		if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr)) {
+		if (dpaa_eth_mbuf_to_sg_fd(mbuf, fd_arr, buf_to_free,
+					   free_count, pkt_id)) {
 			DPAA_PMD_DEBUG("Unable to create Scatter Gather FD");
 			return 1;
 		}
@@ -1070,7 +1087,8 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 	uint16_t state;
 	int ret, realloc_mbuf = 0;
 	uint32_t seqn, index, flags[DPAA_TX_BURST_SIZE] = {0};
-	struct rte_mbuf **orig_bufs = bufs;
+	struct dpaa_sw_buf_free buf_to_free[DPAA_MAX_SGS * DPAA_MAX_DEQUEUE_NUM_FRAMES];
+	uint32_t free_count = 0;
 
 	if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
 		ret = rte_dpaa_portal_init((void *)0);
@@ -1153,7 +1171,10 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 			}
 indirect_buf:
 			state = tx_on_dpaa_pool(mbuf, bp_info,
-						&fd_arr[loop]);
+						&fd_arr[loop],
+						buf_to_free,
+						&free_count,
+						loop);
 			if (unlikely(state)) {
 				/* Set frames_to_send & nb_bufs so
 				 * that packets are transmitted till
@@ -1178,13 +1199,9 @@ dpaa_eth_queue_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 
 	DPAA_DP_LOG(DEBUG, "Transmitted %d buffers on queue: %p", sent, q);
 
-
-	loop = 0;
-	while (loop < sent) {
-		if (unlikely(RTE_MBUF_HAS_EXTBUF(*orig_bufs)))
-			rte_pktmbuf_free(*orig_bufs);
-		orig_bufs++;
-		loop++;
+	for (loop = 0; loop < free_count; loop++) {
+		if (buf_to_free[loop].pkt_id < sent)
+			rte_pktmbuf_free_seg(buf_to_free[loop].seg);
 	}
 
 	return sent;
-- 
2.25.1
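
The recording side of the same technique, seen from the SG-table
build loop: each segment is classified by who frees it, and the mbuf
chain is only walked, never unlinked, so it remains valid for the
post-transmit cleanup. A simplified sketch under assumed stand-in
types; the real code bumps the owner mbuf's refcount via
rte_mbuf_refcnt_update() and records into struct dpaa_sw_buf_free:

	#include <stdint.h>
	#include <stddef.h>

	enum seg_kind { SEG_DIRECT, SEG_EXTERNAL, SEG_INDIRECT };

	struct mbuf { enum seg_kind kind; struct mbuf *next; int refcnt; };
	struct sw_free { uint16_t pkt_id; struct mbuf *seg; };

	static void classify_segs(struct mbuf *head, uint16_t pkt_id,
				  struct sw_free *tbl, uint32_t *n)
	{
		for (struct mbuf *cur = head; cur != NULL; cur = cur->next) {
			switch (cur->kind) {
			case SEG_DIRECT:
				/* pool-backed: hardware frees it, record nothing */
				break;
			case SEG_INDIRECT:
				/* stand-in for bumping the owner mbuf's
				 * refcount so it outlives this clone */
				cur->refcnt++;
				/* fall through: the driver frees the clone */
			case SEG_EXTERNAL:
				tbl[*n].pkt_id = pkt_id;
				tbl[*n].seg = cur;
				(*n)++;
				break;
			}
		}
	}

	int main(void)
	{
		struct mbuf ext = { SEG_EXTERNAL, NULL, 1 };
		struct mbuf dir = { SEG_DIRECT, &ext, 1 };
		struct sw_free tbl[4];
		uint32_t n = 0;

		classify_segs(&dir, 0, tbl, &n);
		return n == 1 ? 0 : 1;	/* only the external segment is recorded */
	}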


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v2 16/16] net/dpaa: fix buffer free in slow path
       [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
                     ` (6 preceding siblings ...)
  2022-10-07  3:27   ` [PATCH v2 15/16] net/dpaa: fix buffer free on transmit SG packets Gagandeep Singh
@ 2022-10-07  3:27   ` Gagandeep Singh
  7 siblings, 0 replies; 22+ messages in thread
From: Gagandeep Singh @ 2022-10-07  3:27 UTC (permalink / raw)
  To: ferruh.yigit, dev; +Cc: Gagandeep Singh, stable, Hemant Agrawal

If there is any error in a packet, or the taildrop feature is
enabled, HW can reject those packets and put them in the error
queue. The driver polls this error queue to free the buffers.
The DPAA driver has an issue while freeing these rejected
buffers. In the case of scatter-gather packets, it prepares an
mbuf SG list by scanning the HW descriptors, and once the mbuf
SG list is prepared, it frees only the first segment of the
list by calling the API rte_pktmbuf_free_seg(). This leaks the
memory of the other segments, and the mempool can become empty.

There is also one more issue: an external buffer's memory may
not belong to the mempool, so the driver itself frees the
external buffer after successfully handing the packet to the
HW for transmit, instead of letting the HW free it. So the
transmit function frees all the external buffers. But the
driver has no check for external buffers while freeing the
rejected buffers, and this can double-free the memory, which
can corrupt the user pool, with crashes and undefined system
behaviour as the result.

This patch fixes the above-mentioned issues by checking each
and every segment and freeing all the segments except the
external ones.

Fixes: 9124e65dd3eb ("net/dpaa: enable Tx queue taildrop")
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa/dpaa_rxtx.c | 23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index 4d285b4f38..ce4f3d6c85 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -455,7 +455,7 @@ dpaa_free_mbuf(const struct qm_fd *fd)
 	bp_info = DPAA_BPID_TO_POOL_INFO(fd->bpid);
 	format = (fd->opaque & DPAA_FD_FORMAT_MASK) >> DPAA_FD_FORMAT_SHIFT;
 	if (unlikely(format == qm_fd_sg)) {
-		struct rte_mbuf *first_seg, *prev_seg, *cur_seg, *temp;
+		struct rte_mbuf *first_seg, *cur_seg;
 		struct qm_sg_entry *sgt, *sg_temp;
 		void *vaddr, *sg_vaddr;
 		int i = 0;
@@ -469,32 +469,25 @@ dpaa_free_mbuf(const struct qm_fd *fd)
 		sgt = vaddr + fd_offset;
 		sg_temp = &sgt[i++];
 		hw_sg_to_cpu(sg_temp);
-		temp = (struct rte_mbuf *)
-			((char *)vaddr - bp_info->meta_data_size);
 		sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
 						qm_sg_entry_get64(sg_temp));
-
 		first_seg = (struct rte_mbuf *)((char *)sg_vaddr -
 						bp_info->meta_data_size);
 		first_seg->nb_segs = 1;
-		prev_seg = first_seg;
 		while (i < DPAA_SGT_MAX_ENTRIES) {
 			sg_temp = &sgt[i++];
 			hw_sg_to_cpu(sg_temp);
-			sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
+			if (sg_temp->bpid != 0xFF) {
+				bp_info = DPAA_BPID_TO_POOL_INFO(sg_temp->bpid);
+				sg_vaddr = DPAA_MEMPOOL_PTOV(bp_info,
 						qm_sg_entry_get64(sg_temp));
-			cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
+				cur_seg = (struct rte_mbuf *)((char *)sg_vaddr -
 						      bp_info->meta_data_size);
-			first_seg->nb_segs += 1;
-			prev_seg->next = cur_seg;
-			if (sg_temp->final) {
-				cur_seg->next = NULL;
-				break;
+				rte_pktmbuf_free_seg(cur_seg);
 			}
-			prev_seg = cur_seg;
+			if (sg_temp->final)
+				break;
 		}
-
-		rte_pktmbuf_free_seg(temp);
 		rte_pktmbuf_free_seg(first_seg);
 		return 0;
 	}
-- 
2.25.1
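
The invariant of the fixed loop, in isolation: walk the hardware SG
table once, free every segment that came from a hardware buffer pool,
and skip entries whose bpid is the invalid value 0xFF, since those
are external buffers that the transmit function already freed. A
self-contained sketch with simplified stand-in types; the real code
resolves each segment's mbuf via DPAA_MEMPOOL_PTOV() and frees it
with rte_pktmbuf_free_seg():

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	#define SGT_MAX_ENTRIES 16
	#define INVALID_BPID 0xFF

	struct sg_entry {
		uint8_t bpid;	/* hardware pool id; 0xFF marks an external buffer */
		bool final;	/* last entry of the frame */
	};

	static void free_pool_seg(const struct sg_entry *e)
	{
		printf("freeing segment from pool %u\n", (unsigned int)e->bpid);
	}

	/* Entry 0 (the first segment) is handled separately, as in the
	 * driver, which frees first_seg after this loop. */
	static void free_rejected_frame(const struct sg_entry *sgt)
	{
		for (int i = 1; i < SGT_MAX_ENTRIES; i++) {
			if (sgt[i].bpid != INVALID_BPID)
				free_pool_seg(&sgt[i]);
			/* bpid == 0xFF: external, already freed at Tx, skip */
			if (sgt[i].final)
				break;
		}
	}

	int main(void)
	{
		struct sg_entry sgt[] = {
			{ 1, false }, { 1, false }, { INVALID_BPID, false }, { 2, true },
		};

		free_rejected_frame(sgt);	/* frees entries 1 and 3; entry 2 is external */
		return 0;
	}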


^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2022-10-07  3:29 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20220928052516.1279442-1-g.singh@nxp.com>
2022-09-28  5:25 ` [PATCH 02/15] net/enetfec: fix restart issue Gagandeep Singh
2022-09-28  5:25 ` [PATCH 03/15] net/enetfec: fix buffer leak issue Gagandeep Singh
2022-09-28  5:25 ` [PATCH 04/15] net/dpaa2: fix dpdmux configuration for error behaviour Gagandeep Singh
2022-09-28  5:25 ` [PATCH 05/15] net/dpaa2: check free enqueue descriptors before Tx Gagandeep Singh
2022-10-05 14:30   ` Ferruh Yigit
2022-09-28  5:25 ` [PATCH 08/15] net/dpaa2: fix buffer free on transmit SG packets Gagandeep Singh
2022-10-06  7:48   ` Ferruh Yigit
2022-09-28  5:25 ` [PATCH 10/15] net/dpaa: fix Jumbo packet Rx in case of VSP Gagandeep Singh
2022-09-28  5:25 ` [PATCH 14/15] net/dpaa: fix buffer free on transmit SG packets Gagandeep Singh
2022-09-28  5:25 ` [PATCH 15/15] net/dpaa: fix buffer free in slow path Gagandeep Singh
2022-10-05 14:21   ` Ferruh Yigit
2022-10-06  8:51     ` Gagandeep Singh
2022-10-06  9:42       ` Ferruh Yigit
2022-10-06 11:19         ` Gagandeep Singh
     [not found] ` <20221007032743.2129353-1-g.singh@nxp.com>
2022-10-07  3:27   ` [PATCH v2 02/16] net/enetfec: fix restart issue Gagandeep Singh
2022-10-07  3:27   ` [PATCH v2 03/16] net/enetfec: fix buffer leak issue Gagandeep Singh
2022-10-07  3:27   ` [PATCH v2 04/16] net/dpaa2: fix dpdmux configuration for error behaviour Gagandeep Singh
2022-10-07  3:27   ` [PATCH v2 05/16] net/dpaa2: check free enqueue descriptors before Tx Gagandeep Singh
2022-10-07  3:27   ` [PATCH v2 08/16] net/dpaa2: fix buffer free on transmit SG packets Gagandeep Singh
2022-10-07  3:27   ` [PATCH v2 10/16] net/dpaa: fix Jumbo packet Rx in case of VSP Gagandeep Singh
2022-10-07  3:27   ` [PATCH v2 15/16] net/dpaa: fix buffer free on transmit SG packets Gagandeep Singh
2022-10-07  3:27   ` [PATCH v2 16/16] net/dpaa: fix buffer free in slow path Gagandeep Singh
