patches for DPDK stable branches
* [PATCH 1/3] net/qede: fix Tx callback completion routine
@ 2022-03-04 12:08 Devendra Singh Rawat
  2022-03-04 12:08 ` [PATCH 2/3] net/qede: fix Rx callback Devendra Singh Rawat
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Devendra Singh Rawat @ 2022-03-04 12:08 UTC (permalink / raw)
  To: dev, thomas, jerinj, ferruh.yigit, rmody
  Cc: palok, Devendra Singh Rawat, stable

The Tx completion routine first incremented the number of free slots in
the Tx ring and then freed the corresponding mbufs in bulk. In some
situations the number of mbufs freed was less than the number of Tx ring
slots freed. This caused the Tx ring to get into an inconsistent state,
and ultimately the application failed to transmit further traffic.

The fix first updates the Tx ring SW consumer index, then increments the
Tx ring free slot count and finally frees the mbuf, all within a single
iteration of the loop.
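
In outline, the fixed completion loop keeps all three updates together
per packet (a condensed sketch of the patch below, showing the
populated-slot case; surrounding declarations are assumed):

	while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl)) {
		uint16_t idx = TX_CONS(txq);
		struct rte_mbuf *mbuf = txq->sw_tx_ring[idx];
		uint16_t segs = mbuf->nb_segs;

		while (segs--) {
			ecore_chain_consume(&txq->tx_pbl); /* one BD */
			txq->nb_tx_avail++;   /* slot count kept in step */
		}
		rte_pktmbuf_free(mbuf);       /* mbuf freed last */
		txq->sw_tx_ring[idx] = NULL;
		txq->sw_tx_cons++;            /* SW consumer advances */
	}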

Fixes: 2c41740bf19e ("net/qede: get consumer index once")
Fixes: 4996b959cde6 ("net/qede: free packets in bulk")
Cc: stable@dpdk.org

Signed-off-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/qede_rxtx.c | 79 +++++++++++++++---------------------
 1 file changed, 33 insertions(+), 46 deletions(-)

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 911bb1a260..0c52568180 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -885,68 +885,55 @@ qede_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id)
 }
 
 static inline void
-qede_process_tx_compl(__rte_unused struct ecore_dev *edev,
-		      struct qede_tx_queue *txq)
+qede_free_tx_pkt(struct qede_tx_queue *txq)
 {
-	uint16_t hw_bd_cons;
-	uint16_t sw_tx_cons;
-	uint16_t remaining;
-	uint16_t mask;
 	struct rte_mbuf *mbuf;
 	uint16_t nb_segs;
 	uint16_t idx;
-	uint16_t first_idx;
-
-	rte_compiler_barrier();
-	rte_prefetch0(txq->hw_cons_ptr);
-	sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
-	hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
-#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
-	PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
-		   abs(hw_bd_cons - sw_tx_cons));
-#endif
-
-	mask = NUM_TX_BDS(txq);
-	idx = txq->sw_tx_cons & mask;
 
-	remaining = hw_bd_cons - sw_tx_cons;
-	txq->nb_tx_avail += remaining;
-	first_idx = idx;
-
-	while (remaining) {
-		mbuf = txq->sw_tx_ring[idx];
-		RTE_ASSERT(mbuf);
+	idx = TX_CONS(txq);
+	mbuf = txq->sw_tx_ring[idx];
+	if (mbuf) {
 		nb_segs = mbuf->nb_segs;
-		remaining -= nb_segs;
-
-		/* Prefetch the next mbuf. Note that at least the last 4 mbufs
-		 * that are prefetched will not be used in the current call.
-		 */
-		rte_mbuf_prefetch_part1(txq->sw_tx_ring[(idx + 4) & mask]);
-		rte_mbuf_prefetch_part2(txq->sw_tx_ring[(idx + 4) & mask]);
-
 		PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u\n", nb_segs);
-
 		while (nb_segs) {
+			/* It's like consuming rxbuf in recv() */
 			ecore_chain_consume(&txq->tx_pbl);
+			txq->nb_tx_avail++;
 			nb_segs--;
 		}
-
-		idx = (idx + 1) & mask;
+		rte_pktmbuf_free(mbuf);
+		txq->sw_tx_ring[idx] = NULL;
+		txq->sw_tx_cons++;
 		PMD_TX_LOG(DEBUG, txq, "Freed tx packet\n");
-	}
-	txq->sw_tx_cons = idx;
-
-	if (first_idx > idx) {
-		rte_pktmbuf_free_bulk(&txq->sw_tx_ring[first_idx],
-							  mask - first_idx + 1);
-		rte_pktmbuf_free_bulk(&txq->sw_tx_ring[0], idx);
 	} else {
-		rte_pktmbuf_free_bulk(&txq->sw_tx_ring[first_idx],
-							  idx - first_idx);
+		ecore_chain_consume(&txq->tx_pbl);
+		txq->nb_tx_avail++;
 	}
 }
 
+static inline void
+qede_process_tx_compl(__rte_unused struct ecore_dev *edev,
+		      struct qede_tx_queue *txq)
+{
+	uint16_t hw_bd_cons;
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+	uint16_t sw_tx_cons;
+#endif
+
+	hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
+	/* read barrier prevents speculative execution on stale data */
+	rte_rmb();
+
+#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
+	sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
+	PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
+		   abs(hw_bd_cons - sw_tx_cons));
+#endif
+	while (hw_bd_cons !=  ecore_chain_get_cons_idx(&txq->tx_pbl))
+		qede_free_tx_pkt(txq);
+}
+
 static int qede_drain_txq(struct qede_dev *qdev,
 			  struct qede_tx_queue *txq, bool allow_drain)
 {
-- 
2.18.2



* [PATCH 2/3] net/qede: fix Rx callback
  2022-03-04 12:08 [PATCH 1/3] net/qede: fix Tx callback completion routine Devendra Singh Rawat
@ 2022-03-04 12:08 ` Devendra Singh Rawat
  2022-03-04 12:08 ` [PATCH 3/3] net/qede: fix max Rx pktlen calculation Devendra Singh Rawat
  2022-03-10  7:40 ` [PATCH 1/3] net/qede: fix Tx callback completion routine Jerin Jacob
  2 siblings, 0 replies; 6+ messages in thread
From: Devendra Singh Rawat @ 2022-03-04 12:08 UTC (permalink / raw)
  To: dev, thomas, jerinj, ferruh.yigit, rmody
  Cc: palok, Devendra Singh Rawat, stable

qede_alloc_rx_bulk_mbufs() was trimming the number of requested mbufs
to QEDE_MAX_BULK_ALLOC_COUNT. The Rx callback was unaware of this
trimming and always reset the number of empty Rx BD ring slots to 0.
This resulted in the Rx BD ring getting into an inconsistent state,
and ultimately the application failed to receive any traffic.

The fix trims the number of requested mbufs before making the call to
qede_alloc_rx_bulk_mbufs(). After qede_alloc_rx_bulk_mbufs() returns
successfully, the number of empty Rx BD ring slots is decremented by the
correct count.
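
The caller-side shape of the fix, condensed from the patch below
(names as used by the driver; RTE_MIN stands in for the ternary):

	uint16_t count = RTE_MIN(rxq->rx_alloc_count,
				 (uint16_t)QEDE_MAX_BULK_ALLOC_COUNT);

	if (unlikely(qede_alloc_rx_bulk_mbufs(rxq, count)))
		return 0;              /* allocation failed, drop burst */
	qede_update_rx_prod(qdev, rxq);
	rxq->rx_alloc_count -= count;  /* only what was really allocated */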

Fixes: 8f2312474529 ("net/qede: fix performance bottleneck in Rx path")
Cc: stable@dpdk.org

Signed-off-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/qede_rxtx.c | 68 ++++++++++++++++--------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 0c52568180..02fa1fcaa1 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -38,48 +38,40 @@ static inline int qede_alloc_rx_buffer(struct qede_rx_queue *rxq)
 
 static inline int qede_alloc_rx_bulk_mbufs(struct qede_rx_queue *rxq, int count)
 {
+	void *obj_p[QEDE_MAX_BULK_ALLOC_COUNT] __rte_cache_aligned;
 	struct rte_mbuf *mbuf = NULL;
 	struct eth_rx_bd *rx_bd;
 	dma_addr_t mapping;
 	int i, ret = 0;
 	uint16_t idx;
-	uint16_t mask = NUM_RX_BDS(rxq);
-
-	if (count > QEDE_MAX_BULK_ALLOC_COUNT)
-		count = QEDE_MAX_BULK_ALLOC_COUNT;
 
 	idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
 
-	if (count > mask - idx + 1)
-		count = mask - idx + 1;
-
-	ret = rte_mempool_get_bulk(rxq->mb_pool, (void **)&rxq->sw_rx_ring[idx],
-				   count);
-
+	ret = rte_mempool_get_bulk(rxq->mb_pool, obj_p, count);
 	if (unlikely(ret)) {
 		PMD_RX_LOG(ERR, rxq,
 			   "Failed to allocate %d rx buffers "
 			    "sw_rx_prod %u sw_rx_cons %u mp entries %u free %u",
-			    count,
-			    rxq->sw_rx_prod & NUM_RX_BDS(rxq),
-			    rxq->sw_rx_cons & NUM_RX_BDS(rxq),
+			    count, idx, rxq->sw_rx_cons & NUM_RX_BDS(rxq),
 			    rte_mempool_avail_count(rxq->mb_pool),
 			    rte_mempool_in_use_count(rxq->mb_pool));
 		return -ENOMEM;
 	}
 
 	for (i = 0; i < count; i++) {
-		rte_prefetch0(rxq->sw_rx_ring[(idx + 1) & NUM_RX_BDS(rxq)]);
-		mbuf = rxq->sw_rx_ring[idx & NUM_RX_BDS(rxq)];
+		mbuf = obj_p[i];
+		if (likely(i < count - 1))
+			rte_prefetch0(obj_p[i + 1]);
 
+		idx = rxq->sw_rx_prod & NUM_RX_BDS(rxq);
+		rxq->sw_rx_ring[idx] = mbuf;
 		mapping = rte_mbuf_data_iova_default(mbuf);
 		rx_bd = (struct eth_rx_bd *)
 			ecore_chain_produce(&rxq->rx_bd_ring);
 		rx_bd->addr.hi = rte_cpu_to_le_32(U64_HI(mapping));
 		rx_bd->addr.lo = rte_cpu_to_le_32(U64_LO(mapping));
-		idx++;
+		rxq->sw_rx_prod++;
 	}
-	rxq->sw_rx_prod = idx;
 
 	return 0;
 }
@@ -1544,25 +1536,26 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	uint8_t bitfield_val;
 #endif
 	uint8_t offset, flags, bd_num;
-
+	uint16_t count = 0;
 
 	/* Allocate buffers that we used in previous loop */
 	if (rxq->rx_alloc_count) {
-		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq,
-			     rxq->rx_alloc_count))) {
+		count = rxq->rx_alloc_count > QEDE_MAX_BULK_ALLOC_COUNT ?
+			QEDE_MAX_BULK_ALLOC_COUNT : rxq->rx_alloc_count;
+
+		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq, count))) {
 			struct rte_eth_dev *dev;
 
 			PMD_RX_LOG(ERR, rxq,
-				   "New buffer allocation failed,"
-				   "dropping incoming packetn");
+				   "New buffers allocation failed,"
+				   "dropping incoming packets\n");
 			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed +=
-							rxq->rx_alloc_count;
-			rxq->rx_alloc_errors += rxq->rx_alloc_count;
+			dev->data->rx_mbuf_alloc_failed += count;
+			rxq->rx_alloc_errors += count;
 			return 0;
 		}
 		qede_update_rx_prod(qdev, rxq);
-		rxq->rx_alloc_count = 0;
+		rxq->rx_alloc_count -= count;
 	}
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
@@ -1731,7 +1724,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	}
 
 	/* Request number of buffers to be allocated in next loop */
-	rxq->rx_alloc_count = rx_alloc_count;
+	rxq->rx_alloc_count += rx_alloc_count;
 
 	rxq->rcv_pkts += rx_pkt;
 	rxq->rx_segs += rx_pkt;
@@ -1771,25 +1764,26 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	struct qede_agg_info *tpa_info = NULL;
 	uint32_t rss_hash;
 	int rx_alloc_count = 0;
-
+	uint16_t count = 0;
 
 	/* Allocate buffers that we used in previous loop */
 	if (rxq->rx_alloc_count) {
-		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq,
-			     rxq->rx_alloc_count))) {
+		count = rxq->rx_alloc_count > QEDE_MAX_BULK_ALLOC_COUNT ?
+			QEDE_MAX_BULK_ALLOC_COUNT : rxq->rx_alloc_count;
+
+		if (unlikely(qede_alloc_rx_bulk_mbufs(rxq, count))) {
 			struct rte_eth_dev *dev;
 
 			PMD_RX_LOG(ERR, rxq,
-				   "New buffer allocation failed,"
-				   "dropping incoming packetn");
+				   "New buffers allocation failed,"
+				   "dropping incoming packets\n");
 			dev = &rte_eth_devices[rxq->port_id];
-			dev->data->rx_mbuf_alloc_failed +=
-							rxq->rx_alloc_count;
-			rxq->rx_alloc_errors += rxq->rx_alloc_count;
+			dev->data->rx_mbuf_alloc_failed += count;
+			rxq->rx_alloc_errors += count;
 			return 0;
 		}
 		qede_update_rx_prod(qdev, rxq);
-		rxq->rx_alloc_count = 0;
+		rxq->rx_alloc_count -= count;
 	}
 
 	hw_comp_cons = rte_le_to_cpu_16(*rxq->hw_cons_ptr);
@@ -2028,7 +2022,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	}
 
 	/* Request number of buffers to be allocated in next loop */
-	rxq->rx_alloc_count = rx_alloc_count;
+	rxq->rx_alloc_count += rx_alloc_count;
 
 	rxq->rcv_pkts += rx_pkt;
 
-- 
2.18.2



* [PATCH 3/3] net/qede: fix max Rx pktlen calculation
  2022-03-04 12:08 [PATCH 1/3] net/qede: fix Tx callback completion routine Devendra Singh Rawat
  2022-03-04 12:08 ` [PATCH 2/3] net/qede: fix Rx callback Devendra Singh Rawat
@ 2022-03-04 12:08 ` Devendra Singh Rawat
  2022-03-10  7:40 ` [PATCH 1/3] net/qede: fix Tx callback completion routine Jerin Jacob
  2 siblings, 0 replies; 6+ messages in thread
From: Devendra Singh Rawat @ 2022-03-04 12:08 UTC (permalink / raw)
  To: dev, thomas, jerinj, ferruh.yigit, rmody
  Cc: palok, Devendra Singh Rawat, stable

The size of the CRC is not added to max_rx_pktlen; due to this, bigger
packets (of sizes 1480, 1490 and 1500 bytes) are being dropped.
This fix adds RTE_ETHER_CRC_LEN to max_rx_pktlen.
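
For example, with the default 1500-byte MTU the limit becomes
1500 (MTU) + 14 (RTE_ETHER_HDR_LEN) + 4 (RTE_ETHER_CRC_LEN) = 1518,
the classic maximum Ethernet frame size on the wire:

	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;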

Fixes: 1bb4a528c41f ("ethdev: fix max Rx packet length")
Cc: stable@dpdk.org

Signed-off-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/qede/qede_rxtx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 02fa1fcaa1..c35585f5fd 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -235,7 +235,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
 		dev->data->rx_queues[qid] = NULL;
 	}
 
-	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN;
+	max_rx_pktlen = dev->data->mtu + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;
 
 	/* Fix up RX buffer size */
 	bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
-- 
2.18.2



* Re: [PATCH 1/3] net/qede: fix Tx callback completion routine
  2022-03-04 12:08 [PATCH 1/3] net/qede: fix Tx callback completion routine Devendra Singh Rawat
  2022-03-04 12:08 ` [PATCH 2/3] net/qede: fix Rx callback Devendra Singh Rawat
  2022-03-04 12:08 ` [PATCH 3/3] net/qede: fix max Rx pktlen calculation Devendra Singh Rawat
@ 2022-03-10  7:40 ` Jerin Jacob
  2022-03-10  8:56   ` Thomas Monjalon
  2 siblings, 1 reply; 6+ messages in thread
From: Jerin Jacob @ 2022-03-10  7:40 UTC (permalink / raw)
  To: Devendra Singh Rawat
  Cc: dpdk-dev, Thomas Monjalon, Jerin Jacob, Ferruh Yigit,
	Rasesh Mody, palok, dpdk stable

On Fri, Mar 4, 2022 at 5:38 PM Devendra Singh Rawat
<dsinghrawat@marvell.com> wrote:
>
> The Tx completion routine first incremented the number of free slots in
> the Tx ring and then freed the corresponding mbufs in bulk. In some
> situations the number of mbufs freed was less than the number of Tx ring
> slots freed. This caused the Tx ring to get into an inconsistent state,
> and ultimately the application failed to transmit further traffic.
>
> The fix first updates the Tx ring SW consumer index, then increments the
> Tx ring free slot count and finally frees the mbuf, all within a single
> iteration of the loop.
>
> Fixes: 2c41740bf19e ("net/qede: get consumer index once")
> Fixes: 4996b959cde6 ("net/qede: free packets in bulk")
> Cc: stable@dpdk.org
>
> Signed-off-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
> Signed-off-by: Rasesh Mody <rmody@marvell.com>

Updated the git commit messages as follows and applied the series to
dpdk-next-net-mrvl/for-next-net. Thanks


--
    net/qede: fix max Rx packet length calculation

    The size of the CRC is not added to max_rx_pktlen; due to this,
    bigger packets (of sizes 1480, 1490 and 1500 bytes) are being
    dropped.
    This fix adds RTE_ETHER_CRC_LEN to max_rx_pktlen.

    Fixes: 1bb4a528c41f ("ethdev: fix max Rx packet length")
    Cc: stable@dpdk.org

    Signed-off-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
    Signed-off-by: Rasesh Mody <rmody@marvell.com>

---
    net/qede: fix Rx callback

    qede_alloc_rx_bulk_mbufs() was trimming the number of requested
    mbufs to QEDE_MAX_BULK_ALLOC_COUNT.
    The Rx callback was unaware of this trimming and always reset
    the number of empty Rx BD ring slots to 0.
    This resulted in the Rx BD ring getting into an inconsistent
    state, and ultimately the application failed to receive any traffic.

    The fix trims the number of requested mbufs before making the
    call to qede_alloc_rx_bulk_mbufs().
    After qede_alloc_rx_bulk_mbufs() returns successfully, the
    number of empty Rx BD ring slots is decremented by the
    correct count.

    Fixes: 8f2312474529 ("net/qede: fix performance bottleneck in Rx path")
    Cc: stable@dpdk.org

    Signed-off-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
    Signed-off-by: Rasesh Mody <rmody@marvell.com>

--
    net/qede: fix Tx callback completion routine

    The Tx completion routine first incremented the number of free
    slots in the Tx ring and then freed the corresponding mbufs in bulk.
    In some situations, the number of mbufs freed was less than the
    number of Tx ring slots freed. This caused the Tx ring to get into
    an inconsistent state, and ultimately the application failed to
    transmit further traffic.

    The fix first updates the Tx ring SW consumer index, then increments
    the Tx ring free slot count and finally frees the mbuf,
    all within a single iteration of the loop.

    Fixes: 2c41740bf19e ("net/qede: get consumer index once")
    Fixes: 4996b959cde6 ("net/qede: free packets in bulk")
    Cc: stable@dpdk.org

    Signed-off-by: Devendra Singh Rawat <dsinghrawat@marvell.com>
    Signed-off-by: Rasesh Mody <rmody@marvell.com>
---

> ---
>  drivers/net/qede/qede_rxtx.c | 79 +++++++++++++++---------------------
>  1 file changed, 33 insertions(+), 46 deletions(-)
>
> diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
> index 911bb1a260..0c52568180 100644
> --- a/drivers/net/qede/qede_rxtx.c
> +++ b/drivers/net/qede/qede_rxtx.c
> @@ -885,68 +885,55 @@ qede_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id)
>  }
>
>  static inline void
> -qede_process_tx_compl(__rte_unused struct ecore_dev *edev,
> -                     struct qede_tx_queue *txq)
> +qede_free_tx_pkt(struct qede_tx_queue *txq)
>  {
> -       uint16_t hw_bd_cons;
> -       uint16_t sw_tx_cons;
> -       uint16_t remaining;
> -       uint16_t mask;
>         struct rte_mbuf *mbuf;
>         uint16_t nb_segs;
>         uint16_t idx;
> -       uint16_t first_idx;
> -
> -       rte_compiler_barrier();
> -       rte_prefetch0(txq->hw_cons_ptr);
> -       sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
> -       hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
> -#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
> -       PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
> -                  abs(hw_bd_cons - sw_tx_cons));
> -#endif
> -
> -       mask = NUM_TX_BDS(txq);
> -       idx = txq->sw_tx_cons & mask;
>
> -       remaining = hw_bd_cons - sw_tx_cons;
> -       txq->nb_tx_avail += remaining;
> -       first_idx = idx;
> -
> -       while (remaining) {
> -               mbuf = txq->sw_tx_ring[idx];
> -               RTE_ASSERT(mbuf);
> +       idx = TX_CONS(txq);
> +       mbuf = txq->sw_tx_ring[idx];
> +       if (mbuf) {
>                 nb_segs = mbuf->nb_segs;
> -               remaining -= nb_segs;
> -
> -               /* Prefetch the next mbuf. Note that at least the last 4 mbufs
> -                * that are prefetched will not be used in the current call.
> -                */
> -               rte_mbuf_prefetch_part1(txq->sw_tx_ring[(idx + 4) & mask]);
> -               rte_mbuf_prefetch_part2(txq->sw_tx_ring[(idx + 4) & mask]);
> -
>                 PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u\n", nb_segs);
> -
>                 while (nb_segs) {
> +                       /* It's like consuming rxbuf in recv() */
>                         ecore_chain_consume(&txq->tx_pbl);
> +                       txq->nb_tx_avail++;
>                         nb_segs--;
>                 }
> -
> -               idx = (idx + 1) & mask;
> +               rte_pktmbuf_free(mbuf);
> +               txq->sw_tx_ring[idx] = NULL;
> +               txq->sw_tx_cons++;
>                 PMD_TX_LOG(DEBUG, txq, "Freed tx packet\n");
> -       }
> -       txq->sw_tx_cons = idx;
> -
> -       if (first_idx > idx) {
> -               rte_pktmbuf_free_bulk(&txq->sw_tx_ring[first_idx],
> -                                                         mask - first_idx + 1);
> -               rte_pktmbuf_free_bulk(&txq->sw_tx_ring[0], idx);
>         } else {
> -               rte_pktmbuf_free_bulk(&txq->sw_tx_ring[first_idx],
> -                                                         idx - first_idx);
> +               ecore_chain_consume(&txq->tx_pbl);
> +               txq->nb_tx_avail++;
>         }
>  }
>
> +static inline void
> +qede_process_tx_compl(__rte_unused struct ecore_dev *edev,
> +                     struct qede_tx_queue *txq)
> +{
> +       uint16_t hw_bd_cons;
> +#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
> +       uint16_t sw_tx_cons;
> +#endif
> +
> +       hw_bd_cons = rte_le_to_cpu_16(*txq->hw_cons_ptr);
> +       /* read barrier prevents speculative execution on stale data */
> +       rte_rmb();
> +
> +#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
> +       sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
> +       PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
> +                  abs(hw_bd_cons - sw_tx_cons));
> +#endif
> +       while (hw_bd_cons !=  ecore_chain_get_cons_idx(&txq->tx_pbl))
> +               qede_free_tx_pkt(txq);
> +}
> +
>  static int qede_drain_txq(struct qede_dev *qdev,
>                           struct qede_tx_queue *txq, bool allow_drain)
>  {
> --
> 2.18.2
>


* Re: [PATCH 1/3] net/qede: fix Tx callback completion routine
  2022-03-10  7:40 ` [PATCH 1/3] net/qede: fix Tx callback completion routine Jerin Jacob
@ 2022-03-10  8:56   ` Thomas Monjalon
  2022-03-10  9:07     ` [EXT] " Devendra Singh Rawat
  0 siblings, 1 reply; 6+ messages in thread
From: Thomas Monjalon @ 2022-03-10  8:56 UTC (permalink / raw)
  To: Devendra Singh Rawat, Jerin Jacob
  Cc: dpdk-dev, Jerin Jacob, Ferruh Yigit, Rasesh Mody, palok, dpdk stable

10/03/2022 08:40, Jerin Jacob:
> Updated the git commits as follows and applied series to
> dpdk-next-net-mrvl/for-next-net. Thanks

What is the intent? Do you want to get them in 22.03?
Is it validated enough for a last-minute merge?




* RE: [EXT] Re: [PATCH 1/3] net/qede: fix Tx callback completion routine
  2022-03-10  8:56   ` Thomas Monjalon
@ 2022-03-10  9:07     ` Devendra Singh Rawat
  0 siblings, 0 replies; 6+ messages in thread
From: Devendra Singh Rawat @ 2022-03-10  9:07 UTC (permalink / raw)
  To: Thomas Monjalon, Jerin Jacob
  Cc: dpdk-dev, Jerin Jacob Kollanukkaran, Ferruh Yigit, Rasesh Mody,
	Alok Prasad, dpdk stable



> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, March 10, 2022 2:26 PM
> To: Devendra Singh Rawat <dsinghrawat@marvell.com>; Jerin Jacob
> <jerinjacobk@gmail.com>
> Cc: dpdk-dev <dev@dpdk.org>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Ferruh Yigit <ferruh.yigit@intel.com>; Rasesh Mody
> <rmody@marvell.com>; Alok Prasad <palok@marvell.com>; dpdk stable
> <stable@dpdk.org>
> Subject: [EXT] Re: [PATCH 1/3] net/qede: fix Tx callback completion routine
> 
> External Email
> 
> ----------------------------------------------------------------------
> 10/03/2022 08:40, Jerin Jacob:
> > Updated the git commits as follows and applied series to
> > dpdk-next-net-mrvl/for-next-net. Thanks
> 
> What is the intent? Do you want to get them in 22.03?
> Is it validated enough for a last minute merge?
> 

These are fixes for regressions introduced by a few past commits. They
are critical for receiving and transmitting traffic on Marvell FastLinQ
adapters.
We do want them in 22.03.
We have completed our testing cycles and believe the fixes are good
enough to be merged.

Thanks,
Devendra
 

