DPDK patches and discussions
From: Aleksandr Miloshenko <a.miloshenko@f5.com>
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, Aleksandr Miloshenko <a.miloshenko@f5.com>,
	stable@dpdk.org
Subject: [PATCH v2] net/iavf: do Tx done cleanup starting from Tx tail
Date: Mon, 29 Aug 2022 18:05:25 -0700
Message-ID: <20220830010525.24632-1-a.miloshenko@f5.com>

iavf_xmit_pkts() sets tx_tail to the descriptor following the last
transmitted Tx descriptor, so the cleanup of completed Tx descriptors
must start from tx_tail, not from the descriptor after tx_tail.
Otherwise rte_eth_tx_done_cleanup() does not free the first completed
mbuf when the Tx queue is full.
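
For context, a minimal sketch (not part of this patch) of how an
application typically drives this cleanup path; the port/queue ids,
the free_cnt of 32 and the single retry are illustrative assumptions:

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  /* If rte_eth_tx_burst() cannot queue every packet (Tx ring full), ask
   * the PMD to free already-transmitted mbufs and retry once.  Without
   * this fix, the mbuf sitting at tx_tail was skipped by the cleanup, so
   * one completed buffer stayed unfreed whenever the queue was full.
   */
  static uint16_t
  tx_with_cleanup(uint16_t port_id, uint16_t queue_id,
                  struct rte_mbuf **pkts, uint16_t nb_pkts)
  {
          uint16_t sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

          if (sent < nb_pkts &&
              rte_eth_tx_done_cleanup(port_id, queue_id, 32) > 0)
                  sent += rte_eth_tx_burst(port_id, queue_id,
                                           pkts + sent, nb_pkts - sent);
          return sent;
  }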

Fixes: 86e44244f95c ("net/iavf: cleanup Tx buffers")
Cc: stable@dpdk.org

Signed-off-by: Aleksandr Miloshenko <a.miloshenko@f5.com>
---

v2:
* Fixed the commit style.
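
As an aside (not part of the patch), a tiny stand-alone model of the
sw_ring walk, showing why starting one slot after tx_tail skips exactly
one completed entry when the ring is full; the 4-entry ring and the
tx_tail value are made up for illustration:

  #include <stdio.h>
  #include <stdint.h>

  #define RING_SIZE 4   /* toy size; real Tx rings are much larger */

  int
  main(void)
  {
          uint16_t tx_tail = 1;   /* queue full: every slot holds a done mbuf */
          uint16_t id;

          /* Old behaviour: begin at the entry after tx_tail, stop at tx_tail.
           * Visits 2 3 0 -> slot 1, the oldest completed mbuf, is skipped.
           */
          printf("old:");
          for (id = (tx_tail + 1) % RING_SIZE; id != tx_tail;
               id = (id + 1) % RING_SIZE)
                  printf(" %u", (unsigned int)id);
          printf("\n");

          /* New behaviour: begin at tx_tail; the do/while covers it as well.
           * Visits 1 2 3 0 -> every completed entry is examined once.
           */
          printf("new:");
          id = tx_tail;
          do {
                  printf(" %u", (unsigned int)id);
                  id = (id + 1) % RING_SIZE;
          } while (id != tx_tail);
          printf("\n");

          return 0;
  }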

 drivers/net/iavf/iavf_rxtx.c | 18 ++++++++----------
 1 file changed, 8 insertions(+), 10 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 109ba756f8..7cd5db6e49 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -3184,14 +3184,14 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
 			uint32_t free_cnt)
 {
 	struct iavf_tx_entry *swr_ring = txq->sw_ring;
-	uint16_t i, tx_last, tx_id;
+	uint16_t tx_last, tx_id;
 	uint16_t nb_tx_free_last;
 	uint16_t nb_tx_to_clean;
-	uint32_t pkt_cnt;
+	uint32_t pkt_cnt = 0;
 
-	/* Start free mbuf from the next of tx_tail */
-	tx_last = txq->tx_tail;
-	tx_id  = swr_ring[tx_last].next_id;
+	/* Start free mbuf from tx_tail */
+	tx_id = txq->tx_tail;
+	tx_last = tx_id;
 
 	if (txq->nb_free == 0 && iavf_xmit_cleanup(txq))
 		return 0;
@@ -3204,10 +3204,8 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
 	/* Loop through swr_ring to count the amount of
 	 * freeable mubfs and packets.
 	 */
-	for (pkt_cnt = 0; pkt_cnt < free_cnt; ) {
-		for (i = 0; i < nb_tx_to_clean &&
-			pkt_cnt < free_cnt &&
-			tx_id != tx_last; i++) {
+	while (pkt_cnt < free_cnt) {
+		do {
 			if (swr_ring[tx_id].mbuf != NULL) {
 				rte_pktmbuf_free_seg(swr_ring[tx_id].mbuf);
 				swr_ring[tx_id].mbuf = NULL;
@@ -3220,7 +3218,7 @@ iavf_tx_done_cleanup_full(struct iavf_tx_queue *txq,
 			}
 
 			tx_id = swr_ring[tx_id].next_id;
-		}
+		} while (--nb_tx_to_clean && pkt_cnt < free_cnt && tx_id != tx_last);
 
 		if (txq->rs_thresh > txq->nb_tx_desc -
 			txq->nb_free || tx_id == tx_last)
-- 
2.37.0

