DPDK patches and discussions
From: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
To: dev@dpdk.org
Cc: yskoh@mellanox.com, shahafs@mellanox.com
Subject: [dpdk-dev] [PATCH 1/3] net/mlx5: fix Tx completion descriptors fetching loop
Date: Mon, 29 Jul 2019 12:41:03 +0000	[thread overview]
Message-ID: <1564404065-4823-2-git-send-email-viacheslavo@mellanox.com> (raw)
In-Reply-To: <1564404065-4823-1-git-send-email-viacheslavo@mellanox.com>

This patch limits the number of completion descriptors
fetched and processed in a single tx_burst routine call.

Completion processing involves freeing the transmitted
buffers, which may be time consuming and introduce
significant latency, so limiting the number of processed
completions per call mitigates the latency issue.
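
To illustrate the idea only (the real change is in the diff below),
a minimal self-contained sketch of a capped completion loop could
look as follows; all toy_* names are hypothetical and are not part
of the mlx5 driver:

#include <stdbool.h>
#include <stdint.h>

/* Toy queue model -- not the mlx5 data structures. */
struct toy_txq {
	uint16_t cq_ci;      /* CQ consumer index */
	uint16_t cq_pending; /* CQEs the "NIC" has already produced */
};

/* Cap on normal completions handled per call (mirrors MLX5_TX_COMP_MAX_CQE). */
#define TOY_TX_COMP_MAX_CQE 2u

/* Consume one software-owned CQE, if any; false when the CQ is drained. */
static bool
toy_poll_one_cqe(struct toy_txq *q)
{
	if (q->cq_pending == 0)
		return false;
	--q->cq_pending;
	++q->cq_ci;
	return true;
}

static void
toy_handle_completion(struct toy_txq *q)
{
	unsigned int count = TOY_TX_COMP_MAX_CQE;
	bool update = false;

	do {
		if (!toy_poll_one_cqe(q))
			break;          /* CQ drained: stop early */
		update = true;
	} while (--count);              /* stop after the cap even if CQEs remain */
	if (update) {
		/* Buffer freeing and the doorbell write happen once here,
		 * after the loop, rather than on every iteration. */
	}
}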

Fixes: 18a1c20044c0 ("net/mlx5: implement Tx burst template")

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_defs.h |  7 +++++++
 drivers/net/mlx5/mlx5_rxtx.c | 46 +++++++++++++++++++++++++++++---------------
 2 files changed, 38 insertions(+), 15 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 8c118d5..461e916 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -37,6 +37,13 @@
  */
 #define MLX5_TX_COMP_THRESH_INLINE_DIV (1 << 3)
 
+/*
+ * Maximal amount of normal completion CQEs
+ * processed in one call of tx_burst() routine.
+ */
+#define MLX5_TX_COMP_MAX_CQE 2u
+
+
 /* Size of per-queue MR cache array for linear search. */
 #define MLX5_MR_CACHE_N 8
 
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 007df8f..c2b93c6 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -1992,13 +1992,13 @@ enum mlx5_txcmp_code {
 mlx5_tx_handle_completion(struct mlx5_txq_data *restrict txq,
 			  unsigned int olx __rte_unused)
 {
+	unsigned int count = MLX5_TX_COMP_MAX_CQE;
 	bool update = false;
+	uint16_t tail = txq->elts_tail;
 	int ret;
 
 	do {
-		volatile struct mlx5_wqe_cseg *cseg;
 		volatile struct mlx5_cqe *cqe;
-		uint16_t tail;
 
 		cqe = &txq->cqes[txq->cq_ci & txq->cqe_m];
 		ret = check_cqe(cqe, txq->cqe_s, txq->cq_ci);
@@ -2006,19 +2006,21 @@ enum mlx5_txcmp_code {
 			if (likely(ret != MLX5_CQE_STATUS_ERR)) {
 				/* No new CQEs in completion queue. */
 				assert(ret == MLX5_CQE_STATUS_HW_OWN);
-				if (likely(update)) {
-					/* Update the consumer index. */
-					rte_compiler_barrier();
-					*txq->cq_db =
-						rte_cpu_to_be_32(txq->cq_ci);
-				}
-				return;
+				break;
 			}
 			/* Some error occurred, try to restart. */
 			rte_wmb();
 			tail = mlx5_tx_error_cqe_handle
 				(txq, (volatile struct mlx5_err_cqe *)cqe);
+			if (likely(tail != txq->elts_tail)) {
+				mlx5_tx_free_elts(txq, tail, olx);
+				assert(tail == txq->elts_tail);
+			}
+			/* Allow flushing all CQEs from the queue. */
+			count = txq->cqe_s;
 		} else {
+			volatile struct mlx5_wqe_cseg *cseg;
+
 			/* Normal transmit completion. */
 			++txq->cq_ci;
 			rte_cio_rmb();
@@ -2031,13 +2033,27 @@ enum mlx5_txcmp_code {
 		if (txq->cq_pi)
 			--txq->cq_pi;
 #endif
-		if (likely(tail != txq->elts_tail)) {
-			/* Free data buffers from elts. */
-			mlx5_tx_free_elts(txq, tail, olx);
-			assert(tail == txq->elts_tail);
-		}
 		update = true;
-	} while (true);
+	/*
+	 * We have to restrict the amount of processed CQEs
+	 * in one tx_burst routine call. The CQ may be large
+	 * and many CQEs may be updated by the NIC in one
+	 * transaction. Buffers freeing is time consuming,
+	 * multiple iterations may introduce significant
+	 * latency.
+	 */
+	} while (--count);
+	if (likely(tail != txq->elts_tail)) {
+		/* Free data buffers from elts. */
+		mlx5_tx_free_elts(txq, tail, olx);
+		assert(tail == txq->elts_tail);
+	}
+	if (likely(update)) {
+		/* Update the consumer index. */
+		rte_compiler_barrier();
+		*txq->cq_db =
+		rte_cpu_to_be_32(txq->cq_ci);
+	}
 }
 
 /**
-- 
1.8.3.1
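
For illustration of the second part of the change -- performing the
consumer index (doorbell) update once after the loop instead of on an
exit path inside it -- a simplified sketch follows; the toy_cq
structure is hypothetical, while rte_compiler_barrier() and
rte_cpu_to_be_32() are the DPDK primitives the patch itself uses:

#include <stdbool.h>
#include <stdint.h>
#include <rte_atomic.h>
#include <rte_byteorder.h>

/* Hypothetical, minimal queue view -- only the fields this sketch needs. */
struct toy_cq {
	uint16_t cq_ci;            /* consumer index advanced by the loop */
	volatile uint32_t *cq_db;  /* CQ doorbell record */
};

static void
toy_ring_cq_doorbell(struct toy_cq *cq, bool update)
{
	if (!update)
		return;
	/* Keep the compiler from reordering the doorbell store before
	 * the preceding CQE processing, as the patch does. */
	rte_compiler_barrier();
	*cq->cq_db = rte_cpu_to_be_32(cq->cq_ci);
}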


Thread overview: 7+ messages
2019-07-29 12:41 [dpdk-dev] [PATCH 0/3] net/mlx5: transmit datapath cumulative fix pack Viacheslav Ovsiienko
2019-07-29 12:41 ` Viacheslav Ovsiienko [this message]
2019-07-29 12:41 ` [dpdk-dev] [PATCH 2/3] net/mlx5: fix ConnectX-4LX minimal inline data limit Viacheslav Ovsiienko
2019-08-01  7:43   ` [dpdk-dev] [PATCH] net/mlx5: fix default minimal data inline Viacheslav Ovsiienko
2019-07-29 12:41 ` [dpdk-dev] [PATCH 3/3] net/mlx5: fix the Tx completion request generation Viacheslav Ovsiienko
2019-07-29 15:13 ` [dpdk-dev] [PATCH 0/3] net/mlx5: transmit datapath cumulative fix pack Matan Azrad
2019-07-29 15:23 ` Raslan Darawsheh
