From: Bing Zhao <bingz@mellanox.com>
To: orika@mellanox.com, viacheslavo@mellanox.com
Cc: rasland@mellanox.com, matan@mellanox.com, dev@dpdk.org, stable@dpdk.org
Subject: [dpdk-dev] [PATCH v2] net/mlx5: fix the hairpin queue capacity
Date: Wed, 19 Feb 2020 16:28:39 +0800
Message-ID: <1582100919-334723-1-git-send-email-bingz@mellanox.com>
In-Reply-To: <1581946301-457865-1-git-send-email-bingz@mellanox.com>

The hairpin TX/RX queue depth and maximal packet size used to be
hard-coded. When the firmware gets a fix or an improvement, the PMD
cannot take full advantage of it. Moreover, 32 packets for a single
queue will not guarantee good performance for hairpin flows: it
enlarges the stride size, which wastes memory for small packets. The
recommended stride size is now 64B.
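
For instance, with the old fixed value of 32 packets per queue and a
128KB buffer (an illustrative number), the stride would be 128KB / 32 =
4KB, so a 64B packet would waste over 98% of its stride.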

The hairpin queue setup parameters need to be adjusted accordingly:
1. The buffer size should accommodate the standard 9KB jumbo frame,
and hold more than one such frame for performance.
2. The number of packets in a single queue should be the maximum
supported value (total buffer size / stride size).

There is no need to request the maximal total buffer size the device
supports, because memory consumption should also be taken into
consideration.
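
As a quick sanity check of the arithmetic, here is a minimal
standalone sketch (not part of the patch; the max_wq_data value of 19
is purely illustrative, the real value comes from the
log_max_hairpin_wq_data_sz HCA attribute):

#include <stdint.h>
#include <stdio.h>

#define MLX5_HAIRPIN_QUEUE_STRIDE 6          /* log2 of the 64B stride */
#define MLX5_HAIRPIN_JUMBO_LOG_SIZE (15 + 2) /* log2 of the 128KB buffer */

int main(void)
{
	uint32_t max_wq_data = 19; /* illustrative firmware limit only */
	uint32_t log_data_sz = max_wq_data < MLX5_HAIRPIN_JUMBO_LOG_SIZE ?
			       max_wq_data : MLX5_HAIRPIN_JUMBO_LOG_SIZE;
	uint32_t log_num_packets = log_data_sz - MLX5_HAIRPIN_QUEUE_STRIDE;

	/* 2^17 B / 2^6 B = 2^11 = 2048 strides per queue, i.e. room
	 * for more than fourteen 9KB jumbo frames. */
	printf("buffer: %u KB, packets: %u\n",
	       (1u << log_data_sz) / 1024u, 1u << log_num_packets);
	return 0;
}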

Fixes: e79c9be91515 ("net/mlx5: support Rx hairpin queues")
Cc: orika@mellanox.com
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@mellanox.com>

---

v2: change the capacity parameters and update the commit message

---
 drivers/net/mlx5/mlx5_defs.h |  4 ++++
 drivers/net/mlx5/mlx5_rxq.c  | 13 +++++++++----
 drivers/net/mlx5/mlx5_txq.c  | 13 +++++++++----
 3 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 9b392ed..83ca367 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -173,6 +173,10 @@
 #define MLX5_FLOW_MREG_HNAME "MARK_COPY_TABLE"
 #define MLX5_DEFAULT_COPY_ID UINT32_MAX
 
+/* Hairpin TX/RX queue configuration parameters. */
+#define MLX5_HAIRPIN_QUEUE_STRIDE 6 /* Log2 of the 64B stride size. */
+#define MLX5_HAIRPIN_JUMBO_LOG_SIZE (15 + 2) /* Log2 of the 128KB buffer. */
+
 /* Definition of static_assert found in /usr/include/assert.h */
 #ifndef HAVE_STATIC_ASSERT
 #define static_assert _Static_assert
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index dc0fd82..8a6b410 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1268,6 +1268,7 @@
 	struct mlx5_devx_create_rq_attr attr = { 0 };
 	struct mlx5_rxq_obj *tmpl = NULL;
 	int ret = 0;
+	uint32_t max_wq_data;
 
 	MLX5_ASSERT(rxq_data);
 	MLX5_ASSERT(!rxq_ctrl->obj);
@@ -1283,11 +1284,15 @@
 	tmpl->type = MLX5_RXQ_OBJ_TYPE_DEVX_HAIRPIN;
 	tmpl->rxq_ctrl = rxq_ctrl;
 	attr.hairpin = 1;
-	/* Workaround for hairpin startup */
-	attr.wq_attr.log_hairpin_num_packets = log2above(32);
-	/* Workaround for packets larger than 1KB */
+	max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
+	/* A 9KB jumbo frame should fit, with room for more packets. */
 	attr.wq_attr.log_hairpin_data_sz =
-			priv->config.hca_attr.log_max_hairpin_wq_data_sz;
+			(max_wq_data < MLX5_HAIRPIN_JUMBO_LOG_SIZE) ?
+			max_wq_data : MLX5_HAIRPIN_JUMBO_LOG_SIZE;
+	/* Set the number of packets to the maximum value for performance. */
+	attr.wq_attr.log_hairpin_num_packets =
+			attr.wq_attr.log_hairpin_data_sz -
+			MLX5_HAIRPIN_QUEUE_STRIDE;
 	tmpl->rq = mlx5_devx_cmd_create_rq(priv->sh->ctx, &attr,
 					   rxq_ctrl->socket);
 	if (!tmpl->rq) {
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index bc13abf..2ad849a 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -493,6 +493,7 @@
 	struct mlx5_devx_create_sq_attr attr = { 0 };
 	struct mlx5_txq_obj *tmpl = NULL;
 	int ret = 0;
+	uint32_t max_wq_data;
 
 	MLX5_ASSERT(txq_data);
 	MLX5_ASSERT(!txq_ctrl->obj);
@@ -509,11 +510,15 @@
 	tmpl->txq_ctrl = txq_ctrl;
 	attr.hairpin = 1;
 	attr.tis_lst_sz = 1;
-	/* Workaround for hairpin startup */
-	attr.wq_attr.log_hairpin_num_packets = log2above(32);
-	/* Workaround for packets larger than 1KB */
+	max_wq_data = priv->config.hca_attr.log_max_hairpin_wq_data_sz;
+	/* A 9KB jumbo frame should fit, with room for more packets. */
 	attr.wq_attr.log_hairpin_data_sz =
-			priv->config.hca_attr.log_max_hairpin_wq_data_sz;
+			(max_wq_data < MLX5_HAIRPIN_JUMBO_LOG_SIZE) ?
+			max_wq_data : MLX5_HAIRPIN_JUMBO_LOG_SIZE;
+	/* Set the number of packets to the maximum value for performance. */
+	attr.wq_attr.log_hairpin_num_packets =
+			attr.wq_attr.log_hairpin_data_sz -
+			MLX5_HAIRPIN_QUEUE_STRIDE;
 	attr.tis_num = priv->sh->tis->id;
 	tmpl->sq = mlx5_devx_cmd_create_sq(priv->sh->ctx, &attr);
 	if (!tmpl->sq) {
-- 
1.8.3.1


Thread overview: 5+ messages
2020-02-17 13:31 [dpdk-dev] [PATCH] " Bing Zhao
2020-02-18  9:05 ` Ori Kam
2020-02-19  8:28 ` Bing Zhao [this message]
2020-02-19 12:32   ` [dpdk-dev] [PATCH v2] " Ori Kam
2020-02-19 14:54   ` Raslan Darawsheh
