DPDK patches and discussions
From: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
To: dev@dpdk.org
Cc: rasland@mellanox.com, matan@mellanox.com
Subject: [dpdk-dev] [PATCH v2] net/mlx5: adjust inline setting for large Tx queue sizes
Date: Tue,  1 Oct 2019 06:53:37 +0000	[thread overview]
Message-ID: <1569912817-25331-1-git-send-email-viacheslavo@mellanox.com> (raw)

The hardware may limit the maximal number of supported Tx
descriptor building blocks (WQEBBs), while the application
requires the Tx queue to accept the specified number of packets.
If the inline data feature is engaged, a packet may require more
WQEBBs and the overall number of blocks may exceed the hardware
capabilities. The application has to make a trade-off between Tx
queue size and maximal data inline size.
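The WQEBB budget trade-off can be sketched as follows, mirroring the patch's txq_calc_inline_max(); the segment sizes are illustrative assumptions standing in for the MLX5_* constants, not values taken from the driver headers:

```c
#include <assert.h>

/*
 * Sketch: divide the device WQEBB budget (max_qp_wr) evenly among the
 * requested descriptors, then convert the per-descriptor WQEBB count
 * into an inline byte limit. The sizes below are assumptions for the
 * sketch: a WQEBB is 64 bytes, the segments are 16 bytes each.
 */
#define WQE_SIZE	64u
#define CSEG_SIZE	16u
#define ESEG_SIZE	16u
#define WSEG_SIZE	16u
#define DSEG_MIN_INLINE	12u

static unsigned int
calc_inline_max(unsigned int max_qp_wr, unsigned int desc)
{
	/* WQEBBs available per descriptor. */
	unsigned int wqe_size = max_qp_wr / desc;

	if (!wqe_size)
		return 0;
	/* Subtract the non-inline segment overhead from the WQE bytes. */
	return wqe_size * WQE_SIZE -
	       CSEG_SIZE - ESEG_SIZE - WSEG_SIZE - WSEG_SIZE +
	       DSEG_MIN_INLINE;
}
```

With a large descriptor count the per-packet WQEBB share shrinks, and below one WQEBB per packet no inline data fits at all.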

If the inline settings are not requested explicitly with devarg
keys, the default values are used. This patch adjusts the applied
default values if a large Tx queue size is requested and the
default inline settings cannot be satisfied due to hardware
limitations.

The explicitly requested inline settings may be aligned (enlarged
only) by the configuration routines to provide better WQEBB filling;
this implicit alignment is subject to adjustment as well.

A warning message is emitted to the log if an adjustment happens.
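The resulting policy, sketched below under the same assumptions as the patch's txq_adjust_params(): an explicitly requested inline length that exceeds the queue capability is a hard error, while a default-derived one is clamped down with a warning:

```c
#include <assert.h>

/*
 * Sketch of the adjustment decision. 'requested_explicitly' models a
 * devarg key (txq_inline_max / txq_inline_mpw) being set by the user.
 * Returns 0 on success, -1 if the explicit request cannot be met.
 */
static int
adjust_inline(unsigned int *inlen, int requested_explicitly,
	      unsigned int max_inline)
{
	if (*inlen <= max_inline)
		return 0;		/* fits, nothing to adjust */
	if (requested_explicitly)
		return -1;		/* user asked for it: raise error */
	*inlen = max_inline;		/* default value: clamp (and warn) */
	return 0;
}
```

This is why the doc update distinguishes the two cases: defaults degrade gracefully for large queues, explicit devargs fail loudly.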

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>

---
v2: - check explicitly specified parameters
    - allow adjusting aligned parameters
    - added extra asserts
v1: http://patches.dpdk.org/patch/59735/

 doc/guides/nics/mlx5.rst    |  11 +++-
 drivers/net/mlx5/mlx5_txq.c | 157 ++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 162 insertions(+), 6 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index c2e9003..414c9c1 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -473,6 +473,11 @@ Run-time configuration
   The default ``txq_inline_max`` value is 290. The specified value may be adjusted
   by the driver in order not to exceed the limit (930 bytes) and to provide better
   WQE space filling without gaps, the adjustment is reflected in the debug log.
+  Also, the default value (290) may be decreased in run-time if a large transmit
+  queue size is requested and the hardware does not support enough descriptors;
+  in this case a warning is emitted. If the ``txq_inline_max`` key is specified
+  explicitly and the requested inline settings cannot be satisfied, an error
+  is raised.
 
 - ``txq_inline_mpw`` parameter [int]
 
@@ -494,7 +499,11 @@ Run-time configuration
   WQE space filling without gaps, the adjustment is reflected in the debug log.
   Due to multiple packets may be included to the same WQE with Enhanced Multi
   Packet Write Method and overall WQE size is limited it is not recommended to
-  specify large values for the ``txq_inline_mpw``.
+  specify large values for the ``txq_inline_mpw``. Also, the default value (268)
+  may be decreased in run-time if a large transmit queue size is requested and
+  the hardware does not support enough descriptors; in this case a warning is
+  emitted. If the ``txq_inline_mpw`` key is specified explicitly and the
+  requested inline settings cannot be satisfied, an error is raised.
 
 - ``txqs_max_vec`` parameter [int]
 
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index d9fd143..53d45e7 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -702,6 +702,38 @@ struct mlx5_txq_ibv *
 }
 
 /**
+ * Calculate the maximal inline data size for Tx queue.
+ *
+ * @param txq_ctrl
+ *   Pointer to Tx queue control structure.
+ *
+ * @return
+ *   The maximal inline data size.
+ */
+static unsigned int
+txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
+{
+	const unsigned int desc = 1 << txq_ctrl->txq.elts_n;
+	struct mlx5_priv *priv = txq_ctrl->priv;
+	unsigned int wqe_size;
+
+	wqe_size = priv->sh->device_attr.orig_attr.max_qp_wr / desc;
+	if (!wqe_size)
+		return 0;
+	/*
+	 * This calculation is derived from the source of
+	 * mlx5_calc_send_wqe() in the rdma_core library.
+	 */
+	wqe_size = wqe_size * MLX5_WQE_SIZE -
+		   MLX5_WQE_CSEG_SIZE -
+		   MLX5_WQE_ESEG_SIZE -
+		   MLX5_WSEG_SIZE -
+		   MLX5_WSEG_SIZE +
+		   MLX5_DSEG_MIN_INLINE_SIZE;
+	return wqe_size;
+}
+
+/**
  * Set Tx queue parameters from device configuration.
  *
  * @param txq_ctrl
@@ -794,8 +826,11 @@ struct mlx5_txq_ibv *
 		 * may be inlined in Ethernet Segment, align the
 		 * length accordingly to fit entire WQEBBs.
 		 */
-		temp = (inlen_send / MLX5_WQE_SIZE) * MLX5_WQE_SIZE +
-			MLX5_ESEG_MIN_INLINE_SIZE + MLX5_WQE_DSEG_SIZE;
+		temp = RTE_MAX(inlen_send,
+			       MLX5_ESEG_MIN_INLINE_SIZE + MLX5_WQE_DSEG_SIZE);
+		temp -= MLX5_ESEG_MIN_INLINE_SIZE + MLX5_WQE_DSEG_SIZE;
+		temp = RTE_ALIGN(temp, MLX5_WQE_SIZE);
+		temp += MLX5_ESEG_MIN_INLINE_SIZE + MLX5_WQE_DSEG_SIZE;
 		temp = RTE_MIN(temp, MLX5_WQE_SIZE_MAX +
 				     MLX5_ESEG_MIN_INLINE_SIZE -
 				     MLX5_WQE_CSEG_SIZE -
@@ -854,9 +889,11 @@ struct mlx5_txq_ibv *
 		 * may be inlined in Data Segment, align the
 		 * length accordingly to fit entire WQEBBs.
 		 */
-		temp = (inlen_empw + MLX5_WQE_SIZE - 1) / MLX5_WQE_SIZE;
-		temp = temp * MLX5_WQE_SIZE +
-		       MLX5_DSEG_MIN_INLINE_SIZE - MLX5_WQE_DSEG_SIZE;
+		temp = RTE_MAX(inlen_empw,
+			       MLX5_WQE_SIZE + MLX5_DSEG_MIN_INLINE_SIZE);
+		temp -= MLX5_DSEG_MIN_INLINE_SIZE;
+		temp = RTE_ALIGN(temp, MLX5_WQE_SIZE);
+		temp += MLX5_DSEG_MIN_INLINE_SIZE;
 		temp = RTE_MIN(temp, MLX5_WQE_SIZE_MAX +
 				     MLX5_DSEG_MIN_INLINE_SIZE -
 				     MLX5_WQE_CSEG_SIZE -
@@ -893,6 +930,114 @@ struct mlx5_txq_ibv *
 }
 
 /**
+ * Adjust Tx queue data inline parameters for large queue sizes.
+ * The data inline feature requires multiple WQEs to fit the packets,
+ * and if a large number of Tx descriptors is requested by the application
+ * the total WQE amount may exceed the hardware capabilities. If the
+ * default inline settings are used, we can try to adjust them to meet
+ * the hardware requirements without exceeding the queue size.
+ *
+ * @param txq_ctrl
+ *   Pointer to Tx queue control structure.
+ *
+ * @return
+ *   Zero on success, otherwise the parameters cannot be adjusted.
+ */
+static int
+txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
+{
+	struct mlx5_priv *priv = txq_ctrl->priv;
+	struct mlx5_dev_config *config = &priv->config;
+	unsigned int max_inline;
+
+	max_inline = txq_calc_inline_max(txq_ctrl);
+	if (!txq_ctrl->txq.inlen_send) {
+		/*
+		 * Inline data feature is not engaged at all.
+		 * There is nothing to adjust.
+		 */
+		return 0;
+	}
+	if (txq_ctrl->max_inline_data <= max_inline) {
+		/*
+		 * The requested inline data length does not
+		 * exceed queue capabilities.
+		 */
+		return 0;
+	}
+	if (txq_ctrl->txq.inlen_mode > max_inline) {
+		DRV_LOG(ERR,
+			"minimal data inline requirements (%u) are not"
+			" satisfied (%u) on port %u, try the smaller"
+			" Tx queue size (%d)",
+			txq_ctrl->txq.inlen_mode, max_inline,
+			priv->dev_data->port_id,
+			priv->sh->device_attr.orig_attr.max_qp_wr);
+		goto error;
+	}
+	if (txq_ctrl->txq.inlen_send > max_inline &&
+	    config->txq_inline_max != MLX5_ARG_UNSET &&
+	    config->txq_inline_max > (int)max_inline) {
+		DRV_LOG(ERR,
+			"txq_inline_max requirements (%u) are not"
+			" satisfied (%u) on port %u, try the smaller"
+			" Tx queue size (%d)",
+			txq_ctrl->txq.inlen_send, max_inline,
+			priv->dev_data->port_id,
+			priv->sh->device_attr.orig_attr.max_qp_wr);
+		goto error;
+	}
+	if (txq_ctrl->txq.inlen_empw > max_inline &&
+	    config->txq_inline_mpw != MLX5_ARG_UNSET &&
+	    config->txq_inline_mpw > (int)max_inline) {
+		DRV_LOG(ERR,
+			"txq_inline_mpw requirements (%u) are not"
+			" satisfied (%u) on port %u, try the smaller"
+			" Tx queue size (%d)",
+			txq_ctrl->txq.inlen_empw, max_inline,
+			priv->dev_data->port_id,
+			priv->sh->device_attr.orig_attr.max_qp_wr);
+		goto error;
+	}
+	if (txq_ctrl->txq.tso_en && max_inline < MLX5_MAX_TSO_HEADER) {
+		DRV_LOG(ERR,
+			"tso header inline requirements (%u) are not"
+			" satisfied (%u) on port %u, try the smaller"
+			" Tx queue size (%d)",
+			MLX5_MAX_TSO_HEADER, max_inline,
+			priv->dev_data->port_id,
+			priv->sh->device_attr.orig_attr.max_qp_wr);
+		goto error;
+	}
+	if (txq_ctrl->txq.inlen_send > max_inline) {
+		DRV_LOG(WARNING,
+			"adjust txq_inline_max (%u->%u)"
+			" due to large Tx queue on port %u",
+			txq_ctrl->txq.inlen_send, max_inline,
+			priv->dev_data->port_id);
+		txq_ctrl->txq.inlen_send = max_inline;
+	}
+	if (txq_ctrl->txq.inlen_empw > max_inline) {
+		DRV_LOG(WARNING,
+			"adjust txq_inline_mpw (%u->%u)"
+			" due to large Tx queue on port %u",
+			txq_ctrl->txq.inlen_empw, max_inline,
+			priv->dev_data->port_id);
+		txq_ctrl->txq.inlen_empw = max_inline;
+	}
+	txq_ctrl->max_inline_data = RTE_MAX(txq_ctrl->txq.inlen_send,
+					    txq_ctrl->txq.inlen_empw);
+	assert(txq_ctrl->max_inline_data <= max_inline);
+	assert(txq_ctrl->txq.inlen_mode <= max_inline);
+	assert(txq_ctrl->txq.inlen_mode <= txq_ctrl->txq.inlen_send);
+	assert(txq_ctrl->txq.inlen_mode <= txq_ctrl->txq.inlen_empw);
+	return 0;
+error:
+	rte_errno = ENOMEM;
+	return -ENOMEM;
+}
+
+/**
  * Create a DPDK Tx queue.
  *
  * @param dev
@@ -942,6 +1087,8 @@ struct mlx5_txq_ctrl *
 	tmpl->txq.port_id = dev->data->port_id;
 	tmpl->txq.idx = idx;
 	txq_set_params(tmpl);
+	if (txq_adjust_params(tmpl))
+		goto error;
 	if (txq_calc_wqebb_cnt(tmpl) >
 	    priv->sh->device_attr.orig_attr.max_qp_wr) {
 		DRV_LOG(ERR,
-- 
1.8.3.1



Thread overview: 2+ messages
2019-10-01  6:53 Viacheslav Ovsiienko [this message]
2019-10-08  9:33 ` Raslan Darawsheh
