From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: dev@dpdk.org
Cc: rasland@nvidia.com, matan@nvidia.com, orika@nvidia.com,
thomas@monjalon.net, stable@dpdk.org
Subject: [dpdk-dev] [PATCH] net/mlx5: fix Tx queue size created with DevX
Date: Thu, 4 Feb 2021 14:04:09 +0200 [thread overview]
Message-ID: <20210204120409.1194-1-viacheslavo@nvidia.com> (raw)
The number of descriptors specified at queue creation
implies the queue should be able to hold that many
packets being sent. Typically one packet takes one
queue descriptor (WQE) to be handled. With the inline
data option enabled, one packet might require multiple
WQEs to embrace the inline data, and the overall queue
size (the number of queue descriptors) should be
adjusted accordingly.
In the mlx5 PMD the queues can be created either via Verbs,
using the rdma-core library, or via DevX as a direct
kernel/firmware call. rdma-core adjusts the queue size
internally, depending on the TSO and inline settings; the DevX
approach missed this adjustment, causing a queue size
discrepancy and performance variations.
The patch adjusts the Tx queue size for the DevX approach
in the same way as it is done in the rdma-core implementation.
Fixes: 86d259cec852 ("net/mlx5: separate Tx queue object creations")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_devx.c | 21 +++++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 935cbd03ab..ef34c38580 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -1036,7 +1036,7 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
};
void *reg_addr;
uint32_t cqe_n, log_desc_n;
- uint32_t wqe_n;
+ uint32_t wqe_n, wqe_size;
int ret = 0;
MLX5_ASSERT(txq_data);
@@ -1069,8 +1069,25 @@ mlx5_txq_devx_obj_new(struct rte_eth_dev *dev, uint16_t idx)
txq_data->cq_pi = 0;
txq_data->cq_db = txq_obj->cq_obj.db_rec;
*txq_data->cq_db = 0;
+ /*
+ * Adjust the number of WQEs depending on inline settings.
+ * The number of descriptors should be enough to handle
+ * the specified number of packets. If queue is being created
+ * with Verbs the rdma-core does queue size adjustment
+ * internally in the mlx5_calc_sq_size(), we do the same
+ * for the queue being created with DevX at this point.
+ */
+ wqe_size = txq_data->tso_en ? txq_ctrl->max_tso_header : 0;
+ wqe_size += sizeof(struct mlx5_wqe_cseg) +
+ sizeof(struct mlx5_wqe_eseg) +
+ sizeof(struct mlx5_wqe_dseg);
+ if (txq_data->inlen_send)
+ wqe_size = RTE_MAX(wqe_size, txq_data->inlen_send +
+ sizeof(struct mlx5_wqe_cseg) +
+ sizeof(struct mlx5_wqe_eseg));
+ wqe_size = RTE_ALIGN_CEIL(wqe_size, MLX5_WQE_SIZE) / MLX5_WQE_SIZE;
/* Create Send Queue object with DevX. */
- wqe_n = RTE_MIN(1UL << txq_data->elts_n,
+ wqe_n = RTE_MIN((1UL << txq_data->elts_n) * wqe_size,
(uint32_t)priv->sh->device_attr.max_qp_wr);
log_desc_n = log2above(wqe_n);
ret = mlx5_txq_create_devx_sq_resources(dev, idx, log_desc_n);
--
2.18.1