From: "Lukáš Šišmiš" <sismis@cesnet.cz>
To: Viacheslav Ovsiienko <viacheslavo@nvidia.com>, dev@dpdk.org
Cc: rasland@nvidia.com, matan@nvidia.com, suanmingm@nvidia.com,
	dsosnowski@nvidia.com
Subject: Re: [PATCH v2] net/mlx5: mitigate the Tx queue parameter adjustment
Date: Fri, 25 Apr 2025 13:54:47 +0200
Message-ID: <a7d47175-214e-41c8-9312-712add4517dc@cesnet.cz>
In-Reply-To: <20250424133128.133900-1-viacheslavo@nvidia.com>


Hello all,

I tested the v1 patch on a CX-4 card and I can confirm that my application boots now!
For traceability, I am adding a link to the original discussion thread:
https://mails.dpdk.org/archives/users/2025-April/008242.html

This is probably a separate problem, but the application still outputs these logs:

Config: dpdk: 0000:b3:00.0: setting up TX queue 0: tx_desc: 32768 tx: offloads: 0x10000 hthresh: 0 pthresh: 0 wthresh: 0 tx_free_thresh: 0 tx_rs_thresh: 0 txq_deferred_start: 0 [DeviceConfigureQueues:runmode-dpdk.c:1487]
mlx5_net: adjust txq_inline_max (290->18) due to large Tx queue on port 1
Config: dpdk: 0000:b3:00.0: setting up TX queue 1: tx_desc: 32768 tx: offloads: 0x10000 hthresh: 0 pthresh: 0 wthresh: 0 tx_free_thresh: 0 tx_rs_thresh: 0 txq_deferred_start: 0 [DeviceConfigureQueues:runmode-dpdk.c:1487]
mlx5_net: adjust txq_inline_max (290->18) due to large Tx queue on port 1
Config: dpdk: 0000:b3:00.0: setting up TX queue 2: tx_desc: 32768 tx: offloads: 0x10000 hthresh: 0 pthresh: 0 wthresh: 0 tx_free_thresh: 0 tx_rs_thresh: 0 txq_deferred_start: 0 [DeviceConfigureQueues:runmode-dpdk.c:1487]
mlx5_net: adjust txq_inline_max (290->18) due to large Tx queue on port 1
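
For context, here is a minimal sketch of the kind of setup call that
produces these logs. The descriptor count (32768) and the offload mask
(0x10000, which I read as RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) are taken
from the log output above; the function name and everything else is
assumed, this is not the application's actual code:

	#include <rte_ethdev.h>

	static int
	setup_tx_queue(uint16_t port_id, uint16_t queue_id)
	{
		struct rte_eth_dev_info dev_info;
		struct rte_eth_txconf txconf;
		int ret;

		ret = rte_eth_dev_info_get(port_id, &dev_info);
		if (ret != 0)
			return ret;
		txconf = dev_info.default_txconf;
		txconf.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
		/* 32768 descriptors is exactly the ConnectX 32K WQE limit,
		 * so any extra WQEs reserved for inline data push the queue
		 * over the limit and the PMD shrinks txq_inline_max, which
		 * is what the warnings above report. */
		return rte_eth_tx_queue_setup(port_id, queue_id, 32768,
					      rte_eth_dev_socket_id(port_id),
					      &txconf);
	}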

Is there any way I can avoid these logs?
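
In the meantime, I assume (untested) that the warnings could be
silenced by raising the log threshold for the mlx5 PMD, either from
code:

	#include <rte_log.h>

	/* "pmd.net.mlx5" is my assumption for the mlx5 logtype name;
	 * the ERR threshold hides WARNING-level adjustment messages. */
	rte_log_set_level_pattern("pmd.net.mlx5*", RTE_LOG_ERR);

or with something like --log-level=pmd.net.mlx5*:error on the EAL
command line.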

Thank you.

Lukas

On 4/24/25 15:31, Viacheslav Ovsiienko wrote:
> The DPDK API rte_eth_tx_queue_setup() has a parameter nb_tx_desc
> specifying the desired queue capacity, measured in packets.
>
> The ConnectX NIC series has a hardware-imposed queue size
> limit of 32K WQEs (packet hardware descriptors). Typically,
> one packet requires one WQE to be sent.
>
> There is a special offload option, data-inlining, to improve
> performance for small packets. Also, NICs in some configurations
> require a minimum amount of inline data for the steering engine
> to operate correctly.
>
> In the case of inline data, more than one WQE might be required
> to send a single packet. The mlx5 PMD takes this into account
> and adjusts the number of queue WQEs accordingly.
>
> If the requested queue capacity can't be satisfied due to
> the hardware queue size limit, the mlx5 PMD rejected the queue
> creation, causing an unresolvable application failure.
>
> The patch provides the following:
>
> - Fixes the calculation of the number of WQEs required
>    to send a single packet with inline data, making it more precise
>    and extending the trouble-free operating range.
>
> - If the requested queue capacity can't be satisfied due to the WQE
>    number adjustment for inline data, it no longer causes a severe
>    error. Instead, a warning message is emitted and the queue
>    is created with the maximum available size, reporting success.
>
>    Please note that the inline data size depends on many options
>    (NIC configuration, queue offload flags, packet offload flags,
>     packet size, etc.), so the actual queue capacity might not be
>     impacted at all.
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
>
> ---
> v2: diagnostics messages made less wordy
> ---
>   drivers/net/mlx5/mlx5_txq.c | 74 +++++++++++--------------------------
>   1 file changed, 22 insertions(+), 52 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 3e93517323..eebf3c2534 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -731,7 +731,7 @@ txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
>   	if (!wqe_size)
>   		return 0;
>   	/*
> -	 * This calculation is derived from tthe source of
> +	 * This calculation is derived from the source of
>   	 * mlx5_calc_send_wqe() in rdma_core library.
>   	 */
>   	wqe_size = wqe_size * MLX5_WQE_SIZE -
> @@ -739,7 +739,7 @@ txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
>   		   MLX5_WQE_ESEG_SIZE -
>   		   MLX5_WSEG_SIZE -
>   		   MLX5_WSEG_SIZE +
> -		   MLX5_DSEG_MIN_INLINE_SIZE;
> +		   MLX5_ESEG_MIN_INLINE_SIZE;
>   	return wqe_size;
>   }
>   
> @@ -964,11 +964,8 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
>    *
>    * @param txq_ctrl
>    *   Pointer to Tx queue control structure.
> - *
> - * @return
> - *   Zero on success, otherwise the parameters can not be adjusted.
>    */
> -static int
> +static void
>   txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
>   {
>   	struct mlx5_priv *priv = txq_ctrl->priv;
> @@ -981,82 +978,56 @@ txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
>   		 * Inline data feature is not engaged at all.
>   		 * There is nothing to adjust.
>   		 */
> -		return 0;
> +		return;
>   	}
>   	if (txq_ctrl->max_inline_data <= max_inline) {
>   		/*
>   		 * The requested inline data length does not
>   		 * exceed queue capabilities.
>   		 */
> -		return 0;
> +		return;
>   	}
>   	if (txq_ctrl->txq.inlen_mode > max_inline) {
> -		DRV_LOG(ERR,
> -			"minimal data inline requirements (%u) are not"
> -			" satisfied (%u) on port %u, try the smaller"
> -			" Tx queue size (%d)",
> -			txq_ctrl->txq.inlen_mode, max_inline,
> -			priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> -		goto error;
> +		DRV_LOG(WARNING,
> +			"minimal data inline requirements (%u) are not satisfied (%u) on port %u",
> +			txq_ctrl->txq.inlen_mode, max_inline, priv->dev_data->port_id);
>   	}
>   	if (txq_ctrl->txq.inlen_send > max_inline &&
>   	    config->txq_inline_max != MLX5_ARG_UNSET &&
>   	    config->txq_inline_max > (int)max_inline) {
> -		DRV_LOG(ERR,
> -			"txq_inline_max requirements (%u) are not"
> -			" satisfied (%u) on port %u, try the smaller"
> -			" Tx queue size (%d)",
> -			txq_ctrl->txq.inlen_send, max_inline,
> -			priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> -		goto error;
> +		DRV_LOG(WARNING,
> +			"txq_inline_max requirements (%u) are not satisfied (%u) on port %u",
> +			txq_ctrl->txq.inlen_send, max_inline, priv->dev_data->port_id);
>   	}
>   	if (txq_ctrl->txq.inlen_empw > max_inline &&
>   	    config->txq_inline_mpw != MLX5_ARG_UNSET &&
>   	    config->txq_inline_mpw > (int)max_inline) {
> -		DRV_LOG(ERR,
> -			"txq_inline_mpw requirements (%u) are not"
> -			" satisfied (%u) on port %u, try the smaller"
> -			" Tx queue size (%d)",
> -			txq_ctrl->txq.inlen_empw, max_inline,
> -			priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> -		goto error;
> +		DRV_LOG(WARNING,
> +			"txq_inline_mpw requirements (%u) are not satisfied (%u) on port %u",
> +			txq_ctrl->txq.inlen_empw, max_inline, priv->dev_data->port_id);
>   	}
>   	if (txq_ctrl->txq.tso_en && max_inline < MLX5_MAX_TSO_HEADER) {
> -		DRV_LOG(ERR,
> -			"tso header inline requirements (%u) are not"
> -			" satisfied (%u) on port %u, try the smaller"
> -			" Tx queue size (%d)",
> -			MLX5_MAX_TSO_HEADER, max_inline,
> -			priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> -		goto error;
> +		DRV_LOG(WARNING,
> +			"tso header inline requirements (%u) are not satisfied (%u) on port %u",
> +			MLX5_MAX_TSO_HEADER, max_inline, priv->dev_data->port_id);
>   	}
>   	if (txq_ctrl->txq.inlen_send > max_inline) {
>   		DRV_LOG(WARNING,
> -			"adjust txq_inline_max (%u->%u)"
> -			" due to large Tx queue on port %u",
> -			txq_ctrl->txq.inlen_send, max_inline,
> -			priv->dev_data->port_id);
> +			"adjust txq_inline_max (%u->%u) due to large Tx queue on port %u",
> +			txq_ctrl->txq.inlen_send, max_inline, priv->dev_data->port_id);
>   		txq_ctrl->txq.inlen_send = max_inline;
>   	}
>   	if (txq_ctrl->txq.inlen_empw > max_inline) {
>   		DRV_LOG(WARNING,
> -			"adjust txq_inline_mpw (%u->%u)"
> -			"due to large Tx queue on port %u",
> -			txq_ctrl->txq.inlen_empw, max_inline,
> -			priv->dev_data->port_id);
> +			"adjust txq_inline_mpw (%u->%u) due to large Tx queue on port %u",
> +			txq_ctrl->txq.inlen_empw, max_inline, priv->dev_data->port_id);
>   		txq_ctrl->txq.inlen_empw = max_inline;
>   	}
>   	txq_ctrl->max_inline_data = RTE_MAX(txq_ctrl->txq.inlen_send,
>   					    txq_ctrl->txq.inlen_empw);
> -	MLX5_ASSERT(txq_ctrl->max_inline_data <= max_inline);
> -	MLX5_ASSERT(txq_ctrl->txq.inlen_mode <= max_inline);
>   	MLX5_ASSERT(txq_ctrl->txq.inlen_mode <= txq_ctrl->txq.inlen_send);
>   	MLX5_ASSERT(txq_ctrl->txq.inlen_mode <= txq_ctrl->txq.inlen_empw ||
>   		    !txq_ctrl->txq.inlen_empw);
> -	return 0;
> -error:
> -	rte_errno = ENOMEM;
> -	return -ENOMEM;
>   }
>   
>   /**
> @@ -1105,8 +1076,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
>   	tmpl->txq.port_id = dev->data->port_id;
>   	tmpl->txq.idx = idx;
>   	txq_set_params(tmpl);
> -	if (txq_adjust_params(tmpl))
> -		goto error;
> +	txq_adjust_params(tmpl);
>   	if (txq_calc_wqebb_cnt(tmpl) >
>   	    priv->sh->dev_cap.max_qp_wr) {
>   		DRV_LOG(ERR,

