* Re: [PATCH v2] net/mlx5: mitigate the Tx queue parameter adjustment
From: Lukáš Šišmiš @ 2025-04-25 11:54 UTC (permalink / raw)
To: Viacheslav Ovsiienko, dev; +Cc: rasland, matan, suanmingm, dsosnowski
Hello all,
I tested the v1 patch on a CX-4 card and can confirm that my application now boots!
For traceability, I am adding the original discussion thread:
https://mails.dpdk.org/archives/users/2025-April/008242.html
This is probably a separate issue, but the application still emits these logs (tx_desc is 32768, i.e. the full 32K hardware queue size, which is presumably why the inline adjustment kicks in):
Config: dpdk: 0000:b3:00.0: setting up TX queue 0: tx_desc: 32768 tx: offloads: 0x10000 hthresh: 0 pthresh: 0 wthresh: 0 tx_free_thresh: 0 tx_rs_thresh: 0 txq_deferred_start: 0 [DeviceConfigureQueues:runmode-dpdk.c:1487]
mlx5_net: adjust txq_inline_max (290->18) due to large Tx queue on port 1
Config: dpdk: 0000:b3:00.0: setting up TX queue 1: tx_desc: 32768 tx: offloads: 0x10000 hthresh: 0 pthresh: 0 wthresh: 0 tx_free_thresh: 0 tx_rs_thresh: 0 txq_deferred_start: 0 [DeviceConfigureQueues:runmode-dpdk.c:1487]
mlx5_net: adjust txq_inline_max (290->18) due to large Tx queue on port 1
Config: dpdk: 0000:b3:00.0: setting up TX queue 2: tx_desc: 32768 tx: offloads: 0x10000 hthresh: 0 pthresh: 0 wthresh: 0 tx_free_thresh: 0 tx_rs_thresh: 0 txq_deferred_start: 0 [DeviceConfigureQueues:runmode-dpdk.c:1487]
mlx5_net: adjust txq_inline_max (290->18) due to large Tx queue on port 1
Is there any way I can avoid these logs?
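For instance, would passing the already-adjusted value explicitly
through the mlx5 devargs keep the PMD quiet? An untested sketch,
assuming txq_inline_max is the right knob here:

    # EAL allowlist entry pinning the inline limit up front
    -a 0000:b3:00.0,txq_inline_max=18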
Thank you.
Lukas
On 4/24/25 15:31, Viacheslav Ovsiienko wrote:
> The DPDK API rte_eth_tx_queue_setup() has a parameter nb_tx_desc
> specifying the desired queue capacity, measured in packets.
>
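> For illustration, a minimal sketch of the affected call as an
> application might issue it (port_id and the descriptor count are
> arbitrary; this snippet is not part of the patch):
>
>     #include <rte_ethdev.h>
>
>     /* Request a Tx queue sized for 32768 packet descriptors. */
>     struct rte_eth_dev_info dev_info;
>     int ret;
>
>     ret = rte_eth_dev_info_get(port_id, &dev_info);
>     if (ret == 0)
>             ret = rte_eth_tx_queue_setup(port_id, 0 /* queue id */,
>                                          32768 /* nb_tx_desc */,
>                                          rte_socket_id(),
>                                          &dev_info.default_txconf);
>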
> The ConnectX NIC series has a hardware-imposed queue size
> limit of 32K WQEs (packet hardware descriptors). Typically,
> one packet requires one WQE to be sent.
>
> There is a special offload option, data-inlining, to improve
> performance for small packets. Also, NICs in some configurations
> require a minimum amount of inline data for the steering engine
> to operate correctly.
>
> In the case of inline data, more than one WQE might be required
> to send a single packet. The mlx5 PMD takes this into account
> and adjusts the number of queue WQEs accordingly.
>
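> As a rough illustration (a hypothetical sketch, not the PMD's
> exact formula), the number of 64-byte WQE basic blocks (WQEBBs)
> consumed by one packet grows with the inline data length:
>
>     /* Base segments plus inline data rounded up to whole
>      * 64B basic blocks; constants are illustrative only. */
>     static unsigned int
>     wqebbs_per_packet(unsigned int inline_len)
>     {
>             const unsigned int wqebb_size = 64;
>             const unsigned int base_wqebbs = 1;
>
>             return base_wqebbs +
>                    (inline_len + wqebb_size - 1) / wqebb_size;
>     }
>
> So a queue sized for N packets may need more than N WQEBBs once
> inlining is engaged, which is what can hit the 32K limit.
>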
> If the requested queue capacity could not be satisfied due to
> the hardware queue size limit, the mlx5 PMD rejected the queue
> creation, causing an unresolvable application failure.
>
> The patch provides the following:
>
> - Fixes the calculation of the number of WQEs required to send
> a single packet with inline data, making it more precise and
> extending the range of configurations that work without adjustment.
>
> - If the requested queue capacity cannot be satisfied due to the
> WQE number adjustment for inline data, it no longer causes a severe
> error. Instead, a warning message is emitted, and the queue
> is created with the maximum available size, and success is reported.
>
> Please note that the inline data size depends on many options
> (NIC configuration, queue offload flags, packet offload flags,
> packet size, etc.), so the actual queue capacity might not be
> impacted at all.
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
>
> ---
> v2: diagnostic messages made less wordy
> ---
> drivers/net/mlx5/mlx5_txq.c | 74 +++++++++++--------------------------
> 1 file changed, 22 insertions(+), 52 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> index 3e93517323..eebf3c2534 100644
> --- a/drivers/net/mlx5/mlx5_txq.c
> +++ b/drivers/net/mlx5/mlx5_txq.c
> @@ -731,7 +731,7 @@ txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
> if (!wqe_size)
> return 0;
> /*
> - * This calculation is derived from tthe source of
> + * This calculation is derived from the source of
> * mlx5_calc_send_wqe() in rdma_core library.
> */
> wqe_size = wqe_size * MLX5_WQE_SIZE -
> @@ -739,7 +739,7 @@ txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
> MLX5_WQE_ESEG_SIZE -
> MLX5_WSEG_SIZE -
> MLX5_WSEG_SIZE +
> - MLX5_DSEG_MIN_INLINE_SIZE;
> + MLX5_ESEG_MIN_INLINE_SIZE;
> return wqe_size;
> }
>
> @@ -964,11 +964,8 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
> *
> * @param txq_ctrl
> * Pointer to Tx queue control structure.
> - *
> - * @return
> - * Zero on success, otherwise the parameters can not be adjusted.
> */
> -static int
> +static void
> txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
> {
> struct mlx5_priv *priv = txq_ctrl->priv;
> @@ -981,82 +978,56 @@ txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
> * Inline data feature is not engaged at all.
> * There is nothing to adjust.
> */
> - return 0;
> + return;
> }
> if (txq_ctrl->max_inline_data <= max_inline) {
> /*
> * The requested inline data length does not
> * exceed queue capabilities.
> */
> - return 0;
> + return;
> }
> if (txq_ctrl->txq.inlen_mode > max_inline) {
> - DRV_LOG(ERR,
> - "minimal data inline requirements (%u) are not"
> - " satisfied (%u) on port %u, try the smaller"
> - " Tx queue size (%d)",
> - txq_ctrl->txq.inlen_mode, max_inline,
> - priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> - goto error;
> + DRV_LOG(WARNING,
> + "minimal data inline requirements (%u) are not satisfied (%u) on port %u",
> + txq_ctrl->txq.inlen_mode, max_inline, priv->dev_data->port_id);
> }
> if (txq_ctrl->txq.inlen_send > max_inline &&
> config->txq_inline_max != MLX5_ARG_UNSET &&
> config->txq_inline_max > (int)max_inline) {
> - DRV_LOG(ERR,
> - "txq_inline_max requirements (%u) are not"
> - " satisfied (%u) on port %u, try the smaller"
> - " Tx queue size (%d)",
> - txq_ctrl->txq.inlen_send, max_inline,
> - priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> - goto error;
> + DRV_LOG(WARNING,
> + "txq_inline_max requirements (%u) are not satisfied (%u) on port %u",
> + txq_ctrl->txq.inlen_send, max_inline, priv->dev_data->port_id);
> }
> if (txq_ctrl->txq.inlen_empw > max_inline &&
> config->txq_inline_mpw != MLX5_ARG_UNSET &&
> config->txq_inline_mpw > (int)max_inline) {
> - DRV_LOG(ERR,
> - "txq_inline_mpw requirements (%u) are not"
> - " satisfied (%u) on port %u, try the smaller"
> - " Tx queue size (%d)",
> - txq_ctrl->txq.inlen_empw, max_inline,
> - priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> - goto error;
> + DRV_LOG(WARNING,
> + "txq_inline_mpw requirements (%u) are not satisfied (%u) on port %u",
> + txq_ctrl->txq.inlen_empw, max_inline, priv->dev_data->port_id);
> }
> if (txq_ctrl->txq.tso_en && max_inline < MLX5_MAX_TSO_HEADER) {
> - DRV_LOG(ERR,
> - "tso header inline requirements (%u) are not"
> - " satisfied (%u) on port %u, try the smaller"
> - " Tx queue size (%d)",
> - MLX5_MAX_TSO_HEADER, max_inline,
> - priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> - goto error;
> + DRV_LOG(WARNING,
> + "tso header inline requirements (%u) are not satisfied (%u) on port %u",
> + MLX5_MAX_TSO_HEADER, max_inline, priv->dev_data->port_id);
> }
> if (txq_ctrl->txq.inlen_send > max_inline) {
> DRV_LOG(WARNING,
> - "adjust txq_inline_max (%u->%u)"
> - " due to large Tx queue on port %u",
> - txq_ctrl->txq.inlen_send, max_inline,
> - priv->dev_data->port_id);
> + "adjust txq_inline_max (%u->%u) due to large Tx queue on port %u",
> + txq_ctrl->txq.inlen_send, max_inline, priv->dev_data->port_id);
> txq_ctrl->txq.inlen_send = max_inline;
> }
> if (txq_ctrl->txq.inlen_empw > max_inline) {
> DRV_LOG(WARNING,
> - "adjust txq_inline_mpw (%u->%u)"
> - "due to large Tx queue on port %u",
> - txq_ctrl->txq.inlen_empw, max_inline,
> - priv->dev_data->port_id);
> + "adjust txq_inline_mpw (%u->%u) due to large Tx queue on port %u",
> + txq_ctrl->txq.inlen_empw, max_inline, priv->dev_data->port_id);
> txq_ctrl->txq.inlen_empw = max_inline;
> }
> txq_ctrl->max_inline_data = RTE_MAX(txq_ctrl->txq.inlen_send,
> txq_ctrl->txq.inlen_empw);
> - MLX5_ASSERT(txq_ctrl->max_inline_data <= max_inline);
> - MLX5_ASSERT(txq_ctrl->txq.inlen_mode <= max_inline);
> MLX5_ASSERT(txq_ctrl->txq.inlen_mode <= txq_ctrl->txq.inlen_send);
> MLX5_ASSERT(txq_ctrl->txq.inlen_mode <= txq_ctrl->txq.inlen_empw ||
> !txq_ctrl->txq.inlen_empw);
> - return 0;
> -error:
> - rte_errno = ENOMEM;
> - return -ENOMEM;
> }
>
> /**
> @@ -1105,8 +1076,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> tmpl->txq.port_id = dev->data->port_id;
> tmpl->txq.idx = idx;
> txq_set_params(tmpl);
> - if (txq_adjust_params(tmpl))
> - goto error;
> + txq_adjust_params(tmpl);
> if (txq_calc_wqebb_cnt(tmpl) >
> priv->sh->dev_cap.max_qp_wr) {
> DRV_LOG(ERR,