From: Slava Ovsiienko <viacheslavo@nvidia.com>
To: "Lukáš Šišmiš" <sismis@cesnet.cz>, "dev@dpdk.org" <dev@dpdk.org>
Cc: Raslan Darawsheh <rasland@nvidia.com>,
Matan Azrad <matan@nvidia.com>,
Suanming Mou <suanmingm@nvidia.com>,
Dariusz Sosnowski <dsosnowski@nvidia.com>
Subject: RE: [PATCH v2] net/mlx5: mitigate the Tx queue parameter adjustment
Date: Tue, 29 Apr 2025 11:30:55 +0000 [thread overview]
Message-ID: <MN6PR12MB85674229B5B04CE09CCCA635DF802@MN6PR12MB8567.namprd12.prod.outlook.com> (raw)
In-Reply-To: <a7d47175-214e-41c8-9312-712add4517dc@cesnet.cz>
Hi, Lukáš
I think this message is important to notify users that something is not going as expected.
Either the queue size or the inline data length should be adjusted by the user/developer.
To suppress the message in your specific case (with the maximum queue size), the devarg "txq_inline_max=18"
can be used. So, the message encourages us to recognize that the inline data length is truncated.
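For example (the exact invocation depends on your EAL setup, so take this only as an illustration),
the devarg can be appended to the device's allow-list entry in the EAL arguments:

    -a 0000:b3:00.0,txq_inline_max=18

As a rough sketch of where the value 18 comes from (assuming the usual 64-byte WQEBB with
16-byte control, Ethernet and data segments): with 32768 descriptors on a device limited to
32768 WQEBBs, each packet gets exactly one WQEBB, and after the control, Ethernet and two
data segments only the minimal 18 inline bytes carried inside the Ethernet segment remain,
hence the reported 290->18 adjustment.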
With best regards,
Slava
> -----Original Message-----
> From: Lukáš Šišmiš <sismis@cesnet.cz>
> Sent: Friday, April 25, 2025 2:55 PM
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; dev@dpdk.org
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Suanming Mou <suanmingm@nvidia.com>; Dariusz
> Sosnowski <dsosnowski@nvidia.com>
> Subject: Re: [PATCH v2] net/mlx5: mitigate the Tx queue parameter
> adjustment
>
> Hello all,
>
> I tested the v1 patch on a CX-4 card and I can confirm my application boots now!
> For traceability, I am adding the original discussion thread:
> https://mails.dpdk.org/archives/users/2025-April/008242.html
>
> Probably some other problem, but it still outputs these logs:
>
> Config: dpdk: 0000:b3:00.0: setting up TX queue 0: tx_desc: 32768 tx:
> offloads: 0x10000 hthresh: 0 pthresh: 0 wthresh: 0 tx_free_thresh: 0
> tx_rs_thresh: 0 txq_deferred_start: 0
> [DeviceConfigureQueues:runmode-dpdk.c:1487]
> mlx5_net: adjust txq_inline_max (290->18) due to large Tx queue on port 1
> Config: dpdk: 0000:b3:00.0: setting up TX queue 1: tx_desc: 32768 tx:
> offloads: 0x10000 hthresh: 0 pthresh: 0 wthresh: 0 tx_free_thresh: 0
> tx_rs_thresh: 0 txq_deferred_start: 0
> [DeviceConfigureQueues:runmode-dpdk.c:1487]
> mlx5_net: adjust txq_inline_max (290->18) due to large Tx queue on port 1
> Config: dpdk: 0000:b3:00.0: setting up TX queue 2: tx_desc: 32768 tx:
> offloads: 0x10000 hthresh: 0 pthresh: 0 wthresh: 0 tx_free_thresh: 0
> tx_rs_thresh: 0 txq_deferred_start: 0
> [DeviceConfigureQueues:runmode-dpdk.c:1487]
> mlx5_net: adjust txq_inline_max (290->18) due to large Tx queue on port 1
>
> Is there any way I can avoid these logs?
>
> Thank you.
>
> Lukas
>
> On 4/24/25 15:31, Viacheslav Ovsiienko wrote:
> > The DPDK API rte_eth_tx_queue_setup() has a parameter nb_tx_desc
> > specifying the desired queue capacity, measured in packets.
> >
> > The ConnectX NIC series has a hardware-imposed queue size
> > limit of 32K WQEs (packet hardware descriptors). Typically,
> > one packet requires one WQE to be sent.
> >
> > There is a special offload option, data-inlining, to improve
> > performance for small packets. Also, NICs in some configurations
> > require a minimum amount of inline data for the steering engine
> > to operate correctly.
> >
> > In the case of inline data, more than one WQE might be required
> > to send a single packet. The mlx5 PMD takes this into account
> > and adjusts the number of queue WQEs accordingly.
> >
> > If the requested queue capacity could not be satisfied due to
> > the hardware queue size limit, the mlx5 PMD rejected the queue
> > creation, causing an unresolvable application failure.
> >
> > The patch provides the following:
> >
> > - fixes the calculation of the number of required WQEs
> > to send a single packet with inline data, making it more precise
> > and extending the painless operating range.
> >
> > - If the requested queue capacity can't be satisfied due to the WQE
> > number adjustment for inline data, it no longer causes a severe
> > error. Instead, a warning message is emitted, and the queue
> > is created with the maximum available size, and success is reported.
> >
> > Please note that the inline data size depends on many options
> > (NIC configuration, queue offload flags, packet offload flags,
> > packet size, etc.), so the actual queue capacity might not be
> > impacted at all.
> >
> > Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> > Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
> >
> > ---
> > v2: diagnostics messages made less wordy
> > ---
> > drivers/net/mlx5/mlx5_txq.c | 74 +++++++++++--------------------------
> > 1 file changed, 22 insertions(+), 52 deletions(-)
> >
> > diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
> > index 3e93517323..eebf3c2534 100644
> > --- a/drivers/net/mlx5/mlx5_txq.c
> > +++ b/drivers/net/mlx5/mlx5_txq.c
> > @@ -731,7 +731,7 @@ txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
> > if (!wqe_size)
> > return 0;
> > /*
> > - * This calculation is derived from tthe source of
> > + * This calculation is derived from the source of
> > * mlx5_calc_send_wqe() in rdma_core library.
> > */
> > wqe_size = wqe_size * MLX5_WQE_SIZE -
> > @@ -739,7 +739,7 @@ txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
> > MLX5_WQE_ESEG_SIZE -
> > MLX5_WSEG_SIZE -
> > MLX5_WSEG_SIZE +
> > - MLX5_DSEG_MIN_INLINE_SIZE;
> > + MLX5_ESEG_MIN_INLINE_SIZE;
> > return wqe_size;
> > }
> >
> > @@ -964,11 +964,8 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
> > *
> > * @param txq_ctrl
> > * Pointer to Tx queue control structure.
> > - *
> > - * @return
> > - * Zero on success, otherwise the parameters can not be adjusted.
> > */
> > -static int
> > +static void
> > txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
> > {
> > struct mlx5_priv *priv = txq_ctrl->priv;
> > @@ -981,82 +978,56 @@ txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
> > * Inline data feature is not engaged at all.
> > * There is nothing to adjust.
> > */
> > - return 0;
> > + return;
> > }
> > if (txq_ctrl->max_inline_data <= max_inline) {
> > /*
> > * The requested inline data length does not
> > * exceed queue capabilities.
> > */
> > - return 0;
> > + return;
> > }
> > if (txq_ctrl->txq.inlen_mode > max_inline) {
> > - DRV_LOG(ERR,
> > - "minimal data inline requirements (%u) are not"
> > - " satisfied (%u) on port %u, try the smaller"
> > - " Tx queue size (%d)",
> > - txq_ctrl->txq.inlen_mode, max_inline,
> > -			priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> > - goto error;
> > + DRV_LOG(WARNING,
> > +			"minimal data inline requirements (%u) are not satisfied (%u) on port %u",
> > +			txq_ctrl->txq.inlen_mode, max_inline, priv->dev_data->port_id);
> > }
> > if (txq_ctrl->txq.inlen_send > max_inline &&
> > config->txq_inline_max != MLX5_ARG_UNSET &&
> > config->txq_inline_max > (int)max_inline) {
> > - DRV_LOG(ERR,
> > - "txq_inline_max requirements (%u) are not"
> > - " satisfied (%u) on port %u, try the smaller"
> > - " Tx queue size (%d)",
> > - txq_ctrl->txq.inlen_send, max_inline,
> > -			priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> > - goto error;
> > + DRV_LOG(WARNING,
> > +			"txq_inline_max requirements (%u) are not satisfied (%u) on port %u",
> > +			txq_ctrl->txq.inlen_send, max_inline, priv->dev_data->port_id);
> > }
> > if (txq_ctrl->txq.inlen_empw > max_inline &&
> > config->txq_inline_mpw != MLX5_ARG_UNSET &&
> > config->txq_inline_mpw > (int)max_inline) {
> > - DRV_LOG(ERR,
> > - "txq_inline_mpw requirements (%u) are not"
> > - " satisfied (%u) on port %u, try the smaller"
> > - " Tx queue size (%d)",
> > - txq_ctrl->txq.inlen_empw, max_inline,
> > -			priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> > - goto error;
> > + DRV_LOG(WARNING,
> > +			"txq_inline_mpw requirements (%u) are not satisfied (%u) on port %u",
> > +			txq_ctrl->txq.inlen_empw, max_inline, priv->dev_data->port_id);
> > }
> > if (txq_ctrl->txq.tso_en && max_inline < MLX5_MAX_TSO_HEADER) {
> > - DRV_LOG(ERR,
> > - "tso header inline requirements (%u) are not"
> > - " satisfied (%u) on port %u, try the smaller"
> > - " Tx queue size (%d)",
> > - MLX5_MAX_TSO_HEADER, max_inline,
> > -			priv->dev_data->port_id, priv->sh->dev_cap.max_qp_wr);
> > - goto error;
> > + DRV_LOG(WARNING,
> > +			"tso header inline requirements (%u) are not satisfied (%u) on port %u",
> > +			MLX5_MAX_TSO_HEADER, max_inline, priv->dev_data->port_id);
> > }
> > if (txq_ctrl->txq.inlen_send > max_inline) {
> > DRV_LOG(WARNING,
> > - "adjust txq_inline_max (%u->%u)"
> > - " due to large Tx queue on port %u",
> > - txq_ctrl->txq.inlen_send, max_inline,
> > - priv->dev_data->port_id);
> > +			"adjust txq_inline_max (%u->%u) due to large Tx queue on port %u",
> > +			txq_ctrl->txq.inlen_send, max_inline, priv->dev_data->port_id);
> > txq_ctrl->txq.inlen_send = max_inline;
> > }
> > if (txq_ctrl->txq.inlen_empw > max_inline) {
> > DRV_LOG(WARNING,
> > - "adjust txq_inline_mpw (%u->%u)"
> > - "due to large Tx queue on port %u",
> > - txq_ctrl->txq.inlen_empw, max_inline,
> > - priv->dev_data->port_id);
> > +			"adjust txq_inline_mpw (%u->%u) due to large Tx queue on port %u",
> > +			txq_ctrl->txq.inlen_empw, max_inline, priv->dev_data->port_id);
> > txq_ctrl->txq.inlen_empw = max_inline;
> > }
> > txq_ctrl->max_inline_data = RTE_MAX(txq_ctrl->txq.inlen_send,
> > txq_ctrl->txq.inlen_empw);
> > - MLX5_ASSERT(txq_ctrl->max_inline_data <= max_inline);
> > - MLX5_ASSERT(txq_ctrl->txq.inlen_mode <= max_inline);
> > MLX5_ASSERT(txq_ctrl->txq.inlen_mode <= txq_ctrl->txq.inlen_send);
> >  	MLX5_ASSERT(txq_ctrl->txq.inlen_mode <= txq_ctrl->txq.inlen_empw ||
> > !txq_ctrl->txq.inlen_empw);
> > - return 0;
> > -error:
> > - rte_errno = ENOMEM;
> > - return -ENOMEM;
> > }
> >
> > /**
> > @@ -1105,8 +1076,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
> > tmpl->txq.port_id = dev->data->port_id;
> > tmpl->txq.idx = idx;
> > txq_set_params(tmpl);
> > - if (txq_adjust_params(tmpl))
> > - goto error;
> > + txq_adjust_params(tmpl);
> > if (txq_calc_wqebb_cnt(tmpl) >
> > priv->sh->dev_cap.max_qp_wr) {
> > DRV_LOG(ERR,
Thread overview: 4+ messages
[not found] <<20250423082450.5eb6cdee@hermes.local>
2025-04-24 13:31 ` Viacheslav Ovsiienko
2025-04-25 11:54 ` Lukáš Šišmiš
2025-04-29 11:30 ` Slava Ovsiienko [this message]
2025-05-12 6:24 ` Raslan Darawsheh