From: Raslan Darawsheh <rasland@nvidia.com>
To: Viacheslav Ovsiienko <viacheslavo@nvidia.com>, dev@dpdk.org
Cc: matan@nvidia.com, suanmingm@nvidia.com, dsosnowski@nvidia.com
Subject: Re: [PATCH v2] net/mlx5: mitigate the Tx queue parameter adjustment
Date: Mon, 12 May 2025 09:24:06 +0300
Message-ID: <b484a4d7-b4d1-4b66-94a6-6abd30d76bd5@nvidia.com>
In-Reply-To: <20250424133128.133900-1-viacheslavo@nvidia.com>
Hi,
On 24/04/2025 4:31 PM, Viacheslav Ovsiienko wrote:
> The DPDK API rte_eth_tx_queue_setup() has a parameter nb_tx_desc
> specifying the desired queue capacity, measured in packets.
>
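For reference, a minimal sketch of such a setup call (the port ID,
queue ID, and descriptor count are illustrative placeholders):

    #include <rte_ethdev.h>

    /* Illustrative helper: request a Tx queue with the given capacity. */
    static int
    setup_tx_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_tx_desc)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf txconf;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;
        txconf = dev_info.default_txconf;
        /* nb_tx_desc is measured in packets; the PMD may adjust it. */
        return rte_eth_tx_queue_setup(port_id, queue_id, nb_tx_desc,
                                      rte_eth_dev_socket_id(port_id),
                                      &txconf);
    }
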
> The ConnectX NIC series has a hardware-imposed queue size
> limit of 32K WQEs (packet hardware descriptors). Typically,
> one packet requires one WQE to be sent.
>
> There is a special offload option, data-inlining, to improve
> performance for small packets. Also, NICs in some configurations
> require a minimum amount of inline data for the steering engine
> to operate correctly.
>
> In the case of inline data, more than one WQE might be required
> to send a single packet. The mlx5 PMD takes this into account
> and adjusts the number of queue WQEs accordingly.
>
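As a rough illustration of why inline data shrinks the effective
queue capacity (the constants and layout below are simplified
assumptions, not the exact mlx5 formula):

    #define WQEBB_SIZE  64u     /* WQE basic block size, bytes (assumed) */
    #define HW_MAX_WQE  32768u  /* hardware queue size limit, in WQEBBs */

    /* Hypothetical estimate: WQEBBs consumed by one packet that
     * carries inline_bytes of its data inside the WQE. */
    static unsigned int
    wqebbs_per_packet(unsigned int inline_bytes)
    {
        unsigned int hdr = 2 * WQEBB_SIZE; /* ctrl + eth segments, assumed */

        return (hdr + inline_bytes + WQEBB_SIZE - 1) / WQEBB_SIZE;
    }

For example, with 128 bytes of inline data this gives 4 WQEBBs per
packet, so the 32K hardware limit would allow only ~8K packets per
queue instead of 32K.
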
> If the requested queue capacity couldn't be satisfied due to
> the hardware queue size limit, the mlx5 PMD rejected the queue
> creation, causing an unresolvable application failure.
>
> The patch provides the following:
>
> - Fixes the calculation of the number of WQEs required to send
> a single packet with inline data, making it more precise and
> extending the painless operating range.
>
> - If the requested queue capacity can't be satisfied due to the WQE
> count adjustment for inline data, queue creation no longer fails with
> a severe error. Instead, a warning message is emitted and the queue
> is created with the maximum available size, reporting success (see
> the sketch below).
>
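A sketch of that new behaviour (HW_MAX_WQE and wqebbs_per_packet carry
over from the previous sketch; this is an assumed simplification, not
the actual driver code, which logs through the PMD's own facilities):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical post-patch adjustment: clamp and warn instead of
     * rejecting the queue. */
    static unsigned int
    clamp_tx_desc(uint16_t port_id, unsigned int nb_tx_desc,
                  unsigned int inline_bytes)
    {
        unsigned int max_desc = HW_MAX_WQE / wqebbs_per_packet(inline_bytes);

        if (nb_tx_desc > max_desc) {
            printf("port %u: Tx queue capacity reduced from %u to %u packets\n",
                   (unsigned int)port_id, nb_tx_desc, max_desc);
            nb_tx_desc = max_desc; /* queue setup still reports success */
        }
        return nb_tx_desc;
    }
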
> Please note that the inline data size depends on many options
> (NIC configuration, queue offload flags, packet offload flags,
> packet size, etc.), so the actual queue capacity might not be
> impacted at all.
>
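In practice, the inline behaviour can also be tuned through the mlx5
devargs documented in the PMD guide, such as txq_inline_max. A
hypothetical testpmd invocation (PCI address and values are
placeholders; the effective inline size remains NIC- and
configuration-dependent):

    dpdk-testpmd -a 0000:03:00.0,txq_inline_max=128 -- --txd=32768
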
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Patch applied to next-net-mlx.
--
Kindest regards
Raslan Darawsheh
Thread overview: 4+ messages
[not found] <20250423082450.5eb6cdee@hermes.local>
2025-04-24 13:31 ` Viacheslav Ovsiienko
2025-04-25 11:54 ` Lukáš Šišmiš
2025-04-29 11:30 ` Slava Ovsiienko
2025-05-12 6:24 ` Raslan Darawsheh [this message]