From: Stephen Hemminger <stephen@networkplumber.org>
To: Bing Zhao <bingz@nvidia.com>
Cc: Slava Ovsiienko <viacheslavo@nvidia.com>,
Matan Azrad <matan@nvidia.com>, dev <dev@dpdk.org>,
"NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>,
Dariusz Sosnowski <dsosnowski@nvidia.com>,
Suanming Mou <suanmingm@nvidia.com>,
Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: [PATCH v2 2/3] net/mlx5: add new devarg for Tx queue consecutive memory
Date: Thu, 26 Jun 2025 07:29:54 -0700 [thread overview]
Message-ID: <20250626072954.6ca04456@hermes.local> (raw)
In-Reply-To: <PH7PR12MB6905F9250E91DEE1DE0BF4E8D07AA@PH7PR12MB6905.namprd12.prod.outlook.com>
On Thu, 26 Jun 2025 13:18:18 +0000
Bing Zhao <bingz@nvidia.com> wrote:
> Hi Stephen,
>
> Thanks for your review and comments. I will add a detailed description of the new devarg to the mlx5.rst file.
> Indeed, after further review and an internal discussion with our datapath experts, we would like to change the devarg a bit, so that it is not just a 0 / 1 chicken bit.
>
> Memory access footprints and ordering may impact performance. In the perf tests, we found that the alignment of the queue starting address can affect it. The minimum starting-address alignment is the system page size, but it can be larger.
> So the new devarg would take the log2 value of the alignment of all queues' starting addresses. Different CPU architectures / generations with different LLC designs could then try different alignments to get the best performance, without rebuilding the application from source, since it is configurable. WDYT?
Please make it automatic; the driver already has too many config bits.
Users should just get good performance with the default.
If the driver needs to, it should look at any/all system info to determine the best setting.
Thread overview: 8+ messages
[not found] <20250623173524.128125-1-bingz@nvidia.com>
2025-06-23 18:34 ` [PATCH v2 0/3] Use consecutive Tx queues' memory Bing Zhao
2025-06-23 18:34 ` [PATCH v2 1/3] net/mlx5: fix the WQE size calculation for Tx queue Bing Zhao
2025-06-23 18:34 ` [PATCH v2 2/3] net/mlx5: add new devarg for Tx queue consecutive memory Bing Zhao
2025-06-24 12:01 ` Stephen Hemminger
2025-06-26 13:18 ` Bing Zhao
2025-06-26 14:29 ` Stephen Hemminger [this message]
2025-06-26 15:21 ` Thomas Monjalon
2025-06-23 18:34 ` [PATCH v2 3/3] net/mlx5: use consecutive memory for all Tx queues Bing Zhao