From: "Tummala, Sivaprasad" <Sivaprasad.Tummala@amd.com>
To: Konstantin Ananyev <konstantin.ananyev@huawei.com>,
Nithin Dabilpuram <nithind1988@gmail.com>
Cc: "jerinj@marvell.com" <jerinj@marvell.com>,
"kirankumark@marvell.com" <kirankumark@marvell.com>,
"ndabilpuram@marvell.com" <ndabilpuram@marvell.com>,
"yanzhirun_163@163.com" <yanzhirun_163@163.com>,
"david.marchand@redhat.com" <david.marchand@redhat.com>,
"ktraynor@redhat.com" <ktraynor@redhat.com>,
"thomas@monjalon.net" <thomas@monjalon.net>,
"konstantin.v.ananyev@yandex.ru" <konstantin.v.ananyev@yandex.ru>,
"bruce.richardson@intel.com" <bruce.richardson@intel.com>,
"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>,
"aconole@redhat.com" <aconole@redhat.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [PATCH] examples/l3fwd-graph: remove redundant Tx queue limit
Date: Mon, 6 Oct 2025 08:58:01 +0000 [thread overview]
Message-ID: <CH3PR12MB8233CD444E9194EB5C45DD5186E3A@CH3PR12MB8233.namprd12.prod.outlook.com> (raw)
In-Reply-To: <819a994c1ae6461795b6ee8356e2fc32@huawei.com>
Hi Konstantin,
> > On Mon, Sep 1, 2025 at 11:39 PM Sivaprasad Tummala
> > <sivaprasad.tummala@amd.com> wrote:
> > >
> > > In the `l3fwd-graph` application, Tx queues are configured per lcore
> > > to enable a lockless design and achieve optimal performance.
> > >
> > > The `MAX_TX_QUEUE_PER_PORT` macro, defined as `RTE_MAX_ETHPORTS`,
> > > introduced an artificial constraint on the number of Tx queues
> > > and limited core-scaling performance.
> > >
> > > This patch removes the unused `MAX_TX_QUEUE_PER_PORT` macro and
> > > redundant Tx queue check, allowing Tx queues to scale directly
> > > with the number of lcores.
> > >
> > > Fixes: 08bd1a174461 ("examples/l3fwd-graph: add graph-based l3fwd skeleton")
> > > Cc: ndabilpuram@marvell.com
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > ---
> > > examples/l3fwd-graph/main.c | 3 ---
> > > 1 file changed, 3 deletions(-)
> > >
> > > diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
> > > index 92cdaa1ebe..12908acbba 100644
> > > --- a/examples/l3fwd-graph/main.c
> > > +++ b/examples/l3fwd-graph/main.c
> > > @@ -49,7 +49,6 @@
> > > #define RX_DESC_DEFAULT 1024
> > > #define TX_DESC_DEFAULT 1024
> > >
> > > -#define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS
> > > #define MAX_RX_QUEUE_PER_PORT 128
>
> AFAIK, in the mainline we actually have:
> #define MAX_TX_QUEUE_PER_PORT RTE_MAX_LCORE
>
In the l3fwd-graph app, that change was never applied; the macro is still defined as
#define MAX_TX_QUEUE_PER_PORT RTE_MAX_ETHPORTS
> since:
> commit 88256ed85338c572d73006e4c4530a52d3b477ff
> Author: Harman Kalra <hkalra@marvell.com>
> Date: Tue Jan 12 23:54:46 2021 +0530
>
> examples/l3fwd: remove limitation on Tx queue count
>
> What am I missing here?
The commit referenced above fixed the l3fwd app, not l3fwd-graph.
>
> > >
> > > #define MAX_RX_QUEUE_PER_LCORE 16
> > > @@ -1076,8 +1075,6 @@ main(int argc, char **argv)
> > >
> > > nb_rx_queue = get_port_n_rx_queues(portid);
> > > n_tx_queue = nb_lcores;
> > > - if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
> > > - n_tx_queue = MAX_TX_QUEUE_PER_PORT;
> > > printf("Creating queues: nb_rxq=%d nb_txq=%u... ",
> > > nb_rx_queue, n_tx_queue);
> > >
> > > --
> > > 2.43.0
> > >
Thread overview: 4+ messages
2025-09-01 15:44 Sivaprasad Tummala
2025-09-22 5:52 ` Nithin Dabilpuram
2025-09-22 6:49 ` Konstantin Ananyev
2025-10-06 8:58 ` Tummala, Sivaprasad [this message]