DPDK patches and discussions
From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
To: "Tummala, Sivaprasad" <Sivaprasad.Tummala@amd.com>,
	"david.hunt@intel.com" <david.hunt@intel.com>,
	"anatoly.burakov@intel.com" <anatoly.burakov@intel.com>,
	"jerinj@marvell.com" <jerinj@marvell.com>,
	"radu.nicolau@intel.com" <radu.nicolau@intel.com>,
	"gakhil@marvell.com" <gakhil@marvell.com>,
	"cristian.dumitrescu@intel.com" <cristian.dumitrescu@intel.com>,
	"Yigit, Ferruh" <Ferruh.Yigit@amd.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, "stable@dpdk.org" <stable@dpdk.org>
Subject: RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction
Date: Tue, 19 Dec 2023 15:10:00 +0000	[thread overview]
Message-ID: <2a21fc7ad41243ce935cdc6d80b0eb96@huawei.com> (raw)
In-Reply-To: <DM3PR12MB9286CAAD899052CF805509078697A@DM3PR12MB9286.namprd12.prod.outlook.com>


Hi Sivaprasad,

> 
> Hi Konstantin,
> 
> > -----Original Message-----
> > From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> > Sent: Tuesday, December 19, 2023 6:00 PM
> > To: Konstantin Ananyev <konstantin.ananyev@huawei.com>; Tummala, Sivaprasad
> > <Sivaprasad.Tummala@amd.com>; david.hunt@intel.com;
> > anatoly.burakov@intel.com; jerinj@marvell.com; radu.nicolau@intel.com;
> > gakhil@marvell.com; cristian.dumitrescu@intel.com; Yigit, Ferruh
> > <Ferruh.Yigit@amd.com>
> > Cc: dev@dpdk.org; stable@dpdk.org
> > Subject: RE: [PATCH v2 1/6] examples/l3fwd: fix lcore ID restriction
> >
> >
> >
> > >
> > > > Currently the config option allows lcore IDs up to 255, irrespective
> > > > of RTE_MAX_LCORES and needs to be fixed.
> > > >
> > > > The patch allows config options based on DPDK config.
> > > >
> > > > Fixes: af75078fece3 ("first public release")
> > > > Cc: stable@dpdk.org
> > > >
> > > > Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
> > > > ---
> > > >  examples/l3fwd/main.c | 19 +++++++++++--------
> > > >  1 file changed, 11 insertions(+), 8 deletions(-)
> > > >
> > > > diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
> > > > index 3bf28aec0c..ed116da09c 100644
> > > > --- a/examples/l3fwd/main.c
> > > > +++ b/examples/l3fwd/main.c
> > > > @@ -99,7 +99,7 @@ struct parm_cfg parm_config;
> > > >  struct lcore_params {
> > > >     uint16_t port_id;
> > > >     uint8_t queue_id;
> >
> > Actually one comment:
> > As lcore_id becomes uint16_t it might be worth to do the same queue_id, they
> > usually are very much related.
> Yes, that's a valid statement for one network interface.
> With multiple interfaces, it's a combination of port/queue that maps to a specific lcore.
> If there are NICs that support more than 256 queues, then it makes sense to change the
> queue_id type as well.

AFAIK, the majority of modern NICs do support more than 256 queues.
That's why in the rte_ethdev API queue_id is uint16_t.
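
To illustrate the point (a minimal sketch, not the l3fwd code — the struct and helper names here are hypothetical): keeping queue_id as uint8_t silently wraps any rte_ethdev queue ID above 255 modulo 256, selecting the wrong queue, while a uint16_t field preserves it.

```c
#include <stdint.h>

/* Illustrative only: an 8-bit field cannot hold queue indices above
 * 255, so assigning a larger rte_ethdev queue ID (which is uint16_t)
 * silently wraps modulo 256. */
struct narrow_params {
	uint16_t port_id;
	uint8_t  queue_id;   /* too narrow for NICs with >256 queues */
};

struct wide_params {
	uint16_t port_id;
	uint16_t queue_id;   /* matches the rte_ethdev queue ID width */
};

static uint8_t store_narrow(uint16_t q)
{
	struct narrow_params p = { .port_id = 0, .queue_id = (uint8_t)q };
	return p.queue_id;   /* wraps: q % 256 */
}

static uint16_t store_wide(uint16_t q)
{
	struct wide_params p = { .port_id = 0, .queue_id = q };
	return p.queue_id;   /* preserved */
}
```

So queue 300 stored through the narrow struct comes back as 44 (300 % 256), while the wide struct returns 300 unchanged.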

> 
> Please let me know your thoughts.
> >
> > > > -   uint8_t lcore_id;
> > > > +   uint16_t lcore_id;
> > > >  } __rte_cache_aligned;
> > > >
> > > >  static struct lcore_params lcore_params_array[MAX_LCORE_PARAMS];
> > > > @@ -292,8 +292,8 @@ setup_l3fwd_lookup_tables(void)
> > > >  static int
> > > >  check_lcore_params(void)
> > > >  {
> > > > -   uint8_t queue, lcore;
> > > > -   uint16_t i;
> > > > +   uint8_t queue;
> > > > +   uint16_t i, lcore;
> > > >     int socketid;
> > > >
> > > >     for (i = 0; i < nb_lcore_params; ++i) {
> > > > @@ -304,12 +304,12 @@ check_lcore_params(void)
> > > >             }
> > > >             lcore = lcore_params[i].lcore_id;
> > > >             if (!rte_lcore_is_enabled(lcore)) {
> > > > -                   printf("error: lcore %hhu is not enabled in lcore mask\n", lcore);
> > > > +                   printf("error: lcore %hu is not enabled in lcore mask\n", lcore);
> > > >                     return -1;
> > > >             }
> > > >             if ((socketid = rte_lcore_to_socket_id(lcore) != 0) &&
> > > >                     (numa_on == 0)) {
> > > > -                   printf("warning: lcore %hhu is on socket %d with numa off \n",
> > > > +                   printf("warning: lcore %hu is on socket %d with numa off\n",
> > > >                             lcore, socketid);
> > > >             }
> > > >     }
> > > > @@ -359,7 +359,7 @@ static int
> > > >  init_lcore_rx_queues(void)
> > > >  {
> > > >     uint16_t i, nb_rx_queue;
> > > > -   uint8_t lcore;
> > > > +   uint16_t lcore;
> > > >
> > > >     for (i = 0; i < nb_lcore_params; ++i) {
> > > >             lcore = lcore_params[i].lcore_id;
> > > > @@ -500,6 +500,8 @@ parse_config(const char *q_arg)
> > > >     char *str_fld[_NUM_FLD];
> > > >     int i;
> > > >     unsigned size;
> > > > +   unsigned int max_fld[_NUM_FLD] = {RTE_MAX_ETHPORTS,
> > > > +                                   255, RTE_MAX_LCORE};
> > > >
> > > >     nb_lcore_params = 0;
> > > >
> > > > @@ -518,7 +520,8 @@ parse_config(const char *q_arg)
> > > >             for (i = 0; i < _NUM_FLD; i++){
> > > >                     errno = 0;
> > > >                     int_fld[i] = strtoul(str_fld[i], &end, 0);
> > > > -                   if (errno != 0 || end == str_fld[i] || int_fld[i] > 255)
> > > > +                   if (errno != 0 || end == str_fld[i] || int_fld[i] > max_fld[i])
> > > >                             return -1;
> > > >             }
> > > >             if (nb_lcore_params >= MAX_LCORE_PARAMS) {
> > > > @@ -531,7 +534,7 @@ parse_config(const char *q_arg)
> > > >             lcore_params_array[nb_lcore_params].queue_id =
> > > >                     (uint8_t)int_fld[FLD_QUEUE];
> > > >             lcore_params_array[nb_lcore_params].lcore_id =
> > > > -                   (uint8_t)int_fld[FLD_LCORE];
> > > > +                   (uint16_t)int_fld[FLD_LCORE];
> > > >             ++nb_lcore_params;
> > > >     }
> > > >     lcore_params = lcore_params_array;
> > > > --
> > >
> > > Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> > >
> > >
> > > > 2.25.1


