From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "Walsh, Conor" <conor.walsh@intel.com>,
"Medvedkin, Vladimir" <vladimir.medvedkin@intel.com>,
"ruifeng.wang@arm.com" <ruifeng.wang@arm.com>,
"jerinj@marvell.com" <jerinj@marvell.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"Gributs, Paulis" <paulis.gributs@intel.com>
Subject: Re: [dpdk-dev] [PATCH] examples/l3fwd: remove useless reloads in FIB main loop
Date: Mon, 5 Jul 2021 17:32:09 +0000
Message-ID: <DM6PR11MB449180F65E5353D8B44BE0E29A1C9@DM6PR11MB4491.namprd11.prod.outlook.com>
In-Reply-To: <20210705170546.1002806-1-conor.walsh@intel.com>
>
> This patch aligns the l3fwd FIB code with the changes made to LPM in
> commit 74fb854a3de6 ("examples/l3fwd: remove useless reloads in LPM
> main loop").
> This change ensures the compiler knows that the lcore config variables
> are constant values and the compiler will then optimize the code
> accordingly.
>
> Signed-off-by: Conor Walsh <conor.walsh@intel.com>
> ---
> examples/l3fwd/l3fwd_fib.c | 10 ++++++----
> 1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/examples/l3fwd/l3fwd_fib.c b/examples/l3fwd/l3fwd_fib.c
> index 1787229942..d083ddfdd5 100644
> --- a/examples/l3fwd/l3fwd_fib.c
> +++ b/examples/l3fwd/l3fwd_fib.c
> @@ -182,14 +182,16 @@ fib_main_loop(__rte_unused void *dummy)
> lcore_id = rte_lcore_id();
> qconf = &lcore_conf[lcore_id];
>
> - if (qconf->n_rx_queue == 0) {
> + const uint16_t n_rx_q = qconf->n_rx_queue;
> + const uint16_t n_tx_p = qconf->n_tx_port;
> + if (n_rx_q == 0) {
> RTE_LOG(INFO, L3FWD, "lcore %u has nothing to do\n", lcore_id);
> return 0;
> }
>
> RTE_LOG(INFO, L3FWD, "entering main loop on lcore %u\n", lcore_id);
>
> - for (i = 0; i < qconf->n_rx_queue; i++) {
> + for (i = 0; i < n_rx_q; i++) {
>
> portid = qconf->rx_queue_list[i].port_id;
> queueid = qconf->rx_queue_list[i].queue_id;
> @@ -207,7 +209,7 @@ fib_main_loop(__rte_unused void *dummy)
> diff_tsc = cur_tsc - prev_tsc;
> if (unlikely(diff_tsc > drain_tsc)) {
>
> - for (i = 0; i < qconf->n_tx_port; ++i) {
> + for (i = 0; i < n_tx_p; ++i) {
> portid = qconf->tx_port_id[i];
> if (qconf->tx_mbufs[portid].len == 0)
> continue;
> @@ -221,7 +223,7 @@ fib_main_loop(__rte_unused void *dummy)
> }
>
> /* Read packet from RX queues. */
> - for (i = 0; i < qconf->n_rx_queue; ++i) {
> + for (i = 0; i < n_rx_q; ++i) {
> portid = qconf->rx_queue_list[i].port_id;
> queueid = qconf->rx_queue_list[i].queue_id;
> nb_rx = rte_eth_rx_burst(portid, queueid, pkts_burst,
> --
> 2.25.1

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Thread overview: 6+ messages
2021-07-05 17:05 Conor Walsh
2021-07-05 17:32 ` Ananyev, Konstantin [this message]
2021-07-06 1:38 ` Ruifeng Wang
2021-07-06 11:29 ` David Marchand
2021-07-06 11:55 ` Walsh, Conor
2021-07-07 9:57 ` David Marchand