From: "Li, Xiaoyun" <xiaoyun.li@intel.com>
To: "Zhang, Yuying" <yuying.zhang@intel.com>,
	"Singh, Aman Deep" <aman.deep.singh@intel.com>,
	Dmitry Kozlyuk <dkozlyuk@nvidia.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "jing.d.chen@intel.com" <jing.d.chen@intel.com>,
	"stable@dpdk.org" <stable@dpdk.org>,
	Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: [PATCH] app/testpmd: skip stopped queues when forwarding
Date: Wed, 9 Feb 2022 14:56:13 +0000	[thread overview]
Message-ID: <DM4PR11MB55340E120B1E614BAAE55A6C992E9@DM4PR11MB5534.namprd11.prod.outlook.com> (raw)
In-Reply-To: <DM6PR11MB35163319E170CBFE67F1A1C18E2E9@DM6PR11MB3516.namprd11.prod.outlook.com>

Hi Dmitry,

Sorry to be direct, but I don't think this patch makes sense.
I need the code to clarify the problems, so I'll reply in this thread rather than in the one you're answering.

> -----Original Message-----
> From: Zhang, Yuying <yuying.zhang@intel.com>
> Sent: Wednesday, February 9, 2022 09:00
> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Dmitry Kozlyuk
> <dkozlyuk@nvidia.com>; dev@dpdk.org
> Cc: Li, Xiaoyun <xiaoyun.li@intel.com>; jing.d.chen@intel.com;
> stable@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: RE: [PATCH] app/testpmd: skip stopped queues when forwarding
> 
> Hi Dmitry,
> 
> > -----Original Message-----
> > From: Singh, Aman Deep <aman.deep.singh@intel.com>
> > Sent: Thursday, February 3, 2022 9:52 PM
> > To: Dmitry Kozlyuk <dkozlyuk@nvidia.com>; dev@dpdk.org
> > Cc: Li, Xiaoyun <xiaoyun.li@intel.com>; Zhang, Yuying
> > <yuying.zhang@intel.com>; jing.d.chen@intel.com; stable@dpdk.org;
> > Raslan Darawsheh <rasland@nvidia.com>
> > Subject: Re: [PATCH] app/testpmd: skip stopped queues when forwarding
> >
> > Hi Dmitry,
> >
> > Thanks for the patch.
> >
> > On 1/13/2022 2:51 PM, Dmitry Kozlyuk wrote:
> > > After "port <port_id> rxq|txq <queue_id> stop"
> > > the stopped queue was used in forwarding nonetheless, which may
> > > cause undefined behavior in the PMD.
> > >
> > > Record the configured queue state
> > > and account for it when launching forwarding as follows:
> > > +--------+---------+-----------------+---------------+
> > > |RxQ     |TxQ      |Configured mode  |Launch routine |
> > > +--------+---------+-----------------+---------------+
> > > |stopped |stopped  |*                |-              |
> > > |stopped |started  |txonly           |(configured)   |
> > > |stopped |started  |*                |-              |
> > > |started |stopped  |*                |rxonly         |
> > > |started |started  |*                |(configured)   |
> > > +--------+---------+-----------------+---------------+
> > > Display stopped queues on "show port config rxtx".
> > >
> > > Fixes: 5f4ec54f1d16 ("testpmd: queue start and stop")
> > > Cc: jing.d.chen@intel.com
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> > > Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
> > > ---
> > >   app/test-pmd/cmdline.c |  8 ++++++++
> > >   app/test-pmd/config.c  |  6 ++++++
> > >   app/test-pmd/testpmd.c | 18 ++++++++++++++++--
> > >   app/test-pmd/testpmd.h | 10 ++++++++++
> > >   4 files changed, 40 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index
> > > e626b1c7d9..8b0920e23d 100644
> > > --- a/app/test-pmd/cmdline.c
> > > +++ b/app/test-pmd/cmdline.c

<snip>

> > > +2225,20 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
> > >   static int
> > >   start_pkt_forward_on_core(void *fwd_arg)
> > >   {
> > > -	run_pkt_fwd_on_lcore((struct fwd_lcore *) fwd_arg,
> > > -			     cur_fwd_config.fwd_eng->packet_fwd);
> > > +	struct fwd_lcore *fc = fwd_arg;
> > > +	struct fwd_stream *fsm = fwd_streams[fc->stream_idx];
> > > +	struct queue_state *rxq = &ports[fsm->rx_port].rxq_state[fsm->rx_queue];
> > > +	struct queue_state *txq = &ports[fsm->tx_port].txq_state[fsm->tx_queue];
> > > +	struct fwd_engine *fwd_engine = cur_fwd_config.fwd_eng;
> > > +	packet_fwd_t packet_fwd;

1. For each lcore, there may be many fwd streams, each covering a different queue pair or even a different port pair.
Just imagine the simplest case: you run iofwd with 4 ports, a single queue per port, and a single forwarding core.
But you're only dealing with the first fwd stream here. Each fc actually covers the streams from fc->stream_idx to fc->stream_idx + fc->stream_nb - 1.

What are you going to do with this?
If you stop port 0 rxq 0, you shouldn't change the whole lcore's behavior to rxonly, because the pairs port 2 rxq 0 -> port 3 txq 0 and port 3 rxq 0 -> port 2 txq 0 should still run normal iofwd. And this is only the basic case.
I also don't think it's that harmful to just let those streams keep running even if you stop some rxq, because their pair won't have packets to transmit if there is no reception.
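
To make this concrete: a correct check would have to walk every stream owned by the lcore, not just fwd_streams[fc->stream_idx]. A rough sketch using the fields your patch adds (lcore_has_active_stream() is a hypothetical helper, not existing testpmd code):

static bool
lcore_has_active_stream(const struct fwd_lcore *fc)
{
	streamid_t sm_id;

	/* Each lcore owns streams fc->stream_idx .. fc->stream_idx + fc->stream_nb - 1. */
	for (sm_id = fc->stream_idx;
	     sm_id < fc->stream_idx + fc->stream_nb; sm_id++) {
		const struct fwd_stream *fs = fwd_streams[sm_id];

		if (!ports[fs->rx_port].rxq_state[fs->rx_queue].stopped)
			return true; /* this stream can still receive */
	}
	return false;
}

And even then, this only decides whether the lcore runs at all; it cannot pick a single packet_fwd for the whole lcore when only some of its streams are stopped.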

2. Even if you narrow this down to the case where fc->stream_nb == 1:
You keep the value of cur_fwd_config. There's a potential issue here, since cur_fwd_config is then not really the current forwarding config anymore.
In stop_packet_forwarding(), testpmd sets fc->stopped = 1 and uses cur_fwd_config.fwd_eng->port_fwd_end to stop packet forwarding.
Right now, in most of the fwd modes, port_fwd_end == NULL, but txonly, noisy_vnf and icmp_echo are exceptions.
What if someday the rxonly engine needs a port_fwd_end? Should you call port_fwd_end for your lcore (rxonly) too, while the others are still running cur_fwd_eng?
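
For reference, the teardown path looks roughly like this (paraphrased from memory from stop_packet_forwarding() in app/test-pmd/testpmd.c, so details may differ):

	port_fwd_end = cur_fwd_config.fwd_eng->port_fwd_end;
	if (port_fwd_end != NULL) {
		for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++)
			(*port_fwd_end)(fwd_ports_ids[i]);
	}

It always tears down with the configured engine and has no idea that your patch silently ran rx_only_engine.packet_fwd on some lcores.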

Also, in this case, shouldn't you init "stopped" (or "state", following the other comments) to 1 for txonly? The default value will be 0, so following your code, the txonly engine will change to rxonly if the txq is stopped.
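
Spelling out your own condition for that case (assuming default field values, since nothing initializes them for txonly):

	/* txonly mode, user runs "port <port_id> txq <queue_id> stop":
	 * rxq->stopped == 0 (default), txq->stopped == 1 */
	packet_fwd = !rxq->stopped && txq->stopped ? rx_only_engine.packet_fwd
						   : fwd_engine->packet_fwd;
	/* !0 && 1 -> true, so a txonly lcore silently becomes rxonly */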

Also, even if you init "stopped" for txonly, have you considered the flow_gen engine? flow_gen is also a kind of txonly engine (all received packets are dropped) that generates multi-flow packets.


Anyway, I still think it's not worth it, and there may be other issues too. The patch looks dangerous to me.

BRs
Xiaoyun

> > > +
> > > +	/* Check if there will ever be any packets to send. */
> > > +	if (rxq->stopped && (txq->stopped || fwd_engine != &tx_only_engine))
> > > +		return 0;
> Have you considered other fwd_engines such as io_fwd_engine and
> mac_fwd_engine?
> > > +	/* Force rxonly mode if RxQ is started, but TxQ is stopped. */
> > > +	packet_fwd = !rxq->stopped && txq->stopped ? rx_only_engine.packet_fwd
> > > +						   : fwd_engine->packet_fwd;
> > Should we have a print here to inform the user that the mode has been
> > changed or ignored?
> Why do we need to force rxonly mode in this situation? BTW, the value of
> cur_fwd_eng hasn't been updated after you changed the forward mode.
> > > +	run_pkt_fwd_on_lcore(fc, packet_fwd);
> > >   	return 0;
> > >   }
> > >
> > > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index
> > > 2149ecd93a..2744fa4d76 100644
> > > --- a/app/test-pmd/testpmd.h
> > > +++ b/app/test-pmd/testpmd.h
> > > @@ -216,6 +216,12 @@ struct xstat_display_info {
> > >   	bool	 allocated;
> > >   };
> > >
> > > +/** Application state of a queue. */
> > > +struct queue_state {
> > > +	/** The queue is stopped and should not be used. */
> > > +	bool stopped;
> > > +};
> > > +
> > >   /**
> > >    * The data structure associated with each port.
> > >    */
> > > @@ -256,6 +262,10 @@ struct rte_port {
> > >   	uint64_t		mbuf_dynf;
> > >   	const struct rte_eth_rxtx_callback *tx_set_dynf_cb[RTE_MAX_QUEUES_PER_PORT+1];
> > >   	struct xstat_display_info xstats_info;
> > > +	/** Per-Rx-queue state. */
> > > +	struct queue_state rxq_state[RTE_MAX_QUEUES_PER_PORT];
> > > +	/** Per-Tx-queue state. */
> > > +	struct queue_state txq_state[RTE_MAX_QUEUES_PER_PORT];
> > Can we think of adding rxq_state/txq_state as part of the existing
> > structures under rte_port->rte_eth_rxconf/rte_eth_txconf?
> > And if it helps, rather than bool can we use u8 with the eth_dev defines:
> > #define RTE_ETH_QUEUE_STATE_STOPPED 0 /**< Queue stopped. */
> > #define RTE_ETH_QUEUE_STATE_STARTED 1 /**< Queue started. */
> The same.
> > >   };
> > >
> > >   /**
