DPDK patches and discussions
From: "Nélio Laranjeiro" <nelio.laranjeiro@6wind.com>
To: "Xueming(Steven) Li" <xuemingl@mellanox.com>
Cc: Shahaf Shuler <shahafs@mellanox.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2 04/15] net/mlx5: support Rx tunnel type identification
Date: Thu, 12 Apr 2018 11:50:58 +0200	[thread overview]
Message-ID: <20180412095058.3vcynmscyvc6wl7e@laranjeiro-vm.dev.6wind.com> (raw)
In-Reply-To: <VI1PR05MB16786639EFF14130073A8E35ACBD0@VI1PR05MB1678.eurprd05.prod.outlook.com>

On Wed, Apr 11, 2018 at 08:11:50AM +0000, Xueming(Steven) Li wrote:
> Hi Nelio,
> 
> > -----Original Message-----
> > From: Nélio Laranjeiro <nelio.laranjeiro@6wind.com>
> > Sent: Tuesday, April 10, 2018 11:17 PM
> > To: Xueming(Steven) Li <xuemingl@mellanox.com>
> > Cc: Shahaf Shuler <shahafs@mellanox.com>; dev@dpdk.org
> > Subject: Re: [PATCH v2 04/15] net/mlx5: support Rx tunnel type
> > identification
> > 
> > On Tue, Apr 10, 2018 at 09:34:04PM +0800, Xueming Li wrote:
> > > This patch introduces tunnel type identification based on flow rules.
> > > If flows of multiple tunnel types are built on the same queue,
> > > RTE_PTYPE_TUNNEL_MASK will be returned; bits in the flow mark can be
> > > used as a tunnel type identifier.
> > 
> > I don't see anywhere in this patch where bits are reserved to identify
> > a flow, nor any values which could help to identify it.
> > 
> > Is this missing?
> > 
> > Anyway, we already have very few bits in the mark, making it difficult
> > for the user to use; reserving some more may lead to removing mark
> > support from the flows.
> 
> Not all users will use multiple tunnel types; this is not included in this
> patch set and is left to the user's decision. I'll update the comments to
> make this clear.

Thanks,
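[Editorial note: as an illustration of what is being left to the application here, a user who needs to distinguish tunnel types on a shared queue could reserve a few bits of the flow mark themselves. This is a hypothetical application-side encoding, not part of this patch set; all names below are illustrative.]

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical application-side convention: the low 2 bits of the flow
 * mark carry the tunnel type, the remaining bits stay free for the
 * application's own flow identifiers. */
#define APP_MARK_TUNNEL_BITS  2
#define APP_MARK_TUNNEL_MASK  ((1u << APP_MARK_TUNNEL_BITS) - 1)

enum app_tunnel { APP_TUN_NONE = 0, APP_TUN_VXLAN = 1, APP_TUN_GRE = 2 };

/* Build the mark value to attach to a flow rule. */
static inline uint32_t
app_mark_encode(uint32_t app_id, enum app_tunnel t)
{
	return (app_id << APP_MARK_TUNNEL_BITS) | (uint32_t)t;
}

/* Recover the tunnel type from a received packet's mark. */
static inline enum app_tunnel
app_mark_tunnel(uint32_t mark)
{
	return (enum app_tunnel)(mark & APP_MARK_TUNNEL_MASK);
}

/* Recover the application's own flow identifier. */
static inline uint32_t
app_mark_id(uint32_t mark)
{
	return mark >> APP_MARK_TUNNEL_BITS;
}
```

This also shows the cost Nélio points out: every bit reserved for the tunnel type halves the identifier space left to the application.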

> > > Signed-off-by: Xueming Li <xuemingl@mellanox.com>
<snip/>
> > >  /**
> > > + * RXQ update after flow rule creation.
> > > + *
> > > + * @param dev
> > > + *   Pointer to Ethernet device.
> > > + * @param flow
> > > + *   Pointer to the flow rule.
> > > + */
> > > +static void
> > > +mlx5_flow_create_update_rxqs(struct rte_eth_dev *dev, struct rte_flow *flow)
> > > +{
> > > +	struct priv *priv = dev->data->dev_private;
> > > +	unsigned int i;
> > > +
> > > +	if (!dev->data->dev_started)
> > > +		return;
> > > +	for (i = 0; i != flow->rss_conf.queue_num; ++i) {
> > > +		struct mlx5_rxq_data *rxq_data = (*priv->rxqs)
> > > +						 [(*flow->queues)[i]];
> > > +		struct mlx5_rxq_ctrl *rxq_ctrl =
> > > +			container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
> > > +		uint8_t tunnel = PTYPE_IDX(flow->tunnel);
> > > +
> > > +		rxq_data->mark |= flow->mark;
> > > +		if (!tunnel)
> > > +			continue;
> > > +		rxq_ctrl->tunnel_types[tunnel] += 1;
> > 
> > I don't understand why you need such an array; the NIC is unable to return
> > the tunnel type as it returns only one bit saying "tunnel".
> > Why not store the currently configured tunnel in the priv structure?
> 
> This array is used to count the tunnel types bound to a queue. If only one
> tunnel type is bound, ptype will report that tunnel type; RTE_PTYPE_TUNNEL_MASK
> (the max value) will be returned if multiple types are bound to a queue.
> 
> The flow RSS action specifies the queues bound to the tunnel, so we can't
> assume all queues have the same tunnel types; that is why this is a
> per-queue structure.

There is something I am missing here: how, in the data plane, can the PMD
understand from 1 bit which kind of tunnel the packet matches?
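[Editorial note: a minimal sketch of the counting scheme Xueming describes above, using mock structures rather than the actual mlx5 ones; `MLX5_FLOW_N_TUNNELS`, the field names, and the mapping array are illustrative assumptions.]

```c
#include <stdint.h>
#include <assert.h>

#define MLX5_FLOW_N_TUNNELS 8              /* illustrative count */
#define RTE_PTYPE_TUNNEL_MASK 0x0000f000   /* value from rte_mbuf_ptype.h */

/* Mock of the per-queue state discussed in the thread. */
struct mock_rxq {
	uint16_t tunnel_types[MLX5_FLOW_N_TUNNELS]; /* flows per tunnel type */
	uint32_t tunnel;                            /* ptype exposed to apps */
};

/* Recompute the queue-level tunnel ptype after a flow is added or
 * removed: a single active type is reported as-is, several types
 * collapse to RTE_PTYPE_TUNNEL_MASK, none means 0.  `ptype_of[]` maps
 * a counter index back to an RTE_PTYPE_TUNNEL_* value. */
static void
update_queue_tunnel(struct mock_rxq *q, const uint32_t *ptype_of)
{
	unsigned int i, n = 0;
	uint32_t last = 0;

	for (i = 0; i != MLX5_FLOW_N_TUNNELS; ++i)
		if (q->tunnel_types[i]) {
			++n;
			last = ptype_of[i];
		}
	q->tunnel = n == 0 ? 0 : n == 1 ? last : RTE_PTYPE_TUNNEL_MASK;
}
```

This makes the trade-off in the exchange concrete: the hardware only says "tunnel", so a precise type can be reported per queue only while all flows on that queue agree on one type.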

<snip/>
> > > @@ -2334,9 +2414,9 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct mlx5_flows *list)
> > > {
> > >  	struct priv *priv = dev->data->dev_private;
> > >  	struct rte_flow *flow;
> > > +	unsigned int i;
> > >
> > >  	TAILQ_FOREACH_REVERSE(flow, list, mlx5_flows, next) {
> > > -		unsigned int i;
> > >  		struct mlx5_ind_table_ibv *ind_tbl = NULL;
> > >
> > >  		if (flow->drop) {
> > > @@ -2382,6 +2462,16 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct mlx5_flows *list)
> > >  		DRV_LOG(DEBUG, "port %u flow %p removed", dev->data->port_id,
> > >  			(void *)flow);
> > >  	}
> > > +	/* Cleanup Rx queue tunnel info. */
> > > +	for (i = 0; i != priv->rxqs_n; ++i) {
> > > +		struct mlx5_rxq_data *q = (*priv->rxqs)[i];
> > > +		struct mlx5_rxq_ctrl *rxq_ctrl =
> > > +			container_of(q, struct mlx5_rxq_ctrl, rxq);
> > > +
> > > +		memset((void *)rxq_ctrl->tunnel_types, 0,
> > > +		       sizeof(rxq_ctrl->tunnel_types));
> > > +		q->tunnel = 0;
> > > +	}
> > >  }
> > 
> > This hunk does not handle the fact that the Rx queue array may have holes,
> > i.e. the application is allowed to ask for 10 queues and only initialise
> > some of them.  In such a situation this code will segfault.
> 
> In other words, "q" could be NULL, correct? I'll add a check for this.

Correct.
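[Editorial note: a minimal sketch of that NULL check, with a mock structure standing in for `mlx5_rxq_ctrl`; field names follow the hunk quoted above but the types are illustrative.]

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

#define MLX5_FLOW_N_TUNNELS 8  /* illustrative count */

/* Mock of the per-queue control structure from the hunk above. */
struct mock_rxq_ctrl {
	uint16_t tunnel_types[MLX5_FLOW_N_TUNNELS];
	uint32_t tunnel;
};

/* Reset tunnel state on every *configured* queue.  Entries in `rxqs`
 * may be NULL when the application asked for more queues than it set
 * up, so skip the holes instead of dereferencing them. */
static void
reset_tunnel_info(struct mock_rxq_ctrl **rxqs, unsigned int n)
{
	unsigned int i;

	for (i = 0; i != n; ++i) {
		struct mock_rxq_ctrl *q = rxqs[i];

		if (q == NULL)
			continue;
		memset(q->tunnel_types, 0, sizeof(q->tunnel_types));
		q->tunnel = 0;
	}
}
```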

> BTW, there should be an action item to add such a check in RSS/queue flow creation.

As it is the responsibility of the application/user to make rules according
to what it has configured, such a check has not been added.  It can still be
added, but it cannot be considered a fix.

> > It should only memset the Rx queues that are part of the flow, not the others.
> 
> Cleaning this up (decreasing the tunnel_types counter of each queue) from
> each flow would be time consuming.

Considering flows already rely on syscalls to communicate with the kernel,
the extra cycles consumed to clear only the queues that are part of this
flow are negligible.

By the way, in the same function the mark is cleared only for the queues
that are part of the flow; the same loop can be used to clear this tunnel
information at the same time.
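[Editorial note: roughly what that per-flow loop could look like, sketched with mock structures; the actual mlx5 teardown differs, and recomputing the exposed ptype from the remaining counters is left out here.]

```c
#include <stdint.h>
#include <assert.h>

#define MLX5_FLOW_N_TUNNELS 8  /* illustrative count */

/* Mock of the per-queue state touched on flow removal. */
struct mock_rxq {
	uint16_t tunnel_types[MLX5_FLOW_N_TUNNELS]; /* flows per type */
	int mark;
};

/* Per-flow teardown: walk only the queues referenced by the flow,
 * clearing the mark and decrementing this flow's tunnel-type refcount
 * in the same loop, instead of memset-ing every queue of the port. */
static void
flow_detach_queues(struct mock_rxq **queues, unsigned int queue_n,
		   unsigned int tunnel_idx)
{
	unsigned int i;

	for (i = 0; i != queue_n; ++i) {
		struct mock_rxq *q = queues[i];

		q->mark = 0;
		if (q->tunnel_types[tunnel_idx] > 0)
			q->tunnel_types[tunnel_idx] -= 1;
	}
}
```

Because each flow only undoes its own contribution, an orderly stop leaves all counters at zero without a port-wide sweep.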

> If an error happens, the counter will not be cleared, and that state will
> impact the tunnel type after the port is started again.

Apart from an implementation error, what other kind of error do you fear
could happen?

Thanks,

-- 
Nélio Laranjeiro
6WIND

