From: "Nélio Laranjeiro" <nelio.laranjeiro@6wind.com>
To: "Xueming(Steven) Li" <xuemingl@mellanox.com>
Cc: Shahaf Shuler <shahafs@mellanox.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2 04/15] net/mlx5: support Rx tunnel type identification
Date: Fri, 13 Apr 2018 10:37:48 +0200	[thread overview]
Message-ID: <20180413083748.nshoium4m5dpysvb@laranjeiro-vm.dev.6wind.com> (raw)
In-Reply-To: <VI1PR05MB167875AA692A2D0B7250675DACBC0@VI1PR05MB1678.eurprd05.prod.outlook.com>

On Thu, Apr 12, 2018 at 02:27:45PM +0000, Xueming(Steven) Li wrote:
> > -----Original Message-----
> > From: Nélio Laranjeiro <nelio.laranjeiro@6wind.com>
> > Sent: Thursday, April 12, 2018 5:51 PM
> > To: Xueming(Steven) Li <xuemingl@mellanox.com>
> > Cc: Shahaf Shuler <shahafs@mellanox.com>; dev@dpdk.org
> > Subject: Re: [PATCH v2 04/15] net/mlx5: support Rx tunnel type
> > identification
> > 
> > On Wed, Apr 11, 2018 at 08:11:50AM +0000, Xueming(Steven) Li wrote:
> > > Hi Nelio,
> > >
> > > > -----Original Message-----
> > > > From: Nélio Laranjeiro <nelio.laranjeiro@6wind.com>
> > > > Sent: Tuesday, April 10, 2018 11:17 PM
> > > > To: Xueming(Steven) Li <xuemingl@mellanox.com>
> > > > Cc: Shahaf Shuler <shahafs@mellanox.com>; dev@dpdk.org
> > > > Subject: Re: [PATCH v2 04/15] net/mlx5: support Rx tunnel type
> > > > identification
> > > >
> > > > On Tue, Apr 10, 2018 at 09:34:04PM +0800, Xueming Li wrote:
> > > > > This patch introduces tunnel type identification based on flow
> > > > > rules.  If flows of multiple tunnel types are built on the same
> > > > > queue, RTE_PTYPE_TUNNEL_MASK will be returned; bits in the flow
> > > > > mark can be used as a tunnel type identifier.
> > > >
> > > > I don't see anywhere in this patch where the bits are reserved to
> > > > identify a flow, nor values which can help to identify it.
> > > >
> > > > Is this missing?
> > > >
> > > > Anyway, we already have very few bits in the mark, which makes it
> > > > difficult for the user to use; reserving some more may lead to
> > > > removing mark support from the flows.
> > >
> > > Not all users will use multiple tunnel types; this is not included
> > > in this patch set and is left to the user's decision.  I'll update
> > > the comments to make this clear.
> > 
> > Thanks,
> > 
> > > > > Signed-off-by: Xueming Li <xuemingl@mellanox.com>
> > <snip/>
> > > > >  /**
> > > > > + * RXQ update after flow rule creation.
> > > > > + *
> > > > > + * @param dev
> > > > > + *   Pointer to Ethernet device.
> > > > > + * @param flow
> > > > > + *   Pointer to the flow rule.
> > > > > + */
> > > > > +static void
> > > > > +mlx5_flow_create_update_rxqs(struct rte_eth_dev *dev, struct rte_flow *flow)
> > > > > +{
> > > > > +	struct priv *priv = dev->data->dev_private;
> > > > > +	unsigned int i;
> > > > > +
> > > > > +	if (!dev->data->dev_started)
> > > > > +		return;
> > > > > +	for (i = 0; i != flow->rss_conf.queue_num; ++i) {
> > > > > +		struct mlx5_rxq_data *rxq_data = (*priv->rxqs)
> > > > > +						 [(*flow->queues)[i]];
> > > > > +		struct mlx5_rxq_ctrl *rxq_ctrl =
> > > > > +			container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
> > > > > +		uint8_t tunnel = PTYPE_IDX(flow->tunnel);
> > > > > +
> > > > > +		rxq_data->mark |= flow->mark;
> > > > > +		if (!tunnel)
> > > > > +			continue;
> > > > > +		rxq_ctrl->tunnel_types[tunnel] += 1;
> > > >
> > > > I don't understand why you need such an array, as the NIC is
> > > > unable to return the tunnel type: it returns only one bit saying
> > > > tunnel.  Why doesn't it store the currently configured tunnel in
> > > > the priv structure?
> > >
> > > This array is used to count the tunnel types bound to a queue: if
> > > only one tunnel type is bound, ptype will report that tunnel type;
> > > RTE_PTYPE_TUNNEL_MASK (the max value) will be returned if multiple
> > > types are bound to a queue.
> > >
> > > The flow RSS action specifies the queues bound to a tunnel, so we
> > > can't assume all queues have the same tunnel types; that is why this
> > > is a per-queue structure.
> > 
> > There is something I am missing here: how, in the data plane, can the
> > PMD tell from a single bit which kind of tunnel the packet matches?
> 
> The code below this line is the answer, let me post it here:
> 		if (rxq_data->tunnel != flow->tunnel)
> 			rxq_data->tunnel = rxq_data->tunnel ?
> 					   RTE_PTYPE_TUNNEL_MASK :
> 					   flow->tunnel;
> If no tunnel type is associated with the rxq, the tunnel type from the
> flow is used.  If the flow brings a different tunnel type,
> RTE_PTYPE_TUNNEL_MASK is used.
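
That update logic can be exercised on its own.  A minimal standalone
sketch (the struct and helper names are hypothetical; only the
RTE_PTYPE_TUNNEL_MASK value mirrors DPDK's rte_mbuf ptype encoding):

```c
/* Standalone sketch of the per-queue tunnel resolution quoted above.
 * `fake_rxq` and `rxq_apply_flow_tunnel` are hypothetical names; only
 * RTE_PTYPE_TUNNEL_MASK matches the real rte_mbuf ptype encoding. */
#include <stdint.h>

#define RTE_PTYPE_TUNNEL_MASK 0x0000f000

struct fake_rxq {
	uint32_t tunnel; /* resolved tunnel ptype for this queue */
};

/* The first flow sets the queue's tunnel type; a flow carrying a
 * different tunnel type degrades it to RTE_PTYPE_TUNNEL_MASK. */
static void
rxq_apply_flow_tunnel(struct fake_rxq *rxq, uint32_t flow_tunnel)
{
	if (rxq->tunnel != flow_tunnel)
		rxq->tunnel = rxq->tunnel ? RTE_PTYPE_TUNNEL_MASK :
					    flow_tunnel;
}
```

E.g. applying VXLAN (0x00003000) twice keeps the queue at VXLAN, while a
later GRE (0x00002000) flow degrades it to the mask.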

From my understanding, when there are several tunnel offloads on the
same queue, the mbuf ptype will contain RTE_PTYPE_TUNNEL_MASK:

   @@ -1601,7 +1605,7 @@ rxq_cq_to_pkt_type(volatile struct mlx5_cqe *cqe)
           * bit[7] = outer_l3_type
           */
          idx = ((pinfo & 0x3) << 6) | ((ptype & 0xfc00) >> 10);
  -       return mlx5_ptype_table[idx];
  +       return mlx5_ptype_table[idx] | rxq->tunnel * !!(idx & (1 << 6));
   }


It is used by the Rx burst functions:

/* Update packet information. */
 pkt->packet_type = rxq_cq_to_pkt_type(cqe);

Is this correct?
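
If I read the trick correctly, bit 6 of idx carries the CQE tunnel
flag, so multiplying by !!(idx & (1 << 6)) ORs the queue's tunnel ptype
in without a branch.  A small standalone sketch of that selection (the
table content here is illustrative, not the real mlx5_ptype_table):

```c
/* Branch-free tunnel ptype selection as in rxq_cq_to_pkt_type():
 * OR the queue's tunnel ptype in only when bit 6 of idx is set.
 * The table values are illustrative, not the real mlx5_ptype_table. */
#include <stdint.h>

static uint32_t
pkt_type_with_tunnel(const uint32_t *table, uint8_t idx,
		     uint32_t rxq_tunnel)
{
	/* !!(idx & (1 << 6)) evaluates to 1 for tunneled packets and 0
	 * otherwise, so the multiplication selects rxq_tunnel or 0. */
	return table[idx] | rxq_tunnel * !!(idx & (1 << 6));
}
```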

There is another strange point here:

 +       [PTYPE_IDX(RTE_PTYPE_TUNNEL_VXLAN)] = RTE_PTYPE_TUNNEL_VXLAN |
 +                                             RTE_PTYPE_L4_UDP,

According to RFC 7348 [1], a VXLAN packet with an outer IPv6 header is
possible.  How do you handle it?

> > <snip/>
> > > > > @@ -2334,9 +2414,9 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct mlx5_flows *list)
> > > > >  {
> > > > >  	struct priv *priv = dev->data->dev_private;
> > > > >  	struct rte_flow *flow;
> > > > > +	unsigned int i;
> > > > >
> > > > >  	TAILQ_FOREACH_REVERSE(flow, list, mlx5_flows, next) {
> > > > > -		unsigned int i;
> > > > >  		struct mlx5_ind_table_ibv *ind_tbl = NULL;
> > > > >
> > > > >  		if (flow->drop) {
> > > > > @@ -2382,6 +2462,16 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct mlx5_flows *list)
> > > > >  		DRV_LOG(DEBUG, "port %u flow %p removed",
> > > > >  			dev->data->port_id, (void *)flow);
> > > > >  	}
> > > > > +	/* Cleanup Rx queue tunnel info. */
> > > > > +	for (i = 0; i != priv->rxqs_n; ++i) {
> > > > > +		struct mlx5_rxq_data *q = (*priv->rxqs)[i];
> > > > > +		struct mlx5_rxq_ctrl *rxq_ctrl =
> > > > > +			container_of(q, struct mlx5_rxq_ctrl, rxq);
> > > > > +
> > > > > +		memset((void *)rxq_ctrl->tunnel_types, 0,
> > > > > +		       sizeof(rxq_ctrl->tunnel_types));
> > > > > +		q->tunnel = 0;
> > > > > +	}
> > > > >  }
> > > >
> > > > This hunk does not handle the fact that the Rx queue array may
> > > > have holes, i.e. the application is allowed to ask for 10 queues
> > > > and only initialise some.  In such a situation this code will
> > > > segfault.
> > >
> > > In other words, "q" could be NULL, correct?  I'll add a check for this.
> > 
> > Correct.
> > 
> > > BTW, there should be an action item to add such a check in RSS/queue
> > > flow creation.
> > 
> > As it is the responsibility of the application/user to make rules
> > according to what it has configured, it has not been added.  It can
> > still be added, but it cannot be considered a fix.
> > 
> > > > It should only memset the Rx queues that are part of the flow,
> > > > not the others.
> > >
> > > Cleaning this (decreasing the tunnel_types counter of each queue)
> > > for each flow would be time consuming.
> > 
> > Considering flows already rely on syscalls to communicate with the
> > kernel, the extra cycle consumption to clear only the queues that are
> > part of this flow is negligible.
> > 
> > By the way, in the same function the mark is cleared only for the
> > queues that are part of the flow; the same loop can be used to clear
> > that tunnel information at the same time.
> > 
> > > If an error happens, the counter will not be cleared, and such
> > > state will impact the tunnel type after the port starts again.
> > 
> > Apart from an implementation error, what other kind do you fear could
> > happen?
> 
> The mark of the rxq is simply reset to 0, but this field is a counter;
> the final target is to clear the field's value, so my code should be
> straightforward and error-free 😊
> 
> From a quick look, this function could be much simpler than it is today:
> 1. clean the verbs flow and hrxq where possible, regardless of the flow
>    type.
> 2. clean the rxq state: mark and tunnel_types.

Ok.
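
For what it is worth, a minimal sketch of such a per-flow cleanup (all
names are hypothetical, and the state is simply zeroed rather than
reference-counted; note the NULL check to handle holes in the Rx queue
array):

```c
/* Hypothetical sketch of the per-flow cleanup discussed above: clear
 * the mark and tunnel state only for the queues of this flow, and skip
 * unconfigured queue slots (NULL entries). */
#include <stddef.h>
#include <stdint.h>

struct fake_rxq {
	uint32_t mark;
	uint32_t tunnel;
};

struct fake_flow {
	unsigned int queue_n;    /* number of queues in the RSS action */
	const uint16_t *queues;  /* indices into the rxqs array */
};

static void
flow_cleanup_rxqs(struct fake_rxq **rxqs, const struct fake_flow *flow)
{
	unsigned int i;

	for (i = 0; i != flow->queue_n; ++i) {
		struct fake_rxq *q = rxqs[flow->queues[i]];

		if (q == NULL) /* the application may leave holes */
			continue;
		q->mark = 0;
		q->tunnel = 0;
	}
}
```

Queues not referenced by the flow keep their state untouched, which is
the point of iterating the flow's queue list instead of all rxqs.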

Thanks,

[1] https://dpdk.org/patch/37965

-- 
Nélio Laranjeiro
6WIND

