DPDK patches and discussions
* [dpdk-dev] [PATCH] net/mlx5: fix missing Rx queue flags clear
@ 2020-04-17  7:23 Bing Zhao
  2020-04-29  9:03 ` Raslan Darawsheh
  0 siblings, 1 reply; 2+ messages in thread
From: Bing Zhao @ 2020-04-17  7:23 UTC (permalink / raw)
  To: orika, viacheslavo, rasland; +Cc: matan, dev

After an offload flow is inserted, the software flag information of
the Rx queue is updated according to the flow. When a packet is
received on this queue, the hardware packet type bits and the
software flags are used together to look up the inner packet and
tunnel header type (if any) in the global packet type table.
When a flow is destroyed, the corresponding Rx queue flags need to
be updated as well. All flags should be cleared when a device is
closed, because all control flows and application flows are no
longer valid at that point. This clearing step was missed when the
non-cached mode was implemented.

Fixes: e1f94d51b8f7 ("net/mlx5: change operations for non-cached flows")

Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_flow.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index c529aa3..bb7fb1e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -4653,6 +4653,7 @@ struct rte_flow *
 mlx5_flow_stop_default(struct rte_eth_dev *dev)
 {
 	flow_mreg_del_default_copy_action(dev);
+	flow_rxq_flags_clear(dev);
 }
 
 /**
-- 
1.8.3.1
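
To illustrate the bookkeeping the commit message describes, here is
a minimal, self-contained sketch of the idea. It is not the mlx5
driver's actual code: the structure name (demo_rxq), the field
names, and the TUNNEL_MAX bound are simplified assumptions made for
illustration. Each Rx queue keeps per-tunnel-type flow counters and
a derived software flag mask; flows bump the counters on creation
and drop them on destruction, and a clear-all helper of the kind
the patch wires up resets everything when the device is closed.

#include <stdint.h>
#include <string.h>

#define TUNNEL_MAX 8 /* assumed number of tunnel header types */

/* Hypothetical stand-in for the per-queue flow bookkeeping;
 * names and layout are illustrative, not the driver's. */
struct demo_rxq {
	uint32_t flow_tunnels_n[TUNNEL_MAX]; /* live flows per tunnel type */
	uint32_t tunnel_mask;                /* software flags used on Rx */
};

/* Flow insertion: the first flow of a tunnel type sets its flag. */
static void demo_rxq_flags_set(struct demo_rxq *q, unsigned int tt)
{
	if (q->flow_tunnels_n[tt]++ == 0)
		q->tunnel_mask |= 1u << tt;
}

/* Flow destruction: the last flow of a tunnel type clears its flag. */
static void demo_rxq_flags_trim(struct demo_rxq *q, unsigned int tt)
{
	if (--q->flow_tunnels_n[tt] == 0)
		q->tunnel_mask &= ~(1u << tt);
}

/* Device close/stop: every flow is gone, so reset all counters and
 * flags at once; this is the role of the call the patch adds. */
static void demo_rxq_flags_clear(struct demo_rxq *q)
{
	memset(q->flow_tunnels_n, 0, sizeof(q->flow_tunnels_n));
	q->tunnel_mask = 0;
}

Without the final clear, tunnel_mask would survive a device close;
on the next start, a stale software flag could be combined with the
hardware packet type bits and select a wrong entry from the global
packet type table.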


* Re: [dpdk-dev] [PATCH] net/mlx5: fix missing Rx queue flags clear
  2020-04-17  7:23 [dpdk-dev] [PATCH] net/mlx5: fix missing Rx queue flags clear Bing Zhao
@ 2020-04-29  9:03 ` Raslan Darawsheh
  0 siblings, 0 replies; 2+ messages in thread
From: Raslan Darawsheh @ 2020-04-29  9:03 UTC (permalink / raw)
  To: Bing Zhao, Ori Kam, Slava Ovsiienko; +Cc: Matan Azrad, dev

Hi,

> -----Original Message-----
> From: Bing Zhao <bingz@mellanox.com>
> Sent: Friday, April 17, 2020 10:24 AM
> To: Ori Kam <orika@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>; Raslan Darawsheh <rasland@mellanox.com>
> Cc: Matan Azrad <matan@mellanox.com>; dev@dpdk.org
> Subject: [PATCH] net/mlx5: fix missing Rx queue flags clear
> 
> After an offload flow is inserted, the software flag information of
> the Rx queue is updated according to the flow. When a packet is
> received on this queue, the hardware packet type bits and the
> software flags are used together to look up the inner packet and
> tunnel header type (if any) in the global packet type table.
> When a flow is destroyed, the corresponding Rx queue flags need to
> be updated as well. All flags should be cleared when a device is
> closed, because all control flows and application flows are no
> longer valid at that point. This clearing step was missed when the
> non-cached mode was implemented.
> 
> Fixes: e1f94d51b8f7 ("net/mlx5: change operations for non-cached flows")
Fixed the Fixes line while applying,
> 
> Signed-off-by: Bing Zhao <bingz@mellanox.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
>  drivers/net/mlx5/mlx5_flow.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> index c529aa3..bb7fb1e 100644
> --- a/drivers/net/mlx5/mlx5_flow.c
> +++ b/drivers/net/mlx5/mlx5_flow.c
> @@ -4653,6 +4653,7 @@ struct rte_flow *
>  mlx5_flow_stop_default(struct rte_eth_dev *dev)
>  {
>  	flow_mreg_del_default_copy_action(dev);
> +	flow_rxq_flags_clear(dev);
>  }
> 
>  /**
> --
> 1.8.3.1


Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
