patches for DPDK stable branches
From: "Xueming(Steven) Li" <xuemingl@nvidia.com>
To: Dmitry Kozlyuk <dkozlyuk@nvidia.com>,
	"stable@dpdk.org" <stable@dpdk.org>
Cc: Matan Azrad <matan@nvidia.com>, Slava Ovsiienko <viacheslavo@nvidia.com>
Subject: Re: [PATCH 20.11] net/mlx5: fix flow shared age action reference counting
Date: Mon, 6 Dec 2021 05:55:14 +0000	[thread overview]
Message-ID: <9fb1c42f5f2c54b48d002a6903e63d8243959877.camel@nvidia.com> (raw)
In-Reply-To: <20211203201139.1633370-1-dkozlyuk@nvidia.com>

Thanks, applied.

On Fri, 2021-12-03 at 22:11 +0200, Dmitry Kozlyuk wrote:
> [ upstream commit b09c65fa4f8bb55880b6b36c849e4ed1bb815227 ]
> 
> When a shared AGE action is used in a flow rule with a pattern that
> causes RSS expansion, each device flow generated by the expansion
> incremented the reference counter of the action. When such a flow was
> destroyed, its action reference counter was decremented only once.
> The action remained marked as in use and could not be destroyed.
> The error was visible only with --log-level=pmd.net.mlx5:debug
> ("...references 1" is not an error):
> 
>     mlx5_pci: Shared age action 65536 was released with references 4.
> 
> Increment action counter only once for the original flow rule.
> 
> Fixes: 81073e1f8ce1 ("net/mlx5: support shared age action")
> 
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_flow_dv.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index 1793683421..66732cbdd4 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -9812,6 +9812,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
>  		const struct rte_flow_action_meter *mtr;
>  		struct mlx5_flow_tbl_resource *tbl;
>  		struct mlx5_aso_age_action *age_act;
> +		uint32_t owner_idx;
>  		uint32_t port_id = 0;
>  		struct mlx5_flow_dv_port_id_action_resource port_id_resource;
>  		int action_type = actions->type;
> @@ -9951,10 +9952,13 @@ flow_dv_translate(struct rte_eth_dev *dev,
>  				MLX5_FLOW_FATE_QUEUE;
>  			break;
>  		case MLX5_RTE_FLOW_ACTION_TYPE_AGE:
> -			flow->age = (uint32_t)(uintptr_t)(action->conf);
> -			age_act = flow_aso_age_get_by_idx(dev, flow->age);
> -			__atomic_fetch_add(&age_act->refcnt, 1,
> -					   __ATOMIC_RELAXED);
> +			owner_idx = (uint32_t)(uintptr_t)action->conf;
> +			age_act = flow_aso_age_get_by_idx(dev, owner_idx);
> +			if (flow->age == 0) {
> +				flow->age = owner_idx;
> +				__atomic_fetch_add(&age_act->refcnt, 1,
> +						   __ATOMIC_RELAXED);
> +			}
>  			dev_flow->dv.actions[actions_n++] = age_act->dr_action;
>  			action_flags |= MLX5_FLOW_ACTION_AGE;
>  			break;


Thread overview: 2+ messages
2021-12-03 20:11 Dmitry Kozlyuk
2021-12-06  5:55 ` Xueming(Steven) Li [this message]
