DPDK patches and discussions
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Bing Zhao <bingz@nvidia.com>,
	Slava Ovsiienko <viacheslavo@nvidia.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	Raslan Darawsheh <rasland@nvidia.com>
Cc: Ori Kam <orika@nvidia.com>, Suanming Mou <suanmingm@nvidia.com>,
	Matan Azrad <matan@nvidia.com>,
	Maayan Kashani <mkashani@nvidia.com>
Subject: RE: [PATCH] net/mlx5: fix the leak of action data list
Date: Mon, 18 Nov 2024 10:36:34 +0000	[thread overview]
Message-ID: <CH3PR12MB84605F985354965C4D750B88A4272@CH3PR12MB8460.namprd12.prod.outlook.com> (raw)
In-Reply-To: <20241118065259.250538-1-bingz@nvidia.com>

Hi,

> -----Original Message-----
> From: Bing Zhao <bingz@nvidia.com>
> Sent: Monday, November 18, 2024 07:53
> To: Dariusz Sosnowski <dsosnowski@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; dev@dpdk.org; Raslan Darawsheh
> <rasland@nvidia.com>
> Cc: Ori Kam <orika@nvidia.com>; Suanming Mou <suanmingm@nvidia.com>;
> Matan Azrad <matan@nvidia.com>; Maayan Kashani
> <mkashani@nvidia.com>
> Subject: [PATCH] net/mlx5: fix the leak of action data list
> 
> In the actions construction of NT2HWS, the `masks` parameter is always set to
> NULL and all the actions will be translated in the "construct" stage as non-fixed
> ones.

Please replace "NT2HWS" in the commit message with "non-template flow API".

> 
> In the stage of translating the actions template, the action data is
> allocated from the pool and managed in a list. The list is released when
> the template is destroyed together with its actions. In the NT2HWS
> implementation, the temporary template is freed directly and the actions
> are destroyed with the flow rule deletion, so no other rule will ever use
> this list.
> 
> The actions data in the list should be freed when the actions construction is
> done.
> 
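
Just to spell out what is leaking for other reviewers: it is the per-action
construct data queued during translation, not the DR actions themselves.
A rough sketch of the flush pattern as I read it from the hunks below (the
helper name, the "priv" parameter and the list-head field name here are
only illustrative, the real code is in the patch):

	/* Return every queued construct-data entry to the indexed pool;
	 * the DR actions referenced by the flow are left untouched. */
	static void
	act_data_list_flush(struct mlx5_priv *priv, struct mlx5_hw_actions *acts)
	{
		struct mlx5_action_construct_data *data;

		while (!LIST_EMPTY(&acts->act_list)) {
			data = LIST_FIRST(&acts->act_list);
			LIST_REMOVE(data, next);
			mlx5_ipool_free(priv->acts_ipool, data->idx);
		}
	}

Please correct me if I misread the intention.
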
> Fixes: ff4064d5b1fe ("net/mlx5: support bulk actions in non-template flow")
> Cc: mkashani@nvidia.com
> 
> Signed-off-by: Bing Zhao <bingz@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_flow_hw.c | 63 ++++++++++++++++++++-------------
>  1 file changed, 39 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
> index 50dbaa27ab..7233ac46c4 100644
> --- a/drivers/net/mlx5/mlx5_flow_hw.c
> +++ b/drivers/net/mlx5/mlx5_flow_hw.c
> @@ -966,18 +966,15 @@ __flow_hw_actions_release(struct rte_eth_dev *dev, struct mlx5_hw_actions *acts)
>  }
> 
>  /**
> - * Destroy DR actions created by action template.
> - *
> - * For DR actions created during table creation's action translate.
> - * Need to destroy the DR action when destroying the table.
> + * Release the action data back into the pool without destroying any action.
>   *
>   * @param[in] dev
>   *   Pointer to the rte_eth_dev structure.
>   * @param[in] acts
>   *   Pointer to the template HW steering DR actions.
>   */
> -static void
> -__flow_hw_action_template_destroy(struct rte_eth_dev *dev, struct mlx5_hw_actions *acts)
> +static inline void
> +__flow_hw_act_data_flush(struct rte_eth_dev *dev, struct mlx5_hw_actions *acts)
>  {
>  	struct mlx5_priv *priv = dev->data->dev_private;
>  	struct mlx5_action_construct_data *data;
> @@ -987,7 +984,23 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev, struct mlx5_hw_action
>  		LIST_REMOVE(data, next);
>  		mlx5_ipool_free(priv->acts_ipool, data->idx);
>  	}
> +}
> 
> +/*
> + * Destroy DR actions created by action template.
> + *
> + * For DR actions created during table creation's action translate.
> + * Need to destroy the DR action when destroying the table.
> + *
> + * @param[in] dev
> + *   Pointer to the rte_eth_dev structure.
> + * @param[in] acts
> + *   Pointer to the template HW steering DR actions.
> + */
> +static void
> +__flow_hw_action_template_destroy(struct rte_eth_dev *dev, struct mlx5_hw_actions *acts)
> +{
> +	__flow_hw_act_data_flush(dev, acts);
>  	__flow_hw_actions_release(dev, acts);
>  }
> 
> @@ -13492,14 +13505,14 @@ flow_nta_build_template_mask(const struct rte_flow_action actions[],
> 
>  static int
>  flow_hw_translate_flow_actions(struct rte_eth_dev *dev,
> -			  const struct rte_flow_attr *attr,
> -			  const struct rte_flow_action actions[],
> -			  struct rte_flow_hw *flow,
> -			  struct mlx5_flow_hw_action_params *ap,
> -			  struct mlx5_hw_actions *hw_acts,
> -			  uint64_t item_flags, uint64_t action_flags,
> -			  bool external,
> -			  struct rte_flow_error *error)
> +			       const struct rte_flow_attr *attr,
> +			       const struct rte_flow_action actions[],
> +			       struct rte_flow_hw *flow,
> +			       struct mlx5_flow_hw_action_params *ap,
> +			       struct mlx5_hw_actions *hw_acts,
> +			       uint64_t item_flags, uint64_t action_flags,
> +			       bool external,
> +			       struct rte_flow_error *error)
>  {
>  	int ret = 0;
>  	uint32_t src_group = 0;
> @@ -13542,7 +13555,7 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev,
>  	table = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*table), 0, SOCKET_ID_ANY);
>  	if (!table)
>  		return rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION,
> -				   actions, "Failed to allocate dummy table");
> +					  actions, "Failed to allocate dummy table");
>  	at = __flow_hw_actions_template_create(dev, &template_attr, actions, masks, true, error);
>  	if (!at) {
>  		ret = -rte_errno;
> @@ -13556,27 +13569,29 @@ flow_hw_translate_flow_actions(struct rte_eth_dev *dev,
>  	memcpy(&table->cfg.attr.flow_attr, attr, sizeof(*attr));
>  	table->ats[0].action_template = at;
>  	ret = __flow_hw_translate_actions_template(dev, &table->cfg, hw_acts, at,
> -		&table->mpctx, true, error);
> +						   &table->mpctx, true, error);
>  	if (ret)
>  		goto end;
>  	/* handle bulk actions register. */
>  	ret = flow_hw_encap_decap_resource_register(dev, table, hw_acts, flow, error);
>  	if (ret)
> -		goto clean_up;
> +		goto end;
>  	ret = flow_hw_modify_hdr_resource_register(dev, table, hw_acts, flow, error);
>  	if (ret)
> -		goto clean_up;
> +		goto end;
>  	table->ats[0].acts = *hw_acts;
>  	ret = flow_hw_actions_construct(dev, flow, ap,
> -		&table->ats[0], item_flags, table,
> -		actions, hw_acts->rule_acts, 0, error);
> +					&table->ats[0], item_flags, table,
> +					actions, hw_acts->rule_acts, 0, error);
>  	if (ret)
> -		goto clean_up;
> +		goto end;
>  	goto end;
> -clean_up:
> -	/* Make sure that there is no garbage in the actions. */
> -	__flow_hw_action_template_destroy(dev, hw_acts);
>  end:
> +	if (ret)
> +		/* Make sure that there is no garbage in the actions. */
> +		__flow_hw_action_template_destroy(dev, hw_acts);
> +	else
> +		__flow_hw_act_data_flush(dev, hw_acts);
>  	if (table)
>  		mlx5_free(table);
>  	if (at)
> --
> 2.34.1
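
For other readers, the net effect on the tail of
flow_hw_translate_flow_actions() is roughly the following (condensed from
the hunk above, not a literal copy; the release of 'at' afterwards stays as
in the existing code):

	end:
		if (ret)
			/* Failure: drop both the construct data and the DR actions. */
			__flow_hw_action_template_destroy(dev, hw_acts);
		else
			/* Success: the DR actions stay with the flow, so only
			 * return the construct-data entries to the pool. */
			__flow_hw_act_data_flush(dev, hw_acts);
		if (table)
			mlx5_free(table);
		/* ... 'at' released here as before ... */

Folding the old clean_up label into the single end label also guarantees the
construct-data list is flushed on the success path, which is the actual fix
for the leak.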

Best regards,
Dariusz Sosnowski

