DPDK patches and discussions
From: Slava Ovsiienko <viacheslavo@nvidia.com>
To: Slava Ovsiienko <viacheslavo@nvidia.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "ferruh.yigit@intel.com" <ferruh.yigit@intel.com>,
	Raslan Darawsheh <rasland@nvidia.com>,
	Matan Azrad <matan@nvidia.com>,
	"stable@dpdk.org" <stable@dpdk.org>
Subject: RE: [PATCH] net/mlx5: remove redundant "set used"
Date: Thu, 11 Nov 2021 08:59:11 +0000	[thread overview]
Message-ID: <DM6PR12MB3753A4E679FBBF6414DBB3BADF949@DM6PR12MB3753.namprd12.prod.outlook.com> (raw)
In-Reply-To: <20211111084751.26721-1-viacheslavo@nvidia.com>

Hi, Ferruh

I've also inspected the mlx5 PMD code for other RTE_SET_USED() occurrences
with similar issues related to MLX5_ASSERT().

The patch http://patches.dpdk.org/project/dpdk/patch/20211111084751.26721-1-viacheslavo@nvidia.com/
addresses the few instances that were found.
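
For reference, the pattern in question looks roughly like the sketch below
(an illustrative sketch, not verbatim driver code; as far as I can tell,
claim_zero() keeps the call in all builds - with mlx5 debug enabled it
asserts a zero result, otherwise it simply evaluates the expression and
discards the result):

    /* Before: keep the result only to assert on it, then mark it "used"
     * so builds where MLX5_ASSERT() compiles out do not warn about the
     * otherwise unused variable. */
    int ret = pthread_mutex_lock(&sh->txpp.mutex);
    MLX5_ASSERT(!ret);
    RTE_SET_USED(ret);

    /* After: the call is still made, the zero-result check remains in
     * debug builds, and no local variable is needed. */
    claim_zero(pthread_mutex_lock(&sh->txpp.mutex));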

I do not mind squashing this with "net/mlx5: fix mutex unlock in txpp cleanup".
Once the code is upstream, I will take care of the backport for the LTS versions.

With best regards,
Slava

> -----Original Message-----
> From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Sent: Thursday, November 11, 2021 10:48
> To: dev@dpdk.org
> Cc: ferruh.yigit@intel.com; Raslan Darawsheh <rasland@nvidia.com>; Matan
> Azrad <matan@nvidia.com>; stable@dpdk.org
> Subject: [PATCH] net/mlx5: remove redundant "set used"
> 
> The patch just refines the code, replacing the pairs of MLX5_ASSERT() and
> RTE_SET_USED() with the equivalent claim_zero().
> 
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_txpp.c | 30 ++++++++++--------------------
>  1 file changed, 10 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
> index 73626f0e8f..af77e91e4c 100644
> --- a/drivers/net/mlx5/mlx5_txpp.c
> +++ b/drivers/net/mlx5/mlx5_txpp.c
> @@ -890,7 +890,6 @@ mlx5_txpp_start(struct rte_eth_dev *dev)
>  	struct mlx5_priv *priv = dev->data->dev_private;
>  	struct mlx5_dev_ctx_shared *sh = priv->sh;
>  	int err = 0;
> -	int ret;
> 
>  	if (!priv->config.tx_pp) {
>  		/* Packet pacing is not requested for the device. */
> @@ -903,14 +902,14 @@ mlx5_txpp_start(struct rte_eth_dev *dev)
>  		return 0;
>  	}
>  	if (priv->config.tx_pp > 0) {
> -		ret = rte_mbuf_dynflag_lookup
> -			(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);
> -		if (ret < 0)
> +		err = rte_mbuf_dynflag_lookup
> +			(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL);
> +		/* No flag registered means no service needed. */
> +		if (err < 0)
>  			return 0;
> +		err = 0;
>  	}
> -	ret = pthread_mutex_lock(&sh->txpp.mutex);
> -	MLX5_ASSERT(!ret);
> -	RTE_SET_USED(ret);
> +	claim_zero(pthread_mutex_lock(&sh->txpp.mutex));
>  	if (sh->txpp.refcnt) {
>  		priv->txpp_en = 1;
>  		++sh->txpp.refcnt;
> @@ -924,9 +923,7 @@ mlx5_txpp_start(struct rte_eth_dev *dev)
>  			rte_errno = -err;
>  		}
>  	}
> -	ret = pthread_mutex_unlock(&sh->txpp.mutex);
> -	MLX5_ASSERT(!ret);
> -	RTE_SET_USED(ret);
> +	claim_zero(pthread_mutex_unlock(&sh->txpp.mutex));
>  	return err;
>  }
> 
> @@ -944,28 +941,21 @@ mlx5_txpp_stop(struct rte_eth_dev *dev)
>  {
>  	struct mlx5_priv *priv = dev->data->dev_private;
>  	struct mlx5_dev_ctx_shared *sh = priv->sh;
> -	int ret;
> 
>  	if (!priv->txpp_en) {
>  		/* Packet pacing is already disabled for the device. */
>  		return;
>  	}
>  	priv->txpp_en = 0;
> -	ret = pthread_mutex_lock(&sh->txpp.mutex);
> -	MLX5_ASSERT(!ret);
> -	RTE_SET_USED(ret);
> +	claim_zero(pthread_mutex_lock(&sh->txpp.mutex));
>  	MLX5_ASSERT(sh->txpp.refcnt);
>  	if (!sh->txpp.refcnt || --sh->txpp.refcnt) {
> -		ret = pthread_mutex_unlock(&sh->txpp.mutex);
> -		MLX5_ASSERT(!ret);
> -		RTE_SET_USED(ret);
> +		claim_zero(pthread_mutex_unlock(&sh->txpp.mutex));
>  		return;
>  	}
>  	/* No references any more, do actual destroy. */
>  	mlx5_txpp_destroy(sh);
> -	ret = pthread_mutex_unlock(&sh->txpp.mutex);
> -	MLX5_ASSERT(!ret);
> -	RTE_SET_USED(ret);
> +	claim_zero(pthread_mutex_unlock(&sh->txpp.mutex));
>  }
> 
>  /*
> --
> 2.18.1



Thread overview: 14+ messages
2021-10-12 10:02 [dpdk-dev] [PATCH v5] net/mlx5: fix mutex unlock in txpp cleanup Chengfeng Ye
2021-11-02  7:55 ` Slava Ovsiienko
2021-11-09 11:08 ` Raslan Darawsheh
2021-11-10 16:57 ` Ferruh Yigit
2021-11-11  7:06   ` Slava Ovsiienko
2021-11-11  8:47     ` [PATCH] net/mlx5: remove redundant "set used" Viacheslav Ovsiienko
2021-11-11  8:59       ` Slava Ovsiienko [this message]
2021-11-11 12:08         ` Ferruh Yigit
2021-11-11 12:27           ` Slava Ovsiienko
2021-11-11 16:07             ` YE Chengfeng
2021-11-11 16:47               ` Slava Ovsiienko
2021-11-11 18:31                 ` YE Chengfeng
2021-11-16 14:52                   ` Re: " YE Chengfeng
2021-11-11 11:25     ` [dpdk-dev] [PATCH v5] net/mlx5: fix mutex unlock in txpp cleanup Ferruh Yigit
