From: Bing Zhao <bingz@nvidia.com>
To: <viacheslavo@nvidia.com>, <matan@nvidia.com>
Cc: <dev@dpdk.org>, <orika@nvidia.com>, <rasland@nvidia.com>,
	<stable@dpdk.org>
Subject: [dpdk-dev] [PATCH] net/mlx5: fix queue leaking in hairpin auto bind checking
Date: Sat, 24 Jul 2021 16:27:11 +0300	[thread overview]
Message-ID: <20210724132711.90109-1-bingz@nvidia.com> (raw)

During the start-up stage, hairpin auto bind is executed for each
port. All the Tx and Rx queues configured for the port must be
checked to determine whether hairpin auto bind is needed, i.e.:
1. The queue is a hairpin queue.
2. The peer port is the same one, and the peer queue is also of
   hairpin type.
3. The manual bind attribute is not set for this queue.

If the queue is not a hairpin queue, or it does not need to be bound
automatically, the reference count must be decreased by 1, since it
was increased by the mlx5_*xq_get() call. When the peer port is not
the same one, auto bind is not supported, and the corresponding
mlx5_*xq_release() call was missing in the current implementation.

By calling the release function before continuing, the reference
count is correct when the device close is invoked.
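
For clarity, a minimal sketch of the corrected Tx-side loop
(simplified from the diff below; declarations, the Rx-side loop, and
error handling are elided):

	for (i = 0; i != priv->txqs_n; ++i) {
		txq_ctrl = mlx5_txq_get(dev, i); /* takes a reference */
		if (!txq_ctrl)
			continue;
		/* Every skipped queue must drop the reference taken above. */
		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
			mlx5_txq_release(dev, i);
			continue;
		}
		if (txq_ctrl->hairpin_conf.manual_bind) {
			mlx5_txq_release(dev, i);
			return 0; /* manually bound: no auto bind needed */
		}
		/* ... perform the auto bind for this hairpin queue ... */
	}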

Fixes: aa8bea0e3455 ("net/mlx5: add conditional hairpin auto bind")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_trigger.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index a9d5d58fd9..54173bfacb 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -228,12 +228,11 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
+		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
-		if (txq_ctrl->hairpin_conf.peers[0].port != self_port)
-			continue;
 		if (txq_ctrl->hairpin_conf.manual_bind) {
 			mlx5_txq_release(dev, i);
 			return 0;
@@ -247,13 +246,12 @@ mlx5_hairpin_auto_bind(struct rte_eth_dev *dev)
 		txq_ctrl = mlx5_txq_get(dev, i);
 		if (!txq_ctrl)
 			continue;
-		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN) {
+		/* Skip hairpin queues with other peer ports. */
+		if (txq_ctrl->type != MLX5_TXQ_TYPE_HAIRPIN ||
+		    txq_ctrl->hairpin_conf.peers[0].port != self_port) {
 			mlx5_txq_release(dev, i);
 			continue;
 		}
-		/* Skip hairpin queues with other peer ports. */
-		if (txq_ctrl->hairpin_conf.peers[0].port != self_port)
-			continue;
 		if (!txq_ctrl->obj) {
 			rte_errno = ENOMEM;
 			DRV_LOG(ERR, "port %u no txq object found: %d",
-- 
2.27.0

