DPDK patches and discussions
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Matan Azrad <matan@nvidia.com>,
	Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
	Ori Kam <orika@nvidia.com>, Suanming Mou <suanmingm@nvidia.com>
Cc: <dev@dpdk.org>
Subject: [PATCH 1/2] net/mlx5: fix queue used to deallocate counter
Date: Tue, 27 Jun 2023 11:27:45 +0300	[thread overview]
Message-ID: <20230627082746.2466304-2-dsosnowski@nvidia.com> (raw)
In-Reply-To: <20230627082746.2466304-1-dsosnowski@nvidia.com>

When ports are configured to share flow engine resources
(through rte_flow_configure()), counter objects used during
flow rule creation are pulled directly from the shared counter pool,
by calling mlx5_hws_cnt_pool_get() with a NULL pointer passed as
the queue pointer.
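
For illustration, a minimal sketch of that allocation pattern inside
flow_hw_actions_construct() (it mirrors the hunk below; the
mlx5_hws_cnt_is_pool_shared() helper is the one added by this patch):

	/* Bypass the per-queue counter cache when the pool is shared. */
	cnt_queue = mlx5_hws_cnt_is_pool_shared(priv) ? NULL : &queue;
	ret = mlx5_hws_cnt_pool_get(priv->hws_cpool, cnt_queue,
				    &cnt_id, age_idx);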

When a flow rule was destroyed on the host port, the counter object
was not returned to the pool, because mlx5_hws_cnt_pool_put() was
always called with a pointer to the currently used queue.
This forced placing the counter in the local queue cache.
As a result, some counter objects became inaccessible during
application runtime, since the queue cache is never fully flushed.

This patch fixes that behavior by changing the flow rule destruction
logic. If the port is configured to use shared resources,
mlx5_hws_cnt_pool_put() is called with a NULL pointer passed as
the queue pointer.
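
A minimal sketch of the fixed destroy path, as applied in
flow_hw_age_count_release() in the diff below:

	/* Return the counter straight to the shared pool instead of
	 * the local queue cache when resources are shared. */
	cnt_queue = mlx5_hws_cnt_is_pool_shared(priv) ? NULL : &queue;
	mlx5_hws_cnt_pool_put(priv->hws_cpool, cnt_queue, &flow->cnt_id);
	flow->cnt_id = 0;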

Fixes: 13ea6bdcc7ee ("net/mlx5: support counters in cross port shared mode")
Cc: viacheslavo@nvidia.com

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 14 +++++++++-----
 drivers/net/mlx5/mlx5_hws_cnt.h | 16 ++++++++++++++++
 2 files changed, 25 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index b5137a822a..a0c7956626 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2273,6 +2273,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 		struct mlx5_hrxq *hrxq;
 		uint32_t ct_idx;
 		cnt_id_t cnt_id;
+		uint32_t *cnt_queue;
 		uint32_t mtr_id;
 
 		action = &actions[act_data->action_src];
@@ -2429,10 +2430,9 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				break;
 			/* Fall-through. */
 		case RTE_FLOW_ACTION_TYPE_COUNT:
-			ret = mlx5_hws_cnt_pool_get(priv->hws_cpool,
-					(priv->shared_refcnt ||
-					 priv->hws_cpool->cfg.host_cpool) ?
-					NULL : &queue, &cnt_id, age_idx);
+			/* If the port is engaged in resource sharing, do not use queue cache. */
+			cnt_queue = mlx5_hws_cnt_is_pool_shared(priv) ? NULL : &queue;
+			ret = mlx5_hws_cnt_pool_get(priv->hws_cpool, cnt_queue, &cnt_id, age_idx);
 			if (ret != 0)
 				return ret;
 			ret = mlx5_hws_cnt_pool_get_action_offset
@@ -3014,6 +3014,8 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 			  struct rte_flow_hw *flow,
 			  struct rte_flow_error *error)
 {
+	uint32_t *cnt_queue;
+
 	if (mlx5_hws_cnt_is_shared(priv->hws_cpool, flow->cnt_id)) {
 		if (flow->age_idx && !mlx5_hws_age_is_indirect(flow->age_idx)) {
 			/* Remove this AGE parameter from indirect counter. */
@@ -3024,8 +3026,10 @@ flow_hw_age_count_release(struct mlx5_priv *priv, uint32_t queue,
 		}
 		return;
 	}
+	/* If the port is engaged in resource sharing, do not use queue cache. */
+	cnt_queue = mlx5_hws_cnt_is_pool_shared(priv) ? NULL : &queue;
 	/* Put the counter first to reduce the race risk in BG thread. */
-	mlx5_hws_cnt_pool_put(priv->hws_cpool, &queue, &flow->cnt_id);
+	mlx5_hws_cnt_pool_put(priv->hws_cpool, cnt_queue, &flow->cnt_id);
 	flow->cnt_id = 0;
 	if (flow->age_idx) {
 		if (mlx5_hws_age_is_indirect(flow->age_idx)) {
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index b4f3db0533..f37a7d6151 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -553,6 +553,22 @@ mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, uint32_t *queue,
 	return 0;
 }
 
+/**
+ * Check if counter pool allocated for HWS is shared between ports.
+ *
+ * @param[in] priv
+ *   Pointer to the port private data structure.
+ *
+ * @return
+ *   True if the counter pool is shared between ports. False otherwise.
+ */
+static __rte_always_inline bool
+mlx5_hws_cnt_is_pool_shared(struct mlx5_priv *priv)
+{
+	return priv && priv->hws_cpool &&
+	    (priv->shared_refcnt || priv->hws_cpool->cfg.host_cpool != NULL);
+}
+
 static __rte_always_inline unsigned int
 mlx5_hws_cnt_pool_get_size(struct mlx5_hws_cnt_pool *cpool)
 {
-- 
2.25.1


Thread overview:
2023-06-27  8:27 [PATCH 0/2] net/mlx5: fix counter object leaks Dariusz Sosnowski
2023-06-27  8:27 ` Dariusz Sosnowski [this message]
2023-06-27  8:27 ` [PATCH 2/2] net/mlx5: fix counter allocation from shared pool Dariusz Sosnowski
2023-07-02 14:41 ` [PATCH 0/2] net/mlx5: fix counter object leaks Raslan Darawsheh
