DPDK patches and discussions
* [PATCH] net/mlx5: fix indexed pool fetch overlap issue
@ 2022-02-23  6:26 Suanming Mou
  2022-03-02 10:19 ` Raslan Darawsheh
  0 siblings, 1 reply; 2+ messages in thread
From: Suanming Mou @ 2022-02-23  6:26 UTC (permalink / raw)
  To: viacheslavo, matan; +Cc: rasland, dev, stable

For an indexed pool with a local cache, when a new trunk is allocated,
half of the trunk's indices are fetched to the local cache. If the
local cache size is less than half of the trunk size, a memory
overlap occurs.

This commit adds a check on the fetch size: if the local cache size
is less than the fetch size, the fetch size is adjusted down so that
the fetched indices fit within the local cache.

Fixes: d15c0946beea ("net/mlx5: add indexed pool local cache")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_utils.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index be33af96fe..4115a2ad77 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -340,6 +340,8 @@ mlx5_ipool_allocate_from_global(struct mlx5_indexed_pool *pool, int cidx)
 	/* Enqueue half of the index to global. */
 	ts_idx = mlx5_trunk_idx_offset_get(pool, trunk_idx) + 1;
 	fetch_size = trunk->free >> 1;
+	if (fetch_size > pool->cfg.per_core_cache)
+		fetch_size = trunk->free - pool->cfg.per_core_cache;
 	for (i = 0; i < fetch_size; i++)
 		lc->idx[i] = ts_idx + i;
 	lc->len = fetch_size;
-- 
2.25.1
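
For context, the arithmetic behind the clamp can be shown with a small
standalone sketch. The names and sizes below (TRUNK_FREE, PER_CORE_CACHE,
split_trunk()) are invented for illustration and are not the mlx5 data
structures; the sketch only assumes, following the hunk's comment, that
fetch_size indices are enqueued to the global list and the remaining
trunk->free - fetch_size indices are left for the per-core local cache.

#include <stdio.h>

/* Illustrative sizes only, not taken from the driver. */
#define TRUNK_FREE     64u /* free indices in the newly allocated trunk */
#define PER_CORE_CACHE 16u /* capacity of the per-core index array */

/*
 * Number of indices enqueued to the global list; the remaining
 * trunk_free - fetch_size indices are left for the per-core cache.
 */
static unsigned int
split_trunk(unsigned int trunk_free, unsigned int per_core_cache, int patched)
{
	unsigned int fetch_size = trunk_free >> 1;

	if (patched && fetch_size > per_core_cache)
		fetch_size = trunk_free - per_core_cache;
	return fetch_size;
}

int
main(void)
{
	unsigned int fs;

	fs = split_trunk(TRUNK_FREE, PER_CORE_CACHE, 0);
	printf("before: %u to global, %u left for a %u-entry cache\n",
	       fs, TRUNK_FREE - fs, PER_CORE_CACHE);
	fs = split_trunk(TRUNK_FREE, PER_CORE_CACHE, 1);
	printf("after:  %u to global, %u left for a %u-entry cache\n",
	       fs, TRUNK_FREE - fs, PER_CORE_CACHE);
	return 0;
}

With these example sizes, the unpatched split leaves 32 indices for a
16-entry per-core cache, which is the overlap the commit message
describes; after the clamp, 48 indices go to the global list and the 16
that remain fit the cache exactly.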


