From: vanshika.shukla@nxp.com
To: dev@dpdk.org, Hemant Agrawal <hemant.agrawal@nxp.com>,
Sachin Saxena <sachin.saxena@nxp.com>
Cc: Vanshika Shukla <vanshika.shukla@nxp.com>
Subject: [PATCH v3 8/8] mempool/dpaax: cache free optimization
Date: Mon, 8 Jul 2024 12:59:45 +0530 [thread overview]
Message-ID: <20240708072945.2376209-9-vanshika.shukla@nxp.com> (raw)
In-Reply-To: <20240708072945.2376209-1-vanshika.shukla@nxp.com>
From: Sachin Saxena <sachin.saxena@nxp.com>
Update the per-core mempool cache flush threshold to the
platform-specific optimal value, i.e. the number of buffers that
can be released to the HW buffer pool in a single API call.
Signed-off-by: Sachin Saxena <sachin.saxena@nxp.com>
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
---
drivers/mempool/dpaa/dpaa_mempool.c | 16 +++++++++++++++-
drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 15 +++++++++++++++
2 files changed, 30 insertions(+), 1 deletion(-)
diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 21e8938cc6..9e3a743575 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
*
- * Copyright 2017,2019 NXP
+ * Copyright 2017,2019,2023 NXP
*
*/
@@ -51,6 +51,8 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp)
struct bman_pool_params params = {
.flags = BMAN_POOL_FLAG_DYNAMIC_BPID
};
+ unsigned int lcore_id;
+ struct rte_mempool_cache *cache;
MEMPOOL_INIT_FUNC_TRACE();
@@ -118,6 +120,18 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp)
rte_memcpy(bp_info, (void *)&rte_dpaa_bpid_info[bpid],
sizeof(struct dpaa_bp_info));
mp->pool_data = (void *)bp_info;
+ /* Update per core mempool cache threshold to optimal value which is
+ * number of buffers that can be released to HW buffer pool in
+ * a single API call.
+ */
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ cache = &mp->local_cache[lcore_id];
+ DPAA_MEMPOOL_DEBUG("lcore %u: cache->flushthresh %u -> %u",
+ lcore_id, cache->flushthresh,
+ (uint32_t)(cache->size + DPAA_MBUF_MAX_ACQ_REL));
+ if (cache->flushthresh)
+ cache->flushthresh = cache->size + DPAA_MBUF_MAX_ACQ_REL;
+ }
DPAA_MEMPOOL_INFO("BMAN pool created for bpid =%d", bpid);
return 0;
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 4c9245cb81..fe82475b10 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -44,6 +44,8 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
struct dpaa2_bp_info *bp_info;
struct dpbp_attr dpbp_attr;
uint32_t bpid;
+ unsigned int lcore_id;
+ struct rte_mempool_cache *cache;
int ret;
avail_dpbp = dpaa2_alloc_dpbp_dev();
@@ -132,6 +134,19 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
DPAA2_MEMPOOL_DEBUG("BP List created for bpid =%d", dpbp_attr.bpid);
h_bp_list = bp_list;
+ /* Update per core mempool cache threshold to optimal value which is
+ * number of buffers that can be released to HW buffer pool in
+ * a single API call.
+ */
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ cache = &mp->local_cache[lcore_id];
+ DPAA2_MEMPOOL_DEBUG("lcore %u: cache->flushthresh %u -> %u",
+ lcore_id, cache->flushthresh,
+ (uint32_t)(cache->size + DPAA2_MBUF_MAX_ACQ_REL));
+ if (cache->flushthresh)
+ cache->flushthresh = cache->size + DPAA2_MBUF_MAX_ACQ_REL;
+ }
+
return 0;
err3:
rte_free(bp_info);
--
2.25.1