From mboxrd@z Thu Jan 1 00:00:00 1970
From: vanshika.shukla@nxp.com
To: dev@dpdk.org, Hemant Agrawal, Sachin Saxena
Cc: Vanshika Shukla
Subject: [PATCH 8/8] mempool/dpaax: cache free optimization
Date: Wed, 3 Jul 2024 16:46:44 +0530
Message-Id: <20240703111644.1523242-9-vanshika.shukla@nxp.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240703111644.1523242-1-vanshika.shukla@nxp.com>
References: <20240703111644.1523242-1-vanshika.shukla@nxp.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Sachin Saxena

- Update the per-core mempool cache flush threshold to the
  platform-specific optimal value, i.e. the number of buffers that can
  be released to the HW buffer pool in a single API call.
Signed-off-by: Sachin Saxena
Signed-off-by: Vanshika Shukla
---
 drivers/mempool/dpaa/dpaa_mempool.c      | 16 +++++++++++++++-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 15 +++++++++++++++
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/drivers/mempool/dpaa/dpaa_mempool.c b/drivers/mempool/dpaa/dpaa_mempool.c
index 0b484b3d9c..3a65ef7d60 100644
--- a/drivers/mempool/dpaa/dpaa_mempool.c
+++ b/drivers/mempool/dpaa/dpaa_mempool.c
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  *
- * Copyright 2017,2019 NXP
+ * Copyright 2017,2019,2023 NXP
  *
  */
@@ -51,6 +51,8 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp)
 	struct bman_pool_params params = {
 		.flags = BMAN_POOL_FLAG_DYNAMIC_BPID
 	};
+	unsigned int lcore_id;
+	struct rte_mempool_cache *cache;
 
 	MEMPOOL_INIT_FUNC_TRACE();
 
@@ -118,6 +120,18 @@ dpaa_mbuf_create_pool(struct rte_mempool *mp)
 	rte_memcpy(bp_info, (void *)&rte_dpaa_bpid_info[bpid],
 		   sizeof(struct dpaa_bp_info));
 	mp->pool_data = (void *)bp_info;
+	/* Update per core mempool cache threshold to optimal value which is
+	 * number of buffers that can be released to HW buffer pool in
+	 * a single API call.
+	 */
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		cache = &mp->local_cache[lcore_id];
+		DPAA_MEMPOOL_DEBUG("lCore %d: cache->flushthresh %d -> %d\n",
+			lcore_id, cache->flushthresh,
+			(uint32_t)(cache->size + DPAA_MBUF_MAX_ACQ_REL));
+		if (cache->flushthresh)
+			cache->flushthresh = cache->size + DPAA_MBUF_MAX_ACQ_REL;
+	}
 
 	DPAA_MEMPOOL_INFO("BMAN pool created for bpid =%d", bpid);
 	return 0;
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 4c9245cb81..fe82475b10 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -44,6 +44,8 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
 	struct dpaa2_bp_info *bp_info;
 	struct dpbp_attr dpbp_attr;
 	uint32_t bpid;
+	unsigned int lcore_id;
+	struct rte_mempool_cache *cache;
 	int ret;
 
 	avail_dpbp = dpaa2_alloc_dpbp_dev();
@@ -132,6 +134,19 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
 	DPAA2_MEMPOOL_DEBUG("BP List created for bpid =%d",
 			    dpbp_attr.bpid);
 	h_bp_list = bp_list;
+	/* Update per core mempool cache threshold to optimal value which is
+	 * number of buffers that can be released to HW buffer pool in
+	 * a single API call.
+	 */
+	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+		cache = &mp->local_cache[lcore_id];
+		DPAA2_MEMPOOL_DEBUG("lCore %d: cache->flushthresh %d -> %d\n",
+			lcore_id, cache->flushthresh,
+			(uint32_t)(cache->size + DPAA2_MBUF_MAX_ACQ_REL));
+		if (cache->flushthresh)
+			cache->flushthresh = cache->size + DPAA2_MBUF_MAX_ACQ_REL;
+	}
+
 	return 0;
 err3:
 	rte_free(bp_info);
-- 
2.25.1