From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sriharsha Basavapatna
To: dev@dpdk.org
Cc: Kishore Padmanabha, Peter Spreadborough, Shuanglin Wang,
	Sriharsha Basavapatna
Subject: [PATCH v2 2/5] net/bnxt/tf_ulp: fix vfr clean up and stats lockup
Date: Tue, 19 Nov 2024 11:15:47 +0530
Message-Id: <20241119054550.1255359-3-sriharsha.basavapatna@broadcom.com>
X-Mailer: git-send-email 2.39.0.189.g4dbebc36b0
In-Reply-To: <20241119054550.1255359-1-sriharsha.basavapatna@broadcom.com>
References: <20241119054550.1255359-1-sriharsha.basavapatna@broadcom.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Kishore Padmanabha

The representor flows were not being deleted as part of the VFR
cleanup. Added code to delete the flows related to the VFR interface.
Also fixed the stats counter thread lockup.

Fixes: 0513f0af034d ("net/bnxt/tf_ulp: add stats cache for Thor2")

Reviewed-by: Peter Spreadborough
Reviewed-by: Shuanglin Wang
Signed-off-by: Kishore Padmanabha
Signed-off-by: Sriharsha Basavapatna
---
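Notes:

The lockup fix drops the stats cache manager's sc_lock mutex
entirely; the collection thread now serializes with the rest of the
driver through the existing ULP context acquire/release calls and
the FDB lock, sleeps briefly between sweeps, and exits on its own
once the last ULP context is gone (port deinit now removes the
context from the list via bnxt_ulp_cntxt_list_del()). Below is a
condensed sketch of the reworked poll loop, using only names that
appear in this patch; the ulp_sc_mgr.c hunks that follow are the
authoritative code:

	while (true) {
		ctxt = NULL;
		while (!ctxt) {
			ctxt = bnxt_ulp_cntxt_entry_acquire(arg);
			if (ctxt)
				break;
			/* Last port is gone: exit instead of spinning */
			if (bnxt_ulp_cntxt_list_count() == 0)
				goto terminate;
			rte_delay_us_block(ULP_SC_CTX_DELAY);
		}

		/* ... batched MPC counter reads under the FDB lock ... */

		bnxt_ulp_cntxt_entry_release();
		/* Sleep so other threads can acquire the ULP context */
		rte_delay_us_sleep(ULP_SC_PERIOD_US);
	}
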
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c   |  17 +++
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h   |   3 +
 drivers/net/bnxt/tf_ulp/ulp_sc_mgr.c | 164 ++++++++------------------
 drivers/net/bnxt/tf_ulp/ulp_sc_mgr.h |  12 +-
 4 files changed, 74 insertions(+), 122 deletions(-)

diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index e28a481f5e..1bfc88cf79 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -596,6 +596,9 @@ bnxt_ulp_port_deinit(struct bnxt *bp)
 		/* close the session associated with this port */
 		bp->ulp_ctx->ops->ulp_ctx_detach(bp, session);
 	} else {
+		/* Free the ulp context in the context entry list */
+		bnxt_ulp_cntxt_list_del(bp->ulp_ctx);
+
 		/* clean up default flows */
 		bnxt_ulp_destroy_df_rules(bp, true);
 
@@ -662,6 +665,20 @@ bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx)
 	rte_spinlock_unlock(&bnxt_ulp_ctxt_lock);
 }
 
+int
+bnxt_ulp_cntxt_list_count(void)
+{
+	struct ulp_context_list_entry *entry, *temp;
+	int count_1 = 0;
+
+	rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
+	RTE_TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
+		count_1++;
+	}
+	rte_spinlock_unlock(&bnxt_ulp_ctxt_lock);
+	return count_1;
+}
+
 struct bnxt_ulp_context *
 bnxt_ulp_cntxt_entry_acquire(void *arg)
 {
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
index 83fb205f68..e0e31532fd 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.h
@@ -297,6 +297,9 @@ bnxt_ulp_cntxt_list_add(struct bnxt_ulp_context *ulp_ctx);
 void
 bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx);
 
+int
+bnxt_ulp_cntxt_list_count(void);
+
 struct bnxt_ulp_context *
 bnxt_ulp_cntxt_entry_acquire(void *arg);
 
diff --git a/drivers/net/bnxt/tf_ulp/ulp_sc_mgr.c b/drivers/net/bnxt/tf_ulp/ulp_sc_mgr.c
index 1317668555..c82fdaf6dd 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_sc_mgr.c
+++ b/drivers/net/bnxt/tf_ulp/ulp_sc_mgr.c
@@ -89,12 +89,6 @@ int32_t ulp_sc_mgr_init(struct bnxt_ulp_context *ctxt)
 	ulp_sc_info->sc_ops = sc_ops;
 	ulp_sc_info->flags = 0;
 
-	rc = pthread_mutex_init(&ulp_sc_info->sc_lock, NULL);
-	if (rc) {
-		BNXT_DRV_DBG(ERR, "Failed to initialize sc mutex\n");
-		goto error;
-	}
-
 	/* Add the SC info tbl to the ulp context. */
 	bnxt_ulp_cntxt_ptr2_sc_info_set(ctxt, ulp_sc_info);
 
@@ -109,9 +103,10 @@ int32_t ulp_sc_mgr_init(struct bnxt_ulp_context *ctxt)
 	 * Size is determined by the number of flows + 10% to cover IDs
 	 * used for resources.
 	 */
+	ulp_sc_info->cache_tbl_size = ulp_sc_info->num_counters +
+		(ulp_sc_info->num_counters / 10);
 	stats_cache_tbl_sz = sizeof(struct ulp_sc_tfc_stats_cache_entry) *
-		(ulp_sc_info->num_counters +
-		 (ulp_sc_info->num_counters / 10));
+		ulp_sc_info->cache_tbl_size;
 
 	ulp_sc_info->stats_cache_tbl = rte_zmalloc("ulp_stats_cache_tbl",
 						   stats_cache_tbl_sz, 0);
@@ -153,12 +148,6 @@ ulp_sc_mgr_deinit(struct bnxt_ulp_context *ctxt)
 	if (!ulp_sc_info)
 		return -EINVAL;
 
-	pthread_mutex_lock(&ulp_sc_info->sc_lock);
-
-	ulp_sc_mgr_thread_cancel(ctxt);
-
-	pthread_mutex_destroy(&ulp_sc_info->sc_lock);
-
 	if (ulp_sc_info->stats_cache_tbl)
 		rte_free(ulp_sc_info->stats_cache_tbl);
 
@@ -173,13 +162,13 @@ ulp_sc_mgr_deinit(struct bnxt_ulp_context *ctxt)
 	return 0;
 }
 
-#define ULP_SC_PERIOD_S 1
-#define ULP_SC_PERIOD_MS (ULP_SC_PERIOD_S * 1000)
+#define ULP_SC_PERIOD_US 256
+#define ULP_SC_CTX_DELAY 10000
 
 static uint32_t ulp_stats_cache_main_loop(void *arg)
 {
 	struct ulp_sc_tfc_stats_cache_entry *count;
-	const struct bnxt_ulp_sc_core_ops *sc_ops;
+	const struct bnxt_ulp_sc_core_ops *sc_ops = NULL;
 	struct ulp_sc_tfc_stats_cache_entry *sce;
 	struct ulp_sc_tfc_stats_cache_entry *sce_end;
 	struct tfc_mpc_batch_info_t batch_info;
@@ -188,95 +177,68 @@ static uint32_t ulp_stats_cache_main_loop(void *arg)
 	uint16_t words = (ULP_TFC_CNTR_READ_BYTES + ULP_TFC_ACT_WORD_SZ - 1) / ULP_TFC_ACT_WORD_SZ;
 	uint32_t batch_size;
 	struct tfc *tfcp = NULL;
-	uint32_t batch;
-	uint32_t delay = ULP_SC_PERIOD_MS;
-	uint64_t start;
-	uint64_t stop;
-	uint64_t hz;
+	uint32_t batch, stat_cnt;
 	uint8_t *data;
 	int rc;
-	static uint32_t loop;
-	uint64_t cycles = 0;
-	uint64_t cpms = 0;
-
-	while (!ctxt) {
-		ctxt = bnxt_ulp_cntxt_entry_acquire(arg);
-
-		if (ctxt)
-			break;
-
-		BNXT_DRV_DBG(INFO, "could not get the ulp context lock\n");
-		rte_delay_us_block(1000);
-	}
-
-
-	ulp_sc_info = bnxt_ulp_cntxt_ptr2_sc_info_get(ctxt);
-	if (!ulp_sc_info) {
-		bnxt_ulp_cntxt_entry_release();
-		goto terminate;
-	}
-
-	sc_ops = ulp_sc_info->sc_ops;
-
-	hz = rte_get_timer_hz();
-	cpms = hz / 1000;
 
 	while (true) {
-		bnxt_ulp_cntxt_entry_release();
 		ctxt = NULL;
-		rte_delay_ms(delay);
-
 		while (!ctxt) {
 			ctxt = bnxt_ulp_cntxt_entry_acquire(arg);
 			if (ctxt)
 				break;
+			/* If there are no more contexts just exit */
+			if (bnxt_ulp_cntxt_list_count() == 0)
+				goto terminate;
+			rte_delay_us_block(ULP_SC_CTX_DELAY);
+		}
-
-			BNXT_DRV_DBG(INFO, "could not get the ulp context lock\n");
-			rte_delay_us_block(1);
+		/* get the stats counter info block from ulp context */
+		ulp_sc_info = bnxt_ulp_cntxt_ptr2_sc_info_get(ctxt);
+		if (unlikely(!ulp_sc_info)) {
+			bnxt_ulp_cntxt_entry_release();
+			goto terminate;
 		}
 
-		start = rte_get_timer_cycles();
 		sce = ulp_sc_info->stats_cache_tbl;
-		sce_end = sce + (ulp_sc_info->num_counters + (ulp_sc_info->num_counters / 10));
+		sce_end = sce + ulp_sc_info->cache_tbl_size;
 
-		while (ulp_sc_info->num_entries && (sce < sce_end)) {
-			data = ulp_sc_info->read_data;
+		if (unlikely(!sc_ops))
+			sc_ops = ulp_sc_info->sc_ops;
 
-			rc = tfc_mpc_batch_start(&batch_info);
-			if (rc) {
-				PMD_DRV_LOG_LINE(ERR,
-						 "MPC batch start failed rc:%d loop:%d",
-						 rc, loop);
-				break;
-			}
+		stat_cnt = 0;
+		while (stat_cnt < ulp_sc_info->num_entries && (sce < sce_end)) {
+			data = ulp_sc_info->read_data;
 
 			if (bnxt_ulp_cntxt_acquire_fdb_lock(ctxt))
 				break;
 
-			rc = pthread_mutex_lock(&ulp_sc_info->sc_lock);
-			if (rc) {
+			rc = tfc_mpc_batch_start(&batch_info);
+			if (unlikely(rc)) {
 				PMD_DRV_LOG_LINE(ERR,
-						 "Failed to get SC lock, terminating main loop rc:%d loop:%d",
-						 rc, loop);
-				goto terminate;
"MPC batch start failed rc:%d", rc); + bnxt_ulp_cntxt_release_fdb_lock(ctxt); + break; } - for (batch = 0; (batch < ULP_SC_BATCH_SIZE) && (sce < sce_end);) { + for (batch = 0; (batch < ULP_SC_BATCH_SIZE) && + (sce < sce_end);) { if (!(sce->flags & ULP_SC_ENTRY_FLAG_VALID)) { sce++; continue; } - + stat_cnt++; tfcp = bnxt_ulp_cntxt_tfcp_get(sce->ctxt); - if (tfcp == NULL) { + if (unlikely(!tfcp)) { + bnxt_ulp_cntxt_release_fdb_lock(ctxt); bnxt_ulp_cntxt_entry_release(); goto terminate; } - /* Store the entry pointer to use for counter update */ - batch_info.em_hdl[batch_info.count] = (uint64_t)sce; + batch_info.em_hdl[batch_info.count] = + (uint64_t)sce; rc = sc_ops->ulp_stats_cache_update(tfcp, sce->dir, @@ -285,11 +247,10 @@ static uint32_t ulp_stats_cache_main_loop(void *arg) &words, &batch_info, sce->reset); - if (rc) { + if (unlikely(rc)) { /* Abort this batch */ PMD_DRV_LOG_LINE(ERR, - "loop:%d read_counter() failed:%d", - loop, rc); + "read_counter() failed:%d", rc); break; } @@ -305,13 +266,10 @@ static uint32_t ulp_stats_cache_main_loop(void *arg) batch_size = batch_info.count; rc = tfc_mpc_batch_end(tfcp, &batch_info); - pthread_mutex_unlock(&ulp_sc_info->sc_lock); bnxt_ulp_cntxt_release_fdb_lock(ctxt); - if (rc) { - PMD_DRV_LOG_LINE(ERR, - "MPC batch end failed rc:%d loop:%d", - rc, loop); + if (unlikely(rc)) { + PMD_DRV_LOG_LINE(ERR, "MPC batch end failed rc:%d", rc); batch_info.enabled = false; break; } @@ -322,8 +280,7 @@ static uint32_t ulp_stats_cache_main_loop(void *arg) for (batch = 0; batch < batch_size; batch++) { /* Check for error in completion */ if (batch_info.result[batch]) { - PMD_DRV_LOG_LINE(ERR, - "batch:%d result:%d", + PMD_DRV_LOG_LINE(ERR, "batch:%d result:%d", batch, batch_info.result[batch]); } else { count = (struct ulp_sc_tfc_stats_cache_entry *) @@ -334,28 +291,13 @@ static uint32_t ulp_stats_cache_main_loop(void *arg) data += ULP_SC_PAGE_SIZE; } } - - loop++; - stop = rte_get_timer_cycles(); - cycles = stop - start; - if (cycles > (hz * ULP_SC_PERIOD_S)) { - PMD_DRV_LOG_LINE(ERR, - "Stats collection time exceeded %dmS Cycles:%" PRIu64, - ULP_SC_PERIOD_MS, cycles); - delay = ULP_SC_PERIOD_MS; - } else { - delay = ULP_SC_PERIOD_MS - (cycles / cpms); - - if (delay > ULP_SC_PERIOD_MS) { - PMD_DRV_LOG_LINE(ERR, - "Stats collection delay:%dmS exceedes %dmS", - delay, ULP_SC_PERIOD_MS); - delay = ULP_SC_PERIOD_MS; - } - } + bnxt_ulp_cntxt_entry_release(); + /* Sleep to give any other threads opportunity to access ULP */ + rte_delay_us_sleep(ULP_SC_PERIOD_US); } terminate: + PMD_DRV_LOG_LINE(DEBUG, "Terminating the stats cachce thread"); return 0; } @@ -534,14 +476,13 @@ int ulp_sc_mgr_entry_alloc(struct bnxt_ulp_mapper_parms *parms, if (!ulp_sc_info) return -ENODEV; - pthread_mutex_lock(&ulp_sc_info->sc_lock); - sce = ulp_sc_info->stats_cache_tbl; sce += parms->flow_id; /* If entry is not free return an error */ if (sce->flags & ULP_SC_ENTRY_FLAG_VALID) { - pthread_mutex_unlock(&ulp_sc_info->sc_lock); + BNXT_DRV_DBG(ERR, "Entry is not free, invalid flow id %u\n", + parms->flow_id); return -EBUSY; } @@ -553,8 +494,6 @@ int ulp_sc_mgr_entry_alloc(struct bnxt_ulp_mapper_parms *parms, sce->handle = counter_handle; sce->dir = tbl->direction; ulp_sc_info->num_entries++; - pthread_mutex_unlock(&ulp_sc_info->sc_lock); - return 0; } @@ -568,20 +507,17 @@ void ulp_sc_mgr_entry_free(struct bnxt_ulp_context *ulp, if (!ulp_sc_info) return; - pthread_mutex_lock(&ulp_sc_info->sc_lock); - sce = ulp_sc_info->stats_cache_tbl; sce += fid; if (!(sce->flags & 
 	if (!(sce->flags & ULP_SC_ENTRY_FLAG_VALID)) {
-		pthread_mutex_unlock(&ulp_sc_info->sc_lock);
+		BNXT_DRV_DBG(ERR, "Entry already free, invalid flow id %u\n",
+			     fid);
 		return;
 	}
 
 	sce->flags = 0;
 	ulp_sc_info->num_entries--;
-
-	pthread_mutex_unlock(&ulp_sc_info->sc_lock);
 }
 
 /*
@@ -606,11 +542,7 @@ void ulp_sc_mgr_set_pc_idx(struct bnxt_ulp_context *ctxt,
 	if (!ulp_sc_info)
 		return;
 
-	pthread_mutex_lock(&ulp_sc_info->sc_lock);
-
 	sce = ulp_sc_info->stats_cache_tbl;
 	sce += flow_id;
 	sce->pc_idx = pc_idx & ULP_SC_PC_IDX_MASK;
-
-	pthread_mutex_unlock(&ulp_sc_info->sc_lock);
 }
diff --git a/drivers/net/bnxt/tf_ulp/ulp_sc_mgr.h b/drivers/net/bnxt/tf_ulp/ulp_sc_mgr.h
index 2c08d08b17..d5c835cd75 100644
--- a/drivers/net/bnxt/tf_ulp/ulp_sc_mgr.h
+++ b/drivers/net/bnxt/tf_ulp/ulp_sc_mgr.h
@@ -35,12 +35,12 @@ struct ulp_sc_tfc_stats_cache_entry {
 
 struct bnxt_ulp_sc_info {
 	struct ulp_sc_tfc_stats_cache_entry *stats_cache_tbl;
-	uint8_t *read_data;
-	uint32_t flags;
-	uint32_t num_entries;
-	pthread_mutex_t sc_lock;
-	uint32_t num_counters;
-	rte_thread_t tid;
+	uint8_t *read_data;
+	uint32_t flags;
+	uint32_t num_entries;
+	uint32_t num_counters;
+	uint32_t cache_tbl_size;
+	rte_thread_t tid;
 	const struct bnxt_ulp_sc_core_ops *sc_ops;
 };
-- 
2.39.3