From: Andrew Rybchenko
To: dev@dpdk.org
Cc: Andy Moreton, Ivan Malov
Date: Fri, 4 Jun 2021 17:24:05 +0300
Message-Id: <20210604142414.283611-12-andrew.rybchenko@oktetlabs.ru>
In-Reply-To: <20210604142414.283611-1-andrew.rybchenko@oktetlabs.ru>
References: <20210527152510.1551026-1-andrew.rybchenko@oktetlabs.ru>
 <20210604142414.283611-1-andrew.rybchenko@oktetlabs.ru>
Subject: [dpdk-dev] [PATCH v2 11/20] net/sfc: add NUMA-aware registry of service logical cores

The driver requires service cores for housekeeping. Share these cores
across many adapters and various purposes to avoid extra CPU overhead.
Since the housekeeping services talk to the NIC, it should be possible
to choose a logical core on the matching NUMA node.

Signed-off-by: Andrew Rybchenko
Reviewed-by: Andy Moreton
Reviewed-by: Ivan Malov
---
 drivers/net/sfc/meson.build   |  1 +
 drivers/net/sfc/sfc_service.c | 99 +++++++++++++++++++++++++++++++++++
 drivers/net/sfc/sfc_service.h | 20 +++++++
 3 files changed, 120 insertions(+)
 create mode 100644 drivers/net/sfc/sfc_service.c
 create mode 100644 drivers/net/sfc/sfc_service.h

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index ccf5984d87..4ac97e8d43 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -62,4 +62,5 @@ sources = files(
         'sfc_ef10_tx.c',
         'sfc_ef100_rx.c',
         'sfc_ef100_tx.c',
+        'sfc_service.c',
 )

diff --git a/drivers/net/sfc/sfc_service.c b/drivers/net/sfc/sfc_service.c
new file mode 100644
index 0000000000..9c89484406
--- /dev/null
+++ b/drivers/net/sfc/sfc_service.c
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <stdint.h>
+
+#include <rte_common.h>
+#include <rte_spinlock.h>
+#include <rte_lcore.h>
+#include <rte_service.h>
+#include <rte_memory.h>
+
+#include "sfc_log.h"
+#include "sfc_service.h"
+#include "sfc_debug.h"
+
+static uint32_t sfc_service_lcore[RTE_MAX_NUMA_NODES];
+static rte_spinlock_t sfc_service_lcore_lock = RTE_SPINLOCK_INITIALIZER;
+
+RTE_INIT(sfc_service_lcore_init)
+{
+	size_t i;
+
+	for (i = 0; i < RTE_DIM(sfc_service_lcore); ++i)
+		sfc_service_lcore[i] = RTE_MAX_LCORE;
+}
+
+static uint32_t
+sfc_find_service_lcore(int *socket_id)
+{
+	uint32_t service_core_list[RTE_MAX_LCORE];
+	uint32_t lcore_id;
+	int num;
+	int i;
+
+	SFC_ASSERT(rte_spinlock_is_locked(&sfc_service_lcore_lock));
+
+	num = rte_service_lcore_list(service_core_list,
+				     RTE_DIM(service_core_list));
+	if (num == 0) {
+		SFC_GENERIC_LOG(WARNING, "No service cores available");
+		return RTE_MAX_LCORE;
+	}
+	if (num < 0) {
+		SFC_GENERIC_LOG(ERR, "Failed to get service core list");
+		return RTE_MAX_LCORE;
+	}
+
+	for (i = 0; i < num; ++i) {
+		lcore_id = service_core_list[i];
+
+		if (*socket_id == SOCKET_ID_ANY) {
+			*socket_id = rte_lcore_to_socket_id(lcore_id);
+			break;
+		} else if (rte_lcore_to_socket_id(lcore_id) ==
+			   (unsigned int)*socket_id) {
+			break;
+		}
+	}
+
+	if (i == num) {
+		SFC_GENERIC_LOG(WARNING,
+			"No service cores reserved at socket %d", *socket_id);
+		return RTE_MAX_LCORE;
+	}
+
+	return lcore_id;
+}
+
+uint32_t
+sfc_get_service_lcore(int socket_id)
+{
+	uint32_t lcore_id = RTE_MAX_LCORE;
+
+	rte_spinlock_lock(&sfc_service_lcore_lock);
+
+	if (socket_id != SOCKET_ID_ANY) {
+		lcore_id = sfc_service_lcore[socket_id];
+	} else {
+		size_t i;
+
+		for (i = 0; i < RTE_DIM(sfc_service_lcore); ++i) {
+			if (sfc_service_lcore[i] != RTE_MAX_LCORE) {
+				lcore_id = sfc_service_lcore[i];
+				break;
+			}
+		}
+	}
+
+	if (lcore_id == RTE_MAX_LCORE) {
+		lcore_id = sfc_find_service_lcore(&socket_id);
+		if (lcore_id != RTE_MAX_LCORE)
+			sfc_service_lcore[socket_id] = lcore_id;
+	}
+
+	rte_spinlock_unlock(&sfc_service_lcore_lock);
+	return lcore_id;
+}
diff --git a/drivers/net/sfc/sfc_service.h b/drivers/net/sfc/sfc_service.h
new file mode 100644
index 0000000000..bbcce28479
--- /dev/null
+++ b/drivers/net/sfc/sfc_service.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_SERVICE_H
+#define _SFC_SERVICE_H
+
+#include <stdint.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+uint32_t sfc_get_service_lcore(int socket_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif  /* _SFC_SERVICE_H */
--
2.30.2