From: Vipin Varghese <vipin.varghese@amd.com>
To: dev@dpdk.org
Subject: [RFC v3 1/3] eal/lcore: add topology based functions
Date: Wed, 30 Oct 2024 11:11:31 +0530
Message-ID: <20241030054133.520-2-vipin.varghese@amd.com>
X-Mailer: git-send-email 2.47.0.windows.1
In-Reply-To: <20241030054133.520-1-vipin.varghese@amd.com>
References: <20241030054133.520-1-vipin.varghese@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

Introduce topology-aware lcore mapping into the lcore API. With higher
core density, more and more cores are categorized into various chiplets
based on IO (memory and PCIe) and Last Level Cache (mainly L3). Using
the hwloc library, the DPDK-available lcores can be grouped into various
domains, namely L1, L2, L3 and IO. This patch introduces functions and
macros that help identify such groups.

Internal API:
 - rte_eal_topology_init
 - rte_eal_topology_release
 - get_domain_lcore_mapping

External experimental API:
 - rte_get_domain_count
 - rte_lcore_count_from_domain
 - rte_get_lcore_in_domain
 - rte_get_next_lcore_from_domain
 - rte_get_next_lcore_from_next_domain

v2 changes (focuses on the rte_lcore API for getting topology):
 - use hwloc instead of sysfs exploration - Mattias Rönnblom
 - L1, L2 and IO domain mapping - Ferruh, Vipin
 - new API marked experimental - Stephen Hemminger

Signed-off-by: Vipin Varghese <vipin.varghese@amd.com>
---
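[ Reviewer note, not part of the commit: a minimal usage sketch of the
  counting and indexing API added by this patch. It assumes the series
  is applied and DPDK is built with libhwloc; the choice of the L3
  domain and the printf reporting are illustrative only. ]

#include <stdio.h>

#include <rte_eal.h>
#include <rte_lcore.h>

/* Sketch: report how the DPDK-enabled lcores spread across L3 domains. */
int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	unsigned int domains = rte_get_domain_count(RTE_LCORE_DOMAIN_L3);

	for (unsigned int d = 0; d < domains; d++) {
		unsigned int n = rte_lcore_count_from_domain(RTE_LCORE_DOMAIN_L3, d);

		printf("L3 domain %u has %u lcore(s):", d, n);
		for (unsigned int p = 0; p < n; p++)
			printf(" %u", rte_get_lcore_in_domain(RTE_LCORE_DOMAIN_L3, d, p));
		printf("\n");
	}

	rte_eal_cleanup();
	return 0;
}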
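[ Reviewer note, not part of the commit: a sketch of the new iteration
  macro, launching workers LLC-group by LLC-group rather than in plain
  numerical lcore order. worker_fn and launch_by_l3_group are
  illustrative names, not part of this series. ]

#include <stdio.h>

#include <rte_common.h>
#include <rte_launch.h>
#include <rte_lcore.h>

static int
worker_fn(void *arg __rte_unused)
{
	printf("worker running on lcore %u\n", rte_lcore_id());
	return 0;
}

/* Sketch: launch one worker per enabled lcore, walking the lcores in
 * L3-domain order via the macro introduced by this patch.
 */
static void
launch_by_l3_group(void)
{
	unsigned int lcore;

	RTE_LCORE_FOREACH_WORKER_DOMAIN(lcore, RTE_LCORE_DOMAIN_L3)
		rte_eal_remote_launch(worker_fn, NULL, lcore);

	rte_eal_mp_wait_lcore();
}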
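[ Reviewer note, not part of the commit: a sketch of the per-domain
  stepping variant, visiting one lcore out of every IO (NUMA) domain,
  e.g. to pin one polling thread per IO quadrant. The IO flag and the
  skip count of 0 (take the first lcore of each domain) are
  illustrative assumptions. ]

#include <stdio.h>

#include <rte_lcore.h>

/* Sketch: print the first enabled lcore of each IO domain. */
static void
one_lcore_per_io_domain(void)
{
	unsigned int lcore;

	RTE_LCORE_FORN_NEXT_DOMAIN(lcore, RTE_LCORE_DOMAIN_IO, 0)
		printf("IO domain representative lcore: %u\n", lcore);
}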
 config/meson.build                |  18 +
 lib/eal/common/eal_common_lcore.c | 580 ++++++++++++++++++++++++++++++
 lib/eal/common/eal_private.h      |  48 +++
 lib/eal/freebsd/eal.c             |  10 +
 lib/eal/include/rte_lcore.h       | 168 +++++++++
 lib/eal/linux/eal.c               |  11 +
 lib/eal/meson.build               |   4 +
 lib/eal/version.map               |   9 +
 lib/eal/windows/eal.c             |  12 +
 9 files changed, 860 insertions(+)

diff --git a/config/meson.build b/config/meson.build
index 8dae811378..a48822dcb1 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -240,6 +240,24 @@ if find_libnuma
     endif
 endif
 
+has_libhwloc = false
+find_libhwloc = true
+
+if meson.is_cross_build() and not meson.get_cross_property('hwloc', true)
+    # don't look for libhwloc if explicitly disabled in cross build
+    find_libhwloc = false
+endif
+
+if find_libhwloc
+    hwloc_dep = cc.find_library('hwloc', required: false)
+    if hwloc_dep.found() and cc.has_header('hwloc.h')
+        dpdk_conf.set10('RTE_HAS_LIBHWLOC', true)
+        has_libhwloc = true
+        add_project_link_arguments('-lhwloc', language: 'c')
+        dpdk_extra_ldflags += '-lhwloc'
+    endif
+endif
+
 has_libfdt = false
 fdt_dep = cc.find_library('fdt', required: false)
 if fdt_dep.found() and cc.has_header('fdt.h')
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 2ff9252c52..1ab9ead1d1 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -112,6 +112,302 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
 	return i;
 }
 
+#ifdef RTE_EAL_HWLOC_TOPOLOGY_PROBE
+static struct core_domain_mapping *
+get_domain_lcore_mapping(unsigned int domain_sel, unsigned int domain_indx)
+{
+	struct core_domain_mapping *ptr =
+		(domain_sel & RTE_LCORE_DOMAIN_IO) ? topo_cnfg.io[domain_indx] :
+		(domain_sel & RTE_LCORE_DOMAIN_L3) ? topo_cnfg.l3[domain_indx] :
+		(domain_sel & RTE_LCORE_DOMAIN_L2) ? topo_cnfg.l2[domain_indx] :
+		(domain_sel & RTE_LCORE_DOMAIN_L1) ? topo_cnfg.l1[domain_indx] : NULL;
+
+	return ptr;
+}
+#endif
+
+unsigned int rte_get_domain_count(unsigned int domain_sel __rte_unused)
+{
+	unsigned int domain_cnt = 0;
+
+#ifdef RTE_EAL_HWLOC_TOPOLOGY_PROBE
+	if (domain_sel & RTE_LCORE_DOMAIN_ALL) {
+		domain_cnt =
+			(domain_sel & RTE_LCORE_DOMAIN_IO) ? topo_cnfg.io_count :
+			(domain_sel & RTE_LCORE_DOMAIN_L3) ? topo_cnfg.l3_count :
+			(domain_sel & RTE_LCORE_DOMAIN_L2) ? topo_cnfg.l2_count :
+			(domain_sel & RTE_LCORE_DOMAIN_L1) ? topo_cnfg.l1_count : 0;
+	}
+#endif
+
+	return domain_cnt;
+}
+
+unsigned int
+rte_lcore_count_from_domain(unsigned int domain_sel __rte_unused,
+unsigned int domain_indx __rte_unused)
+{
+	unsigned int core_cnt = 0;
+
+#ifdef RTE_EAL_HWLOC_TOPOLOGY_PROBE
+	unsigned int domain_cnt = 0;
+
+	if ((domain_sel & RTE_LCORE_DOMAIN_ALL) == 0)
+		return core_cnt;
+
+	domain_cnt = rte_get_domain_count(domain_sel);
+	if ((domain_indx != RTE_LCORE_DOMAIN_LCORES_ALL) && (domain_indx >= domain_cnt))
+		return core_cnt;
+
+	core_cnt = (domain_sel & RTE_LCORE_DOMAIN_IO) ? topo_cnfg.io_core_count :
+		(domain_sel & RTE_LCORE_DOMAIN_L3) ? topo_cnfg.l3_core_count :
+		(domain_sel & RTE_LCORE_DOMAIN_L2) ? topo_cnfg.l2_core_count :
+		(domain_sel & RTE_LCORE_DOMAIN_L1) ? topo_cnfg.l1_core_count : 0;
+
+	if ((domain_indx != RTE_LCORE_DOMAIN_LCORES_ALL) && (core_cnt)) {
+		struct core_domain_mapping *ptr = get_domain_lcore_mapping(domain_sel, domain_indx);
+		core_cnt = ptr->core_count;
+	}
+#endif
+
+	return core_cnt;
+}
+
+unsigned int
+rte_get_lcore_in_domain(unsigned int domain_sel __rte_unused,
+unsigned int domain_indx __rte_unused, unsigned int lcore_pos __rte_unused)
+{
+	uint16_t sel_core = RTE_MAX_LCORE;
+
+#ifdef RTE_EAL_HWLOC_TOPOLOGY_PROBE
+	unsigned int domain_cnt = 0;
+	unsigned int core_cnt = 0;
+
+	if (domain_sel & RTE_LCORE_DOMAIN_ALL) {
+		domain_cnt = rte_get_domain_count(domain_sel);
+		if (domain_cnt == 0)
+			return sel_core;
+
+		core_cnt = rte_lcore_count_from_domain(domain_sel, RTE_LCORE_DOMAIN_LCORES_ALL);
+		if (core_cnt == 0)
+			return sel_core;
+
+		struct core_domain_mapping *ptr = get_domain_lcore_mapping(domain_sel, domain_indx);
+		if ((ptr) && (ptr->core_count)) {
+			if (lcore_pos < ptr->core_count)
+				sel_core = ptr->cores[lcore_pos];
+		}
+	}
+#endif
+
+	return sel_core;
+}
+
+unsigned int
+rte_get_next_lcore_from_domain(unsigned int indx __rte_unused,
+int skip_main __rte_unused, int wrap __rte_unused, uint32_t flag __rte_unused)
+{
+	if (indx >= RTE_MAX_LCORE) {
+		indx = rte_get_next_lcore(-1, skip_main, wrap);
+		return indx;
+	}
+	uint16_t usr_lcore = indx % RTE_MAX_LCORE;
+	uint16_t sel_domain_core = RTE_MAX_LCORE;
+
+	EAL_LOG(DEBUG, "lcore (%u), skip main lcore (%d), wrap (%d), flag (%u)",
+		usr_lcore, skip_main, wrap, flag);
+
+	/* check the input lcore indx */
+	if (!rte_lcore_is_enabled(indx)) {
+		EAL_LOG(ERR, "User input lcore (%u) is not enabled!!!", indx);
+		return sel_domain_core;
+	}
+
+	if ((rte_lcore_count() == 1)) {
+		EAL_LOG(DEBUG, "1 lcore only in dpdk process!!!");
+		sel_domain_core = wrap ? indx : sel_domain_core;
+		return sel_domain_core;
+	}
+
+#ifdef RTE_EAL_HWLOC_TOPOLOGY_PROBE
+	uint16_t main_lcore = rte_get_main_lcore();
+	uint16_t sel_domain = 0xffff;
+	uint16_t sel_domain_core_index = 0xffff;
+	uint16_t sel_domain_core_count = 0;
+
+	struct core_domain_mapping *ptr = NULL;
+	uint16_t domain_count = 0;
+	uint16_t domain_core_count = 0;
+	uint16_t *domain_core_list = NULL;
+
+
+	domain_count = rte_get_domain_count(flag);
+	if (domain_count == 0) {
+		EAL_LOG(ERR, "No cores found for the flag (%u)!!!", flag);
+		return sel_domain_core;
+	}
+
+	/* identify the lcore to get the domain to start from */
+	for (int i = 0; (i < domain_count) && (sel_domain_core_index == 0xffff); i++) {
+		ptr = get_domain_lcore_mapping(flag, i);
+
+		domain_core_count = ptr->core_count;
+		domain_core_list = ptr->cores;
+
+		for (int j = 0; j < domain_core_count; j++) {
+			if (usr_lcore == domain_core_list[j]) {
+				sel_domain_core_index = j;
+				sel_domain_core_count = domain_core_count;
+				sel_domain = i;
+				break;
+			}
+		}
+	}
+
+	if (sel_domain_core_count == 1) {
+		EAL_LOG(DEBUG, "there is no more lcore in the domain!!!");
+		return sel_domain_core;
+	}
+
+	EAL_LOG(DEBUG, "selected: domain (%u), core: count %u, index %u, core: current %u",
+		sel_domain, sel_domain_core_count, sel_domain_core_index,
+		domain_core_list[sel_domain_core_index]);
+
+	/* get next lcore from the selected domain */
+	/* next lcore is always `sel_domain_core_index + 1`, but needs boundary check */
+	bool lcore_found = false;
+	uint16_t next_domain_lcore_index = sel_domain_core_index + 1;
+	while (false == lcore_found) {
+
+		if (next_domain_lcore_index >= sel_domain_core_count) {
+			if (wrap) {
+				next_domain_lcore_index = 0;
+				continue;
+			}
+			break;
+		}
+
+		/* check if main lcore skip */
+		if ((domain_core_list[next_domain_lcore_index] == main_lcore) && (skip_main)) {
+			next_domain_lcore_index += 1;
+			continue;
+		}
+
+		lcore_found = true;
+	}
+	if (true == lcore_found)
+		sel_domain_core = domain_core_list[next_domain_lcore_index];
+#endif
+
+	EAL_LOG(DEBUG, "Selected core (%u)", sel_domain_core);
+	return sel_domain_core;
+}
+
+unsigned int
+rte_get_next_lcore_from_next_domain(unsigned int indx __rte_unused,
+int skip_main __rte_unused, int wrap __rte_unused,
+uint32_t flag __rte_unused, int cores_to_skip __rte_unused)
+{
+	uint16_t sel_domain_core = RTE_MAX_LCORE;
+	uint16_t usr_lcore = indx % RTE_MAX_LCORE;
+
+	if (indx >= RTE_MAX_LCORE) {
+		indx = rte_get_next_lcore(-1, skip_main, wrap);
+		return indx;
+	}
+
+	EAL_LOG(DEBUG, "lcore (%u), skip main lcore (%d), wrap (%d), flag (%u)",
+		usr_lcore, skip_main, wrap, flag);
+
+	/* check the input lcore indx */
+	if (!rte_lcore_is_enabled(indx)) {
+		EAL_LOG(ERR, "User input lcore (%u) is not enabled!!!", indx);
+		return sel_domain_core;
+	}
+
+#ifdef RTE_EAL_HWLOC_TOPOLOGY_PROBE
+	uint16_t main_lcore = rte_get_main_lcore();
+
+	uint16_t sel_domain = 0xffff;
+	uint16_t sel_domain_core_index = 0xffff;
+
+	uint16_t domain_count = 0;
+	uint16_t domain_core_count = 0;
+	uint16_t *domain_core_list = NULL;
+
+	domain_count = rte_get_domain_count(flag);
+	if (domain_count == 0) {
+		EAL_LOG(ERR, "No Domains found for the flag (%u)!!!", flag);
+		return sel_domain_core;
+	}
+
+	/* identify the lcore to get the domain to start from */
+	struct core_domain_mapping *ptr = NULL;
+	for (int i = 0; (i < domain_count) && (sel_domain_core_index == 0xffff); i++) {
+		ptr = get_domain_lcore_mapping(flag, i);
+		domain_core_count = ptr->core_count;
+		domain_core_list = ptr->cores;
+
+		for (int j = 0; j < domain_core_count; j++) {
+			if (usr_lcore == domain_core_list[j]) {
+				sel_domain_core_index = j;
+				sel_domain = i;
+				break;
+			}
+		}
+	}
+
+	if (sel_domain_core_index == 0xffff) {
+		EAL_LOG(ERR, "Invalid lcore %u for the flag (%u)!!!", indx, flag);
+		return sel_domain_core;
+	}
+
+	EAL_LOG(DEBUG, "Selected - core_index (%u); domain (%u), core_count (%u), cores (%p)",
+		sel_domain_core_index, sel_domain, domain_core_count, domain_core_list);
+
+	uint16_t skip_cores = (cores_to_skip >= 0) ? cores_to_skip : (0 - cores_to_skip);
+
+	/* get the next domain & valid lcore */
+	sel_domain = (((1 + sel_domain) == domain_count) && (wrap)) ? 0 : (1 + sel_domain);
+	sel_domain_core_index = 0xffff;
+
+	bool iter_loop = false;
+	for (int i = sel_domain; (i < domain_count) && (sel_domain_core == RTE_MAX_LCORE); i++) {
+		ptr = (flag & RTE_LCORE_DOMAIN_L1) ? topo_cnfg.l1[i] :
+			(flag & RTE_LCORE_DOMAIN_L2) ? topo_cnfg.l2[i] :
+			(flag & RTE_LCORE_DOMAIN_L3) ? topo_cnfg.l3[i] :
+			(flag & RTE_LCORE_DOMAIN_IO) ? topo_cnfg.io[i] : NULL;
+
+		domain_core_count = ptr->core_count;
+		domain_core_list = ptr->cores;
+
+		/* check if we have cores to iterate from this domain */
+		if (skip_cores >= domain_core_count)
+			continue;
+
+		if (((1 + sel_domain) == domain_count) && (wrap)) {
+			if (iter_loop == true)
+				break;
+
+			iter_loop = true;
+		}
+
+		sel_domain_core_index = (cores_to_skip >= 0) ? skip_cores :
+			(domain_core_count - skip_cores);
+		sel_domain_core = domain_core_list[sel_domain_core_index];
+
+		if ((skip_main) && (sel_domain_core == main_lcore)) {
+			sel_domain_core_index = 0xffff;
+			sel_domain_core = RTE_MAX_LCORE;
+			continue;
+		}
+	}
+#endif
+
+	EAL_LOG(DEBUG, "Selected core (%u)", sel_domain_core);
+	return sel_domain_core;
+}
+
 unsigned int
 rte_lcore_to_socket_id(unsigned int lcore_id)
 {
@@ -131,6 +427,290 @@ socket_id_cmp(const void *a, const void *b)
 	return 0;
 }
 
+
+
+/*
+ * Use HWLOC library to parse L1|L2|L3|NUMA-IO on the running target machine.
+ * Store the topology structure in memory.
+ */
+int
+rte_eal_topology_init(void)
+{
+#ifdef RTE_EAL_HWLOC_TOPOLOGY_PROBE
+	memset(&topo_cnfg, 0, sizeof(struct topology_config));
+
+	hwloc_topology_init(&topo_cnfg.topology);
+	hwloc_topology_load(topo_cnfg.topology);
+
+	int l1_depth = hwloc_get_type_depth(topo_cnfg.topology, HWLOC_OBJ_L1CACHE);
+	int l2_depth = hwloc_get_type_depth(topo_cnfg.topology, HWLOC_OBJ_L2CACHE);
+	int l3_depth = hwloc_get_type_depth(topo_cnfg.topology, HWLOC_OBJ_L3CACHE);
+	int io_depth = hwloc_get_type_depth(topo_cnfg.topology, HWLOC_OBJ_NUMANODE);
+
+	if (l1_depth) {
+		topo_cnfg.l1_count = hwloc_get_nbobjs_by_depth(topo_cnfg.topology, l1_depth);
+		topo_cnfg.l1 = (struct core_domain_mapping **)
+			malloc(sizeof(struct core_domain_mapping *) * topo_cnfg.l1_count);
+		if (topo_cnfg.l1 == NULL) {
+			rte_eal_topology_release();
+			return -1;
+		}
+
+		for (int j = 0; j < topo_cnfg.l1_count; j++) {
+			hwloc_obj_t obj = hwloc_get_obj_by_depth(topo_cnfg.topology, l1_depth, j);
+			unsigned int first_cpu = hwloc_bitmap_first(obj->cpuset);
+			unsigned int cpu_count = hwloc_bitmap_weight(obj->cpuset);
+
+			topo_cnfg.l1[j] = (struct core_domain_mapping *)
+				malloc(sizeof(struct core_domain_mapping));
+			if (topo_cnfg.l1[j] == NULL) {
+				rte_eal_topology_release();
+				return -1;
+			}
+
+			topo_cnfg.l1[j]->core_count = 0;
+			topo_cnfg.l1[j]->cores = (uint16_t *)
+				malloc(sizeof(uint16_t) * cpu_count);
+			if (topo_cnfg.l1[j]->cores == NULL) {
+				rte_eal_topology_release();
+				return -1;
+			}
+
+			signed int cpu_id = first_cpu;
+			unsigned int cpu_index = 0;
+			do {
+				if (rte_lcore_is_enabled(cpu_id)) {
+					EAL_LOG(DEBUG, " L1|SMT domain (%u) lcore %u", j, cpu_id);
+					topo_cnfg.l1[j]->cores[cpu_index] = cpu_id;
+					cpu_index++;
+
+					topo_cnfg.l1[j]->core_count += 1;
+					topo_cnfg.l1_core_count += 1;
+				}
+				cpu_id = hwloc_bitmap_next(obj->cpuset, cpu_id);
+				cpu_count -= 1;
+			} while ((cpu_id != -1) && (cpu_count));
+		}
+	}
+
+	if (l2_depth) {
+		topo_cnfg.l2_count = hwloc_get_nbobjs_by_depth(topo_cnfg.topology, l2_depth);
+		topo_cnfg.l2 = (struct core_domain_mapping **)
+			malloc(sizeof(struct core_domain_mapping *) * topo_cnfg.l2_count);
+		if (topo_cnfg.l2 == NULL) {
+			rte_eal_topology_release();
+			return -1;
+		}
+
+		for (int j = 0; j < topo_cnfg.l2_count; j++) {
+			hwloc_obj_t obj = hwloc_get_obj_by_depth(topo_cnfg.topology, l2_depth, j);
+			unsigned int first_cpu = hwloc_bitmap_first(obj->cpuset);
+			unsigned int cpu_count = hwloc_bitmap_weight(obj->cpuset);
+
+			topo_cnfg.l2[j] = (struct core_domain_mapping *)
+				malloc(sizeof(struct core_domain_mapping));
+			if (topo_cnfg.l2[j] == NULL) {
+				rte_eal_topology_release();
+				return -1;
+			}
+
+			topo_cnfg.l2[j]->core_count = 0;
+			topo_cnfg.l2[j]->cores = (uint16_t *)
+				malloc(sizeof(uint16_t) * cpu_count);
+			if (topo_cnfg.l2[j]->cores == NULL) {
+				rte_eal_topology_release();
+				return -1;
+			}
+
+			signed int cpu_id = first_cpu;
+			unsigned int cpu_index = 0;
+			do {
+				if (rte_lcore_is_enabled(cpu_id)) {
+					EAL_LOG(DEBUG, " L2 domain (%u) lcore %u", j, cpu_id);
+					topo_cnfg.l2[j]->cores[cpu_index] = cpu_id;
+					cpu_index++;
+
+					topo_cnfg.l2[j]->core_count += 1;
+					topo_cnfg.l2_core_count += 1;
+				}
+				cpu_id = hwloc_bitmap_next(obj->cpuset, cpu_id);
+				cpu_count -= 1;
+			} while ((cpu_id != -1) && (cpu_count));
+		}
+	}
+	if (l3_depth) {
+		topo_cnfg.l3_count = hwloc_get_nbobjs_by_depth(topo_cnfg.topology, l3_depth);
+		topo_cnfg.l3 = (struct core_domain_mapping **)
+			malloc(sizeof(struct core_domain_mapping *) * topo_cnfg.l3_count);
+		if (topo_cnfg.l3 == NULL) {
+			rte_eal_topology_release();
+			return -1;
+		}
+
+		for (int j = 0; j < topo_cnfg.l3_count; j++) {
+			hwloc_obj_t obj = hwloc_get_obj_by_depth(topo_cnfg.topology, l3_depth, j);
+			unsigned int first_cpu = hwloc_bitmap_first(obj->cpuset);
+			unsigned int cpu_count = hwloc_bitmap_weight(obj->cpuset);
+
+			topo_cnfg.l3[j] = (struct core_domain_mapping *)
+				malloc(sizeof(struct core_domain_mapping));
+			if (topo_cnfg.l3[j] == NULL) {
+				rte_eal_topology_release();
+				return -1;
+			}
+
+			topo_cnfg.l3[j]->core_count = 0;
+			topo_cnfg.l3[j]->cores = (uint16_t *)
+				malloc(sizeof(uint16_t) * cpu_count);
+			if (topo_cnfg.l3[j]->cores == NULL) {
+				rte_eal_topology_release();
+				return -1;
+			}
+
+			signed int cpu_id = first_cpu;
+			unsigned int cpu_index = 0;
+			do {
+				if (rte_lcore_is_enabled(cpu_id)) {
+					EAL_LOG(DEBUG, " L3 domain (%u) lcore %u", j, cpu_id);
+					topo_cnfg.l3[j]->cores[cpu_index] = cpu_id;
+					cpu_index++;
+
+					topo_cnfg.l3[j]->core_count += 1;
+					topo_cnfg.l3_core_count += 1;
+				}
+				cpu_id = hwloc_bitmap_next(obj->cpuset, cpu_id);
+				cpu_count -= 1;
+			} while ((cpu_id != -1) && (cpu_count));
+		}
+	}
+	if (io_depth) {
+		topo_cnfg.io_count = hwloc_get_nbobjs_by_depth(topo_cnfg.topology, io_depth);
+		topo_cnfg.io = (struct core_domain_mapping **)
+			malloc(sizeof(struct core_domain_mapping *) * topo_cnfg.io_count);
+		if (topo_cnfg.io == NULL) {
+			rte_eal_topology_release();
+			return -1;
+		}
+
+		for (int j = 0; j < topo_cnfg.io_count; j++) {
+			hwloc_obj_t obj = hwloc_get_obj_by_depth(topo_cnfg.topology, io_depth, j);
+			unsigned int first_cpu = hwloc_bitmap_first(obj->cpuset);
+			unsigned int cpu_count = hwloc_bitmap_weight(obj->cpuset);
+
+			topo_cnfg.io[j] = (struct core_domain_mapping *)
+				malloc(sizeof(struct core_domain_mapping));
+			if (topo_cnfg.io[j] == NULL) {
+				rte_eal_topology_release();
+				return -1;
+			}
+
+			topo_cnfg.io[j]->core_count = 0;
+			topo_cnfg.io[j]->cores = (uint16_t *)
+				malloc(sizeof(uint16_t) * cpu_count);
+			if (topo_cnfg.io[j]->cores == NULL) {
+				rte_eal_topology_release();
+				return -1;
+			}
+
+			signed int cpu_id = first_cpu;
+			unsigned int cpu_index = 0;
+			do {
+				if (rte_lcore_is_enabled(cpu_id)) {
+					EAL_LOG(DEBUG, " IO domain (%u) lcore %u", j, cpu_id);
+					topo_cnfg.io[j]->cores[cpu_index] = cpu_id;
+					cpu_index++;
+
+					topo_cnfg.io[j]->core_count += 1;
+					topo_cnfg.io_core_count += 1;
+				}
+				cpu_id = hwloc_bitmap_next(obj->cpuset, cpu_id);
+				cpu_count -= 1;
+			} while ((cpu_id != -1) && (cpu_count));
+		}
+	}
+
+	hwloc_topology_destroy(topo_cnfg.topology);
+	topo_cnfg.topology = NULL;
+
+	EAL_LOG(INFO, "Details of Topology:");
+	EAL_LOG(INFO, " - domain count: l1 %u, l2 %u, l3 %u, io %u",
+		topo_cnfg.l1_count, topo_cnfg.l2_count,
+		topo_cnfg.l3_count, topo_cnfg.io_count);
+	EAL_LOG(INFO, " - core count: l1 %u, l2 %u, l3 %u, io %u",
+		topo_cnfg.l1_core_count, topo_cnfg.l2_core_count,
+		topo_cnfg.l3_core_count, topo_cnfg.io_core_count);
+#endif
+
+	return 0;
+}
+
+/*
+ * release HWLOC topology structure memory
+ */
+int
+rte_eal_topology_release(void)
+{
+#ifdef RTE_EAL_HWLOC_TOPOLOGY_PROBE
+	EAL_LOG(DEBUG, "release l1 domain memory!");
+	for (int i = 0; i < topo_cnfg.l1_count; i++) {
+		if (topo_cnfg.l1[i]->cores) {
+			free(topo_cnfg.l1[i]->cores);
+			topo_cnfg.l1[i]->core_count = 0;
+		}
+	}
+
+	if (topo_cnfg.l1) {
+		free(topo_cnfg.l1);
+		topo_cnfg.l1 = NULL;
+	}
+	topo_cnfg.l1_count = 0;
+
+	EAL_LOG(DEBUG, "release l2 domain memory!");
+	for (int i = 0; i < topo_cnfg.l2_count; i++) {
+		if (topo_cnfg.l2[i]->cores) {
+			free(topo_cnfg.l2[i]->cores);
+			topo_cnfg.l2[i]->core_count = 0;
+		}
+	}
+
+	if (topo_cnfg.l2) {
+		free(topo_cnfg.l2);
+		topo_cnfg.l2 = NULL;
+	}
+	topo_cnfg.l2_count = 0;
+
+	EAL_LOG(DEBUG, "release l3 domain memory!");
+	for (int i = 0; i < topo_cnfg.l3_count; i++) {
+		if (topo_cnfg.l3[i]->cores) {
+			free(topo_cnfg.l3[i]->cores);
+			topo_cnfg.l3[i]->core_count = 0;
+		}
+	}
+
+	if (topo_cnfg.l3) {
+		free(topo_cnfg.l3);
+		topo_cnfg.l3 = NULL;
+	}
+	topo_cnfg.l3_count = 0;
+
+	EAL_LOG(DEBUG, "release IO domain memory!");
+	for (int i = 0; i < topo_cnfg.io_count; i++) {
+		if (topo_cnfg.io[i]->cores) {
+			free(topo_cnfg.io[i]->cores);
+			topo_cnfg.io[i]->core_count = 0;
+		}
+	}
+
+	if (topo_cnfg.io) {
+		free(topo_cnfg.io);
+		topo_cnfg.io = NULL;
+	}
+	topo_cnfg.io_count = 0;
+#endif
+
+	return 0;
+}
+
 /*
  * Parse /sys/devices/system/cpu to get the number of physical and logical
  * processors on the machine. The function will fill the cpu_info
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index bb315dab04..ed97f112ca 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -17,6 +17,10 @@
 
 #include "eal_internal_cfg.h"
 
+#ifdef RTE_EAL_HWLOC_TOPOLOGY_PROBE
+#include <hwloc.h>
+#endif
+
 /**
  * Structure storing internal configuration (per-lcore)
  */
@@ -40,6 +44,36 @@ struct lcore_config {
 
 extern struct lcore_config lcore_config[RTE_MAX_LCORE];
 
+struct core_domain_mapping {
+	uint16_t core_count;
+	uint16_t *cores;
+};
+
+struct topology_config {
+#ifdef RTE_EAL_HWLOC_TOPOLOGY_PROBE
+	hwloc_topology_t topology;
+#endif
+
+	/* domain count */
+	uint16_t l1_count;
+	uint16_t l2_count;
+	uint8_t l3_count;
+	uint8_t io_count;
+
+	/* total cores under all domain */
+	uint16_t l1_core_count;
+	uint16_t l2_core_count;
+	uint16_t l3_core_count;
+	uint16_t io_core_count;
+
+	/* two dimensional array for each domain */
+	struct core_domain_mapping **l1;
+	struct core_domain_mapping **l2;
+	struct core_domain_mapping **l3;
+	struct core_domain_mapping **io;
+};
+extern struct topology_config topo_cnfg;
+
 /**
  * The global RTE configuration structure.
  */
@@ -81,6 +115,20 @@ struct rte_config *rte_eal_get_configuration(void);
  */
 int rte_eal_memzone_init(void);
 
+
+/**
+ * Initialize the topology structure using the HWLOC library.
+ */
+__rte_internal
+int rte_eal_topology_init(void);
+
+/**
+ * Release the memory held by the topology structure.
+ */
+__rte_internal
+int rte_eal_topology_release(void);
+
+
 /**
  * Fill configuration with number of physical and logical processors
  *
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 1229230063..301f993748 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -73,6 +73,8 @@ struct lcore_config lcore_config[RTE_MAX_LCORE];
 
 /* used by rte_rdtsc() */
 int rte_cycles_vmware_tsc_map;
+/* holds topology information */
+struct topology_config topo_cnfg;
 
 int
 eal_clean_runtime_dir(void)
@@ -912,6 +914,12 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
+	if (rte_eal_topology_init()) {
+		rte_eal_init_alert("Cannot invoke topology!!!");
+		rte_errno = ENOTSUP;
+		return -1;
+	}
+
 	eal_mcfg_complete();
 
 	return fctret;
@@ -932,6 +940,8 @@ rte_eal_cleanup(void)
 	struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
+
+	rte_eal_topology_release();
 	rte_service_finalize();
 	rte_mp_channel_cleanup();
 	eal_bus_cleanup();
diff --git a/lib/eal/include/rte_lcore.h b/lib/eal/include/rte_lcore.h
index 7deae47af3..f6c3597656 100644
--- a/lib/eal/include/rte_lcore.h
+++ b/lib/eal/include/rte_lcore.h
@@ -18,6 +18,7 @@
 #include <rte_eal.h>
 #include <rte_launch.h>
 #include <rte_thread.h>
+#include <rte_bitops.h>
 
 #ifdef __cplusplus
 extern "C" {
@@ -37,6 +38,39 @@ enum rte_lcore_role_t {
 	ROLE_NON_EAL,
 };
 
+/**
+ * The lcore grouping within the L1 Domain.
+ */
+#define RTE_LCORE_DOMAIN_L1 RTE_BIT32(0)
+/**
+ * The lcore grouping within the L2 Domain.
+ */
+#define RTE_LCORE_DOMAIN_L2 RTE_BIT32(1)
+/**
+ * The lcore grouping within the L3 Domain.
+ */
+#define RTE_LCORE_DOMAIN_L3 RTE_BIT32(2)
+/**
+ * The lcore grouping within the IO Domain.
+ */
+#define RTE_LCORE_DOMAIN_IO RTE_BIT32(3)
+/**
+ * The lcore grouping within the SMT Domain (same as the L1 Domain).
+ */
+#define RTE_LCORE_DOMAIN_SMT RTE_LCORE_DOMAIN_L1
+/**
+ * The lcore grouping based on Domains (L1|L2|L3|IO).
+ */
+#define RTE_LCORE_DOMAIN_ALL (RTE_LCORE_DOMAIN_L1 | \
+	RTE_LCORE_DOMAIN_L2 | \
+	RTE_LCORE_DOMAIN_L3 | \
+	RTE_LCORE_DOMAIN_IO)
+/**
+ * The mask for getting all cores under the same topology domain.
+ */
+#define RTE_LCORE_DOMAIN_LCORES_ALL RTE_GENMASK32(31, 0)
+
+
 /**
  * Get a lcore's role.
  *
@@ -211,6 +245,108 @@ int rte_lcore_is_enabled(unsigned int lcore_id);
  */
 unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap);
 
+/**
+ * Get the domain count for the selected domain type.
+ *
+ * @param domain_sel
+ *   Domain selection, RTE_LCORE_DOMAIN_[L1|L2|L3|IO].
+ * @return
+ *   Total count for the selected domain.
+ *
+ * @note valid for EAL args of lcore and coremask.
+ *
+ */
+__rte_experimental
+unsigned int rte_get_domain_count(unsigned int domain_sel);
+
+/**
+ * Get the lcore count for a given domain.
+ *
+ * @param domain_sel
+ *   Domain selection, RTE_LCORE_DOMAIN_[L1|L2|L3|IO].
+ * @param domain_indx
+ *   Domain index, valid range from 0 to (rte_get_domain_count - 1).
+ * @return
+ *   Total lcore count in the selected index of the domain.
+ *
+ * @note valid for EAL args of lcore and coremask.
+ *
+ */
+__rte_experimental
+unsigned int
+rte_lcore_count_from_domain(unsigned int domain_sel, unsigned int domain_indx);
+
+/**
+ * Get the n'th lcore from a selected domain.
+ *
+ * @param domain_sel
+ *   Domain selection, RTE_LCORE_DOMAIN_[L1|L2|L3|IO].
+ * @param domain_indx
+ *   Domain index, valid range from 0 to (rte_get_domain_count - 1).
+ * @param lcore_pos
+ *   Lcore position, valid range from 0 to (DPDK enabled lcores in the domain - 1).
+ * @return
+ *   Lcore from the list for the selected domain.
+ *
+ * @note valid for EAL args of lcore and coremask.
+ *
+ */
+__rte_experimental
+unsigned int
+rte_get_lcore_in_domain(unsigned int domain_sel,
+unsigned int domain_indx, unsigned int lcore_pos);
+
+/**
+ * Get the next enabled lcore from the domain selected by the flag.
+ *
+ * @param i
+ *   The current lcore (reference).
+ * @param skip_main
+ *   If true, do not return the ID of the main lcore.
+ * @param wrap
+ *   If true, go back to the first core of the flag-based domain when the last
+ *   core is reached.
+ *   If false, return RTE_MAX_LCORE when no more cores are available.
+ * @param flag
+ *   Allows the user to select various domains as specified under
+ *   RTE_LCORE_DOMAIN_[L1|L2|L3|IO].
+ *
+ * @return
+ *   The next lcore_id or RTE_MAX_LCORE if not found.
+ *
+ * @note valid for EAL args of lcore and coremask.
+ *
+ */
+__rte_experimental
+unsigned int
+rte_get_next_lcore_from_domain(unsigned int i, int skip_main, int wrap,
+uint32_t flag);
+
+/**
+ * Get the Nth (first or last) lcore from the next domain selected by the flag.
+ *
+ * @param i
+ *   The current lcore (reference).
+ * @param skip_main
+ *   If true, do not return the ID of the main lcore.
+ * @param wrap
+ *   If true, go back to the first core of the flag-based domain when the last
+ *   core is reached.
+ *   If false, return RTE_MAX_LCORE when no more cores are available.
+ * @param flag
+ *   Allows the user to select various domains as specified under
+ *   RTE_LCORE_DOMAIN_(L1|L2|L3|IO).
+ * @param cores_to_skip
+ *   If set to a positive value, skips to the Nth lcore from the start.
+ *   If set to a negative value, skips to the Nth lcore from the end.
+ *
+ * @return
+ *   The next lcore_id or RTE_MAX_LCORE if not found.
+ *
+ * @note valid for EAL args of lcore and coremask.
+ *
+ */
+__rte_experimental
+unsigned int
+rte_get_next_lcore_from_next_domain(unsigned int i,
+int skip_main, int wrap, uint32_t flag, int cores_to_skip);
+
 /**
  * Macro to browse all running lcores.
  */
@@ -227,6 +363,38 @@ unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap);
 	     i < RTE_MAX_LCORE; \
 	     i = rte_get_next_lcore(i, 1, 0))
 
+/**
+ * Macro to browse all running lcores in a domain.
+ */
+#define RTE_LCORE_FOREACH_DOMAIN(i, flag) \
+	for (i = rte_get_next_lcore_from_domain(-1, 0, 0, flag); \
+	     i < RTE_MAX_LCORE; \
+	     i = rte_get_next_lcore_from_domain(i, 0, 0, flag))
+
+/**
+ * Macro to browse all running lcores except the main lcore in a domain.
+ */
+#define RTE_LCORE_FOREACH_WORKER_DOMAIN(i, flag) \
+	for (i = rte_get_next_lcore_from_domain(-1, 1, 0, flag); \
+	     i < RTE_MAX_LCORE; \
+	     i = rte_get_next_lcore_from_domain(i, 1, 0, flag))
+
+/**
+ * Macro to browse the Nth lcore on each domain.
+ */
+#define RTE_LCORE_FORN_NEXT_DOMAIN(i, flag, n) \
+	for (i = rte_get_next_lcore_from_next_domain(-1, 0, 0, flag, n); \
+	     i < RTE_MAX_LCORE; \
+	     i = rte_get_next_lcore_from_next_domain(i, 0, 0, flag, n))
+
+/**
+ * Macro to browse the Nth lcore, except the main lcore, on each domain.
+ */
+#define RTE_LCORE_FORN_WORKER_NEXT_DOMAIN(i, flag, n) \
+	for (i = rte_get_next_lcore_from_next_domain(-1, 1, 0, flag, n); \
+	     i < RTE_MAX_LCORE; \
+	     i = rte_get_next_lcore_from_next_domain(i, 1, 0, flag, n))
+
 /**
  * Callback prototype for initializing lcores.
 *
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 54577b7718..093f208319 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -65,6 +65,9 @@
  * duration of the program, as we hold a write lock on it in the primary proc */
 static int mem_cfg_fd = -1;
 
+/* holds topology information */
+struct topology_config topo_cnfg;
+
 static struct flock wr_lock = {
 		.l_type = F_WRLCK,
 		.l_whence = SEEK_SET,
@@ -1311,6 +1314,12 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
+	if (rte_eal_topology_init()) {
+		rte_eal_init_alert("Cannot invoke topology!!!");
+		rte_errno = ENOTSUP;
+		return -1;
+	}
+
 	eal_mcfg_complete();
 
 	return fctret;
@@ -1352,6 +1361,8 @@ rte_eal_cleanup(void)
 	struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
+	rte_eal_topology_release();
+
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
 			internal_conf->hugepage_file.unlink_existing)
 		rte_memseg_walk(mark_freeable, NULL);
diff --git a/lib/eal/meson.build b/lib/eal/meson.build
index e1d6c4cf17..690b95d5df 100644
--- a/lib/eal/meson.build
+++ b/lib/eal/meson.build
@@ -31,3 +31,7 @@ endif
 if is_freebsd
     annotate_locks = false
 endif
+
+if has_libhwloc
+    dpdk_conf.set10('RTE_EAL_HWLOC_TOPOLOGY_PROBE', true)
+endif
diff --git a/lib/eal/version.map b/lib/eal/version.map
index f493cd1ca7..6c5b3ad205 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -399,6 +399,13 @@ EXPERIMENTAL {
 
 	# added in 24.11
 	rte_bitset_to_str;
+
+	# added in 25.03
+	rte_get_domain_count;
+	rte_lcore_count_from_domain;
+	rte_get_lcore_in_domain;
+	rte_get_next_lcore_from_domain;
+	rte_get_next_lcore_from_next_domain;
 };
 
 INTERNAL {
@@ -408,6 +415,8 @@ INTERNAL {
 	rte_bus_unregister;
 	rte_eal_get_baseaddr;
 	rte_eal_parse_coremask;
+	rte_eal_topology_init;
+	rte_eal_topology_release;
 	rte_firmware_read;
 	rte_intr_allow_others;
 	rte_intr_cap_multiple;
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 28b78a95a6..2edfc4128c 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -40,6 +40,10 @@ static int mem_cfg_fd = -1;
 
 /* internal configuration (per-core) */
 struct lcore_config lcore_config[RTE_MAX_LCORE];
+/* holds topology information */
+struct topology_config topo_cnfg;
+
+
 /* Detect if we are a primary or a secondary process */
 enum rte_proc_type_t
 eal_proc_type_detect(void)
@@ -262,6 +266,8 @@ rte_eal_cleanup(void)
 	struct internal_config *internal_conf =
 		eal_get_internal_configuration();
 
+	rte_eal_topology_release();
+
 	eal_intr_thread_cancel();
 	eal_mem_virt2iova_cleanup();
 	eal_bus_cleanup();
@@ -505,6 +511,12 @@ rte_eal_init(int argc, char **argv)
 	rte_eal_mp_remote_launch(sync_func, NULL, SKIP_MAIN);
 	rte_eal_mp_wait_lcore();
 
+	if (rte_eal_topology_init()) {
+		rte_eal_init_alert("Cannot invoke topology!!!");
+		rte_errno = ENOTSUP;
+		return -1;
+	}
+
 	eal_mcfg_complete();
 
 	return fctret;
-- 
2.34.1