Subject: Re: [RFC 0/2] introduce LLC aware functions
From: "Varghese, Vipin"
To: "Burakov, Anatoly", ferruh.yigit@amd.com, dev@dpdk.org
Date: Mon, 2 Sep 2024 21:03:24 +0530
In-Reply-To: <8addd7f6-fac8-45ec-a44f-f81eb008cc36@intel.com>
References: <20240827151014.201-1-vipin.varghese@amd.com> <288d9e9e-aaec-4dac-b969-54e01956ef4e@intel.com> <65f3dc80-2d07-4b8b-9a5c-197eb2b21180@amd.com> <8addd7f6-fac8-45ec-a44f-f81eb008cc36@intel.com>

<snipped>

>>> I recently looked into how Intel's Sub-NUMA Clustering would work within
>>> DPDK, and found that I actually didn't have to do anything, because the
>>> SNC "clusters" present themselves as NUMA nodes, which DPDK already
>>> supports natively.

>> Yes, this is correct. In the Intel Xeon Platinum BIOS one can enable
>> `Cluster per NUMA` as `1, 2 or 4`.
>>
>> This divides the tiles into Sub-NUMA partitions, each having separate
>> lcores, memory controllers, PCIe and accelerators.


>>> Does AMD's implementation of chiplets not report themselves as separate
>>> NUMA nodes?

>> In the AMD EPYC SoC this is different. There are 2 BIOS settings, namely:
>>
>> 1. NPS: `NUMA Per Socket`, which allows the IO tile (memory, PCIe and
>> accelerator) to be partitioned as NUMA 0, 1, 2 or 4.
>>
>> 2. L3 as NUMA: `L3 cache of CPU tiles as individual NUMA`. This allows
>> all CPU tiles to be independent NUMA nodes.
>>
>> The above settings are possible because the CPU is independent from the
>> IO tile, thus allowing 4 combinations to be available for use.

> Sure, but presumably if the user wants to distinguish this, they have to
> configure their system appropriately. If the user wants to take advantage of
> L3 as NUMA (which is what your patch proposes), then they can enable the
> BIOS knob and get that functionality for free. DPDK already supports this.

The intent of the RFC is to introduce the ability to select lcores within the
same L3 cache whether or not the BIOS is set to `L3 as NUMA`. This is achieved
and tested on platforms where the OS kernel advertises the cache topology via
sysfs, thus eliminating the dependency on hwloc and libnuma, which can be at
different versions in different distros.
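
To make this concrete, here is a minimal sketch (my illustration, not the RFC
code) of the standard Linux sysfs cache-topology files the RFC relies on. It
assumes cache index3 is the L3/LLC, which is typical on x86; a robust version
should check the per-index `level` file instead of hard-coding the index.

/* Sketch: read which CPUs share the LLC with a given CPU from the Linux
 * sysfs cache-topology files (assumes index3 == L3, see note above). */
#include <stdio.h>

static int llc_shared_cpu_list(unsigned int cpu, char *buf, size_t len)
{
    char path[128];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/devices/system/cpu/cpu%u/cache/index3/shared_cpu_list", cpu);
    f = fopen(path, "r");
    if (f == NULL)
        return -1;              /* no LLC topology exposed for this CPU */
    if (fgets(buf, (int)len, f) == NULL) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return 0;
}

int main(void)
{
    char buf[256];

    /* On an EPYC CCX this prints a small CPU range; on a monolithic-LLC
     * part it typically covers the whole socket. */
    if (llc_shared_cpu_list(0, buf, sizeof(buf)) == 0)
        printf("CPUs sharing LLC with cpu0: %s", buf);
    return 0;
}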



>> These are covered in the tuning guide for the SoC, in "12. How to get best
>> performance on AMD platform" — Data Plane Development Kit 24.07.0
>> documentation <https://doc.dpdk.org/guides/linux_gsg/amd_platform.html>.


>>> Because if it does, I don't really think any changes are
>>> required because NUMA nodes would give you the same thing, would it not?

>> I have a different opinion on this outlook. An end user can:
>>
>> 1. Identify the lcores and their NUMA node using `usertools/cpu-layout.py`.

> I recently submitted an enhancement for the CPU layout script to print out
> NUMA separately from physical socket [1].
>
> [1]
> https://patches.dpdk.org/project/dpdk/patch/40cf4ee32f15952457ac5526cfce64728bd13d32.1724323106.git.anatoly.burakov@intel.com/
>
> I believe when "L3 as NUMA" is enabled in BIOS, the script will display
> both the physical package ID as well as the NUMA nodes reported by the
> system, which will be different from the physical package ID, and which
> will display the information you were looking for.

At AMD we had earlier submitted work on the same via "usertools: enhance logic
to display NUMA" (Patchwork, dpdk.org).

It clearly distinguished NUMA and physical socket.



>> 2. But it is the core mask in the EAL arguments which makes the threads
>> available to be used in a process.

> See above: if the OS already reports NUMA information, this is not a
> problem to be solved, the CPU layout script can give this information to
> the user.

Agreed, but as pointed out, in the case of Intel Xeon Platinum SPR the tile
consists of CPU, memory, PCIe and accelerator. Hence, when the BIOS option
`Cluster per NUMA` is set, the OS kernel & libnuma display the appropriate
domain with memory, PCIe and CPU.

In the case of the AMD SoC, the libnuma view of the CPU is different from the
memory NUMA per socket.
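
To illustrate the status quo with existing EAL calls (this is only an
illustration of current behaviour, not new API): the snippet below prints the
NUMA node DPDK sees for each lcore via rte_lcore_to_socket_id(). With
`L3 as NUMA` disabled on EPYC, these typically all report the same node even
though the lcores sit on different L3 tiles.

/* Print the NUMA node ("socket" in DPDK parlance) each lcore reports. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
    unsigned int lcore_id;

    if (rte_eal_init(argc, argv) < 0)
        return -1;

    RTE_LCORE_FOREACH(lcore_id) {
        printf("lcore %u -> socket/NUMA %u\n",
               lcore_id, rte_lcore_to_socket_id(lcore_id));
    }

    rte_eal_cleanup();
    return 0;
}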



>> 3. There is no API which distinguishes the L3 NUMA domain. The function
>> `rte_socket_id`
>> <https://doc.dpdk.org/api/rte__lcore_8h.html#a7c8da4664df26a64cf05dc508a4f26df>
>> for CPU tiles like the AMD SoC will return the physical socket.

> Sure, but I would think the answer to that would be to introduce an API
> to distinguish between NUMA (socket ID in DPDK parlance) and package
> (physical socket ID in the "traditional NUMA" sense). Once we can
> distinguish between those, DPDK can just rely on NUMA information
> provided by the OS, while still being capable of identifying physical
> sockets if the user so desires.
Agreed, +1 for the idea of a physical socket API and changes in the library to
exploit the same.

> I am actually going to introduce API to get *physical socket* (as
> opposed to NUMA node) in the next few days.

But how does it solve the end customer issues?

1. If there are multiple NICs or accelerators on multiple sockets, but the IO
tile is partitioned into sub-domains.

2. If RTE_FLOW steering is applied on a NIC and needs to be processed under
the same L3 - this reduces the noisy neighbor effect and gives better cache
hits.

3. For the packet distributor library, which needs to run within the same
worker lcore set as RX-Distributor-TX.

The current RFC addresses the above by helping end users identify the lcores
within the same L3 domain under a NUMA/physical socket, irrespective of the
BIOS setting.
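
As a rough usage sketch (my own, assuming the Linux sysfs cache topology is
readable and reusing the llc_shared_cpu_list() helper sketched above), an
application could pick the RX, distributor and TX workers from one LLC like
this:

/* Sketch: select three worker lcores that share one LLC for an
 * RX -> distributor -> TX pipeline, without relying on any BIOS
 * "L3 as NUMA" setting. Only the first worker's LLC group is tried. */
#include <string.h>
#include <rte_lcore.h>

static int pick_same_llc_workers(unsigned int out[3])
{
    char ref[256], cur[256];
    unsigned int lcore_id, n = 0;

    RTE_LCORE_FOREACH_WORKER(lcore_id) {
        if (llc_shared_cpu_list(rte_lcore_to_cpu_id(lcore_id),
                                cur, sizeof(cur)) < 0)
            return -1;              /* no cache topology exposed */
        if (n == 0)
            strcpy(ref, cur);       /* first worker defines the LLC group */
        else if (strcmp(ref, cur) != 0)
            continue;               /* lcore is on a different LLC, skip */
        out[n++] = lcore_id;
        if (n == 3)
            return 0;               /* RX, distributor and TX workers found */
    }
    return -1;                      /* fewer than 3 workers share one LLC */
}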



>> Example: In AMD EPYC Genoa there are a total of 13 tiles: 12 CPU tiles
>> and 1 IO tile. Setting
>>
>> 1. NPS to 4 will divide the memory, PCIe and accelerator into 4 domains,
>> while all CPUs appear as a single NUMA node, yet each of the 12 tiles has
>> an independent L3 cache.
>>
>> 2. `L3 as NUMA` allows each tile to appear as a separate L3 cluster.
>>
>> Hence, adding an API which allows selecting the available lcores based on
>> the split L3 is essential, irrespective of the BIOS setting.


> I think the crucial issue here is the "irrespective of BIOS setting" bit.

That is what the current RFC achieves.

> If EAL is getting into the game of figuring out exact intricacies
> of physical layout of the system, then there's a lot more work to be
> done as there are lots of different topologies, as other people have
> already commented, and such an API needs *a lot* of thought put into it.

There are standard sysfs interfaces for the CPU cache topology (provided by
the OS kernel); as mentioned earlier, the problem with hwloc and libnuma is
that different distros have different versions. There are solutions for
specific SoC architectures, as per the latest comments.

But we can always limit the API to selected SoCs, while on all other SoCs an
invocation will fall back to rte_get_next_lcore.
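
A possible shape of that fallback, purely as a sketch (next_lcore_in_llc() is
a hypothetical wrapper, not an existing DPDK API; rte_get_next_lcore() is the
existing EAL iterator):

/* When no split-LLC information is exposed via sysfs, behave exactly like
 * the existing iterator, so other SoCs see no change in behaviour. */
static unsigned int
next_lcore_in_llc(unsigned int prev, int skip_main, int wrap)
{
    char buf[256];

    if (llc_shared_cpu_list(0, buf, sizeof(buf)) < 0)   /* no LLC topology */
        return rte_get_next_lcore(prev, skip_main, wrap);

    /* LLC topology available: restrict the walk to lcores sharing prev's
     * LLC (same filtering as pick_same_llc_workers() above). */
    return rte_get_next_lcore(prev, skip_main, wrap);   /* filtering omitted */
}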



> If, on the other hand, we leave this issue to the kernel, and only
> gather NUMA information provided by the kernel, then nothing has to be
> done - DPDK already supports all of this natively, provided the user has
> configured the system correctly.

As shared above, we tried to bring this in via "usertools: enhance logic to
display NUMA" (Patchwork, dpdk.org).

DPDK support for lcores is being enhanced, allowing the user to pick the more
favorable lcores within the same tile.



> Moreover, arguably DPDK already works that way: technically you can get
> physical socket information even absent of NUMA support in BIOS, but
> DPDK does not do that. Instead, if OS reports NUMA node as 0, that's
> what we're going with (even if we could detect multiple sockets from
> sysfs),

In the above argument it is shared that the OS kernel detects the NUMA
domains, and that is what DPDK uses, right?

The suggested RFC also adheres to the same: what the OS sees. Can you please
explain, for better understanding, what the RFC is doing differently?


> and IMO it should stay that way unless there is a strong
> argument otherwise.

Totally agree; that is what the RFC is also doing: based on what the OS sees
as NUMA, we are using it.

The only addition is, within that NUMA node, if there is a split LLC, to allow
selection of those lcores rather than blindly choosing an lcore with
rte_get_next_lcore.


> We force the user to configure their system
> correctly as it is, and I see no reason to second-guess user's BIOS
> configuration otherwise.

Again iterating: the changes suggested in the RFC are agnostic to whatever
BIOS options are used.

It was to the earlier question `is the AMD configuration the same as the Intel
tiles` that I explained it is not using the BIOS setting.



> --
> Thanks,
> Anatoly
