From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ferruh Yigit <ferruh.yigit@amd.com>
Date: Thu, 5 Sep 2024 14:05:46 +0100
Subject: Re: [RFC 0/2] introduce LLC aware functions
To: "Burakov, Anatoly", "Varghese, Vipin", dev@dpdk.org
Cc: Mattias Rönnblom
Message-ID: <3eae1577-f06f-48f2-863a-faf70b97bc72@amd.com>
In-Reply-To: <3edc8a89-7d10-47f4-8f95-856c2a7fc7ba@intel.com>
References: <20240827151014.201-1-vipin.varghese@amd.com>
 <288d9e9e-aaec-4dac-b969-54e01956ef4e@intel.com>
 <65f3dc80-2d07-4b8b-9a5c-197eb2b21180@amd.com>
 <8addd7f6-fac8-45ec-a44f-f81eb008cc36@intel.com>
 <3edc8a89-7d10-47f4-8f95-856c2a7fc7ba@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

On 9/3/2024 9:50 AM, Burakov, Anatoly wrote:
> On 9/2/2024 5:33 PM, Varghese, Vipin wrote:
>>
>>>>> I recently looked into how Intel's Sub-NUMA Clustering would work
>>>>> within DPDK, and found that I actually didn't have to do anything,
>>>>> because the SNC "clusters" present themselves as NUMA nodes, which
>>>>> DPDK already supports natively.
>>>>
>>>> Yes, this is correct. In Intel Xeon Platinum BIOS one can enable
>>>> `Cluster per NUMA` as `1, 2 or 4`.
>>>>
>>>> This divides the tiles into sub-NUMA partitions, each having separate
>>>> lcores, memory controllers, PCIe and accelerators.
>>>>
>>>>> Does AMD's implementation of chiplets not report themselves as
>>>>> separate NUMA nodes?
>>>>
>>>> In the AMD EPYC SoC, this is different. There are 2 BIOS settings,
>>>> namely:
>>>>
>>>> 1. NPS: `NUMA Per Socket`, which allows the IO tile (memory, PCIe and
>>>> accelerator) to be partitioned as NUMA 0, 1, 2 or 4.
>>>>
>>>> 2. L3 as NUMA: `L3 cache of CPU tiles as individual NUMA`.
>>>> This allows all CPU tiles to be independent NUMA nodes.
>>>>
>>>> The above settings are possible because the CPU is independent from
>>>> the IO tile, thus allowing 4 combinations to be available for use.
>>>
>>> Sure, but presumably if the user wants to distinguish this, they have
>>> to configure their system appropriately. If the user wants to take
>>> advantage of L3 as NUMA (which is what your patch proposes), then they
>>> can enable the BIOS knob and get that functionality for free. DPDK
>>> already supports this.
>>>
>> The intent of the RFC is to introduce the ability to select lcores
>> within the same L3 cache whether the BIOS is set or unset for `L3 as
>> NUMA`. This is also achieved and tested on platforms which advertise
>> this via sysfs by the OS kernel, thus eliminating the dependency on
>> hwloc and libnuma, which can be different versions in different
>> distros.
>
> But we do depend on libnuma, so we might as well depend on it? Are there
> different versions of libnuma that interfere with what you're trying to
> do? You keep coming back to this "whether the BIOS is set or unset" for
> L3 as NUMA, but I'm still unclear as to what issues your patch is
> solving assuming "knob is set". When the system is configured correctly,
> it already works and reports cores as part of NUMA nodes (as L3)
> correctly. It is only when the system is configured *not* to do that
> that issues arise, is it not? In which case IMO the easier solution
> would be to just tell the user to enable that knob in BIOS?
>
>>>> These are covered in the tuning guide for the SoC, in "12. How to get
>>>> best performance on AMD platform" — Data Plane Development Kit 24.07.0
>>>> documentation (dpdk.org).
>>>>
>>>>> Because if it does, I don't really think any changes are required
>>>>> because NUMA nodes would give you the same thing, would it not?
>>>>
>>>> I have a different opinion to this outlook. An end user can:
>>>>
>>>> 1. Identify the lcores and their NUMA nodes using
>>>> `usertools/cpu-layout.py`.
>>>
>>> I recently submitted an enhancement for the CPU layout script to print
>>> out NUMA separately from physical socket [1].
>>>
>>> [1]
>>> https://patches.dpdk.org/project/dpdk/patch/40cf4ee32f15952457ac5526cfce64728bd13d32.1724323106.git.anatoly.burakov@intel.com/
>>>
>>> I believe when "L3 as NUMA" is enabled in BIOS, the script will display
>>> both the physical package ID as well as the NUMA nodes reported by the
>>> system, which will be different from the physical package ID, and which
>>> will display the information you were looking for.
>>
>> At AMD we had submitted earlier work on the same via usertools:
>> enhance logic to display NUMA - Patchwork (dpdk.org)
>> https://patchwork.dpdk.org/project/dpdk/patch/20220326073207.489694-1-vipin.varghese@amd.com/
>>
>> This clearly distinguished NUMA and physical socket.
>
> Oh, cool, I didn't see that patch. I would argue my visual format is
> more readable though, so perhaps we can get that in :)
>
>> Agreed, but as pointed out in the case of Intel Xeon Platinum SPR, the
>> tile consists of CPU, memory, PCIe and accelerator.
>>
>> Hence, setting the BIOS option `Cluster per NUMA` makes the OS kernel &
>> libnuma display the appropriate domain with memory, PCIe and CPU.
>>
>> In the case of the AMD SoC, libnuma for CPU is different from memory
>> NUMA per socket.
>
> I'm curious how the kernel handles this then, and what you are
> getting from libnuma.
> You seem to be implying that there are two
> different NUMA nodes on your SoC, and either the kernel or libnuma is
> in conflict as to what belongs to what NUMA node?
>
>>>> 3. There is no API which distinguishes the L3 NUMA domain. The
>>>> function `rte_socket_id()` for CPU tiles like the AMD SoC will
>>>> return the physical socket.
>>>
>>> Sure, but I would think the answer to that would be to introduce an
>>> API to distinguish between NUMA (socket ID in DPDK parlance) and
>>> package (physical socket ID in the "traditional NUMA" sense). Once we
>>> can distinguish between those, DPDK can just rely on NUMA information
>>> provided by the OS, while still being capable of identifying physical
>>> sockets if the user so desires.
>>
>> Agreed, +1 for the idea of a physical socket API and changes in the
>> library to exploit the same.
>>>
>>> I am actually going to introduce an API to get the *physical socket*
>>> (as opposed to the NUMA node) in the next few days.
>>>
>> But how does it solve the end customer issues:
>>
>> 1. If there are multiple NICs or accelerators on multiple sockets,
>> but the IO tile is partitioned into sub-domains.
>
> At least on Intel platforms, the NUMA node gets assigned correctly -
> that is, if my Xeon with SNC enabled has NUMA nodes 3,4 on socket 1,
> and there's a NIC connected to socket 1, it's going to show up as being
> on NUMA node 3 or 4 depending on where exactly I plugged it in.
> Everything already works as expected, and there is no need for any
> changes on Intel platforms (at least none that I can see).
>
> My proposed API is really for those users who wish to explicitly allow
> for reserving memory/cores on "the same physical socket", as "on the
> same tile" is already taken care of by NUMA nodes.
>
>> 2. If RTE_FLOW steering is applied on a NIC which needs to be
>> processed under the same L3 - this reduces noisy neighbors and gives
>> better cache hits.
>>
>> 3. For the PKT-distribute library, which needs to run within the same
>> worker lcore set as RX-Distributor-TX.
>
> Same as above: on Intel platforms, NUMA nodes already solve this.
>
>> Totally agree, that is what the RFC is also doing; based on what the
>> OS sees as NUMA, we are using it.
>>
>> The only addition is, within the NUMA node, if there is a split LLC,
>> allow selection of those lcores, rather than blindly choosing lcores
>> using rte_lcore_get_next.
>
> It feels like we're working around a problem that shouldn't exist in
> the first place, because the kernel should already report this
> information. Within the NUMA subsystem, there is a sysfs node
> "distance" that, at least on Intel platforms and in certain BIOS
> configurations, reports the distance between NUMA nodes, from which one
> can make inferences about how far a specific NUMA node is from any
> other NUMA node. This could have been used to encode L3 cache
> information. Do AMD platforms not do that? In that case, "lcore next"
> for a particular socket ID (NUMA node, in reality) should already get
> us any cores that are close to each other, because all of this
> information is already encoded in NUMA nodes by the system.
>
> I feel like there's a disconnect between my understanding of the
> problem space and yours, so I'm going to ask a very basic question:
>
> Assuming the user has configured their AMD system correctly (i.e.
> enabled L3 as NUMA), is there any problem to be solved by adding a new
> API? Does the system not report each L3 as a separate NUMA node?
>

Hi Anatoly,

Let me try to answer.
To start with, Intel "Sub-NUMA Clustering" and AMD NUMA are different;
as far as I understand, SNC is more similar to the classic physical
socket based NUMA.

Following is the AMD CPU:

┌─────┐┌─────┐┌──────────┐┌─────┐┌─────┐
│     ││     ││          ││     ││     │
│     ││     ││          ││     ││     │
│TILE1││TILE2││          ││TILE5││TILE6│
│     ││     ││          ││     ││     │
│     ││     ││          ││     ││     │
│     ││     ││          ││     ││     │
└─────┘└─────┘│    IO    │└─────┘└─────┘
┌─────┐┌─────┐│   TILE   │┌─────┐┌─────┐
│     ││     ││          ││     ││     │
│     ││     ││          ││     ││     │
│TILE3││TILE4││          ││TILE7││TILE8│
│     ││     ││          ││     ││     │
│     ││     ││          ││     ││     │
│     ││     ││          ││     ││     │
└─────┘└─────┘└──────────┘└─────┘└─────┘

Each 'Tile' has multiple cores, and the 'IO Tile' has the memory
controller, bus controllers, etc.

When NPS=x is configured in the BIOS, the IO tile resources are split
and each part is seen as a NUMA node.

Following is NPS=4:

┌─────┐┌─────┐┌──────────┐┌─────┐┌─────┐
│     ││     ││     .    ││     ││     │
│     ││     ││     .    ││     ││     │
│TILE1││TILE2││     .    ││TILE5││TILE6│
│     ││     ││NUMA .NUMA││     ││     │
│     ││     ││  0  .  1 ││     ││     │
│     ││     ││     .    ││     ││     │
└─────┘└─────┘│     .    │└─────┘└─────┘
┌─────┐┌─────┐│..........│┌─────┐┌─────┐
│     ││     ││     .    ││     ││     │
│     ││     ││NUMA .NUMA││     ││     │
│TILE3││TILE4││  2  .  3 ││TILE7││TILE8│
│     ││     ││     .    ││     ││     │
│     ││     ││     .    ││     ││     │
│     ││     ││     .    ││     ││     │
└─────┘└─────┘└─────.────┘└─────┘└─────┘

The benefit of this approach is that all cores can access all NUMA
nodes without any penalty. For example, a DPDK application can use
cores from TILE1, TILE4 & TILE7 to access NUMA0 (or any NUMA node's)
resources with high performance. This is different from SNC, where a
core's access to cross-NUMA resources is hit by a performance penalty.

Now, although which tile the cores come from doesn't matter from a NUMA
perspective, it may matter (based on the workload) to have them under
the same LLC.

One way to make sure all cores are under the same LLC is to enable the
"L3 as NUMA" BIOS option, which will make each TILE show up as a
different NUMA node, and the user selects cores from one NUMA node.
This is sufficient up to some point, but not enough when an application
needs a number of cores that spans multiple tiles.

Assume each tile has 8 cores and the application needs 24 cores. When
the user provides all cores from TILE1, TILE2 & TILE3, in DPDK right
now there is no way for the application to figure out how to
group/select these cores to use them efficiently.

Indeed this is what Vipin is enabling: from a core, he is finding the
list of cores that will work efficiently with this core. In this
perspective, this is nothing really related to the NUMA configuration,
and nothing really specific to AMD, as the defined Linux sysfs
interface is used for this.

There are other architectures around that have a similar NUMA
configuration, and they can also use the same logic; at worst we can
introduce architecture-specific code so that all architectures have a
way to find the other cores that work more efficiently with a given
core. This is a useful feature for DPDK.

Let's look into another example: an application uses 24 cores in a
graph library like usage, where we want to group each three cores to
process a graph node. The application needs a way to select which three
cores work most efficiently with each other; that is what this patch
enables. In this case, enabling "L3 as NUMA" does not help at all. With
this patch both BIOS configs work, but of course the user should select
the cores to provide to the application based on the configuration.

And we can even improve this effective core selection; as Mattias
suggested, we can select cores that share L2 caches, with an expansion
of this patch. This is unrelated to NUMA, and again it does not
introduce architecture details to DPDK, as this implementation already
relies on the Linux sysfs interface.
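To make the sysfs part concrete, below is a minimal standalone sketch
(plain C, not the RFC's actual API; the helper names are made up for
illustration) of how the set of CPUs sharing an LLC with a given core
can be discovered from the standard Linux cache-topology files:

/*
 * Sketch: discover which CPUs share the last-level cache with a given
 * CPU, using only the Linux sysfs interface. Illustrative only; on the
 * platforms discussed here the LLC is cache/index3, but that index is
 * not guaranteed on every system.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CPUS 1024

/* Parse a sysfs cpulist string like "0-7,192-199" into a flag array. */
static void parse_cpulist(const char *list, unsigned char *cpus, int max)
{
	char buf[256];
	char *tok, *save;

	snprintf(buf, sizeof(buf), "%s", list);
	for (tok = strtok_r(buf, ",", &save); tok != NULL;
			tok = strtok_r(NULL, ",", &save)) {
		int first, last;

		if (sscanf(tok, "%d-%d", &first, &last) != 2)
			first = last = atoi(tok);
		for (int c = first; c <= last && c < max; c++)
			cpus[c] = 1;
	}
}

/* Mark in 'cpus' the CPUs sharing the LLC with 'cpu'; 0 on success. */
static int llc_siblings(int cpu, unsigned char *cpus, int max)
{
	char path[128], line[256];
	FILE *f;

	snprintf(path, sizeof(path),
		"/sys/devices/system/cpu/cpu%d/cache/index3/shared_cpu_list",
		cpu);
	f = fopen(path, "r");
	if (f == NULL)
		return -1;
	if (fgets(line, sizeof(line), f) == NULL) {
		fclose(f);
		return -1;
	}
	fclose(f);
	parse_cpulist(line, cpus, max);
	return 0;
}

int main(void)
{
	unsigned char cpus[MAX_CPUS] = {0};

	if (llc_siblings(0, cpus, MAX_CPUS) == 0) {
		printf("CPUs sharing LLC with cpu0:");
		for (int c = 0; c < MAX_CPUS; c++)
			if (cpus[c])
				printf(" %d", c);
		printf("\n");
	}
	return 0;
}

With sibling sets like this, an application could, for example, split
its 24 lcores into groups of three that never cross an LLC boundary,
whichever way the NPS / "L3 as NUMA" BIOS knobs are set.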
I hope it clarifies a little more.

Thanks,
ferruh