From: Kaisen You <kaisenx.you@intel.com>
To: dev@dpdk.org
Cc: yidingx.zhou@intel.com, thomas@monjalon.net, david.marchand@redhat.com,
 olivier.matz@6wind.com, ferruh.yigit@amd.com, kaisenx.you@intel.com,
 zhoumin@loongson.cn, anatoly.burakov@intel.com, stable@dpdk.org
Subject: [PATCH v8] enhance NUMA affinity heuristic
Date: Fri, 26 May 2023 14:50:23 +0800
Message-Id: <20230526065023.329918-1-kaisenx.you@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230523025004.192071-1-kaisenx.you@intel.com>
References: <20230523025004.192071-1-kaisenx.you@intel.com>

When a DPDK application is started on only one NUMA node, memory is
allocated for only one socket. When interrupt threads use memory, memory
may not be found on the socket where the interrupt thread is currently
located, and memory then has to be reallocated from hugepages; this
operation leads to performance degradation.
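As an illustration (not part of the patch): a control thread, such as the
interrupt thread, has no fixed NUMA affinity, so with this patch its
per-thread socket ID is SOCKET_ID_ANY and an allocation from it is
resolved through the malloc_get_numa_socket() heuristic changed below.
The thread name "demo-ctrl" and the 4 KiB size are arbitrary choices;
rte_ctrl_thread_create() and rte_malloc() are the regular DPDK APIs of
this release line.

/* Illustrative sketch only: allocate from a control thread, whose
 * socket ID is not tied to a single NUMA node. */
#include <stdio.h>
#include <pthread.h>

#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_malloc.h>

static void *
ctrl_alloc(void *arg)
{
	(void)arg;
	/* The control thread's socket ID is SOCKET_ID_ANY, so rte_malloc()
	 * must pick a NUMA node via malloc_get_numa_socket(). */
	void *buf = rte_malloc(NULL, 4096, 0);

	printf("control thread on socket %d: alloc %s\n",
		(int)rte_socket_id(), buf != NULL ? "ok" : "failed");
	rte_free(buf);
	return NULL;
}

int
main(int argc, char **argv)
{
	pthread_t tid;

	if (rte_eal_init(argc, argv) < 0)
		return -1;
	/* Spawn a control thread, as the interrupt thread is spawned. */
	if (rte_ctrl_thread_create(&tid, "demo-ctrl", NULL, ctrl_alloc, NULL) == 0)
		pthread_join(tid, NULL);
	return rte_eal_cleanup();
}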
Fixes: 705356f0811f ("eal: simplify control thread creation")
Fixes: 770d41bf3309 ("malloc: fix allocation with unknown socket ID")
Cc: stable@dpdk.org

Signed-off-by: Kaisen You <kaisenx.you@intel.com>
---
Changes since v7:
- Update comment,

Changes since v6:
- New explanation for easy understanding,

Changes since v5:
- Add comments to the code,

Changes since v4:
- Modify the patch title,

Changes since v3:
- Add the assignment of socket_id in thread initialization,

Changes since v2:
- Add uncommitted local change and fix compilation,

Changes since v1:
- Accommodate configurations with the main lcore running on multiple
  physical cores belonging to different NUMA nodes,
---
 lib/eal/common/eal_common_thread.c |  4 ++++
 lib/eal/common/malloc_heap.c       | 11 ++++++++++-
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
index 079a385630..95720a61c0 100644
--- a/lib/eal/common/eal_common_thread.c
+++ b/lib/eal/common/eal_common_thread.c
@@ -252,6 +252,10 @@ static int ctrl_thread_init(void *arg)
 	struct rte_thread_ctrl_params *params = arg;
 
 	__rte_thread_init(rte_lcore_id(), cpuset);
+	/* Set control thread socket ID to SOCKET_ID_ANY as
+	 * control threads may be scheduled on any NUMA node.
+	 */
+	RTE_PER_LCORE(_socket_id) = SOCKET_ID_ANY;
 	params->ret = rte_thread_set_affinity_by_id(rte_thread_self(), cpuset);
 	if (params->ret != 0) {
 		__atomic_store_n(&params->ctrl_thread_status,
diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index d25bdc98f9..5de2317827 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -716,6 +716,15 @@ malloc_get_numa_socket(void)
 		if (conf->socket_mem[socket_id] != 0)
 			return socket_id;
 	}
-
+	/* We couldn't quickly find a NUMA node where memory
+	 * was available, so fall back to using main lcore socket ID.
+	 */
+	socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
+	/* Main lcore socket ID may be SOCKET_ID_ANY in cases when main
+	 * lcore thread is affinitized to multiple NUMA nodes.
+	 */
+	if (socket_id != (unsigned int)SOCKET_ID_ANY)
+		return socket_id;
+	/* Failed to find meaningful socket ID, so just use the first one available */
 	return rte_socket_id_by_idx(0);
 }
--
2.25.1
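Taken together, the two hunks leave malloc_get_numa_socket() with a
four-step fallback. The outline below is a paraphrase reconstructed
around the hunks above, not the literal upstream file; it relies on the
EAL-internal headers eal_private.h and eal_internal_cfg.h.

#include <rte_lcore.h>

#include "eal_internal_cfg.h"	/* struct internal_config */
#include "eal_private.h"	/* eal_get_internal_configuration() */

static unsigned int
malloc_get_numa_socket(void)
{
	const struct internal_config *conf = eal_get_internal_configuration();
	unsigned int socket_id = rte_socket_id();
	unsigned int idx;

	/* 1. Trust the calling thread's own socket ID when it is known. */
	if (socket_id != (unsigned int)SOCKET_ID_ANY)
		return socket_id;

	/* 2. Otherwise prefer the first socket with reserved memory. */
	for (idx = 0; idx < rte_socket_count(); idx++) {
		socket_id = rte_socket_id_by_idx(idx);
		if (conf->socket_mem[socket_id] != 0)
			return socket_id;
	}

	/* 3. Fall back to the main lcore's socket, unless the main lcore
	 * spans multiple NUMA nodes and its socket ID is SOCKET_ID_ANY.
	 */
	socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
	if (socket_id != (unsigned int)SOCKET_ID_ANY)
		return socket_id;

	/* 4. Last resort: the first socket that exists. */
	return rte_socket_id_by_idx(0);
}

With the eal_common_thread.c hunk forcing control threads to
SOCKET_ID_ANY, step 1 no longer returns a stale socket ID for them, and
steps 2-4 choose a node where memory actually exists.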