From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Monjalon
To: "Burakov, Anatoly", "You, KaisenX"
Cc: "dev@dpdk.org", "Zhou, YidingX", "david.marchand@redhat.com",
 "Matz, Olivier", "ferruh.yigit@amd.com", "zhoumin@loongson.cn",
 "stable@dpdk.org", bruce.richardson@intel.com, jerinj@marvell.com
Subject: Re: [PATCH v5] enhance NUMA affinity heuristic
Date: Fri, 03 Mar 2023 15:07:14 +0100
Message-ID: <4598779.O6GEm4j1yj@thomas>
References: <20221221104858.296530-1-david.marchand@redhat.com>
 <6014e44e-5dd5-365f-2a8f-0ec37f562ca8@intel.com>

I'm not comfortable with this patch.

First, there is no comment in the code which helps to understand the logic.
Second, I'm afraid changing the value of the per-lcore variable _socket_id
may have an impact on some applications.

16/02/2023 03:50, You, KaisenX:
> From: Burakov, Anatoly
> > On 2/1/2023 12:20 PM, Kaisen You wrote:
> > > Trying to allocate memory on the first detected NUMA node has less
> > > chance to find some memory actually available than on the main
> > > lcore NUMA node (especially when the DPDK application is started
> > > only on one NUMA node).
> > >
> > > Fixes: 705356f0811f ("eal: simplify control thread creation")
> > > Fixes: bb0bd346d5c1 ("eal: suggest using --lcores option")
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: David Marchand
> > > Signed-off-by: Kaisen You
> > > ---
> > > Changes since v4:
> > > - modify the patch title,
> > >
> > > Changes since v3:
> > > - add the assignment of socket_id in thread initialization,
> > >
> > > Changes since v2:
> > > - add uncommitted local change and fix compilation,
> > >
> > > Changes since v1:
> > > - accommodate configurations where the main lcore runs on multiple
> > >   physical cores belonging to different NUMA nodes,
> > > ---
> > >  lib/eal/common/eal_common_thread.c | 1 +
> > >  lib/eal/common/malloc_heap.c       | 4 ++++
> > >  2 files changed, 5 insertions(+)
> > >
> > > diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
> > > index 38d83a6885..21bff971f8 100644
> > > --- a/lib/eal/common/eal_common_thread.c
> > > +++ b/lib/eal/common/eal_common_thread.c
> > > @@ -251,6 +251,7 @@ static void *ctrl_thread_init(void *arg)
> > >  	void *routine_arg = params->arg;
> > >
> > >  	__rte_thread_init(rte_lcore_id(), cpuset);
> > > +	RTE_PER_LCORE(_socket_id) = SOCKET_ID_ANY;
> > >  	params->ret = rte_thread_set_affinity_by_id(rte_thread_self(), cpuset);
> > >  	if (params->ret != 0) {
> > >  		__atomic_store_n(&params->ctrl_thread_status,
> > > diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
> > > index d7c410b786..3ee19aee15 100644
> > > --- a/lib/eal/common/malloc_heap.c
> > > +++ b/lib/eal/common/malloc_heap.c
> > > @@ -717,6 +717,10 @@ malloc_get_numa_socket(void)
> > >  		return socket_id;
> > >  	}
> > >
> > > +	socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
> > > +	if (socket_id != (unsigned int)SOCKET_ID_ANY)
> > > +		return socket_id;
> > > +
> > >  	return rte_socket_id_by_idx(0);
> > >  }
> > >
> >
> > I may be lacking context, but I don't quite get the suggested change.
> > From what I understand, the original issue has to do with assigning lcore
> > cpusets in such a way that an lcore ends up having two socket IDs (because
> > it has been assigned to CPUs on different sockets). Why is this allowed in
> > the first place? It seems like a user error to me, as it breaks many of the
> > fundamental assumptions DPDK makes.
> >
> In a dual-socket system, if all used cores and the NIC are on socket 1,
> no memory is allocated for socket 0. This is done to optimize memory
> consumption.
>
> I agree with you. If the startup parameters can ensure that memory is
> allocated on both sockets, there will be no problem.
> However, due to the different CPU topologies of different systems,
> it is difficult for users to ensure that the startup parameters cover
> both CPU nodes.
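
To make the resulting logic easier to follow, here is a rough paraphrase of
malloc_get_numa_socket() with this patch applied. It is a sketch, not the
actual code: the function is renamed, and the existing fallback steps that
the diff above does not show are elided.

#include <rte_lcore.h>
#include <rte_memory.h>

/* Sketch of the allocation-socket heuristic after this patch
 * (paraphrased from malloc_heap.c, not the real function). */
static unsigned int
numa_socket_for_alloc(void)
{
	unsigned int socket_id = rte_socket_id();

	/* 1. Prefer the calling thread's own socket when it is known. */
	if (socket_id != (unsigned int)SOCKET_ID_ANY)
		return socket_id;

	/* ... existing fallbacks elided (not shown in the diff) ... */

	/* 2. New in this patch: fall back to the main lcore's socket. */
	socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
	if (socket_id != (unsigned int)SOCKET_ID_ANY)
		return socket_id;

	/* 3. Last resort: the first detected socket. */
	return rte_socket_id_by_idx(0);
}

In the scenario described above (all lcores and the NIC on socket 1), step 2
is what keeps allocations on the main lcore's socket instead of an arbitrary
first socket that may have no memory reserved.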
>
> > I'm fine with using the main lcore socket for control threads, I just
> > don't think the `socket_id != SOCKET_ID_ANY` check should be done here,
> > because it apparently tries to compensate for a problem with the cpuset
> > of the main thread, which shouldn't have happened to begin with.
> >
> This issue has been explained in detail in the discussion of the v1 version
> of this patch. I will forward the previous email to you; its content should
> also make the purpose of submitting this patch clearer.
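
To make the concern about _socket_id at the top of this mail concrete, below
is a hypothetical application snippet (per_socket_events and ctrl_thread_work
are invented names, purely for illustration). rte_socket_id() returns the
per-lcore _socket_id that this patch forces to SOCKET_ID_ANY in control
threads, and applications sometimes use that value as an index into
per-socket state.

#include <stdint.h>
#include <rte_config.h>
#include <rte_lcore.h>
#include <rte_memory.h>

/* Hypothetical per-socket counters indexed by rte_socket_id(). */
static uint64_t per_socket_events[RTE_MAX_NUMA_NODES];

/* Body of a control thread in a hypothetical application. */
static void *
ctrl_thread_work(void *arg)
{
	unsigned int socket = rte_socket_id();

	(void)arg;

	/* Before this patch, a control thread whose cpuset stayed within one
	 * socket reported that socket here; with this patch it reports
	 * SOCKET_ID_ANY (-1), so using the value directly as an array index
	 * is out of bounds unless the application already guards against it,
	 * for example like this: */
	if (socket == (unsigned int)SOCKET_ID_ANY)
		socket = rte_lcore_to_socket_id(rte_get_main_lcore());

	per_socket_events[socket]++;
	return NULL;
}

Whether existing applications already handle SOCKET_ID_ANY in their control
threads is exactly what seems hard to guarantee.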