From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: Kaisen You <kaisenx.you@intel.com>, <dev@dpdk.org>
Cc: <yidingx.zhou@intel.com>, <thomas@monjalon.net>,
<david.marchand@redhat.com>, <olivier.matz@6wind.com>,
<ferruh.yigit@amd.com>, <zhoumin@loongson.cn>, <stable@dpdk.org>
Subject: Re: [PATCH v8] enhance NUMA affinity heuristic
Date: Fri, 26 May 2023 15:44:15 +0100
Message-ID: <63f43dcf-0a03-d1f8-5120-39714cc712d9@intel.com>
In-Reply-To: <20230526084535.374803-1-kaisenx.you@intel.com>
On 5/26/2023 9:45 AM, Kaisen You wrote:
> When a DPDK application is started on only one NUMA node, memory is
> allocated for only one socket. When interrupt threads use memory,
> memory may not be found on the socket where the interrupt thread
> is currently located, and memory has to be reallocated from hugepages;
> this operation leads to performance degradation.
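
To restate the problem for anyone skimming the archives: a minimal
sketch of the failure mode, assuming a control thread that allocates
through the public rte_malloc API (my own illustration, not part of
the patch):

  #include <rte_lcore.h>
  #include <rte_malloc.h>

  /* Sketch: a control thread allocating with no explicit socket.
   * The heap derives placement from the caller's socket ID; before
   * this patch, a control thread running on a NUMA node with no
   * reserved hugepage memory would trigger a costly fallback
   * allocation on another node.
   */
  static int
  ctrl_thread_work(void *arg __rte_unused)
  {
          void *buf = rte_malloc(NULL, 4096, RTE_CACHE_LINE_SIZE);

          if (buf == NULL)
                  return -1;
          /* ... use buf ... */
          rte_free(buf);
          return 0;
  }
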
>
> Fixes: 705356f0811f ("eal: simplify control thread creation")
> Fixes: 770d41bf3309 ("malloc: fix allocation with unknown socket ID")
> Cc: stable@dpdk.org
>
> Signed-off-by: Kaisen You <kaisenx.you@intel.com>
> ---
> Changes since v7:
> - Update comment,
>
> Changes since v6:
> - New explanation for easy understanding,
>
> Changes since v5:
> - Add comments to the code,
>
> Changes since v4:
> - modify the patch title,
>
> Changes since v3:
> - add the assignment of socket_id in thread initialization,
>
> Changes since v2:
> - add uncommitted local change and fix compilation,
>
> Changes since v1:
> - accommodate configurations with the main lcore running on multiple
> physical cores belonging to different NUMA nodes,
> ---
> lib/eal/common/eal_common_thread.c | 4 ++++
> lib/eal/common/malloc_heap.c | 11 ++++++++++-
> 2 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
> index 079a385630..22480aa61f 100644
> --- a/lib/eal/common/eal_common_thread.c
> +++ b/lib/eal/common/eal_common_thread.c
> @@ -252,6 +252,10 @@ static int ctrl_thread_init(void *arg)
> struct rte_thread_ctrl_params *params = arg;
>
> __rte_thread_init(rte_lcore_id(), cpuset);
> + /* Set control thread socket ID to SOCKET_ID_ANY as control
> + * threads may be scheduled on any NUMA node.
> + */
> + RTE_PER_LCORE(_socket_id) = SOCKET_ID_ANY;
> params->ret = rte_thread_set_affinity_by_id(rte_thread_self(), cpuset);
> if (params->ret != 0) {
> __atomic_store_n(&params->ctrl_thread_status,
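
Setting the control thread's socket ID to SOCKET_ID_ANY looks right
to me: control threads may float across NUMA nodes, so pinning their
per-lcore socket ID at creation time was misleading. For what it's
worth, callers that do care about placement can always bypass the
heuristic explicitly; a hedged sketch using the public API (not part
of the patch):

  #include <rte_lcore.h>
  #include <rte_malloc.h>

  /* Pin an allocation to the main lcore's socket, regardless of
   * which thread calls this.
   */
  static void *
  alloc_on_main_socket(size_t len)
  {
          int socket = (int)rte_lcore_to_socket_id(rte_get_main_lcore());

          return rte_malloc_socket(NULL, len, RTE_CACHE_LINE_SIZE, socket);
  }
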
> diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
> index d25bdc98f9..d833a71e7a 100644
> --- a/lib/eal/common/malloc_heap.c
> +++ b/lib/eal/common/malloc_heap.c
> @@ -716,7 +716,16 @@ malloc_get_numa_socket(void)
> if (conf->socket_mem[socket_id] != 0)
> return socket_id;
> }
> -
> + /* We couldn't find quickly find a NUMA node where memory was available,
typo: `find quickly find`, should probably be `quickly find`
Can be fixed on apply.
Reviewed-by: Anatoly Burakov <anatoly.burakov@intel.com>
> + * so fall back to using main lcore socket ID.
> + */
> + socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
> + /* Main lcore socket ID may be SOCKET_ID_ANY in cases when main lcore
> + * thread is affinitized to multiple NUMA nodes.
> + */
> + if (socket_id != (unsigned int)SOCKET_ID_ANY)
> + return socket_id;
> + /* Failed to find meaningful socket ID, so just use the first one available */
> return rte_socket_id_by_idx(0);
> }
>
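
To summarize for the archives, the socket resolution order in
malloc_get_numa_socket() with this patch applied becomes, roughly
(a reconstructed sketch; steps 1-2 are pre-existing code abbreviated
here, only steps 3-4 are new):

  static unsigned int
  malloc_get_numa_socket(void)
  {
          unsigned int socket_id = rte_socket_id();

          /* 1. the calling thread runs on a known NUMA node: use it */
          if (socket_id != (unsigned int)SOCKET_ID_ANY)
                  return socket_id;

          /* 2. otherwise, the first socket with memory reserved via
           * --socket-mem (loop over rte_socket_count(), abbreviated)
           */

          /* 3. new: fall back to the main lcore's socket ID */
          socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
          if (socket_id != (unsigned int)SOCKET_ID_ANY)
                  return socket_id;

          /* 4. last resort: the first socket available */
          return rte_socket_id_by_idx(0);
  }
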
--
Thanks,
Anatoly