From: "You, KaisenX" <kaisenx.you@intel.com>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	"Zhou, YidingX" <yidingx.zhou@intel.com>,
	"david.marchand@redhat.com" <david.marchand@redhat.com>,
	"Matz, Olivier" <olivier.matz@6wind.com>,
	"ferruh.yigit@amd.com" <ferruh.yigit@amd.com>,
	"zhoumin@loongson.cn" <zhoumin@loongson.cn>,
	"stable@dpdk.org" <stable@dpdk.org>,
	"Richardson, Bruce" <bruce.richardson@intel.com>,
	"jerinj@marvell.com" <jerinj@marvell.com>,
	"Burakov, Anatoly" <anatoly.burakov@intel.com>
Subject: RE: [PATCH v5] enhance NUMA affinity heuristic
Date: Thu, 13 Apr 2023 00:56:19 +0000	[thread overview]
Message-ID: <SJ0PR11MB676546A9A503ED05C5C6A2DCE1989@SJ0PR11MB6765.namprd11.prod.outlook.com> (raw)
In-Reply-To: <SJ0PR11MB67655B223283E17E3970FDB9E1B59@SJ0PR11MB6765.namprd11.prod.outlook.com>



> -----Original Message-----
> From: You, KaisenX
> Sent: March 9, 2023 9:58
> To: Thomas Monjalon <thomas@monjalon.net>
> Cc: dev@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>;
> david.marchand@redhat.com; Matz, Olivier <olivier.matz@6wind.com>;
> ferruh.yigit@amd.com; zhoumin@loongson.cn; stable@dpdk.org;
> Richardson, Bruce <bruce.richardson@intel.com>; jerinj@marvell.com;
> Burakov, Anatoly <anatoly.burakov@intel.com>
> Subject: RE: [PATCH v5] enhance NUMA affinity heuristic
> 
> 
> 
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: March 3, 2023 22:07
> > To: Burakov, Anatoly <anatoly.burakov@intel.com>; You, KaisenX
> > <kaisenx.you@intel.com>
> > Cc: dev@dpdk.org; Zhou, YidingX <yidingx.zhou@intel.com>;
> > david.marchand@redhat.com; Matz, Olivier <olivier.matz@6wind.com>;
> > ferruh.yigit@amd.com; zhoumin@loongson.cn; stable@dpdk.org;
> > Richardson, Bruce <bruce.richardson@intel.com>; jerinj@marvell.com
> > Subject: Re: [PATCH v5] enhance NUMA affinity heuristic
> >
> > I'm not comfortable with this patch.
> >
> > First, there is no comment in the code which helps to understand the logic.
> > Second, I'm afraid changing the value of the per-core variable
> > _socket_id may have an impact on some applications.
> >
Hi Thomas, I'm sorry to bother you again, but we cannot think of a better
solution for now. Could you please give me some suggestions? I will then
modify the patch accordingly.

> Thank you for your reply.
> First, regarding the comments, I can submit a new patch that adds comments
> to help explain the logic.
> Second, if the value of the per-core variable _socket_id is not changed, then in
> /lib/eal/common/malloc_heap.c
> malloc_get_numa_socket(void)
> {
>         const struct internal_config *conf = eal_get_internal_configuration();
>         unsigned int socket_id = rte_socket_id();   // the return value of rte_socket_id() is 1
>         unsigned int idx;
> 
>         if (socket_id != (unsigned int)SOCKET_ID_ANY)
>                 return socket_id;    //so return here
> 
> So malloc_get_numa_socket() returns early here, with the socket_id of a
> socket that has no allocated memory.
> 
> If you have a better solution, I can modify it.
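
To make the failure mode concrete, here is a minimal hypothetical sketch of the
scenario described above (a dual-socket system where all EAL lcores and all
reserved memory are on socket 0, while a control thread's cpuset resolves to
socket 1); the function name, sizes and messages are illustrative and not code
from this patch:

    /*
     * Hypothetical sketch only: all EAL lcores and reserved memory are on
     * socket 0, but the calling control thread's cpuset resolves to socket 1.
     */
    #include <stdio.h>
    #include <rte_lcore.h>
    #include <rte_malloc.h>

    static void
    ctrl_thread_alloc_sketch(void)
    {
    	/* Without the fix, rte_socket_id() reports socket 1 for this thread. */
    	unsigned int socket_id = rte_socket_id();

    	/*
    	 * rte_malloc() without an explicit socket goes through
    	 * malloc_get_numa_socket(), which returns early with socket 1;
    	 * that socket has no DPDK memory, so the allocation fails.
    	 */
    	void *buf = rte_malloc(NULL, 2048, 0);
    	if (buf == NULL)
    		printf("allocation failed on socket %u\n", socket_id);
    	rte_free(buf);
    }
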
> > 16/02/2023 03:50, You, KaisenX:
> > > From: Burakov, Anatoly <anatoly.burakov@intel.com>
> > > > On 2/1/2023 12:20 PM, Kaisen You wrote:
> > > > > Trying to allocate memory on the first detected NUMA node has
> > > > > less chance of finding memory actually available than allocating
> > > > > on the main lcore's NUMA node (especially when the DPDK
> > > > > application is started on only one NUMA node).
> > > > >
> > > > > Fixes: 705356f0811f ("eal: simplify control thread creation")
> > > > > Fixes: bb0bd346d5c1 ("eal: suggest using --lcores option")
> > > > > Cc: stable@dpdk.org
> > > > >
> > > > > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > > > > Signed-off-by: Kaisen You <kaisenx.you@intel.com>
> > > > > ---
> > > > > Changes since v4:
> > > > > - mod the patch title,
> > > > >
> > > > > Changes since v3:
> > > > > - add the assignment of socket_id in thread initialization,
> > > > >
> > > > > Changes since v2:
> > > > > - add uncommitted local change and fix compilation,
> > > > >
> > > > > Changes since v1:
> > > > > - accommodate configurations with the main lcore running on multiple
> > > > >    physical cores belonging to different NUMA nodes,
> > > > > ---
> > > > >   lib/eal/common/eal_common_thread.c | 1 +
> > > > >   lib/eal/common/malloc_heap.c       | 4 ++++
> > > > >   2 files changed, 5 insertions(+)
> > > > >
> > > > > diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
> > > > > index 38d83a6885..21bff971f8 100644
> > > > > --- a/lib/eal/common/eal_common_thread.c
> > > > > +++ b/lib/eal/common/eal_common_thread.c
> > > > > @@ -251,6 +251,7 @@ static void *ctrl_thread_init(void *arg)
> > > > >   	void *routine_arg = params->arg;
> > > > >
> > > > >   	__rte_thread_init(rte_lcore_id(), cpuset);
> > > > > +	RTE_PER_LCORE(_socket_id) = SOCKET_ID_ANY;
> > > > >   	params->ret = rte_thread_set_affinity_by_id(rte_thread_self(), cpuset);
> > > > >   	if (params->ret != 0) {
> > > > >   		__atomic_store_n(&params->ctrl_thread_status,
> > > > > diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
> > > > > index d7c410b786..3ee19aee15 100644
> > > > > --- a/lib/eal/common/malloc_heap.c
> > > > > +++ b/lib/eal/common/malloc_heap.c
> > > > > @@ -717,6 +717,10 @@ malloc_get_numa_socket(void)
> > > > >   			return socket_id;
> > > > >   	}
> > > > >
> > > > > +	socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
> > > > > +	if (socket_id != (unsigned int)SOCKET_ID_ANY)
> > > > > +		return socket_id;
> > > > > +
> > > > >   	return rte_socket_id_by_idx(0);
> > > > >   }
> > > > >
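
For readability, below is a rough consolidation of how malloc_get_numa_socket()
would look with the hunk above applied. Only the lines added by the patch are
taken from the diff; the surrounding body is reconstructed from the excerpts
quoted in this thread and may not match the upstream function line for line:

    /* Sketch of lib/eal/common/malloc_heap.c with the proposed fallback. */
    static unsigned int
    malloc_get_numa_socket(void)
    {
    	const struct internal_config *conf = eal_get_internal_configuration();
    	unsigned int socket_id = rte_socket_id();
    	unsigned int idx;

    	/* Threads with a valid per-lcore _socket_id keep it. */
    	if (socket_id != (unsigned int)SOCKET_ID_ANY)
    		return socket_id;

    	/* Otherwise prefer a socket that actually has reserved memory
    	 * (paraphrased; see the upstream function for the exact check). */
    	for (idx = 0; idx < rte_socket_count(); idx++) {
    		socket_id = rte_socket_id_by_idx(idx);
    		if (conf->socket_mem[socket_id] != 0)
    			return socket_id;
    	}

    	/* Fallback added by this patch: follow the main lcore's socket. */
    	socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
    	if (socket_id != (unsigned int)SOCKET_ID_ANY)
    		return socket_id;

    	/* Last resort: the first detected socket. */
    	return rte_socket_id_by_idx(0);
    }
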
> > > >
> > > > I may be lacking context, but I don't quite get the suggested change.
> > > > From what I understand, the original issue has to do with assigning
> > > > lcore cpusets in such a way that an lcore ends up having two
> > > > socket IDs (because it's been assigned to CPUs on different
> > > > sockets). Why is this allowed in the first place?
> > > > It seems like a user error to me, as it breaks many of the
> > > > fundamental assumptions DPDK makes.
> > > >
> > > In a dual-socket system, if all used cores and the NIC are on socket 1,
> > > no memory is allocated for socket 0. This is done to optimize memory
> > > consumption.
> > >
> > > I agree with you. If the startup parameters can ensure that both
> > > sockets allocate memory, there will be no problem.
> > > However, because CPU topologies differ between systems, it is difficult
> > > for users to ensure that the startup parameters cover CPUs on both NUMA
> > > nodes.
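
As one possible illustration of such startup parameters, the sketch below
reserves memory on both sockets and pins lcores to CPUs on both NUMA nodes;
the core IDs and memory sizes are hypothetical and depend on the machine's
topology:

    #include <rte_eal.h>

    int
    main(void)
    {
    	/* Hypothetical example: cores 0-1 on socket 0, cores 64-65 on
    	 * socket 1, 1 GB of hugepage memory reserved per socket. */
    	char *argv[] = {
    		"app",
    		"-l", "0,1,64,65",
    		"--socket-mem", "1024,1024",
    	};
    	int argc = sizeof(argv) / sizeof(argv[0]);

    	if (rte_eal_init(argc, argv) < 0)
    		return -1;
    	/* ... application code ... */
    	return 0;
    }
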
> > >
> > > > I'm fine with using main lcore socket for control threads, I just
> > > > don't think the `socket_id != SOCKET_ID_ANY` thing should be
> > > > checked here, because it apparently tries to compensate for a
> > > > problem with cpuset of the main thread, which shouldn't have
> > > > happened to begin with.
> > > >
> > > This issue was explained in detail in the discussion of the v1 version
> > > of this patch. I will forward the previous email to you; it should also
> > > help clarify the purpose of submitting this patch.
> >
> >



Thread overview: 31+ messages
     [not found] <20221221104858.296530-1-david.marchand@redhat.com>
2023-01-31 15:05 ` [PATCH v4] net/iavf:enhance " Kaisen You
2023-01-31 16:05   ` Thomas Monjalon
2023-02-01  5:32     ` You, KaisenX
2023-02-01 12:20 ` [PATCH v5] enhance " Kaisen You
2023-02-15 14:22   ` Burakov, Anatoly
2023-02-15 14:47     ` Burakov, Anatoly
2023-02-16  2:50     ` You, KaisenX
2023-03-03 14:07       ` Thomas Monjalon
2023-03-09  1:58         ` You, KaisenX
2023-04-13  0:56           ` You, KaisenX [this message]
2023-04-19 12:16             ` Thomas Monjalon
2023-04-21  2:34               ` You, KaisenX
2023-04-21  8:12                 ` Thomas Monjalon
2023-04-23  6:52                   ` You, KaisenX
2023-04-23  8:57                     ` You, KaisenX
2023-04-23 13:19                       ` Thomas Monjalon
2023-04-25  5:16   ` [PATCH v6] " Kaisen You
2023-04-27  6:57     ` Thomas Monjalon
2023-05-16  5:19       ` You, KaisenX
2023-05-23  2:50     ` [PATCH v7] " Kaisen You
2023-05-23 10:44       ` Burakov, Anatoly
2023-05-26  6:44         ` You, KaisenX
2023-05-23 12:45       ` Burakov, Anatoly
2023-05-26  6:50       ` [PATCH v8] " Kaisen You
2023-05-26  8:45       ` Kaisen You
2023-05-26 14:44         ` Burakov, Anatoly
2023-05-26 17:50           ` Stephen Hemminger
2023-05-29 10:37             ` Burakov, Anatoly
2023-06-01 14:42         ` David Marchand
2023-06-06 14:04           ` Thomas Monjalon
2023-06-12  9:36           ` Burakov, Anatoly
