patches for DPDK stable branches
From: "You, KaisenX" <kaisenx.you@intel.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "Zhou, YidingX" <yidingx.zhou@intel.com>,
	"thomas@monjalon.net" <thomas@monjalon.net>,
	"david.marchand@redhat.com" <david.marchand@redhat.com>,
	"Matz, Olivier" <olivier.matz@6wind.com>,
	"ferruh.yigit@amd.com" <ferruh.yigit@amd.com>,
	"zhoumin@loongson.cn" <zhoumin@loongson.cn>,
	"stable@dpdk.org" <stable@dpdk.org>
Subject: RE: [PATCH v7] enhance NUMA affinity heuristic
Date: Fri, 26 May 2023 06:44:07 +0000	[thread overview]
Message-ID: <SJ0PR11MB6765D2DB4A0150F0D574023BE1479@SJ0PR11MB6765.namprd11.prod.outlook.com> (raw)
In-Reply-To: <79dfed13-a3b9-41a2-05d5-dc05531f9e79@intel.com>



> -----Original Message-----
> From: Burakov, Anatoly <anatoly.burakov@intel.com>
> Sent: May 23, 2023 18:45
> To: You, KaisenX <kaisenx.you@intel.com>; dev@dpdk.org
> Cc: Zhou, YidingX <yidingx.zhou@intel.com>; thomas@monjalon.net;
> david.marchand@redhat.com; Matz, Olivier <olivier.matz@6wind.com>;
> ferruh.yigit@amd.com; zhoumin@loongson.cn; stable@dpdk.org
> Subject: Re: [PATCH v7] enhance NUMA affinity heuristic
> 
> On 5/23/2023 3:50 AM, Kaisen You wrote:
> > When a DPDK application is started on only one NUMA node, memory is
> > allocated for only one socket. When an interrupt thread uses memory,
> > memory may not be found on the socket where the interrupt thread is
> > currently located, and memory has to be reallocated from hugepages;
> > this operation leads to performance degradation.
> >
> > Fixes: 705356f0811f ("eal: simplify control thread creation")
> > Fixes: 770d41bf3309 ("malloc: fix allocation with unknown socket ID")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Kaisen You <kaisenx.you@intel.com>
> 
> Hi You,
> 
> I've suggested comment rewordings based on my understanding of the issue.
> 
> > ---
> > Changes since v6:
> > - New explanation for easy understanding,
> >
> > Changes since v5:
> > - Add comments to the code,
> >
> > Changes since v4:
> > - mod the patch title,
> >
> > Changes since v3:
> > - add the assignment of socket_id in thread initialization,
> >
> > Changes since v2:
> > - add uncommitted local change and fix compilation,
> >
> > Changes since v1:
> > - accommodate configurations with main lcore running on multiple
> >    physical cores belonging to different NUMA nodes,
> > ---
> >   lib/eal/common/eal_common_thread.c | 6 ++++++
> >   lib/eal/common/malloc_heap.c       | 9 +++++++++
> >   2 files changed, 15 insertions(+)
> >
> > diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
> > index 079a385630..6479b66da1 100644
> > --- a/lib/eal/common/eal_common_thread.c
> > +++ b/lib/eal/common/eal_common_thread.c
> > @@ -252,6 +252,12 @@ static int ctrl_thread_init(void *arg)
> >   	struct rte_thread_ctrl_params *params = arg;
> >
> >   	__rte_thread_init(rte_lcore_id(), cpuset);
> > +	/* set the value of the per-core variable _socket_id to SOCKET_ID_ANY.
> > +	 * Satisfy the judgment condition when threads find memory.
> > +	 * If SOCKET_ID_ANY is not specified, the thread may go to a node with
> > +	 * unallocated memory in a subsequent memory search.
> 
> I suggest a different comment wording:
> 
> Set control thread socket ID to SOCKET_ID_ANY as control threads may be
> scheduled on any NUMA node.
> 
> > +	 */
> > +	RTE_PER_LCORE(_socket_id) = SOCKET_ID_ANY;
> >   	params->ret = rte_thread_set_affinity_by_id(rte_thread_self(), cpuset);
> >   	if (params->ret != 0) {
> >   		__atomic_store_n(&params->ctrl_thread_status,
> > diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
> > index d25bdc98f9..6d37f8afee 100644
> > --- a/lib/eal/common/malloc_heap.c
> > +++ b/lib/eal/common/malloc_heap.c
> > @@ -716,6 +716,15 @@ malloc_get_numa_socket(void)
> >   		if (conf->socket_mem[socket_id] != 0)
> >   			return socket_id;
> >   	}
> > +	/* Trying to allocate memory on the main lcore numa node.
> > +	 * especially when the DPDK application is started only on one numa node.
> > +	 */
> 
> I suggest the following comment wording:
> 
> We couldn't quickly find a NUMA node where memory was available, so
> fall back to using the main lcore socket ID.
> 
> > +	socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
> > +	/* When the socket_id obtained in the main lcore numa is SOCKET_ID_ANY,
> > +	 * The probability of finding memory on rte_socket_id_by_idx(0) is higher.
> > +	 */
> 
> I suggest the following comment wording:
> 
> Main lcore socket ID may be SOCKET_ID_ANY in cases when main lcore
> thread is affinitized to multiple NUMA nodes.
> 
> > +	if (socket_id != (unsigned int)SOCKET_ID_ANY)
> > +		return socket_id;
> >
> 
> I suggest adding comment here:
> 
> Failed to find a meaningful socket ID, so just use the first one available.
> 
> >   	return rte_socket_id_by_idx(0);
> >   }
> 
> I believe these comments offer a better explanation of why we are doing
> the things we do here.
> 
> Whether or not you decide to take these corrections on board,
> 
> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>

Thank you for your ack and suggestions; I will adopt your suggested wording in the v8 version.
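
For reference, here is a rough sketch of how malloc_get_numa_socket() would
read with your wording applied. This is only a sketch, not the final v8
patch; the lines above the new hunk are reproduced from the existing
malloc_heap.c for context and may not match the tree exactly:

static unsigned int
malloc_get_numa_socket(void)
{
	const struct internal_config *conf = eal_get_internal_configuration();
	unsigned int socket_id = rte_socket_id();
	unsigned int idx;

	/* known socket ID: use it directly */
	if (socket_id != (unsigned int)SOCKET_ID_ANY)
		return socket_id;

	/* for control threads, return first socket where memory is available */
	for (idx = 0; idx < rte_socket_count(); idx++) {
		socket_id = rte_socket_id_by_idx(idx);
		if (conf->socket_mem[socket_id] != 0)
			return socket_id;
	}

	/* We couldn't quickly find a NUMA node where memory was available,
	 * so fall back to using main lcore socket ID.
	 */
	socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());

	/* Main lcore socket ID may be SOCKET_ID_ANY in cases when main lcore
	 * thread is affinitized to multiple NUMA nodes.
	 */
	if (socket_id != (unsigned int)SOCKET_ID_ANY)
		return socket_id;

	/* Failed to find a meaningful socket ID, so just use the first one
	 * available.
	 */
	return rte_socket_id_by_idx(0);
}

The eal_common_thread.c hunk would keep the same code, with only the comment
replaced by your suggested one-line wording.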
> 
> --
> Thanks,
> Anatoly



Thread overview: 31+ messages
     [not found] <20221221104858.296530-1-david.marchand@redhat.com>
2023-01-31 15:05 ` [PATCH v4] net/iavf:enhance " Kaisen You
2023-01-31 16:05   ` Thomas Monjalon
2023-02-01  5:32     ` You, KaisenX
2023-02-01 12:20 ` [PATCH v5] enhance " Kaisen You
2023-02-15 14:22   ` Burakov, Anatoly
2023-02-15 14:47     ` Burakov, Anatoly
2023-02-16  2:50     ` You, KaisenX
2023-03-03 14:07       ` Thomas Monjalon
2023-03-09  1:58         ` You, KaisenX
2023-04-13  0:56           ` You, KaisenX
2023-04-19 12:16             ` Thomas Monjalon
2023-04-21  2:34               ` You, KaisenX
2023-04-21  8:12                 ` Thomas Monjalon
2023-04-23  6:52                   ` You, KaisenX
2023-04-23  8:57                     ` You, KaisenX
2023-04-23 13:19                       ` Thomas Monjalon
2023-04-25  5:16   ` [PATCH v6] " Kaisen You
2023-04-27  6:57     ` Thomas Monjalon
2023-05-16  5:19       ` You, KaisenX
2023-05-23  2:50     ` [PATCH v7] " Kaisen You
2023-05-23 10:44       ` Burakov, Anatoly
2023-05-26  6:44         ` You, KaisenX [this message]
2023-05-23 12:45       ` Burakov, Anatoly
2023-05-26  6:50       ` [PATCH v8] " Kaisen You
2023-05-26  8:45       ` Kaisen You
2023-05-26 14:44         ` Burakov, Anatoly
2023-05-26 17:50           ` Stephen Hemminger
2023-05-29 10:37             ` Burakov, Anatoly
2023-06-01 14:42         ` David Marchand
2023-06-06 14:04           ` Thomas Monjalon
2023-06-12  9:36           ` Burakov, Anatoly
