DPDK patches and discussions
From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Don Wallwork" <donw@xsightlabs.com>,
	"Stephen Hemminger" <stephen@networkplumber.org>,
	"Anatoly Burakov" <anatoly.burakov@intel.com>,
	"Dmitry Kozlyuk" <dmitry.kozliuk@gmail.com>,
	"Bruce Richardson" <bruce.richardson@intel.com>
Cc: <dev@dpdk.org>
Subject: RE: [RFC] eal: allow worker lcore stacks to be allocated from hugepage memory
Date: Sat, 30 Apr 2022 09:55:24 +0200	[thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D87012@smartserver.smartshare.dk>
In-Reply-To: <f21b5915-d9c4-d7b9-9563-20adf096acbd@xsightlabs.com>

> From: Don Wallwork [mailto:donw@xsightlabs.com]
> Sent: Friday, 29 April 2022 20.52
> 
> On 4/27/2022 4:17 AM, Morten Brørup wrote:
> > +CC: EAL and Memory maintainers.
> >
> >> From: Don Wallwork [mailto:donw@xsightlabs.com]
> >> Sent: Tuesday, 26 April 2022 23.26
> >>
> >> On 4/26/2022 5:21 PM, Stephen Hemminger wrote:
> >>> On Tue, 26 Apr 2022 17:01:18 -0400
> >>> Don Wallwork <donw@xsightlabs.com> wrote:
> >>>
> >>>> On 4/26/2022 10:58 AM, Stephen Hemminger wrote:
> >>>>> On Tue, 26 Apr 2022 08:19:59 -0400
> >>>>> Don Wallwork <donw@xsightlabs.com> wrote:
> >>>>>
> >>>>>> Add support for using hugepages for worker lcore stack memory.
> >>>>>> The intent is to improve performance by reducing stack memory
> >>>>>> related TLB misses and also by using memory local to the NUMA
> >>>>>> node of each lcore.
> > This certainly seems like a good idea!
> >
> > However, I wonder: Does the O/S assign memory local to the NUMA node
> > to an lcore-pinned thread's stack when instantiating the thread? And
> > does the DPDK EAL ensure that the preconditions for the O/S to do
> > that are present?
> >
> > (Not relevant for this patch, but the same locality questions come
> > to mind regarding Thread Local Storage.)
> Currently, DPDK does not set pthread affinity until after the pthread
> is created and the stack has been allocated.  If the affinity
> attribute were set before the pthread_create call, it seems possible
> that pthread_create could be NUMA aware when allocating the stack.
> However, it looks like at least the glibc v2.35 implementation of
> pthread_create does not consider this at stack allocation time.

Thank you for looking into this! Very interesting.

So, your patch improves the memory locality (and TLB) for the stack, which is great.
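
For reference, if the C library ever becomes NUMA aware here, setting the affinity on the attribute object before creation would look roughly like the minimal sketch below (assuming the GNU extension pthread_attr_setaffinity_np(); the helper name is made up for illustration):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Hypothetical helper: pin the worker via the attribute object, i.e.
 * before pthread_create(), so a NUMA-aware implementation could in
 * principle place the stack on the local node. */
static int create_pinned_worker(pthread_t *tid, unsigned int cpu_id,
        void *(*fn)(void *), void *arg)
{
    pthread_attr_t attr;
    cpu_set_t cpus;
    int ret;

    CPU_ZERO(&cpus);
    CPU_SET(cpu_id, &cpus);

    pthread_attr_init(&attr);
    /* Affinity is attached to the attribute, before creation. */
    ret = pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);
    if (ret == 0)
        ret = pthread_create(tid, &attr, fn, arg);
    pthread_attr_destroy(&attr);
    return ret;
}

As you point out, glibc 2.35 does not use this hint when allocating the stack, so explicitly allocating the stack yourself is the practical way to get locality today.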

The same issue for Thread Local Storage needs to be addressed by glibc (and the C libraries of other OSes), which is clearly out of scope here. I searched for RTE_DEFINE_PER_LCORE, and it is only rarely used in the DPDK core libraries, so this is not a problem inside DPDK.
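
For context, a per-lcore variable is just __thread behind a macro, along the lines of this sketch (the variable and function names are made up); where the C library places the TLS area is outside the EAL's control:

#include <stdint.h>
#include <rte_per_lcore.h>

/* Defines a __thread variable named per_lcore_worker_pkt_count. */
RTE_DEFINE_PER_LCORE(uint64_t, worker_pkt_count);

static inline void count_packet(void)
{
    /* Expands to the calling thread's own copy of the variable. */
    RTE_PER_LCORE(worker_pkt_count)++;
}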

> > Would it be possible to add a guard page or guard region by using
> > the O/S memory allocator instead of rte_zmalloc_socket()? Since the
> > stack is considered private to the process, i.e. not accessible from
> > other processes, this patch does not need to provide remote access
> > to stack memory from secondary processes - and thus it is not a
> > requirement for this feature to use DPDK managed memory.
> In order for each stack to have guard page protection, this would
> likely require reserving an entire hugepage per stack.  Although guard
> pages do not require physical memory allocation, it would not be
> possible for multiple stacks to share a hugepage and also have per
> stack guard page protection.

Makes sense; allocating an entire hugepage per worker thread stack could be considered too much.
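
Just to spell out the cost with a rough sketch (plain mmap() here rather than the patch's rte_zmalloc_socket() path, and 2 MB hugepages assumed): the PROT_NONE guard can only be applied at the mapping's page granularity, so it occupies a whole hugepage per stack:

#include <sys/mman.h>
#include <stddef.h>

#define HUGEPAGE_SZ (2UL * 1024 * 1024) /* assumed 2 MB hugepages */

/* Hypothetical: reserve one extra hugepage below the stack and make it
 * inaccessible, so an overflow faults instead of corrupting a
 * neighbouring stack. stack_sz must be a multiple of HUGEPAGE_SZ. */
static void *alloc_guarded_stack(size_t stack_sz, void **stack_addr)
{
    size_t len = HUGEPAGE_SZ + stack_sz; /* guard + usable stack */
    void *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

    if (base == MAP_FAILED)
        return NULL;
    /* Lowest hugepage becomes the guard; the stack grows down toward it. */
    if (mprotect(base, HUGEPAGE_SZ, PROT_NONE) != 0) {
        munmap(base, len);
        return NULL;
    }
    *stack_addr = (char *)base + HUGEPAGE_SZ; /* lowest usable stack byte */
    return base; /* keep for munmap(base, len) at teardown */
}

So per-stack guards and packing multiple stacks into one hugepage are indeed mutually exclusive.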

> > Do the worker threads need a different stack size than the main
> > thread? In my opinion: "Nice to have", not "must have".
> The main thread stack behaves differently anyway; it can grow
> dynamically, but regardless of this patch, pthread stack sizes are
> always fixed.  This change only relates to worker threads.

Agree.
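
For my own understanding, the worker case would then have roughly the following shape (a simplified illustration of the idea, not the actual patch; error handling and stack-size/alignment constraints such as PTHREAD_STACK_MIN are glossed over):

#include <stddef.h>
#include <pthread.h>
#include <rte_lcore.h>
#include <rte_malloc.h>

/* Illustration: give the worker a fixed-size stack allocated from
 * hugepage memory on the lcore's own NUMA node. */
static int launch_worker(unsigned int lcore_id, size_t stack_sz,
        void *(*fn)(void *), void *arg, pthread_t *tid)
{
    int socket = rte_lcore_to_socket_id(lcore_id);
    void *stack = rte_zmalloc_socket("lcore_stack", stack_sz, 0, socket);
    pthread_attr_t attr;
    int ret;

    if (stack == NULL)
        return -1;

    pthread_attr_init(&attr);
    /* Hand the pre-allocated region to pthread_create() as the stack. */
    ret = pthread_attr_setstack(&attr, stack, stack_sz);
    if (ret == 0)
        ret = pthread_create(tid, &attr, fn, arg);
    pthread_attr_destroy(&attr);
    if (ret != 0)
        rte_free(stack);
    return ret;
}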

> >
> > Do the worker threads need different stack sizes individually? In my
> > opinion: Perhaps "nice to have", certainly not "must have".
> >
> Currently, worker thread stack sizes are uniformly sized and not
> dynamically resized. This patch does not change that aspect.  Given
> that, it seems unnecessary to add that complexity here.

Agree.



Thread overview: 15+ messages
2022-04-26 12:19 Don Wallwork
2022-04-26 14:58 ` Stephen Hemminger
2022-04-26 21:01   ` Don Wallwork
2022-04-26 21:21     ` Stephen Hemminger
2022-04-26 21:25       ` Don Wallwork
2022-04-27  8:17         ` Morten Brørup
2022-04-29 18:52           ` Don Wallwork
2022-04-29 19:03             ` Stephen Hemminger
2022-05-02 13:15               ` Don Wallwork
2022-04-30  7:55             ` Morten Brørup [this message]
2022-04-27  0:42 ` Honnappa Nagarahalli
2022-04-27 17:50   ` Don Wallwork
2022-04-27 19:09     ` Honnappa Nagarahalli
2022-04-29 20:00 ` [RFC v2] " Don Wallwork
2022-04-30  7:20   ` Morten Brørup
