DPDK patches and discussions
From: Don Wallwork <donw@xsightlabs.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "Morten Brørup" <mb@smartsharesystems.com>,
	"Anatoly Burakov" <anatoly.burakov@intel.com>,
	"Dmitry Kozlyuk" <dmitry.kozliuk@gmail.com>,
	"Bruce Richardson" <bruce.richardson@intel.com>,
	dev@dpdk.org
Subject: Re: [RFC] eal: allow worker lcore stacks to be allocated from hugepage memory
Date: Mon, 2 May 2022 09:15:35 -0400
Message-ID: <064b6e35-0e49-a96f-cc16-cbb02a67009d@xsightlabs.com>
In-Reply-To: <20220429120326.4c56227c@hermes.local>



On 4/29/2022 3:03 PM, Stephen Hemminger wrote:
> On Fri, 29 Apr 2022 14:52:03 -0400
> Don Wallwork <donw@xsightlabs.com> wrote:
>
>>>>>> The expectation is that use of this optional feature would be limited to cases where the performance gains justify the implications of these tradeoffs. For example, a specific data plane application may be okay with limited stack size and could be tested to ensure stack usage remains within limits.
>>> How to identify the required stack size and verify it... If aiming for small stacks, some instrumentation would be nice, like rte_mempool_audit() and rte_mempool_list_dump().
>> Theoretically, a region of memory following the stack could be populated
>> with a poison pattern that could be audited. Not as robust as hw
>> mprotect/MMU, but it could provide some protection.
> Usually just doing mmap(.., PROT_NONE) will create a page that will cause SEGV on access
> which is what you want.
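To make the poison-pattern idea quoted above concrete, here is a minimal
sketch; the guard size, pattern, and helper names are all made up for
illustration and are not an existing DPDK API:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define STACK_GUARD_SIZE   4096    /* illustrative guard region size */
#define STACK_GUARD_POISON 0x5A    /* arbitrary poison pattern */

/* Poison the region just below the stack when it is carved
 * out of the hugepage. */
static void stack_guard_poison(void *guard)
{
    memset(guard, STACK_GUARD_POISON, STACK_GUARD_SIZE);
}

/* Audit: returns 0 if the poison is intact, -1 if the stack has
 * grown into the guard region. Cheap enough to call from a
 * debug or telemetry hook. */
static int stack_guard_audit(const void *guard)
{
    const uint8_t *p = guard;
    size_t i;

    for (i = 0; i < STACK_GUARD_SIZE; i++)
        if (p[i] != STACK_GUARD_POISON)
            return -1;
    return 0;
}

The audit only detects an overflow after the fact, and only if the
overflowing writes actually touched the poison, which is why it is
weaker than a real PROT_NONE page.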
As mentioned elsewhere, the problem with the mmap(.., PROT_NONE) approach
is that mprotect() on a hugetlb mapping works at hugepage granularity, so
we would have to allocate an entire hugepage per stack just to get a
guard page.
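For ordinary 4K-backed stacks the technique is cheap; a minimal sketch of
the usual setup follows, with the hugetlb caveat noted inline (error
handling trimmed, function name is illustrative):

#include <stddef.h>
#include <sys/mman.h>

/* Classic guard-page setup for a 4K-backed stack: map one extra
 * page below the stack and revoke all access to it. Stacks grow
 * down, so an overflow lands in the PROT_NONE page and SEGVs. */
static void *stack_alloc_guarded(size_t stack_size, size_t page_size)
{
    size_t len = stack_size + page_size;
    void *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (base == MAP_FAILED)
        return NULL;
    /* On a MAP_HUGETLB mapping this call would have to cover a
     * hugepage-aligned region, i.e. one full hugepage of guard
     * per worker stack. */
    if (mprotect(base, page_size, PROT_NONE) != 0) {
        munmap(base, len);
        return NULL;
    }
    return (char *)base + page_size; /* usable stack starts above the guard */
}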

There is a simple way to verify that a given stack size is sufficient.
Ordinary thread stacks come with a guard page, so if the application is
run without the hugepage stacks option and does not segfault, we know it
is safe to run the application with hugepage stacks at the same thread
stack size.
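Concretely, the validation run could create its workers with ordinary
pthread stacks of the same size; a sketch, where WORKER_STACK_SZ and
start_worker() are made-up names, and the guard page only applies when
the library allocates the stack itself (as it does here):

#include <pthread.h>

#define WORKER_STACK_SZ (512 * 1024) /* hypothetical worker stack size */

/* Validation run: same stack size as the hugepage configuration,
 * but on an ordinary library-allocated stack where a guard page
 * is added. If the workload survives, the same size should be
 * safe with hugepage-backed stacks. */
static int start_worker(pthread_t *tid, void *(*fn)(void *), void *arg)
{
    pthread_attr_t attr;
    int ret;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, WORKER_STACK_SZ);
    pthread_attr_setguardsize(&attr, 4096); /* explicit; this is the default */
    ret = pthread_create(tid, &attr, fn, arg);
    pthread_attr_destroy(&attr);
    return ret;
}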


Thread overview: 15+ messages
2022-04-26 12:19 Don Wallwork
2022-04-26 14:58 ` Stephen Hemminger
2022-04-26 21:01   ` Don Wallwork
2022-04-26 21:21     ` Stephen Hemminger
2022-04-26 21:25       ` Don Wallwork
2022-04-27  8:17         ` Morten Brørup
2022-04-29 18:52           ` Don Wallwork
2022-04-29 19:03             ` Stephen Hemminger
2022-05-02 13:15               ` Don Wallwork [this message]
2022-04-30  7:55             ` Morten Brørup
2022-04-27  0:42 ` Honnappa Nagarahalli
2022-04-27 17:50   ` Don Wallwork
2022-04-27 19:09     ` Honnappa Nagarahalli
2022-04-29 20:00 ` [RFC v2] " Don Wallwork
2022-04-30  7:20   ` Morten Brørup
