DPDK usage discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: "Lombardo, Ed" <Ed.Lombardo@netscout.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: DPDK hugepages
Date: Thu, 25 May 2023 08:13:08 -0700	[thread overview]
Message-ID: <20230525081308.57637057@hermes.local> (raw)
In-Reply-To: <PH0PR01MB673083E3BA36A0D0F8641BF78F469@PH0PR01MB6730.prod.exchangelabs.com>

On Thu, 25 May 2023 05:36:02 +0000
"Lombardo, Ed" <Ed.Lombardo@netscout.com> wrote:

> Hi,
> I have two DPDK processes in our application, where one process allocates 1024 2MB hugepages and the second process allocates 8 1GB hugepages.
> I allocate the hugepages in a script before the application starts.  This satisfies different configuration settings without having to edit grub when the second DPDK process is enabled or disabled.
> 
> Script that preconditions the hugepages:
> Process 1:
> mkdir /mnt/huge
> mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge
> echo  1024  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> 
> Process 2:
> mkdir /dev/hugepages-1024
> mount -t hugetlbfs -o pagesize=1G none /dev/hugepages-1024
> echo 8 >/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
> 
> 
> Application -
> Process 1 DPDK EAL arguments:
> const char *argv[] = { "app1", "-c", "7fc", "-n", "4", "--huge-dir", "/dev/hugepages-1024", "--proc-type", "secondary" };
> 
> Process 2 DPDK EAL arguments:
> const char *dpdk_argv_2gb[] = { "app1", "-c", "0x2", "-n", "4", "--socket-mem=2048", "--huge-dir", "/mnt/huge", "--proc-type", "primary" };
> 
> Questions:
> 
>   1.  Does DPDK support two hugepage sizes (2MB and 1GB) sharing app1?
This is a new scenario. I doubt it.

It is possible for two processes to share a common hugepage pool.
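
A sketch of the shared-pool pattern (the binary name and prefix below are placeholders, not from this thread): the secondary attaches to the primary's hugepage memory, so both processes must agree on the hugepage dir and the EAL file prefix.

```shell
# Placeholder prefix; "app1" here is an assumption for illustration.
SHARED_PREFIX=app1

# Primary sets up the hugepage memory:
#   ./app1 -c 0x2 -n 4 --huge-dir /mnt/huge \
#          --file-prefix "$SHARED_PREFIX" --proc-type primary
# Secondary attaches to the same pool, so it must use the same prefix:
#   ./app1 -c 7fc -n 4 \
#          --file-prefix "$SHARED_PREFIX" --proc-type secondary

# Both processes then use the same runtime directory:
echo "/var/run/dpdk/${SHARED_PREFIX}"
```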


>   2.  Do I need to specify --proc-type for each Process shown above as an argument to rte_eal_init()?
The problem is that DPDK uses a runtime directory to communicate.

If you want two disjoint DPDK primary processes, you need to give each one its own runtime directory (the --file-prefix EAL option does this).
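
A sketch of launching two disjoint primaries, assuming hypothetical prefixes: a distinct --file-prefix gives each process its own runtime directory, so their rtemap_* files and config do not collide.

```shell
# Hypothetical prefixes; any two distinct strings work.
PREFIX_2M=app1_2m
PREFIX_1G=app1_1g

# First primary on the 2M pool:
#   ./app1 -c 0x2 -n 4 --huge-dir /mnt/huge \
#          --file-prefix "$PREFIX_2M" --proc-type primary
# Second primary on the 1G pool:
#   ./app1 -c 7fc -n 4 --huge-dir /dev/hugepages-1024 \
#          --file-prefix "$PREFIX_1G" --proc-type primary

# Each primary gets its own runtime directory under /var/run/dpdk:
echo "/var/run/dpdk/${PREFIX_2M}"
echo "/var/run/dpdk/${PREFIX_1G}"
```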

>   3.  I find the /dev/hugepages/rtemap_# files are not present once Process 2's hugepages-1G/nr_hugepages is set to 8, but when the value is set to 1, the /dev/hugepages/rtemap_# files (1024 of them) are present.  I can't see how to resolve this issue.  Any suggestions?
>   4.  Do I need to set --socket-mem to the total memory of both Processes, or are they defined separately?  I have one NUMA node in this VM.
> 
> Thanks,
> Ed



Thread overview: 3+ messages
2023-05-25  5:36 Lombardo, Ed
2023-05-25 15:13 ` Stephen Hemminger [this message]
2023-05-25 15:50   ` Cliff Burdick
