DPDK usage discussions
* DPDK hugepages
@ 2023-05-25  5:36 Lombardo, Ed
From: Lombardo, Ed @ 2023-05-25  5:36 UTC (permalink / raw)
  To: users


Hi,
I have two DPDK processes in our application, where one process allocates 1024 2MB hugepages and the second process allocates 8 1GB hugepages.
I am allocating the hugepages in a script before the application starts.  This is to satisfy different configuration settings without having to modify GRUB whenever the second DPDK process is enabled or disabled.

Script that preconditions the hugepages:
Process 1:
mkdir /mnt/huge
mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge
echo  1024  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

Process 2:
mkdir /dev/hugepages-1024
mount -t hugetlbfs -o pagesize=1G none /dev/hugepages-1024
echo 8 >/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages


Application -
Process 1 DPDK EAL arguments:
const char *argv[] = { "app1", "-c", "7fc", "-n", "4", "--huge-dir", "/dev/hugepages-1024", "--proc-type", "secondary"};

Process 2 DPDK EAL arguments:
const char *dpdk_argv_2gb[6] = { "app1", "-c0x2", "-n4", "--socket-mem=2048", "--huge-dir=/mnt/huge", "--proc-type=primary" };
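
Roughly how each array above is handed to EAL (a simplified sketch, not our exact code; init_eal is just an illustrative name, and the const arrays need a cast to char ** at the call site):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_errno.h>

/* Simplified sketch: pass one of the argv arrays above to EAL.
 * rte_eal_init() returns the number of EAL arguments it consumed,
 * or a negative value on failure. */
static int init_eal(int argc, char *argv[])
{
    int ret = rte_eal_init(argc, argv);

    if (ret < 0)
        printf("rte_eal_init failed: %s\n", rte_strerror(rte_errno));
    return ret;
}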

Questions:

  1.  Does DPDK support two hugepage sizes (2 MB and 1 GB) within the same application (app1)?
  2.  Do I need to specify --proc-type for each process shown above in the arguments passed to rte_eal_init()?
  3.  I find that the rtemap_# files in /dev/hugepages are no longer present once Process 2's 1 GB nr_hugepages is set to 8, but when it is set to 1, all 1024 /dev/hugepages/rtemap_# files are present.  I can't see how to resolve this issue.  Any suggestions?
  4.  Do I need to set --socket-mem to the total memory of both processes, or is it defined separately for each process?  I have one NUMA node in this VM.

Thanks,
Ed



* Re: DPDK hugepages
@ 2023-05-25 15:13 ` Stephen Hemminger
From: Stephen Hemminger @ 2023-05-25 15:13 UTC (permalink / raw)
  To: Lombardo, Ed; +Cc: users

On Thu, 25 May 2023 05:36:02 +0000
"Lombardo, Ed" <Ed.Lombardo@netscout.com> wrote:

> Hi,
> I have two DPDK processes in our application, where one process allocates 1024 2MB hugepages and the second process allocates 8 1GB hugepages.
> I am allocating the hugepages in a script before the application starts.  This is to satisfy different configuration settings without having to modify GRUB whenever the second DPDK process is enabled or disabled.
> 
> Script that preconditions the hugepages:
> Process 1:
> mkdir /mnt/huge
> mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge
> echo  1024  > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> 
> Process 2:
> mkdir /dev/hugepages-1024
> mount -t hugetlbfs -o pagesize=1G none /dev/hugepages-1024
> echo 8 >/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
> 
> 
> Application -
> Process 1 DPDK EAL arguments:
> const char *argv[] = { "app1", "-c", "7fc", "-n", "4", "--huge-dir", "/dev/hugepages-1024", "--proc-type", "secondary"};
> 
> Process 2 DPDK EAL arguments:
> const char *dpdk_argv_2gb[6] = { "app1", "-c0x2", "-n4", "--socket-mem=2048", "--huge-dir=/mnt/huge", "--proc-type=primary" };
> 
> Questions:
> 
>   1.  Does DPDK support two hugepage sizes (2MB and 1GB) sharing app1?
This is a new scenario. I doubt it.

It is possible to have two processes share a common hugepage pool.
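
For example (a rough sketch; the core masks are arbitrary), a primary/secondary pair pointed at the same hugepage mount and using the default runtime directory shares one pool:

/* Rough sketch: a primary/secondary pair sharing one hugepage pool.
 * Both point at the same hugetlbfs mount and use the default runtime
 * directory, so the secondary attaches to the primary's mappings. */
const char *primary_argv[]   = { "app1", "-c", "0x2", "-n", "4",
                                 "--huge-dir=/mnt/huge",
                                 "--proc-type=primary" };

const char *secondary_argv[] = { "app1", "-c", "0x7fc", "-n", "4",
                                 "--huge-dir=/mnt/huge",
                                 "--proc-type=secondary" };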


>   2.  Do I need to specify -proc-type for each Process shown above for argument to the rte_eal_init()?
The problem is that DPDK uses a runtime directory to communicate.

If you want two disjoint DPDK primary processes, you need to set the runtime directory.

>   3.  I find the files in /dev/hugpages/rtemap_#s are not present once Process 2 hugepages-1G/nr_hugepages are set to 8, but when set value to 1 the /dev/hugepages/rtemap_# files (1024) are present.  I can't see how to resolve this issue.  Any suggestions?
>   4.  Do I need to set -socket-mem to the total memory of both Processes, or are they separately defined?  I have one NUMA node in this VM.
> 
> Thanks,
> Ed



* Re: DPDK hugepages
@ 2023-05-25 15:50   ` Cliff Burdick
From: Cliff Burdick @ 2023-05-25 15:50 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Lombardo, Ed, users


> >   2.  Do I need to specify --proc-type for each process shown above in
> > the arguments passed to rte_eal_init()?
> The problem is that DPDK uses a runtime directory to communicate.
>
> If you want two disjoint DPDK primary processes, you need to set the
> runtime directory.

To add to what Stephen said: to point separate processes at different runtime
directories, use --file-prefix.
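
For example, something along these lines (the prefix names are placeholders) gives each primary its own runtime directory (by default /var/run/dpdk/<prefix> when running as root), so their state files and hugepage maps don't collide:

/* Hypothetical EAL argument sets: two independent primary processes,
 * each with its own --file-prefix so their runtime directories and
 * hugepage map files do not collide. Prefix names are placeholders. */
const char *argv_1g[] = { "app1", "-c", "0x7fc", "-n", "4",
                          "--huge-dir=/dev/hugepages-1024",
                          "--file-prefix=app1_1g",
                          "--proc-type=primary" };

const char *argv_2m[] = { "app1", "-c", "0x2", "-n", "4",
                          "--socket-mem=2048",
                          "--huge-dir=/mnt/huge",
                          "--file-prefix=app1_2m",
                          "--proc-type=primary" };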

