From: Cliff Burdick
Date: Thu, 25 May 2023 08:50:50 -0700
Subject: Re: DPDK hugepages
To: Stephen Hemminger
Cc: "Lombardo, Ed", users@dpdk.org

> On Thu, 25 May 2023 05:36:02 +0000
> "Lombardo, Ed" <Ed.Lombardo@netscout.com> wrote:
>
> > Hi,
> > I have two DPDK processes in our application, where one process allocates 1024 2MB hugepages and the second process allocates 8 1GB hugepages.
> > I am allocating the hugepages in a script before the application starts. This is to satisfy different configuration settings, and I don't want to write to grub when the second DPDK process is enabled or disabled.
> >
> > Script that preconditions the hugepages:
> > Process 1:
> > mkdir /mnt/huge
> > mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge
> > echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> >
> > Process 2:
> > mkdir /dev/hugepages-1024
> > mount -t hugetlbfs -o pagesize=1G none /dev/hugepages-1024
> > echo 8 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
> >
> > Application -
> > Process 1 DPDK EAL arguments:
> > const char *argv[] = { "app1", "-c", "7fc", "-n", "4", "--huge-dir", "/dev/hugepages-1024", "--proc-type", "secondary"};
> >
> > Process 2 DPDK EAL arguments:
> > const char *dpdk_argv_2gb[6] = {"app1 ", "-c0x2", "-n4", "--socket-mem=2048", "--huge-dir /mnt/huge", "--proc-type primary"};
> >
> > Questions:
> >
> >   1. Does DPDK support two hugepage sizes (2MB and 1GB) sharing app1?
>
> This is a new scenario. I doubt it.
>
> It is possible to have two processes on a common hugepage pool.
>
> >   2. Do I need to specify --proc-type for each process shown above as an argument to rte_eal_init()?
>
> The problem is that DPDK uses a runtime directory to communicate.
>
> If you want two disjoint DPDK primary processes, you need to set the runtime directory.
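
As a quick way to confirm that two processes really ended up with disjoint runtime directories, here is a minimal, untested sketch; it assumes your DPDK version exposes rte_eal_get_runtime_dir() in rte_eal.h:

#include <stdio.h>
#include <rte_eal.h>

/* Call this after rte_eal_init() succeeds. Two primary processes must
 * report different paths here, otherwise they are sharing one runtime
 * directory. Without --file-prefix the path is typically something
 * like /var/run/dpdk/rte. */
static void print_runtime_dir(void)
{
        printf("EAL runtime dir: %s\n", rte_eal_get_runtime_dir());
}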

> >   3. I find the /dev/hugepages/rtemap_# files are not present once Process 2's hugepages-1G nr_hugepages is set to 8, but when the value is set to 1 the 1024 /dev/hugepages/rtemap_# files are present. I can't see how to resolve this issue. Any suggestions?
> >   4. Do I need to set --socket-mem to the total memory of both processes, or are they defined separately? I have one NUMA node in this VM.
> >
> > Thanks,
> > Ed

To add to what Stephen said, to point to different directories for separate processes, use --file-prefix.
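
For example, here is a rough, untested sketch of how the two argument vectors could look if both processes run as independent primaries, each with its own hugepage mount and its own prefix. The prefix names app1_1g / app1_2m are only placeholders, and the core masks and memory sizes are copied from your mail, not tuned:

/* Rough, untested sketch: each process runs as an independent primary
 * with its own hugepage mount and its own --file-prefix, so their
 * runtime directories and <prefix>map_* files do not collide. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_errno.h>

int main(void)
{
        /* Process A: the 1 GB pages mounted at /dev/hugepages-1024.
         * The prefix "app1_1g" is only an example name. */
        char *eal_args_1g[] = {
                "app1", "-c", "7fc", "-n", "4",
                "--huge-dir", "/dev/hugepages-1024",
                "--file-prefix", "app1_1g",
                "--proc-type", "primary",
        };

        /* Process B would pass its own vector instead: the 2 MB pages
         * mounted at /mnt/huge, with a different prefix. */
        char *eal_args_2m[] = {
                "app1", "-c", "0x2", "-n", "4",
                "--socket-mem=2048",
                "--huge-dir", "/mnt/huge",
                "--file-prefix", "app1_2m",
                "--proc-type", "primary",
        };
        (void)eal_args_2m; /* only one vector is used per process */

        int argc = (int)(sizeof(eal_args_1g) / sizeof(eal_args_1g[0]));

        if (rte_eal_init(argc, eal_args_1g) < 0) {
                printf("rte_eal_init failed: %s\n", rte_strerror(rte_errno));
                return 1;
        }

        /* ... application work ... */

        rte_eal_cleanup();
        return 0;
}

As far as I know, distinct prefixes give each primary its own runtime directory (typically /var/run/dpdk/<prefix>) and its own <prefix>map_N files, so the two hugepage pools stay independent.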