DPDK usage discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Jatin Sahu <jatin152@gmail.com>
Cc: users@dpdk.org, jatin.sahu@mobileum.com, chandrika.gautam@mobileum.com
Subject: Re: [dpdk-users] DPDK application is hung in eal_hugepage_info_init
Date: Wed, 12 Aug 2020 08:36:15 -0700
Message-ID: <20200812083615.4e6ddfea@hermes.lan>
In-Reply-To: <CAL2oWpXkNPG5mtk-o9gOb6EAO_ygnOcCrXZ2qAro-jtEhG8ZZA@mail.gmail.com>

On Wed, 12 Aug 2020 16:59:33 +0530
Jatin Sahu <jatin152@gmail.com> wrote:

> Hi,
> 
> We are using DPDK libraries with huge pages enabled.
> We observed that the application's rte_eal_init() function gets stuck in
> a flock() call.
> 
> Please advise how to resolve this.
> 
> DPDK Version: dpdk-stable-18.11.6/
> 
> [root@loadtestsrv helloworld]# ./build/helloworld -l 0-3 -n 4
> EAL: Detected 72 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> 
> Stack Trace:
> Thread 3 (Thread 0x7f783c34e700 (LWP 1034)):
> #0  0x00007f783c664113 in epoll_wait () from /lib64/libc.so.6
> #1  0x0000000000558724 in eal_intr_thread_main ()
> #2  0x00007f783c939dd5 in start_thread () from /lib64/libpthread.so.0
> #3  0x00007f783c663b3d in clone () from /lib64/libc.so.6
> Thread 2 (Thread 0x7f783bb4d700 (LWP 1035)):
> #0  0x00007f783c940bfd in recvmsg () from /lib64/libpthread.so.0
> #1  0x0000000000565bae in mp_handle ()
> #2  0x00007f783c939dd5 in start_thread () from /lib64/libpthread.so.0
> #3  0x00007f783c663b3d in clone () from /lib64/libc.so.6
> Thread 1 (Thread 0x7f783d66dc00 (LWP 1033)):
> #0  0x00007f783c655167 in flock () from /lib64/libc.so.6
> #1  0x000000000054e068 in eal_hugepage_info_init ()
> #2  0x000000000054ccfe in rte_eal_init ()
> #3  0x00000000004712a6 in main ()
> [root@loadtestsrv user]#
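
The backtrace shows the main thread blocked inside eal_hugepage_info_init(),
which in this DPDK version takes flock() locks while scanning the hugepage
mounts. As a first step you can ask the kernel who is holding the lock.
A rough sketch (the paths are assumptions, based on the mounts shown below
and the default DPDK runtime directory):

  # Show which processes have files open on the hugetlbfs mount and
  # on the DPDK runtime directory:
  lsof +D /mnt/huge
  lsof +D /var/run/dpdk
  # /proc/locks lists every active flock()/POSIX lock in the system
  # together with the PID of the holder:
  cat /proc/locks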
> 
> HugePage details:
> 
> [root@loadtestsrv roamware]# cat /proc/meminfo | grep -i huge
> AnonHugePages:    112640 kB
> HugePages_Total:      32
> HugePages_Free:       32
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:    1048576 kB
> 
> [roamware@loadtestsrv ~]$ cat
> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> 512
> [roamware@loadtestsrv ~]$ cat
> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> 512
> 
> [root@loadtestsrv roamware]# cat /proc/mounts | grep -i huge
> cgroup /sys/fs/cgroup/hugetlb cgroup
> rw,seclabel,nosuid,nodev,noexec,relatime,hugetlb 0 0
> nodev /mnt/huge hugetlbfs rw,seclabel,relatime 0 0
> nodev /mnt/huge hugetlbfs rw,seclabel,relatime,pagesize=2M 0 0
> nodev /mnt/huge hugetlbfs rw,seclabel,relatime 0 0
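
Note that /mnt/huge is mounted three times: twice without an explicit
pagesize (so those instances use the default size, 1 GB on this box) and
once with pagesize=2M, all stacked on the same mount point. Since the EAL
scans and locks the hugetlbfs mounts it finds, stacked mounts like this
are a plausible trigger for the hang. A cleanup sketch, assuming the
duplicate mounts are unintentional:

  # Unmount the stacked hugetlbfs instances (repeat until umount says
  # the path is not mounted), then remount one filesystem per page size:
  umount /mnt/huge
  umount /mnt/huge
  umount /mnt/huge
  mount -t hugetlbfs -o pagesize=2M none /mnt/huge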
> 
> Regards,
> Jatin

Maybe flock() blocks here because another process already has the file
locked? Is there another process that has flock()ed the file?
Or do you have a buggy kernel?
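
To see exactly which file the startup is blocked on, and to test whether
another process holds the lock, something along these lines should work
(LOCKED_FILE is a placeholder; substitute whatever path the strace output
reports):

  # Print every flock() the EAL makes during startup, resolving file
  # descriptors to path names (-y):
  strace -y -f -e trace=flock ./build/helloworld -l 0-3 -n 4
  # Probe one file non-blockingly; exit status 1 means another
  # process already holds the lock:
  flock --nonblock /mnt/huge/LOCKED_FILE -c true && echo unlocked || echo locked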
