DPDK patches and discussions
From: Truring Team <truring12@gmail.com>
To: dev@dpdk.org
Subject: Re: [dpdk-dev] dpdk-testpmd app with SRIOV VF on Kubernetes POD not working
Date: Tue, 20 Jul 2021 18:47:36 +0530	[thread overview]
Message-ID: <CALvshjMB+adLv=KwACRomQV5hg70bAyTEgzVKw=t1XkU+gQ-iQ@mail.gmail.com> (raw)
In-Reply-To: <CALvshjO6YwARhu-x0nRY0qtSq6m+iJtC3QgHVES0KFa-cXyBiw@mail.gmail.com>

Hi Everyone,

Can someone please look into this and analyse the logs?
I need help to proceed further.

Best Regards
Puneet

On Fri, 16 Jul 2021 at 09:34, Truring Team <truring12@gmail.com> wrote:

> Hi Everyone,
>
>
> We have a Kubernetes setup with a master node and one worker node, and we
> tried running dpdk-testpmd in a POD.
>
>
>
> The master node and worker node are AWS VMs with ixgbevf VF interfaces. We
> bound one interface to the DPDK igb_uio driver and exposed it inside the
> POD using the Multus CNI and the SR-IOV Network Device Plugin; the exposed
> interface is visible inside the POD.
>
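> For reference, the bind step was roughly as follows (a sketch; it assumes
> the igb_uio module was built and loaded separately, and uses the VF PCI
> address shown in the NIC detail further down):
>
>   modprobe uio                                  # igb_uio depends on the uio module
>   insmod igb_uio.ko                             # built out of tree, e.g. from dpdk-kmods
>   ./usertools/dpdk-devbind.py --bind=igb_uio 0000:00:04.0
>   ./usertools/dpdk-devbind.py --status          # verify the VF is listed under igb_uio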
>
>
> We also created hugepages on the worker node before it joined the cluster,
> and the POD yaml requests hugepages-2Mi: 100Mi, following the Kubernetes
> documentation:
> https://kubernetes.io/docs/tasks/manage-hugepages/scheduling-hugepages/
>
> The hugepages are visible on the node and inside the POD as well.
>
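> For reference, this was checked roughly as follows (a sketch; the exact
> hugetlbfs mount point inside the POD may differ):
>
>   grep -i huge /proc/meminfo        # HugePages_Total / HugePages_Free
>   mount | grep -i hugetlbfs         # confirm the container sees a hugetlbfs mount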
>
> NIC Detail:
> *00:04.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet
> Controller Virtual Function [8086:10ed] (rev 01)*
>
> *dpdk-testpmd is crashing with a bus error (SIGBUS) inside the POD*
>
>
>
> dpdk-20.11 was built inside the POD with the standard meson/ninja flow
> (sketched below); running dpdk-testpmd then crashes as shown in the gdb
> session that follows.
>
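> The build itself was roughly (a sketch, assuming the default 20.11 meson
> options):
>
>   meson build
>   ninja -C build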
>
>
> [root@centos7-testpmd dpdk-20.11]# *gdb ./build/app/dpdk-testpmd*
>
> GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-120.el7
>
> Copyright (C) 2013 Free Software Foundation, Inc.
>
> License GPLv3+: GNU GPL version 3 or later <
> http://gnu.org/licenses/gpl.html>
>
> This is free software: you are free to change and redistribute it.
>
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
>
> and "show warranty" for details.
>
> This GDB was configured as "x86_64-redhat-linux-gnu".
>
> For bug reporting instructions, please see:
>
> <http://www.gnu.org/software/gdb/bugs/>...
>
> Reading symbols from /home/sandeep/dpdk-20.11/build/app/dpdk-testpmd...(no
> debugging symbols found)...done.
>
> (gdb) r -l 0-3 -n 4 -- -i
>
> Starting program: /home/sandeep/dpdk-20.11/./build/app/dpdk-testpmd -l 0-3
> -n 4 -- -i
>
> [Thread debugging using libthread_db enabled]
>
> Using host libthread_db library "/lib64/libthread_db.so.1".
>
> EAL: Detected 4 lcore(s)
>
> EAL: Detected 1 NUMA nodes
>
> [New Thread 0x7ffff6a71700 (LWP 29508)]
>
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>
> [New Thread 0x7ffff6270700 (LWP 29509)]
>
> EAL: Selected IOVA mode 'PA'
>
> EAL: No available hugepages reported in hugepages-1048576kB
>
> EAL: Probing VFIO support...
>
> [New Thread 0x7ffff5a6f700 (LWP 29510)]
>
> [New Thread 0x7ffff526e700 (LWP 29511)]
>
> [New Thread 0x7ffff4a6d700 (LWP 29512)]
>
> EAL:   Invalid NUMA socket, default to 0
>
> EAL:   Invalid NUMA socket, default to 0
>
> EAL: Probe PCI driver: net_ixgbe_vf (8086:10ed) device: 0000:00:04.0
> (socket 0)
>
> [New Thread 0x7ffff426c700 (LWP 29513)]
>
> EAL: No legacy callbacks, legacy socket not created
>
> Interactive-mode selected
>
> testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
>
> testpmd: preferred mempool ops selected: ring_mp_mc
>
>
>
> Program received signal SIGBUS, Bus error.
>
> 0x00000000009ffa36 in alloc_seg ()
>
> Missing separate debuginfos, use: debuginfo-install
> glibc-2.17-324.el7_9.x86_64 libgcc-4.8.5-44.el7.x86_64
> libpcap-1.5.3-12.el7.x86_64 numactl-libs-2.0.12-5.el7.x86_64
> zlib-1.2.7-19.el7_9.x86_64
>
> (gdb) *bt*
>
> *#0  0x00000000009ffa36 in alloc_seg ()*
>
> #1  0x0000000000a0043b in alloc_seg_walk ()
>
> #2  0x00000000009e35ab in rte_memseg_list_walk_thread_unsafe ()
>
> #3  0x0000000000a00d32 in eal_memalloc_alloc_seg_bulk ()
>
> #4  0x00000000009f0374 in alloc_pages_on_heap ()
>
> #5  0x00000000009f065b in try_expand_heap ()
>
> #6  0x00000000009f0a91 in alloc_more_mem_on_socket ()
>
> #7  0x00000000009f115e in malloc_heap_alloc ()
>
> #8  0x00000000009e3ed1 in rte_memzone_reserve_thread_safe ()
>
> #9  0x00000000009d671e in rte_mempool_populate_default ()
>
> #10 0x00000000009c93af in rte_pktmbuf_pool_create_by_ops ()
>
> #11 0x00000000009c9478 in rte_pktmbuf_pool_create ()
>
> #12 0x000000000072b233 in mbuf_pool_create ()
>
> #13 0x00000000004266d4 in main ()
>
> (gdb)
>
>
>
>
> Can anyone help in resolving this issue?
>
>
> Best Regards
>
> Puneet
>

Thread overview: 3+ messages
2021-07-16  4:04 Truring Team
2021-07-20 13:17 ` Truring Team [this message]
2021-10-29 19:07   ` David Marchand
