DPDK patches and discussions
* Error in rte_eal_init() when multiple PODs over single node of K8 cluster
@ 2024-03-27 12:42 Avijit  Pandey
  2024-03-27 14:55 ` Bruce Richardson
  0 siblings, 1 reply; 5+ messages in thread
From: Avijit  Pandey @ 2024-03-27 12:42 UTC (permalink / raw)
  To: dev

[-- Attachment #1: Type: text/plain, Size: 3183 bytes --]

Hello Devs,

I hope this email finds you well.
I am reaching out to seek assistance regarding an issue I am facing in DPDK within my Kubernetes cluster.

I have deployed a Kubernetes cluster (v1.26.0) and am running network tests with DPPD-PROX (commit/02425932<https://github.com/opnfv/samplevnf/commit/02425932>) on DPDK v22.11.0. I have deployed three pairs of pods (3 server pods and 3 client pods) on a single K8s node. Each server pod generates traffic and sends it to its paired client pod.

During automated testing I intermittently hit the error "Error in rte_eal_init()." The failure occurs at random, and I have been unable to determine the root cause. It does not occur when I run a single pair of pods (1 server pod and 1 client pod). Traffic is sent and received through the SR-IOV NICs.
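One possibility I am exploring, since the failure only shows up with multiple pods: the EAL logs below show the default multi-process socket path (/var/run/dpdk/rte/mp_socket), i.e. all DPDK processes are using the default "rte" file prefix. If the pods share that runtime directory, their EAL state files could collide. A minimal sketch of what I mean (POD_NAME and the prefix scheme are my own stand-ins, not anything PROX provides):

```shell
# Sketch, not a confirmed fix: derive a unique EAL runtime prefix per
# DPDK process so files under /var/run/dpdk/<prefix>/ cannot collide.
# POD_NAME is a hypothetical identifier for the pod.
POD_NAME="server-1"
EAL_PREFIX="prox_${POD_NAME}"

# The extra EAL flag that would be appended to the prox command line:
echo "--file-prefix=${EAL_PREFIX}"
```

`--file-prefix` is a standard DPDK EAL option; whether it resolves this particular memzone failure is exactly what I am unsure about.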

Please find below the software versions I am using:
DPPD-PROX: commit/02425932<https://github.com/opnfv/samplevnf/commit/02425932>
DPDK version: v22.11.0
DPDK driver: vfio-pci
SR-IOV VF driver: iavf
Pod OS: Ubuntu 20.04
Pod kernel: 4.18.0-372.9.1.el8.x86_64
Kubernetes: v1.26.0

Error logs:

EAL: Detected CPU lcores: 104
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Cannot allocate memzone list
EAL: FATAL: Cannot init memzone
EAL: Cannot init memzone
Supports Intel RDT Monitoring capability
        RDT-A. Supports Intel RDT Allocation capability
        Supports L3 Cache Intel RDT Monitoring
        Intel RDT Monitoring has 207 maximum RMID
        Supports L3 occupancy monitoring
        Supports L3 Total bandwidth monitoring
        Supports L3 Local bandwidth monitoring
        L3 Cache Intel RDT Monitoring Capability has 207 maximum RMID
        Upscaling_factor = 106496
        Supports L3 Cache Allocation Technology
        Supports MBA Allocation Technology
        Code and Data Prioritization Technology supported
        L3 Cache Allocation Technology Enumeration Highest COS number = 15
        L2 Cache Allocation Technology Enumeration COS number = 0
        Memory Bandwidth Allocation Enumeration COS number = 7
=== Parsing configuration file '/tmp/tmpwkvn651h.cfg' ===
        *** Reading [lua] section ***
        *** Reading [variables] section ***
        *** Reading [eal options] section ***
        *** Reading [cache set #] sections ***
        *** Reading [port #] sections ***
        *** Reading [defaults] section ***
        *** Reading [global] section ***
        *** Reading [core #] sections ***
=== Setting up RTE EAL ===
        Worker threads core mask is 0x2800000
        With master core index 23, full core mask is 0x2800000
        EAL command line: /opt/samplevnf/VNFs/DPPD-PROX/build/prox -c0x2800000 --main-lcore=23 -n4 --allow 0000:86:04.6
error   Error in rte_eal_init()
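For reference, the EAL command line above is assembled from the [eal options] section of the generated config file. If a per-pod prefix turns out to be the fix, it would presumably be set there; a hypothetical snippet (I have not verified the exact key PROX expects for passing extra EAL arguments):

```
[eal options]
-n=4
; hypothetical: give each pod a unique --file-prefix so EAL runtime
; files under /var/run/dpdk/ do not collide between processes
eal=--file-prefix=prox_server_1
```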


Any insights or guidance to help resolve this issue would be highly appreciated. If you need any more details, please feel free to ask.
Thank you for your time and assistance!



Best Regards,

Avijit Pandey
Cloud SME | VoerEirAB
+919598570190



^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2024-04-08  5:29 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-27 12:42 Error in rte_eal_init() when multiple PODs over single node of K8 cluster Avijit  Pandey
2024-03-27 14:55 ` Bruce Richardson
2024-04-01  7:38   ` Avijit  Pandey
2024-04-02  9:13     ` Bruce Richardson
2024-04-08  5:29       ` Avijit  Pandey
