DPDK patches and discussions
From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [DPDK/core Bug 1857] DPDK applications inside Kubernetes pods are failing to come up when CPU and hugepage memory are from different NUMA nodes
Date: Mon, 22 Dec 2025 07:25:52 +0000
Message-ID: <bug-1857-3@http.bugs.dpdk.org/>

http://bugs.dpdk.org/show_bug.cgi?id=1857

            Bug ID: 1857
           Summary: DPDK applications inside Kubernetes pods are failing
                    to come up when CPU and hugepage memory are from
                    different NUMA nodes
           Product: DPDK
           Version: 25.11
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: major
          Priority: Normal
         Component: core
          Assignee: dev@dpdk.org
          Reporter: sahithi.singam@oracle.com
  Target Milestone: ---

DPDK sample applications such as l2fwd and l3fwd fail to come up with the
error below when they are run with cores assigned from NUMA node 1 while
the hugepages are allocated on NUMA node 0.

===========
EAL: set_mempolicy failed: Invalid argument
EAL: Using IOMMU type 1 (Type 1)
EAL: set_mempolicy failed: Invalid argument
PCI_BUS: Probe PCI driver: net_iavf (8086:154c) device: 0000:31:04.6 (socket 0)
EAL: set_mempolicy failed: Invalid argument
ETHDEV: Cannot allocate ethdev shared data
PCI_BUS: Releasing PCI mapped resource for 0000:31:04.6
PCI_BUS: Calling pci_unmap_resource for 0000:31:04.6 at 0x312000400000
===========
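
A plausible cause of the set_mempolicy EINVAL (my assumption, not confirmed
from the EAL code) is that the EAL asks the kernel to place memory on the
lcore's NUMA node while the pod's cpuset only allows memory from node 0. A
minimal libnuma sketch to check the allowed set from inside the pod
(standalone diagnostic, not DPDK code; build with: cc check_mems.c -lnuma):

  /* Print which NUMA nodes this process may allocate memory from.
   * If node 1 is missing from the allowed set, set_mempolicy() for
   * node 1 is expected to fail. */
  #include <stdio.h>
  #include <numa.h>

  int main(void)
  {
          struct bitmask *allowed;
          int node;

          if (numa_available() < 0) {
                  fprintf(stderr, "NUMA not available\n");
                  return 1;
          }
          allowed = numa_get_mems_allowed();
          for (node = 0; node <= numa_max_node(); node++)
                  printf("node %d: %s\n", node,
                         numa_bitmask_isbitset(allowed, node) ?
                         "allocation allowed" : "allocation NOT allowed");
          numa_free_nodemask(allowed);
          return 0;
  }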

When a Kubernetes pod is launched, cores and memory/hugepages are allocated
to the pod by containerd, and we do not have much control over the NUMA
nodes they come from (we are using the best-effort policy in the kubelet's
topology manager configuration).
In this scenario the pod was given CPUs from NUMA node 1 and hugepages from
NUMA node 0. The DPDK application tries to allocate memory from the same
NUMA node as the core it runs on, and it fails because no hugepages are
available on that particular NUMA node (node 1).
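
For allocations the application itself owns, one mitigation is to fall back
to another socket instead of failing outright. A minimal sketch, assuming
the application allocates through rte_malloc; the helper name
alloc_any_socket is made up for illustration:

  #include <rte_malloc.h>
  #include <rte_lcore.h>
  #include <rte_memory.h>

  /* Prefer the calling lcore's socket, but fall back to any socket
   * that has free hugepage memory. */
  static void *
  alloc_any_socket(const char *tag, size_t size, unsigned int align)
  {
          void *p = rte_malloc_socket(tag, size, align, rte_socket_id());

          if (p == NULL)  /* no hugepages on this lcore's NUMA node */
                  p = rte_malloc_socket(tag, size, align, SOCKET_ID_ANY);
          return p;
  }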

As of now the DPDK memory design works only when both the CPUs and the
hugepages come from the same NUMA node. With the growing use of DPDK
applications inside Kubernetes pods, where users do not have much control
over which NUMA nodes resources are reserved from, this problem should be
addressed: DPDK applications should come up even when cores and hugepages
are on different NUMA nodes.
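
The "ETHDEV: Cannot allocate ethdev shared data" failure above happens
inside EAL/ethdev itself, so a complete fix needs changes there. For the
allocations an application controls, though, a sketch of a workaround is to
probe which sockets actually have EAL heap memory and create pools there
(socket_with_memory and create_pool are hypothetical helpers, and the pool
sizes are placeholders):

  #include <rte_lcore.h>
  #include <rte_malloc.h>
  #include <rte_memory.h>
  #include <rte_mbuf.h>

  /* Return the first socket whose EAL heap actually holds memory,
   * or SOCKET_ID_ANY if none is found. */
  static int
  socket_with_memory(void)
  {
          unsigned int i;

          for (i = 0; i < rte_socket_count(); i++) {
                  int s = rte_socket_id_by_idx(i);
                  struct rte_malloc_socket_stats st;

                  if (rte_malloc_get_socket_stats(s, &st) == 0 &&
                      st.heap_totalsz_bytes > 0)
                          return s;
          }
          return SOCKET_ID_ANY;
  }

  /* e.g. in place of creating the mbuf pool on rte_socket_id() */
  static struct rte_mempool *
  create_pool(void)
  {
          return rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 0,
                          RTE_MBUF_DEFAULT_BUF_SIZE, socket_with_memory());
  }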

-- 
You are receiving this mail because:
You are the assignee for the bug.
