DPDK usage discussions
* DPDK occurs "failed to allocate private data" in rte_eal_init-a possibly unknown dpdk bug?
@ 2022-10-15 15:38 lannister
  0 siblings, 0 replies; only message in thread
From: lannister @ 2022-10-15 15:38 UTC (permalink / raw)
  To: users


When I searched for this problem, I found that almost no one else has encountered it, which is strange. By the way, I use dpdk-19.11.12 and Ubuntu 20.04.




After looking at the DPDK source code, I found that the error message comes from this code in the rte_eth_dev_create function:
	if (priv_data_size) {
		ethdev->data->dev_private = rte_zmalloc_socket(
			name, priv_data_size, RTE_CACHE_LINE_SIZE,
			device->numa_node);
		if (!ethdev->data->dev_private) {
			RTE_LOG(ERR, EAL, "failed to allocate private data");
			retval = -ENOMEM;
			goto probe_failed;
		}
	}
It seems rte_zmalloc_socket returns a NULL pointer. Why does this happen? I allocated the relevant hugepage memory as requested.
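
For reference, here is a minimal sketch (only an illustration, not taken from the real application; the socket id 5 simply matches the node the 82599ES NICs sit on) of how the malloc heap for that NUMA node could be queried right after rte_eal_init with rte_malloc_get_socket_stats(), to see whether EAL created a heap there with any free space:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_malloc.h>

/* Sketch: after EAL init, print the malloc heap statistics of the NUMA
 * node the NICs are attached to (node 5 in this report), to see whether
 * that heap exists and has free space for the PMD's private data. */
int
main(int argc, char **argv)
{
	struct rte_malloc_socket_stats stats;
	int socket = 5;	/* NUMA node of the 82599ES ports */

	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "rte_eal_init failed: %s\n",
			rte_strerror(rte_errno));
		return 1;
	}

	if (rte_malloc_get_socket_stats(socket, &stats) == 0)
		printf("socket %d heap: total=%zu free=%zu greatest_free=%zu\n",
		       socket, stats.heap_totalsz_bytes,
		       stats.heap_freesz_bytes, stats.greatest_free_size);
	else
		printf("no malloc heap stats for socket %d\n", socket);

	return rte_eal_cleanup();
}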




Some information:
EAL: Detected 4 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:31:00.0 on NUMA socket 3
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:31:00.1 on NUMA socket 3
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:51:00.0 on NUMA socket 5
EAL:   probe driver: 8086:10fb net_ixgbe
failed to allocate private data
EAL: Requested device 0000:51:00.0 cannot be used
EAL: PCI device 0000:51:00.1 on NUMA socket 5
EAL:   probe driver: 8086:10fb net_ixgbe
failed to allocate private data
EAL: Requested device 0000:51:00.1 cannot be used



Hugepages: (cat /proc/meminfo | grep Huge)
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:      20
HugePages_Free:       19
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:        20971520 kB
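
Since /proc/meminfo counts hugepages system-wide rather than per NUMA node, a small sketch like the one below (assuming the usual per-node hugepage sysfs layout; the node number 5 and the 1G page size are taken from the information above) could show how many 1G pages are actually reserved and free on the NICs' node:

#include <stdio.h>

/* Sketch: read per-node 1G hugepage counters from sysfs for node 5. */
static long
read_count(const char *path)
{
	FILE *f = fopen(path, "r");
	long v = -1;

	if (f != NULL) {
		if (fscanf(f, "%ld", &v) != 1)
			v = -1;
		fclose(f);
	}
	return v;
}

int
main(void)
{
	const char *base =
	    "/sys/devices/system/node/node5/hugepages/hugepages-1048576kB";
	char path[256];

	snprintf(path, sizeof(path), "%s/nr_hugepages", base);
	printf("node5 1G pages total: %ld\n", read_count(path));
	snprintf(path, sizeof(path), "%s/free_hugepages", base);
	printf("node5 1G pages free:  %ld\n", read_count(path));
	return 0;
}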



and mount information: (mount | grep huge)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=1024M)
nodev on /mnt/huge type hugetlbfs (rw,relatime,pagesize=1024M)



NIC:
Network devices using DPDK-compatible driver
============================================
0000:51:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe,vfio-pci
0000:51:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' drv=igb_uio unused=ixgbe,vfio-pci

Network devices using kernel driver
===================================
0000:31:00.0 'I350 Gigabit Network Connection 1521' if=enp49s0f0 drv=igb unused=igb_uio,vfio-pci
0000:31:00.1 'I350 Gigabit Network Connection 1521' if=enp49s0f1 drv=igb unused=igb_uio,vfio-pci *Active*
NUMA information: (numactl -H)
node 5 cpus: 30 31 32 33 34 35 78 79 80 81 82 83
node 5 size: 64475 MB
node 5 free: 58743 MB
The 82599ES NICs are on NUMA node 5.

I thought I had done all the initialization, but rte_eal_init returned the error "failed to allocate private data".

Any ideas on this issue? Thanks for the help.

-----------------------update-----------------------------------------

Since this problem occurs in rte_zmalloc_socket(), I guess there is something wrong with my hugepage configuration. But as posted above, I checked the hugepages with these commands:



1. cat /proc/meminfo | grep Huge
2. mount | grep huge

The first command checks the available 1G hugepages and the second checks the mount situation.

The results above seem normal, which is what confuses me most.
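
One more thing that could be checked (just a sketch, assuming it is run right after rte_eal_init) is what EAL itself actually reserved, e.g. by dumping the malloc heaps and the hugepage segment layout, to see whether any memory ended up on the NICs' NUMA node at all:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>

/* Sketch: dump what EAL reserved after initialization. */
int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	rte_malloc_dump_stats(stdout, NULL);	/* per-heap malloc statistics */
	rte_dump_physmem_layout(stdout);	/* hugepage segments and their sockets */

	return rte_eal_cleanup();
}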

Any clues? Thanks.

----------------------------update------------------------------------------

After running in debug mode, I seem to have found why this error occurs, but I am still confused:

Pay attention to the first parameter of rte_malloc_socket and rte_zmalloc_socket:



#0  rte_malloc_socket (type=0x17ffb0440 <error: Cannot access memory at address 0x17ffb0440>, size=93825001207904, align=96, socket_arg=2147157000)
    at ../lib/librte_eal/common/rte_malloc.c:46
#1  0x00007ffff7bdc79b in rte_zmalloc_socket (type=0x555555de3860 "0000:51:00.0", size=30552, align=64, socket=5)
    at ../lib/librte_eal/common/rte_malloc.c:79
#2  0x00007ffff7cebc08 in rte_eth_dev_create (device=0x555555ddcf50, name=0x555555de3860 "0000:51:00.0", priv_data_size=30552,
    ethdev_bus_specific_init=0x7ffff59c74b5 <eth_dev_pci_specific_init>, bus_init_params=0x555555ddcf40, ethdev_init=0x7ffff59c8ed0 <eth_ixgbe_dev_init>,
    init_params=0x0) at ../lib/librte_ethdev/rte_ethdev.c:4279

And in the DPDK source code:
/*
 * Allocate zero'd memory on specified heap.
 */
void *
rte_zmalloc_socket(const char *type, size_t size, unsigned align, int socket)
{
	void *ptr = rte_malloc_socket(type, size, align, socket);

	if (ptr != NULL)
		memset(ptr, 0, size);
	return ptr;
}

The first parameter of rte_malloc_socket and rte_zmalloc_socket should be the same, but as shown above, this does not happen in my case.




The other parameters seem to have the same problem!
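
To separate the allocation itself from the ixgbe probe path (and from whatever gdb prints for frame #0), a minimal repro could call rte_zmalloc_socket directly with the values shown in frame #1 above. This is only a sketch; the name, size, alignment and socket values are copied from that backtrace:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>

/* Sketch: reproduce the exact allocation from frame #1 of the backtrace
 * (name "0000:51:00.0", size 30552, align 64, socket 5) outside of the
 * ixgbe probe path, to check whether the allocation itself fails. */
int
main(int argc, char **argv)
{
	void *p;

	if (rte_eal_init(argc, argv) < 0)
		return 1;

	p = rte_zmalloc_socket("0000:51:00.0", 30552, 64, 5);
	printf("rte_zmalloc_socket -> %p\n", p);
	rte_free(p);

	return rte_eal_cleanup();
}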




This confuses me. What happened? And is what happened above the cause of the error?




How can I fix it? Any clues?




It seems we need a DPDK developer's help :)




The same question is posted on Stack Overflow.




Thanks.



lannister
2311041380@qq.com






