From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 1280] rte_mempool_create returning error "EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list"
Date: Fri, 25 Aug 2023 13:12:05 +0000

https://bugs.dpdk.org/show_bug.cgi?id=1280
Bug ID: 1280
Summary: rte_mempool_create returning error "EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list"
Product: DPDK
Version: 21.11
Hardware: All
OS: Linux
Status: UNCONFIRMED
Severity: major
Priority: Normal
Component: other
Assignee: dev@dpdk.org
Reporter: pingtosiva@gmail.com
Target Milestone: ---

Created attachment 258 (https://bugs.dpdk.org/attachment.cgi?id=258&action=edit):
/proc/<pid>/maps output

Spawned a VM with 188 GB of RAM, and configured the huge page size as 1 GB
and the number of huge pages as 100. When allocating a mempool with
rte_mempool_create from the testpmd process, once 64 huge pages have been
allocated the call returns the error message below:

"EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list"

Although 35 huge pages were still available, mempool creation failed.

The call below consumes one huge page:

rte_mempool_create("test6", 1048576, 4096, 512, 0, 0, 0, 0, 0,
                   (int)rte_socket_id(), 0)

After 64 huge pages have been created, this call starts to fail. It keeps
failing even after tweaking the number of pools, the element size, etc.

Any help on this would be appreciated.
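
For what it's worth, a minimal reproducer sketch (only the
rte_mempool_create arguments above are taken from this report; the EAL
init, the loop, the pool naming, and the rte_errno reporting are
illustrative assumptions, not the exact code that was run):

  /* Sketch: create mempools in a loop until allocation fails, then print
   * the EAL error. Arguments mirror the call quoted in this report. */
  #include <stdio.h>

  #include <rte_eal.h>
  #include <rte_errno.h>
  #include <rte_lcore.h>
  #include <rte_mempool.h>

  int main(int argc, char **argv)
  {
          if (rte_eal_init(argc, argv) < 0) {
                  fprintf(stderr, "rte_eal_init failed\n");
                  return -1;
          }

          for (int i = 0; ; i++) {
                  char name[RTE_MEMPOOL_NAMESIZE];
                  snprintf(name, sizeof(name), "test%d", i);

                  struct rte_mempool *mp = rte_mempool_create(name,
                                  1048576,       /* number of elements */
                                  4096,          /* element size */
                                  512,           /* per-lcore cache size */
                                  0,             /* private data size */
                                  NULL, NULL,    /* pool constructor + arg */
                                  NULL, NULL,    /* object init + arg */
                                  (int)rte_socket_id(), 0);
                  if (mp == NULL) {
                          printf("mempool #%d failed: %s\n",
                                 i, rte_strerror(rte_errno));
                          break;
                  }
                  printf("mempool #%d created\n", i);
          }

          rte_eal_cleanup();
          return 0;
  }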

Details
========

Issue platforms: ESXi, cn98xx
Huge page size: 1 GB
Number of huge pages: 100
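
Possibly relevant background: the memseg lists named in the error are
sized by build-time constants in DPDK's config/rte_config.h. The values
below are the upstream defaults as far as I know and should be verified
against the actual 21.11 tree; if the 64 GB per-type cap applies here, it
would line up with allocations failing once 64 x 1 GB pages are in use:

  /* Assumed defaults from config/rte_config.h (please verify); they bound
   * how much memory the memseg lists can track per page-size/socket type. */
  #define RTE_MAX_MEMSEG_LISTS 128       /* max number of memseg lists */
  #define RTE_MAX_MEMSEG_PER_LIST 8192   /* max segments per list */
  #define RTE_MAX_MEM_MB_PER_LIST 32768  /* max MB per list (32 GB) */
  #define RTE_MAX_MEMSEG_PER_TYPE 32768  /* max segments per type */
  #define RTE_MAX_MEM_MB_PER_TYPE 65536  /* max MB per type (64 GB) */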

Command output
==============

gigamon@gigavue-vseries-node:~$ cat /proc/meminfo
MemTotal:       197867396 kB
MemFree:        91528492 kB
MemAvailable:   91268008 kB
Buffers:           84376 kB
Cached:           722624 kB
SwapCached:            0 kB
Active:           331440 kB
Inactive:         646316 kB
Active(anon):       1316 kB
Inactive(anon):   170872 kB
Active(file):     330124 kB
Inactive(file):   475444 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:        170764 kB
Mapped:           170252 kB
Shmem:              1432 kB
KReclaimable:      58308 kB
Slab:             104180 kB
SReclaimable:      58308 kB
SUnreclaim:        45872 kB
KernelStack:        3312 kB
PageTables:         3392 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    46402496 kB
Committed_AS:     701964 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       23732 kB
VmallocChunk:          0 kB
Percpu:             1296 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
FileHugePages:         0 kB
FilePmdMapped:         0 kB
HugePages_Total:     100
HugePages_Free:       36
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:        105062400 kB
DirectMap4k:      143232 kB
DirectMap2M:     4050944 kB
DirectMap1G:    199229440 kB


You are receiving this mail because:
  • You are the assignee for the bug.