DPDK usage discussions
* [dpdk-users] rte_eal_init fails with --socket-mem set to more than 192MB per socket
@ 2016-08-31 19:15 Sarthak Ray
  2016-08-31 19:58 ` Wiles, Keith
  0 siblings, 1 reply; 3+ messages in thread
From: Sarthak Ray @ 2016-08-31 19:15 UTC (permalink / raw)
  To: users

Hi,

I am using dpdk-2.1.0 and I am not able to reserve memory beyond 192MB per socket using the --socket-mem option. I see the error logs below, even though my system has enough free memory.

EAL: Not enough memory available on socket 0! Requested: 256MB, available: 192MB
PANIC in rte_eal_init():
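
For reference, the application is launched with --socket-mem split across both sockets, roughly like this (the app name, core mask and exact sizes here are only illustrative):

# ./my_dpdk_app -c 0xff -n 4 --socket-mem 256,256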

# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
node 0 size: 65170 MB
node 0 free: 47433 MB
node 1 cpus: 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
node 1 size: 65536 MB
node 1 free: 49999 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

Is there any maximum limit on the amount of memory that can be reserved per socket? If yes, please suggest how to increase that limit.

Thanks in advance,
Sarthak


* Re: [dpdk-users] rte_eal_init fails with --socket-mem set to more than 192MB per socket
  2016-08-31 19:15 [dpdk-users] rte_eal_init fails with --socket-mem set to more than 192MB per socket Sarthak Ray
@ 2016-08-31 19:58 ` Wiles, Keith
  2016-09-01  6:46   ` Sarthak Ray
  0 siblings, 1 reply; 3+ messages in thread
From: Wiles, Keith @ 2016-08-31 19:58 UTC (permalink / raw)
  To: Sarthak Ray; +Cc: users


Regards,
Keith

> On Aug 31, 2016, at 2:15 PM, Sarthak Ray <sarthak_ray@outlook.com> wrote:
> 
> Hi,
> 
> I am using dpdk-2.1.0 and I am not able to reserve memory beyond 192MB per socket using the --socket-mem option. I see the error logs below, even though my system has enough free memory.
> 
> EAL: Not enough memory available on socket 0! Requested: 256MB, available: 192MB
> PANIC in rte_eal_init():

Most of the time this means that contiguous memory is not available and you have fragmented huge pages. The normal fix is to make sure you allocate the huge pages early in boot-up, which to me means making sure you have /etc/sysctl.conf set up with the number of huge pages:

vm.nr_hugepages=NNN
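
For example, a sketch with a placeholder page count (assuming 2MB pages on a two-node box); the sysfs files also let you confirm the pages actually landed on each NUMA node:

# grep vm.nr_hugepages /etc/sysctl.conf
vm.nr_hugepages=1024
# sysctl -p
# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages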

> 
> # numactl -H
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
> node 0 size: 65170 MB
> node 0 free: 47433 MB
> node 1 cpus: 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
> node 1 size: 65536 MB
> node 1 free: 49999 MB
> node distances:
> node   0   1
>  0:  10  21
>  1:  21  10
> 
> Is there any maximum limit on the amount of memory that can be reserved per socket? If yes, please suggest how to increase that limit.
> 
> Thanks in advance,
> Sarthak


* Re: [dpdk-users] rte_eal_init fails with --socket-mem set to more than 192MB per socket
  2016-08-31 19:58 ` Wiles, Keith
@ 2016-09-01  6:46   ` Sarthak Ray
  0 siblings, 0 replies; 3+ messages in thread
From: Sarthak Ray @ 2016-09-01  6:46 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

Hi Keith,


Thanks for your response, but I am reserving enough hugepages at boot time so that user space can't fragment that memory.


System boot arguments related to hugepages

hugepagesz=1g hugepages=24 hugepagesz=2m hugepages=1024
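
The per-size and per-node counts can be cross-checked against those boot arguments through the standard sysfs files (the paths below assume the usual layout for 1GB and 2MB pages):

# cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
# cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# grep . /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages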


Memory status before my application starts.

# cat /proc/meminfo
MemTotal:       131990816 kB
MemFree:        103888408 kB
Buffers:             220 kB
Cached:           326596 kB
SwapCached:            0 kB
Active:           619104 kB
Inactive:         309796 kB
Active(anon):     602108 kB
Inactive(anon):      952 kB
Active(file):      16996 kB
Inactive(file):   308844 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:              2252 kB
Writeback:             0 kB
AnonPages:        601976 kB
Mapped:            41560 kB
Shmem:               976 kB
Slab:             130140 kB
SReclaimable:      69672 kB
SUnreclaim:        60468 kB
KernelStack:        6104 kB
PageTables:        10320 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    52888208 kB
Committed_AS:    1771680 kB
VmallocTotal:   34359738367 kB
VmallocUsed:     1310168 kB
VmallocChunk:   34291387912 kB
HugePages_Total:     512
HugePages_Free:      512
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       12480 kB
DirectMap2M:     1906688 kB
DirectMap1G:    134217728 kB


Memory status after my application exits with rte_eal_init failure.

# cat /proc/meminfo
MemTotal:       131990816 kB
MemFree:        99747864 kB
Buffers:             240 kB
Cached:           440156 kB
SwapCached:            0 kB
Active:           839728 kB
Inactive:         327164 kB
Active(anon):     820448 kB
Inactive(anon):     1920 kB
Active(file):      19280 kB
Inactive(file):   325244 kB
Unevictable:     3704100 kB
Mlocked:         3704100 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:              7832 kB
Writeback:             0 kB
AnonPages:       4433016 kB
Mapped:           135536 kB
Shmem:              9088 kB
Slab:             133932 kB
SReclaimable:      71568 kB
SUnreclaim:        62364 kB
KernelStack:        7344 kB
PageTables:        20876 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    52798096 kB
Committed_AS:    5902824 kB
VmallocTotal:   34359738367 kB
VmallocUsed:     1310168 kB
VmallocChunk:   34291387912 kB
HugePages_Total:     600
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       12480 kB
DirectMap2M:     1906688 kB
DirectMap1G:    134217728 kB



Thanks,

Sarthak

________________________________
From: Wiles, Keith <keith.wiles@intel.com>
Sent: Thursday, September 1, 2016 1:28:36 AM
To: Sarthak Ray
Cc: users@dpdk.org
Subject: Re: [dpdk-users] rte_eal_init fails with --socket-mem set to more than 192MB per socket


Regards,
Keith

> On Aug 31, 2016, at 2:15 PM, Sarthak Ray <sarthak_ray@outlook.com> wrote:
>
> Hi,
>
> I am using dpdk-2.1.0 and I am not able to reserve memory beyond 192MB per socket using the --socket-mem option. I see the error logs below, even though my system has enough free memory.
>
> EAL: Not enough memory available on socket 0! Requested: 256MB, available: 192MB
> PANIC in rte_eal_init():

Most of the time this means that contiguous memory is not available and you have fragmented huge pages. The normal fix is to make sure you allocate the huge pages early in boot-up, which to me means making sure you have /etc/sysctl.conf set up with the number of huge pages:

vm.nr_hugepages=NNN

>
> # numactl -H
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
> node 0 size: 65170 MB
> node 0 free: 47433 MB
> node 1 cpus: 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
> node 1 size: 65536 MB
> node 1 free: 49999 MB
> node distances:
> node   0   1
>  0:  10  21
>  1:  21  10
>
> Is there any maximum limit on the amount of memory that can be reserved per socket? If yes, please suggest how to increase that limit.
>
> Thanks in advance,
> Sarthak


end of thread, other threads:[~2016-09-01  6:47 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-31 19:15 [dpdk-users] rte_eal_init fails with --socket-mem set to more than 192MB per socket Sarthak Ray
2016-08-31 19:58 ` Wiles, Keith
2016-09-01  6:46   ` Sarthak Ray
