DPDK usage discussions
From: Sarthak Ray <sarthak_ray@outlook.com>
To: "users@dpdk.org" <users@dpdk.org>
Subject: [dpdk-users] Mempool allocation fails on first boot, but succeeds after system reboot
Date: Wed, 17 Aug 2016 13:10:35 +0000
Message-ID: <MAXPR01MB031505ED7323F2170688600E9F140@MAXPR01MB0315.INDPRD01.PROD.OUTLOOK.COM>

Hi,

I am using DPDK 2.1.0 on a platform appliance, where I am facing an issue with mempool allocation.

On the first boot of a newly installed appliance, my DPDK application fails to come up, reporting an mbuf pool allocation failure on socket 0. But once I reboot the system, it comes up without any issues.
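
For context, the allocation that fails is roughly the following (a minimal sketch; the pool name, mbuf count, and buffer size are illustrative placeholders, not my exact values):

#include <stdio.h>
#include <rte_errno.h>
#include <rte_mbuf.h>

/* Sketch: create an mbuf pool backed by hugepage memory on socket 0.
 * NB_MBUFS and the data room size below are placeholders. */
#define NB_MBUFS 65536

static struct rte_mempool *
create_pool_socket0(void)
{
        struct rte_mempool *mp;

        mp = rte_pktmbuf_pool_create("mbuf_pool_s0", NB_MBUFS,
                        256,   /* per-lcore cache size */
                        0,     /* private data size */
                        RTE_MBUF_DEFAULT_BUF_SIZE,
                        0);    /* socket id */
        if (mp == NULL)
                printf("mbuf pool allocation failed on socket 0: %s\n",
                                rte_strerror(rte_errno));
        return mp;
}

As I understand it, the pool is backed by one large memzone reservation, so it needs a single contiguous block; that is why the Greatest_free_size value below matters.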

I tried "rte_malloc_dump_stats" api to check the heap statistics right before allocating mbuf pools.

Heap Statistics on first boot (with --socket-mem=128,128)
Socket:0
    Heap_size:134215808,
    Free_size:127706432,
    Alloc_size:6509376,
    Greatest_free_size:8388544, // much smaller than the contiguous block my app is trying to allocate
    Alloc_count:29,
    Free_count:31,

Please note: increasing the --socket-mem value from 128 to 192 has no impact on the Greatest_free_size value, and I don't see this fragmentation on socket 1.

Heap Statistics after reboot (with --socket-mem=128,128)
Socket:0
    Heap_size:134217600,
    Free_size:127708224,
    Alloc_size:6509376,
    Greatest_free_size:125982080,
    Alloc_count:29,
    Free_count:3,

After the reboot, the largest free block is drastically bigger and the mbuf pool allocation succeeds. So this looks like a heap fragmentation issue on socket 0.

Output of "numactl -H" on my sytem
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
node 0 size: 65170 MB
node 0 free: 49476 MB
node 1 cpus: 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
node 1 size: 65536 MB
node 1 free: 50759 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

Kernel boot arguments for the hugepage setup:
hugepagesz=1g hugepages=24

Can anyone please comment on how to address this issue? Is there any way to reserve hugepages such that they cannot be fragmented?

Thanks in advance for any suggestions.

Regards,
Sarthak
