From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [dpdk-dev] [Bug 800] rte_malloc fails to allocate memory block, but succeeds allocating bigger block
Date: Mon, 30 Aug 2021 17:16:15 +0000	[thread overview]
Message-ID: <bug-800-3@http.bugs.dpdk.org/> (raw)

https://bugs.dpdk.org/show_bug.cgi?id=800

            Bug ID: 800
           Summary: rte_malloc fails to allocate memory block, but
                    succeeds allocating bigger block
           Product: DPDK
           Version: 18.11
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: core
          Assignee: dev@dpdk.org
          Reporter: maaaah@mail.ru
  Target Milestone: ---

rte_malloc succeeds in allocating a large memory block, but fails to
allocate a smaller one (both attempts on a "clean" heap). While the
behavior can probably be explained, it looks counterintuitive. It would
be good to have at least some hint on how to guarantee allocation
success.


Machine specs:

Virtualbox VM
2 CPU
8G RAM

The issue can also be reproduced on physical hardware; I can provide specs if needed.


OS specs:

# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

# uname -a
Linux test 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64
x86_64 x86_64 GNU/Linux


Hugepages config:

# fgrep GRUB_CMDLINE_LINUX /etc/default/grub
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root
rd.lvm.lv=centos/swap rhgb quiet default_hugepagesz=2M hugepagesz=2M
hugepages=2560"

# fgrep HugePages_ /proc/meminfo
HugePages_Total:    2560
HugePages_Free:     2560
HugePages_Rsvd:        0
HugePages_Surp:        0

(2560 hugepages x 2 MB = 5120 MB of hugepage memory in total.)



DPDK 18.11.11 (LTS)

Built using make:

# curl https://fast.dpdk.org/rel/dpdk-18.11.11.tar.xz -O
# tar -xf dpdk-18.11.11.tar.xz
# cd dpdk-stable-18.11.11
# make T=x86_64-native-linuxapp-gcc config
# make
# make install


Steps to reproduce:

#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_malloc.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char** argv) {
        int result = rte_eal_init(argc, argv);
        if (result < 0) {
                printf("failed to init EAL: %s (%i)\n", strerror(rte_errno),
rte_errno);
                return result;
        }

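        /* 0xffffffff bytes = 4 GiB - 1 byte; this call fails (see "Actual result" below) */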
        void *p = rte_malloc(NULL, 0xffffffff, 0);
        if (p) {
                printf("success\n");

                rte_free(p);
        } else {
                printf("failure\n");
        }

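        /* 0x100000000 bytes = 4 GiB; this larger request succeeds */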
        p = rte_malloc(NULL, 0x100000000, 0);
        if (p) {
                printf("success\n");

                rte_free(p);
        } else {
                printf("failure\n");
        }
        return 0;
}


DPDK_HOME=/usr/local/share/dpdk/ RTE_SDK=/usr/local/share/dpdk/ make
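
The Makefile behind this make invocation is not included here; a
minimal DPDK 18.11 external-application Makefile along the following
lines is assumed (the APP name, source file name and CFLAGS are only
illustrative):

# External-app Makefile for the DPDK 18.11 make build system
ifeq ($(RTE_SDK),)
$(error "Please define RTE_SDK environment variable")
endif

# Build target; matches the T= value used when building DPDK itself
RTE_TARGET ?= x86_64-native-linuxapp-gcc

include $(RTE_SDK)/mk/rte.vars.mk

# binary name (matches the run command below)
APP = test-malloc

# assumed source file name
SRCS-y := test-malloc.c

CFLAGS += -O3 $(WERROR_FLAGS)

include $(RTE_SDK)/mk/rte.extapp.mk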

./test-malloc --log-level 0


Expected result:

success
success


Actual result:

failure
success

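For what it's worth, the heap state right before the failing call can
be inspected with rte_malloc_get_socket_stats() (present in 18.11). A
minimal sketch of such a check, relying on the <rte_malloc.h> and
<stdio.h> includes already present in the test program (the helper name
and socket index are arbitrary):

/* Sketch: print heap statistics for one socket before calling
 * rte_malloc(), to see how much free space and how large a contiguous
 * free block the heap reports. */
static void dump_heap_stats(int socket)
{
        struct rte_malloc_socket_stats stats;

        if (rte_malloc_get_socket_stats(socket, &stats) == 0)
                printf("heap total %zu, free %zu, largest free block %zu\n",
                       stats.heap_totalsz_bytes,
                       stats.heap_freesz_bytes,
                       stats.greatest_free_size);
}
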
-- 
You are receiving this mail because:
You are the assignee for the bug.
