DPDK patches and discussions
From: Kamaraj P <pkamaraj@gmail.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>
Cc: dev@dpdk.org, Nageswara Rao Penumarthy <npenumar@cisco.com>,
	"Kamaraj P (kamp)" <kamp@cisco.com>
Subject: Re: [dpdk-dev] CONFIG_RTE_MAX_MEM_MB fails in DPDK18.05
Date: Mon, 17 Feb 2020 15:27:09 +0530	[thread overview]
Message-ID: <CAG8PAara3w+QV0F4w+7mqZ2AJ1p4xgcLvcuPHtY62Ge3bcT-Jg@mail.gmail.com> (raw)
In-Reply-To: <5192f94a-e50a-7e61-2e33-a218a4b6b5b4@intel.com>

Hi Anatoly,
Thanks for the clarifications.

We are currently migrating to the new DPDK 18.11 (from 17.05). Here is
our configuration:
=======================================================================
We have configured the "--legacy-mem" option and changed
CONFIG_RTE_MAX_MEM_MB to 2048 (we are passing 188 x 2MB hugepages and no
1G hugepages in the bootargs).
Our application deployment has 2G of RAM.
=======================================================================
We are observing a hang issue with the above configuration.
Please see the logs below:
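For reference, the hugepage setup described above would look roughly like
this (a sketch only; the application name and the -l/-n values are
illustrative, not taken from our deployment):

```shell
# In the kernel bootargs (as described above): 188 x 2MB pages, no 1G pages.
#   default_hugepagesz=2M hugepagesz=2M hugepages=188

# Equivalent runtime reservation, if not done via bootargs:
echo 188 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# EAL invocation in legacy memory mode (application name illustrative):
./our-dpdk-app --legacy-mem -l 0 -n 2
```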
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 1 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: open shared lib /usr/lib64/librte_pmd_ixgbe.so.2.1
EAL: open shared lib /usr/lib64/librte_pmd_e1000.so.1.1
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or
directory)
EAL: VFIO PCI modules not loaded
EAL: No free hugepages reported in hugepages-1048576kB
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
EAL: VFIO modules not loaded, skipping VFIO support...
EAL: Ask a virtual area of 0x2e000 bytes
EAL: Virtual area found at 0x100000000 (size = 0x2e000)
EAL: Setting up physically contiguous memory...
EAL: Setting maximum number of open files to 4096
EAL: Detected memory type: socket_id:0 hugepage_sz:1073741824
EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
EAL: Creating 1 segment lists: n_segs:1 socket_id:0 hugepage_sz:1073741824
EAL: Ask a virtual area of 0x1000 bytes
EAL: Virtual area found at 0x10002e000 (size = 0x1000)
EAL: Memseg list allocated: 0x100000kB at socket 0
EAL: Ask a virtual area of 0x40000000 bytes
<<< --- stuck here --- >>>
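For what it's worth, the sizes in the log above look consistent with EAL
reserving roughly n_segs * hugepage_sz of virtual address space per memseg
list: the 0x40000000 (1G) ask at the point of the hang matches the
single-segment 1G-page list just created, and the default
CONFIG_RTE_MAX_MEMSEG_PER_LIST of 8192 with 2MB pages works out to 16G of
virtual address space per list. A quick arithmetic check (plain shell, not
DPDK code):

```shell
# One memseg list reserves roughly n_segs * hugepage_sz of virtual space.
# 1 segment of 1G -> the 0x40000000 ask in the log above:
printf '0x%x\n' $(( 1 * 1024 * 1024 * 1024 ))
# Default CONFIG_RTE_MAX_MEMSEG_PER_LIST (8192) with 2MB pages -> 16G of VA:
printf '0x%x\n' $(( 8192 * 2 * 1024 * 1024 ))
```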


Are there any other DPDK options through which we can resolve the above
issue? Any thoughts?
For example, would passing the *--socket-limit* and *-m* parameters during
EAL init help?
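For context, the options in question would be passed like this (a sketch;
the application name is illustrative, and whether these actually help in
legacy mode is exactly what we are asking):

```shell
# Pre-allocate a fixed amount of memory at init (-m takes MB):
./our-dpdk-app --legacy-mem -m 512

# Or, in dynamic memory mode, cap per-socket allocation instead:
./our-dpdk-app -m 512 --socket-limit 512
```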
Please advise.

Thanks,
Kamaraj

On Tue, Dec 10, 2019 at 3:53 PM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:

> On 07-Dec-19 5:01 PM, Kamaraj P wrote:
> > Hello All,
> >
> > Currently, we are facing an issue with a memory allocation failure
> > in memseg_primary_init().
> > We configured CONFIG_RTE_MAX_MEM_MB to 512MB and correspondingly
> > configured the number of huge pages for our platform, but the virtual
> > memory allocation is failing.
> >
> > It appears that it is trying to allocate CONFIG_RTE_MAX_MEMSEG_PER_LIST *
> > huge page size (i.e. 8192 * 2MB = 0x400000000), and this virtual
> > memory allocation is failing.
> >
> > We also tried changing CONFIG_RTE_MAX_MEMSEG_PER_LIST to 64, with which
> > the virtual memory allocation passes for 128MB (64 * 2MB). But it looks
> > like 128MB of memory is not enough, and it causes a PCIe enumeration
> > failure.
> > We are not able to allocate virtual memory beyond 128MB by increasing
> > CONFIG_RTE_MAX_MEMSEG_PER_LIST beyond 64.
> >
> > Are there any settings (arguments) we need to pass as part of
> > rte_eal_init() for the virtual memory allocation to succeed?
> > Please advise.
> >
> > Thanks,
> > Kamaraj
> >
>
> I don't think there are, as the allocator wasn't designed with such
> memory constrained use cases in mind. You may want to try --legacy-mem
> option.
>
> --
> Thanks,
> Anatoly
>


Thread overview: 13+ messages
2019-12-07 17:01 Kamaraj P
2019-12-10 10:23 ` Burakov, Anatoly
2020-02-17  9:57   ` Kamaraj P [this message]
2020-02-19 10:23     ` Burakov, Anatoly
2020-02-19 10:56       ` Kevin Traynor
2020-02-19 11:16         ` Kamaraj P
2020-02-19 14:23           ` Burakov, Anatoly
2020-02-19 15:02             ` Kamaraj P
2020-02-19 15:28               ` Burakov, Anatoly
2020-02-19 15:42                 ` Kamaraj P
2020-02-19 16:00                   ` Burakov, Anatoly
2020-02-19 16:20                     ` Kamaraj P
2020-02-20 10:02                       ` Burakov, Anatoly
