DPDK patches and discussions
From: "De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>
To: Raja Jayapal <raja.jayapal@tcs.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: Rafat Jahan <rafat.jahan@tcs.com>,
	Nagaratna Patagar <nagaratna.patagar@tcs.com>
Subject: Re: [dpdk-dev] l3fwd LPM memory allocation failed
Date: Mon, 1 Aug 2016 15:36:25 +0000	[thread overview]
Message-ID: <E115CCD9D858EF4F90C690B0DCB4D8973C9AA0F2@IRSMSX108.ger.corp.intel.com> (raw)
In-Reply-To: <OF6F7B65DE.BD7A2953-ON65258002.00332D7C-65258002.0034588D@tcs.com>

Hi Raja,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Raja Jayapal
> Sent: Monday, August 01, 2016 2:32 AM
> To: dev@dpdk.org
> Cc: Rafat Jahan; Nagaratna Patagar
> Subject: [dpdk-dev] l3fwd LPM memory allocation failed
> 
> Hi All,
> 
> I have installed dpdk-2.2.0 on VM and when i try to run l3fwd sample
> application, facing the below memory error.
> 
> root@tcs-Standard-PC-i440FX-PIIX-1996:/home/tcs/Downloads/dpdk-
> 2.2.0/examples/l3fwd# ./build/l3fwd -c 0x1 -n 1 -- -p 0x3 --
> config="(0,0,0),(1,0,0)"
> EAL: Detected lcore 0 as core 0 on socket 0
> EAL: Detected lcore 1 as core 0 on socket 0
> EAL: Support maximum 128 logical core(s) by configuration.
> EAL: Detected 2 lcore(s)
> EAL: VFIO modules not all loaded, skip VFIO support...
> EAL: Setting up physically contiguous memory...
> EAL: Ask a virtual area of 0x600000 bytes
> EAL: Virtual area found at 0x7f2f9a800000 (size = 0x600000)
> EAL: Ask a virtual area of 0xc00000 bytes
> EAL: Virtual area found at 0x7f2f99a00000 (size = 0xc00000)
> EAL: Ask a virtual area of 0x400000 bytes

[...]

> EAL: Virtual area found at 0x7f2f75200000 (size = 0x4600000)
> EAL: Ask a virtual area of 0x600000 bytes
> EAL: Virtual area found at 0x7f2f74a00000 (size = 0x600000)
> EAL: Ask a virtual area of 0xa00000 bytes
> EAL: Virtual area found at 0x7f2f73e00000 (size = 0xa00000)
> EAL: Requesting 263 pages of size 2MB from socket 0
> EAL: TSC frequency is ~3092976 KHz
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
> unreliable clock cycles !
> EAL: Master lcore 0 is ready (tid=9cb32940;cpuset=[0])
> EAL: PCI device 0000:00:03.0 on NUMA socket -1
> EAL:   probe driver: 8086:100e rte_em_pmd
> EAL:   PCI memory mapped at 0x7f2f9ae00000
> PMD: eth_em_dev_init(): port_id 0 vendorID=0x8086 deviceID=0x100e
> EAL: PCI device 0000:00:07.0 on NUMA socket -1
> EAL:   probe driver: 8086:100e rte_em_pmd
> EAL:   PCI memory mapped at 0x7f2f9ae20000
> PMD: eth_em_dev_init(): port_id 1 vendorID=0x8086 deviceID=0x100e
> EAL: PCI device 0000:00:08.0 on NUMA socket -1
> EAL:   probe driver: 8086:100e rte_em_pmd
> EAL:   PCI memory mapped at 0x7f2f9ae40000
> PMD: eth_em_dev_init(): port_id 2 vendorID=0x8086 deviceID=0x100e
> EAL: PCI device 0000:00:09.0 on NUMA socket -1
> EAL:   probe driver: 8086:100e rte_em_pmd
> EAL:   PCI memory mapped at 0x7f2f9ae60000
> PMD: eth_em_dev_init(): port_id 3 vendorID=0x8086 deviceID=0x100e
> Initializing port 0 ... Creating queues: nb_rxq=1 nb_txq=1...
> Address:52:54:00:0D:AF:AF, Destination:02:00:00:00:00:00, Allocated mbuf
> pool on socket 0
> LPM: Adding route 0x01010100 / 24 (0)
> LPM: Adding route 0x02010100 / 24 (1)
> LPM: LPM memory allocation failed
> EAL: Error - exiting with code: 1
> Cause: Unable to create the l3fwd LPM table on socket 0
> 
> 
> As mentioned in previous dpdk threads, I tried changing the number of
> hugepages to 1024 as well:
> http://dpdk.org/ml/archives/dev/2014-November/007770.html
> http://dpdk.org/ml/archives/users/2015-November/000066.html
> echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-
> 2048kB/nr_hugepages
> echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-
> 2048kB/nr_hugepages
> 
> I also tried setting the hugepages through ./tools/setup.sh (1024, 4096, ...),
> but I am facing the same error.
> 
> Could somebody help how to resolve this issue?

It looks like your memory is too fragmented. You should reboot your machine and
reserve the hugepages as soon as possible after boot, or better still, pass the
reservation as a kernel boot parameter by appending "hugepages=1024" to the end
of the kernel command line in grub.cfg, to reserve 1024 pages of 2 MB.
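A minimal sketch of that boot-time reservation, assuming a GRUB2-based distro
(the exact file names may differ on your system), plus commands to verify the
reservation afterwards:

```shell
# Sketch, assuming GRUB2: append "hugepages=1024" to the existing kernel
# command line in /etc/default/grub, e.g.
#
#   GRUB_CMDLINE_LINUX_DEFAULT="... hugepages=1024"
#
# then regenerate the GRUB config and reboot:
#
#   sudo update-grub && sudo reboot

# After reboot, verify that the kernel actually reserved the pages:
grep -i huge /proc/meminfo
```

Reserving at boot avoids the fragmentation problem entirely, because the kernel
sets the pages aside before user-space allocations can fragment physical memory.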

Btw, next time, for this kind of question, please use the users mailing list, users@dpdk.org.

Best regards,
Pablo

> 
> Thanks,
> Raja
> 
> =====-----=====-----=====
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
> 

