From: Bruce Richardson <bruce.richardson@intel.com>
To: ChenXiaodong <ch.xd@live.cn>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] problem: Cannot allocate memzone for ethernet port data
Date: Wed, 22 Apr 2015 14:34:17 +0100
Message-ID: <20150422133416.GA8676@bricha3-MOBL3>
In-Reply-To: <SNT148-W7973933D48CAD040A45339F6EE0@phx.gbl>
On Wed, Apr 22, 2015 at 07:36:59PM +0800, ChenXiaodong wrote:
> Thanks for your reply! There is actually only one NUMA node. Here is what the commands return:
>
Hi,
There seems to be a mismatch between the NUMA node the memory is reported
as being on (node 0) and the NUMA node the cores are detected as being on
(socket 1, as reported by the physical package id). That's why the app is
failing: it's looking for memory on the wrong NUMA node.
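For reference, the failing path boils down to roughly the following (a
shortened paraphrase of the DPDK 2.0 rte_ethdev.c code, for illustration
only, not the verbatim source):

    const struct rte_memzone *mz;

    /* The port-data memzone is reserved on the socket of the calling
     * (master) lcore. With the cores mis-detected as socket 1 and all
     * hugepage memory on node 0, nothing can satisfy the request. */
    mz = rte_memzone_reserve("rte_eth_dev_data",
            RTE_MAX_ETHPORTS * sizeof(struct rte_eth_dev_data),
            rte_socket_id(), /* returns 1 here, but memory is on node 0 */
            0);
    if (mz == NULL)
        rte_panic("Cannot allocate memzone for ethernet port data\n");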
Can you try applying the patch I've just posted, which changes how we detect
the NUMA node of a core, and see if it helps in your case?
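To illustrate what's changing, here is a minimal sketch of the detection
logic in question (illustrative only, not the actual EAL code or the patch;
core_to_numa_node() is a hypothetical helper). The EAL first looks for a
numa-node entry under the core's sysfs directory, and when none exists (as
on your system) it falls back to physical_package_id, which here reports
socket 1 even though the memory is on node 0:

    #include <stdio.h>
    #include <unistd.h>

    #define MAX_NUMA_NODES 8

    static int
    core_to_numa_node(unsigned int core)
    {
        char path[128];
        unsigned int node;
        FILE *f;
        int pkg = -1;

        /* Preferred: a node<N> entry under the core's sysfs directory. */
        for (node = 0; node < MAX_NUMA_NODES; node++) {
            snprintf(path, sizeof(path),
                "/sys/devices/system/cpu/cpu%u/node%u", core, node);
            if (access(path, F_OK) == 0)
                return (int)node;
        }

        /* Fallback: the CPU package (socket) id, which need not match
         * the NUMA node the memory actually lives on. */
        snprintf(path, sizeof(path),
            "/sys/devices/system/cpu/cpu%u/topology/physical_package_id",
            core);
        f = fopen(path, "r");
        if (f == NULL)
            return -1;
        if (fscanf(f, "%d", &pkg) != 1)
            pkg = -1;
        fclose(f);
        return pkg;
    }

    int
    main(void)
    {
        printf("lcore 0 -> numa node %d\n", core_to_numa_node(0));
        return 0;
    }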
thanks,
/Bruce
> [root@test dpdk-2.0.0]# tools/cpu_layout.py
> ============================================================
> Core and Socket Information (as reported by '/proc/cpuinfo')
> ============================================================
>
> cores = [0, 1, 2, 3]
> sockets = [1]
>
> Socket 1
> --------
> Core 0 [0, 4]
>
> Core 1 [1, 5]
>
> Core 2 [2, 6]
>
> Core 3 [3, 7]
>
> [root@test dpdk-2.0.0]# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> 1024
>
> > From: pablo.de.lara.guarch@intel.com
> > To: ch.xd@live.cn; dev@dpdk.org
> > Subject: RE: [dpdk-dev] problem: Cannot allocate memzone for ethernet port data
> > Date: Wed, 22 Apr 2015 11:01:08 +0000
> >
> > Hi,
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of ChenXiaodong
> > > Sent: Wednesday, April 22, 2015 11:33 AM
> > > To: dev@dpdk.org
> > > Subject: [dpdk-dev] problem: Cannot allocate memzone for ethernet port
> > > data
> > >
> > > Hi, everyone
> > >
> > > Is there anyone who can help me? I'm new to DPDK, and here is a problem
> > > that I don't know how to deal with. The program is running on a machine
> > > with two Intel 82576 NICs. One of the NICs is bound to the uio_pci_generic
> > > driver. 1024 2MB hugepages have been set up.
> > >
> > > Any suggestions about the cause of the problem, or solutions to it, would
> > > be appreciated!
> >
> > Could you tell us what these commands return?
> >
> > ./tools/cpu_layout.py
> > cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> > cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> >
> > (assuming you have 2 NUMA nodes)
> >
> > Pablo
> > >
> > > Here is the output of the program:
> > >
> > > [root@test dpdk-2.0.0]# build/app/testpmd -c ff -n 4 --socket-mem=1024
> > > EAL: Cannot read numa node link for lcore 0 - using physical package id
> > > instead
> > > EAL: Detected lcore 0 as core 0 on socket 1
> > > EAL: Cannot read numa node link for lcore 1 - using physical package id
> > > instead
> > > EAL: Detected lcore 1 as core 1 on socket 1
> > > EAL: Cannot read numa node link for lcore 2 - using physical package id
> > > instead
> > > EAL: Detected lcore 2 as core 2 on socket 1
> > > EAL: Cannot read numa node link for lcore 3 - using physical package id
> > > instead
> > > EAL: Detected lcore 3 as core 3 on socket 1
> > > EAL: Cannot read numa node link for lcore 4 - using physical package id
> > > instead
> > > EAL: Detected lcore 4 as core 0 on socket 1
> > > EAL: Cannot read numa node link for lcore 5 - using physical package id
> > > instead
> > > EAL: Detected lcore 5 as core 1 on socket 1
> > > EAL: Cannot read numa node link for lcore 6 - using physical package id
> > > instead
> > > EAL: Detected lcore 6 as core 2 on socket 1
> > > EAL: Cannot read numa node link for lcore 7 - using physical package id
> > > instead
> > > EAL: Detected lcore 7 as core 3 on socket 1
> > > EAL: Support maximum 128 logical core(s) by configuration.
> > > EAL: Detected 8 lcore(s)
> > > EAL: Setting up memory...
> > > EAL: Ask a virtual area of 0x8000000 bytes
> > > EAL: Virtual area found at 0x7f83fae00000 (size = 0x8000000)
> > > EAL: Ask a virtual area of 0x38000000 bytes
> > > EAL: Virtual area found at 0x7f83c2c00000 (size = 0x38000000)
> > > EAL: Requesting 512 pages of size 2MB from socket 0
> > > EAL: TSC frequency is ~2394002 KHz
> > > EAL: WARNING: Master core has no memory on local socket!
> > > EAL: Cannot read numa node link for lcore 0 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 0 - using physical package id
> > > instead
> > > EAL: Master lcore 0 is ready (tid=319e880;cpuset=[0])
> > > PMD: ENICPMD trace: rte_enic_pmd_init
> > > EAL: Cannot read numa node link for lcore 4 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 5 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 1 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 1 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 2 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 3 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 5 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 6 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 7 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 2 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 7 - using physical package id
> > > instead
> > > EAL: lcore 1 is ready (tid=c1ff8700;cpuset=[1])
> > > EAL: lcore 2 is ready (tid=c15f7700;cpuset=[2])
> > > EAL: Cannot read numa node link for lcore 4 - using physical package id
> > > instead
> > > EAL: Cannot read numa node link for lcore 3 - using physical package id
> > > instead
> > > EAL: lcore 5 is ready (tid=bf7f4700;cpuset=[5])
> > > EAL: Cannot read numa node link for lcore 6 - using physical package id
> > > instead
> > > EAL: lcore 7 is ready (tid=be3f2700;cpuset=[7])
> > > EAL: lcore 3 is ready (tid=c0bf6700;cpuset=[3])
> > > EAL: lcore 6 is ready (tid=bedf3700;cpuset=[6])
> > > EAL: lcore 4 is ready (tid=c01f5700;cpuset=[4])
> > > EAL: PCI device 0000:07:00.0 on NUMA socket -1
> > > EAL: probe driver: 8086:10c9 rte_igb_pmd
> > > EAL: PCI memory mapped at 0x7f8402e00000
> > > EAL: PCI memory mapped at 0x7f83bd5f2000
> > > EAL: PCI memory mapped at 0x7f84031ba000
> > > PANIC in rte_eth_dev_data_alloc():
> > > Cannot allocate memzone for ethernet port data
> > > 8: [build/app/testpmd() [0x4285d9]]
> > > 7: [/lib64/libc.so.6(__libc_start_main+0xfd) [0x3d18e1ecdd]]
> > > 6: [build/app/testpmd(main+0x18) [0x42dce8]]
> > > 5: [build/app/testpmd(rte_eal_init+0xab8) [0x4982c8]]
> > > 4: [build/app/testpmd(rte_eal_pci_probe+0xe2) [0x4a27e2]]
> > > 3: [build/app/testpmd() [0x491ea7]]
> > > 2: [build/app/testpmd(__rte_panic+0xc0) [0x428520]]
> > > 1: [build/app/testpmd(rte_dump_stack+0x1e) [0x49e91e]]
> > >
> > >
>