DPDK usage discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: users@dpdk.org
Subject: Re: [dpdk-users] Failed to allocate tx pool
Date: Wed, 31 Oct 2018 09:45:14 -0700	[thread overview]
Message-ID: <20181031094514.0ea9fa08@xeon-e3> (raw)
In-Reply-To: <DA0D02C15AF8CF408F8A03D8FE5AD16CB10CFC71F5@SJ-EXCH-1.adaranet.com>

On Wed, 31 Oct 2018 09:33:34 -0700
Raghu Gangi <raghu_gangi@adaranetworks.com> wrote:

> Hi Cliff,
> 
> But I want to allocate memory only on the NUMA node where my lcores and DPDK NICs are connected.
> I think this gives the best performance when everything is connected to the same NUMA node.
> 
> Thanks,
> Raghu
> 
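For reference, a minimal sketch of what NUMA-local allocation can look like with the DPDK 2.2 API. This is not the application's actual dpdk_if_init() (quoted further down in the thread); the pool name and sizes are illustrative, and the point is only that the mbuf pool is created on whatever socket the port's PCI device reports:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static struct rte_mempool *
    create_tx_pool(uint8_t port_id)
    {
        /* Ask the PMD which NUMA node the device sits on
         * (socket 1 for the 82599 ports in this setup). */
        int socket_id = rte_eth_dev_socket_id(port_id);

        if (socket_id < 0)
            socket_id = SOCKET_ID_ANY;  /* unknown node: let the EAL pick */

        /* 8191 mbufs with a 250-mbuf per-lcore cache, backed by
         * hugepages reserved on 'socket_id'. */
        return rte_pktmbuf_pool_create("tx_pool", 8191, 250, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       socket_id);
    }

If the hugepages backing 'socket_id' are exhausted, this call typically fails with rte_errno set to ENOMEM (12), which matches the error reported further down.
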
> -----Original Message-----

> From: Raghu Gangi 
> Sent: Wednesday, October 31, 2018 9:31 AM
> To: Cliff Burdick
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] Failed to allocate tx pool
> 
> Hi Cliff,
> 
> Yes, I had tried it.
> 
> When I allocate memory on both NUMA nodes, it works without this issue.
> 
> I set it using the following commands:
> 
> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# echo 128 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# echo 128 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0#
> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> 128
> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> 128
> 
> 
> Thanks,
> Raghu
> 
> 
> >Date: Tue, 30 Oct 2018 18:56:45 -0700
> >From: Cliff Burdick <shaklee3@gmail.com>
> >To: Raghu Gangi <raghu_gangi@adaranetworks.com>
> >Cc: users <users@dpdk.org>
> >Subject: Re: [dpdk-users] Failed to allocate tx pool
> >Message-ID:
> >	<CA+Gp1nZY1xM_qGdBcrvnuUPc2duDwQH5dTdSRbuwCLgXwf_=Sg@mail.gmail.com>
> >Content-Type: text/plain; charset="UTF-8"
> >
> >Have you tried allocating memory on both numa nodes to rule that out?
> >
> >
> >On Tue, Oct 30, 2018, 16:40 Raghu Gangi <raghu_gangi@adaranetworks.com>
> >wrote:
> >  
> >> Hi,
> >>
> >> I am currently facing an issue in bringing up a DPDK application.
> >> It is failing with the following message, and rte_errno is set to
> >> 12 (out of memory) in this scenario.
> >>
> >> It would be great if you could point out what I am doing
> >> incorrectly.
> >>
> >> I am using DPDK version 2.2.0 on Ubuntu 16.04.
> >>
> >> EAL: PCI device 0000:02:00.0 on NUMA socket 0
> >> EAL:   probe driver: 8086:1521 rte_igb_pmd
> >> EAL:   Not managed by a supported kernel driver, skipped
> >> EAL: PCI device 0000:02:00.3 on NUMA socket 0
> >> EAL:   probe driver: 8086:1521 rte_igb_pmd
> >> EAL:   Not managed by a supported kernel driver, skipped
> >> EAL: PCI device 0000:82:00.0 on NUMA socket 1
> >> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> >> EAL:   PCI memory mapped at 0x7fd1a7600000
> >> EAL:   PCI memory mapped at 0x7fd1a7640000
> >> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 3
> >> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> >> EAL: PCI device 0000:82:00.1 on NUMA socket 1
> >> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> >> EAL:   PCI memory mapped at 0x7fd1a7644000
> >> EAL:   PCI memory mapped at 0x7fd1a7684000
> >> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 4
> >> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
> >> RING: Cannot reserve memory
> >> dpdk_if_init:256: failed to allocate tx pool
> >>
> >> The DPDK-bound NICs are on NUMA socket 1.
> >>
> >> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# ./tools/dpdk_nic_bind.py --status
> >>
> >> Network devices using DPDK-compatible driver
> >> ============================================
> >> 0000:82:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe
> >> 0000:82:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe
> >>
> >> Network devices using kernel driver
> >> ===================================
> >> 0000:02:00.0 'I350 Gigabit Network Connection' if=eno1 drv=igb unused=igb_uio *Active*
> >> 0000:02:00.3 'I350 Gigabit Network Connection' if=eno2 drv=igb unused=igb_uio
> >>
> >> Other network devices
> >> =====================
> >> <none>
> >>
> >>
> >> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/bus/pci/devices/0000\:82\:00.0/numa_node
> >> 1
> >> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/bus/pci/devices/0000\:82\:00.1/numa_node
> >> 1
> >>
> >>
> >> DPDK huge pages are allocated on the same NUMA node 1 as shown below:
> >>
> >> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> >> 0
> >> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> >> 128
> >> root@rg2-14053:/home/adara/raghu_2/run#
> >>
> >> Output of CPU Layout tool:
> >>
> >> root@rg2-14053:/home/adara/raghu_2/run# ../dpdk-2.2.0/tools/cpu_layout.py
> >> ============================================================
> >> Core and Socket Information (as reported by '/proc/cpuinfo')
> >> ============================================================
> >> cores =  [0, 1, 2, 3, 4, 8, 9, 10, 11, 12]  sockets =  [0, 1]
> >>
> >>         Socket 0        Socket 1
> >>         --------        --------
> >> Core 0  [0, 20]         [10, 30]
> >> Core 1  [1, 21]         [11, 31]
> >> Core 2  [2, 22]         [12, 32]
> >> Core 3  [3, 23]         [13, 33]
> >> Core 4  [4, 24]         [14, 34]
> >> Core 8  [5, 25]         [15, 35]
> >> Core 9  [6, 26]         [16, 36]
> >> Core 10 [7, 27]         [17, 37]
> >> Core 11 [8, 28]         [18, 38]
> >> Core 12 [9, 29]         [19, 39]
> >>

The DPDK email etiquette is to not top-post.

In general, the DPDK drivers and libraries allocate memory on the same
NUMA node as the device they serve. Some resources, however, are
allocated on socket 0, the default socket.

In a NUMA environment you therefore need some memory on socket 0 and
the bulk of it on the same node as your device. Not all nodes need the
same amount of reserved memory.
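
As a concrete illustration, a minimal sketch of how that per-node split can be expressed at EAL initialisation with --socket-mem: a small reservation on socket 0 for the resources that default there, and the bulk on socket 1 where the 82599 ports live. This is not taken from Raghu's application; the function name, core mask, and sizes below are assumptions chosen only to match the layout in this thread.

    #include <rte_eal.h>

    int init_eal(void)
    {
        char *eal_argv[] = {
            "app",
            "-c", "0x3c00",          /* lcores 10-13, all on socket 1 (illustrative) */
            "-n", "4",               /* memory channels */
            "--socket-mem=64,192",   /* 64 MB on node 0, 192 MB on node 1 */
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        /* Returns a negative value if the requested per-node memory
         * cannot be reserved from the hugepages set up in sysfs. */
        return rte_eal_init(eal_argc, eal_argv);
    }

With 128 x 2 MB hugepages reserved on each node, as shown earlier in the thread, both amounts fit; node 0 only needs enough for the structures that default to socket 0.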
