DPDK usage discussions
* [dpdk-users] Failed to allocate tx pool
@ 2018-10-30 23:39 Raghu Gangi
  2018-10-31  1:56 ` Cliff Burdick
  0 siblings, 1 reply; 5+ messages in thread
From: Raghu Gangi @ 2018-10-30 23:39 UTC (permalink / raw)
  To: users

Hi,

I am currently facing an issue bringing up my DPDK application. It fails with the following message, and rte_errno is set to 12 (ENOMEM, out of memory).

It would be great if you could point out what I am doing incorrectly.

I am using DPDK 2.2.0 on Ubuntu 16.04.

EAL: PCI device 0000:02:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:02:00.3 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7fd1a7600000
EAL:   PCI memory mapped at 0x7fd1a7640000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 3
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7fd1a7644000
EAL:   PCI memory mapped at 0x7fd1a7684000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 4
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
RING: Cannot reserve memory
dpdk_if_init:256: failed to allocate tx pool

The DPDK-bound NICs are on NUMA socket 1.

root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# ./tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:82:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe
0000:82:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe

Network devices using kernel driver
===================================
0000:02:00.0 'I350 Gigabit Network Connection' if=eno1 drv=igb unused=igb_uio *Active*
0000:02:00.3 'I350 Gigabit Network Connection' if=eno2 drv=igb unused=igb_uio

Other network devices
=====================
<none>


root@rg2-14053:/home/adara/raghu_2/run# cat /sys/bus/pci/devices/0000\:82\:00.0/numa_node
1
root@rg2-14053:/home/adara/raghu_2/run# cat /sys/bus/pci/devices/0000\:82\:00.1/numa_node
1


DPDK huge pages are allocated on the same NUMA node (node 1), as shown below:

root@rg2-14053:/home/adara/raghu_2/run# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
0
root@rg2-14053:/home/adara/raghu_2/run# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
128
root@rg2-14053:/home/adara/raghu_2/run#

Output of the CPU layout tool:

root@rg2-14053:/home/adara/raghu_2/run# ../dpdk-2.2.0/tools/cpu_layout.py
============================================================
Core and Socket Information (as reported by '/proc/cpuinfo')
============================================================
cores =  [0, 1, 2, 3, 4, 8, 9, 10, 11, 12]
sockets =  [0, 1]

        Socket 0        Socket 1
        --------        --------
Core 0  [0, 20]         [10, 30]

Core 1  [1, 21]         [11, 31]

Core 2  [2, 22]         [12, 32]

Core 3  [3, 23]         [13, 33]

Core 4  [4, 24]         [14, 34]

Core 8  [5, 25]         [15, 35]

Core 9  [6, 26]         [16, 36]

Core 10 [7, 27]         [17, 37]

Core 11 [8, 28]         [18, 38]

Core 12 [9, 29]         [19, 39]
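
For reference, the failing allocation in our code has roughly this shape (a minimal sketch; the real dpdk_if_init is not shown here, so the pool name, counts, and sizes below are placeholders):

#include <stdlib.h>
#include <rte_mbuf.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_debug.h>

static struct rte_mempool *
create_tx_pool(void)
{
    /* Illustrative values only, not our real configuration. */
    struct rte_mempool *p = rte_pktmbuf_pool_create(
            "tx_pool",                  /* pool name */
            8192,                       /* number of mbufs */
            256,                        /* per-lcore cache size */
            0,                          /* private data size */
            RTE_MBUF_DEFAULT_BUF_SIZE,  /* data room per mbuf */
            rte_socket_id());           /* NUMA socket of the calling core */
    if (p == NULL)
        rte_exit(EXIT_FAILURE, "failed to allocate tx pool: %s\n",
                 rte_strerror(rte_errno));
    return p;
}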

Thanks,
Raghu


* Re: [dpdk-users] Failed to allocate tx pool
  2018-10-30 23:39 [dpdk-users] Failed to allocate tx pool Raghu Gangi
@ 2018-10-31  1:56 ` Cliff Burdick
  0 siblings, 0 replies; 5+ messages in thread
From: Cliff Burdick @ 2018-10-31  1:56 UTC (permalink / raw)
  To: Raghu Gangi; +Cc: users

Have you tried allocating memory on both NUMA nodes to rule that out?


On Tue, Oct 30, 2018, 16:40 Raghu Gangi <raghu_gangi@adaranetworks.com>
wrote:

> [original message quoted in full; snipped]


* Re: [dpdk-users] Failed to allocate tx pool
  2018-10-31 16:33 Raghu Gangi
@ 2018-10-31 16:45 ` Stephen Hemminger
  0 siblings, 0 replies; 5+ messages in thread
From: Stephen Hemminger @ 2018-10-31 16:45 UTC (permalink / raw)
  To: users

On Wed, 31 Oct 2018 09:33:34 -0700
Raghu Gangi <raghu_gangi@adaranetworks.com> wrote:

> Hi Cliff,
> 
> But I want to allocate memory only on the NUMA node to which my lcores and DPDK NICs are attached.
> I think this gives the best performance, since everything is on the same NUMA node.
> 
> Thanks,
> Raghu
> 
> [earlier messages quoted in full; snipped]

DPDK mailing list etiquette is to avoid top-posting.

In general, the DPDK drivers and libraries allocate memory on the same
NUMA node as the device. Some resources, however, are allocated on
socket 0, the default socket.

In a NUMA environment you therefore need some memory on socket 0 and
the bulk of it on the same node as your device. The nodes do not all
need the same amount of reserved memory.
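
For example, the EAL --socket-mem option lets you split the reservation unevenly across sockets (the sizes and core mask below are only illustrative, with testpmd standing in for your application):

./testpmd -c 0xffc00 -n 4 --socket-mem=64,192 -- -i

That reserves 64 MB on socket 0 for the defaults and 192 MB on socket 1 next to the NICs, running on lcores 10-19 (socket 1 in the layout above).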


* Re: [dpdk-users] Failed to allocate tx pool
@ 2018-10-31 16:33 Raghu Gangi
  2018-10-31 16:45 ` Stephen Hemminger
  0 siblings, 1 reply; 5+ messages in thread
From: Raghu Gangi @ 2018-10-31 16:33 UTC (permalink / raw)
  To: Cliff Burdick; +Cc: users

Hi Cliff,

But I want to allocate memory only on the NUMA node to which my lcores and DPDK NICs are attached.
I think this gives the best performance, since everything is on the same NUMA node.
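
In code, what I am trying to set up is roughly this (a sketch only; the pool sizes are placeholders, not our real values):

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static struct rte_mempool *
tx_pool_for_port(uint8_t port)
{
    /* Put the TX pool on the NUMA node of the NIC it serves. */
    int socket = rte_eth_dev_socket_id(port);
    return rte_pktmbuf_pool_create("tx_pool", 8192, 256, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, socket);
}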

Thanks,
Raghu

[earlier messages quoted in full; snipped]

* Re: [dpdk-users] Failed to allocate tx pool
@ 2018-10-31 16:30 Raghu Gangi
  0 siblings, 0 replies; 5+ messages in thread
From: Raghu Gangi @ 2018-10-31 16:30 UTC (permalink / raw)
  To: Cliff Burdick; +Cc: users

Hi Cliff,

Yes, I have tried it.

When I reserve memory on both NUMA nodes, it works without this issue.

I set it using the following commands:

root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# echo 128 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages 
root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# echo 128 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages 
root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# 
root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
128
root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
128
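
For completeness, per-node hugepage availability can also be checked like this (standard sysfs and procfs paths, nothing DPDK-specific):

grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages
grep Huge /proc/meminfo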


Thanks,
Raghu


  Date: Tue, 30 Oct 2018 18:56:45 -0700
>From: Cliff Burdick <shaklee3@gmail.com>
>To: Raghu Gangi <raghu_gangi@adaranetworks.com>
>Cc: users <users@dpdk.org>
>Subject: Re: [dpdk-users] Failed to allocate tx pool
>Message-ID:
>	<CA+Gp1nZY1xM_qGdBcrvnuUPc2duDwQH5dTdSRbuwCLgXwf_=Sg@mail.gmail.com>
>Content-Type: text/plain; charset="UTF-8"
>
>Have you tried allocating memory on both numa nodes to rule that out?
>
>
>On Tue, Oct 30, 2018, 16:40 Raghu Gangi <raghu_gangi@adaranetworks.com>
>wrote:
>
>> Hi,
>>
>> I am currently facing issue in brining up DPDK application. It is 
>>failing  with the following message. rte_errno is set to 12 in this 
>>scenario.
>>(Out
>> of memory).
>>
>> It would be great if you can kindly point me to what am I doing 
>> incorrectly.
>>
>> I am using DPDK 2.2.0 version on ubuntu 16.04.
>>
>> EAL: PCI device 0000:02:00.0 on NUMA socket 0
>> EAL:   probe driver: 8086:1521 rte_igb_pmd
>> EAL:   Not managed by a supported kernel driver, skipped
>> EAL: PCI device 0000:02:00.3 on NUMA socket 0
>> EAL:   probe driver: 8086:1521 rte_igb_pmd
>> EAL:   Not managed by a supported kernel driver, skipped
>> EAL: PCI device 0000:82:00.0 on NUMA socket 1
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL:   PCI memory mapped at 0x7fd1a7600000
>> EAL:   PCI memory mapped at 0x7fd1a7640000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 3
>> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
>> EAL: PCI device 0000:82:00.1 on NUMA socket 1
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL:   PCI memory mapped at 0x7fd1a7644000
>> EAL:   PCI memory mapped at 0x7fd1a7684000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 4
>> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
>> RING: Cannot reserve memory
>> dpdk_if_init:256: failed to allocate tx pool
>>
>> The DPDK bound NIC cards are on NUMA socket 1.
>>
>> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# 
>> ./tools/dpdk_nic_bind.py --status
>>
>> Network devices using DPDK-compatible driver  
>>============================================
>> 0000:82:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
>>drv=igb_uio
>> unused=ixgbe
>> 0000:82:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
>>drv=igb_uio
>> unused=ixgbe
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:02:00.0 'I350 Gigabit Network Connection' if=eno1 drv=igb 
>> unused=igb_uio *Active*
>> 0000:02:00.3 'I350 Gigabit Network Connection' if=eno2 drv=igb 
>> unused=igb_uio
>>
>> Other network devices
>> =====================
>> <none>
>>
>>
>> root@rg2-14053:/home/adara/raghu_2/run# cat 
>> /sys/bus/pci/devices/0000\:82\:00.0/numa_node
>> 1
>> root@rg2-14053:/home/adara/raghu_2/run# cat 
>> /sys/bus/pci/devices/0000\:82\:00.1/numa_node
>> 1
>>
>>
>> DPDK huge pages are allocated on the same NUMA node 1 as shown below:
>>
>> root@rg2-14053:/home/adara/raghu_2/run# cat 
>> /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepage
>> s
>> 0
>> root@rg2-14053:/home/adara/raghu_2/run# cat 
>> /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepage
>> s
>> 128
>> root@rg2-14053:/home/adara/raghu_2/run#
>>
>> Output of CPU Layout tool:
>>
>> root@rg2-14053:/home/adara/raghu_2/run#
>>../dpdk-2.2.0/tools/cpu_layout.py
>> ============================================================
>> Core and Socket Information (as reported by '/proc/cpuinfo')  
>>============================================================
>> cores =  [0, 1, 2, 3, 4, 8, 9, 10, 11, 12]  sockets =  [0, 1]
>>
>>         Socket 0        Socket 1
>>         --------        --------
>> Core 0  [0, 20]         [10, 30]
>>
>> Core 1  [1, 21]         [11, 31]
>>
>> Core 2  [2, 22]         [12, 32]
>>
>> Core 3  [3, 23]         [13, 33]
>>
>> Core 4  [4, 24]         [14, 34]
>>
>> Core 8  [5, 25]         [15, 35]
>>
>> Core 9  [6, 26]         [16, 36]
>>
>> Core 10 [7, 27]         [17, 37]
>>
>> Core 11 [8, 28]         [18, 38]
>>
>> Core 12 [9, 29]         [19, 39]
>>
>> Thanks,
>> Raghu
>>
>
>


end of thread, other threads:[~2018-10-31 16:45 UTC | newest]

Thread overview: 5+ messages
2018-10-30 23:39 [dpdk-users] Failed to allocate tx pool Raghu Gangi
2018-10-31  1:56 ` Cliff Burdick
2018-10-31 16:30 Raghu Gangi
2018-10-31 16:33 Raghu Gangi
2018-10-31 16:45 ` Stephen Hemminger
