From mboxrd@z Thu Jan 1 00:00:00 1970
From: Raghu Gangi
To: Cliff Burdick
CC: "users@dpdk.org"
Date: Wed, 31 Oct 2018 09:33:34 -0700
Subject: Re: [dpdk-users] Failed to allocate tx pool

Hi Cliff,

But I want to allocate memory only on the NUMA node that my lcores and
DPDK NICs are connected to. I think this gives the best performance,
since everything then stays local to the same NUMA node.
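For reference, a minimal sketch of tying the TX pool to the NIC's socket,
assuming the pool is built with rte_pktmbuf_pool_create from DPDK 2.2
(the helper, the pool name "tx_pool", and the sizes below are
placeholders, not the actual application code):

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>
#include <rte_errno.h>

/* Sketch only: create the TX mbuf pool on the NUMA node of the given
 * port, so the reservation is served from that node's hugepages. */
static struct rte_mempool *
create_tx_pool(uint8_t port_id)
{
    int socket = rte_eth_dev_socket_id(port_id);   /* NIC's NUMA node */

    struct rte_mempool *pool = rte_pktmbuf_pool_create(
            "tx_pool",                  /* placeholder pool name */
            8192,                       /* number of mbufs (placeholder) */
            256,                        /* per-lcore cache size */
            0,                          /* private data size */
            RTE_MBUF_DEFAULT_BUF_SIZE,  /* data room per mbuf */
            socket);                    /* allocate on the NIC's node */

    if (pool == NULL)
        printf("tx pool alloc failed: %s\n", rte_strerror(rte_errno));
    return pool;
}

With hugepages present only on node 1, passing the port's socket id here
should make the reservation come from node 1 rather than from the socket
of whichever lcore happens to create the pool.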
Thanks,
Raghu

-----Original Message-----
From: Raghu Gangi
Sent: Wednesday, October 31, 2018 9:31 AM
To: Cliff Burdick
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Failed to allocate tx pool

Hi Cliff,

Yes, I had tried it. When I set memory on both NUMA nodes it works
without this issue.

I set it using the following commands:

root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# echo 128 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# echo 128 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
128
root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
128

Thanks,
Raghu

>Date: Tue, 30 Oct 2018 18:56:45 -0700
>From: Cliff Burdick
>To: Raghu Gangi
>Cc: users
>Subject: Re: [dpdk-users] Failed to allocate tx pool
>
>Have you tried allocating memory on both numa nodes to rule that out?
>
>On Tue, Oct 30, 2018, 16:40 Raghu Gangi wrote:
>
>> Hi,
>>
>> I am currently facing an issue in bringing up a DPDK application. It is
>> failing with the message below, and rte_errno is set to 12 (out of
>> memory) in this scenario.
>>
>> It would be great if you could kindly point me to what I am doing
>> incorrectly.
>>
>> I am using DPDK 2.2.0 on Ubuntu 16.04.
>>
>> EAL: PCI device 0000:02:00.0 on NUMA socket 0
>> EAL: probe driver: 8086:1521 rte_igb_pmd
>> EAL: Not managed by a supported kernel driver, skipped
>> EAL: PCI device 0000:02:00.3 on NUMA socket 0
>> EAL: probe driver: 8086:1521 rte_igb_pmd
>> EAL: Not managed by a supported kernel driver, skipped
>> EAL: PCI device 0000:82:00.0 on NUMA socket 1
>> EAL: probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL: PCI memory mapped at 0x7fd1a7600000
>> EAL: PCI memory mapped at 0x7fd1a7640000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 3
>> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
>> EAL: PCI device 0000:82:00.1 on NUMA socket 1
>> EAL: probe driver: 8086:10fb rte_ixgbe_pmd
>> EAL: PCI memory mapped at 0x7fd1a7644000
>> EAL: PCI memory mapped at 0x7fd1a7684000
>> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 4
>> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
>> RING: Cannot reserve memory
>> dpdk_if_init:256: failed to allocate tx pool
>>
>> The DPDK-bound NICs are on NUMA socket 1:
>>
>> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# ./tools/dpdk_nic_bind.py --status
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:82:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe
>> 0000:82:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:02:00.0 'I350 Gigabit Network Connection' if=eno1 drv=igb unused=igb_uio *Active*
>> 0000:02:00.3 'I350 Gigabit Network Connection' if=eno2 drv=igb unused=igb_uio
>>
>> Other network devices
>> =====================
>>
>> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/bus/pci/devices/0000\:82\:00.0/numa_node
>> 1
>> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/bus/pci/devices/0000\:82\:00.1/numa_node
>> 1
>>
>> DPDK huge pages are allocated on the same NUMA node 1, as shown below:
>>
>> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
>> 0
>> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
>> 128
>>
>> Output of the CPU layout tool:
>>
>> root@rg2-14053:/home/adara/raghu_2/run# ../dpdk-2.2.0/tools/cpu_layout.py
>> ============================================================
>> Core and Socket Information (as reported by '/proc/cpuinfo')
>> ============================================================
>> cores = [0, 1, 2, 3, 4, 8, 9, 10, 11, 12]
>> sockets = [0, 1]
>>
>>           Socket 0   Socket 1
>>           --------   --------
>> Core 0    [0, 20]    [10, 30]
>> Core 1    [1, 21]    [11, 31]
>> Core 2    [2, 22]    [12, 32]
>> Core 3    [3, 23]    [13, 33]
>> Core 4    [4, 24]    [14, 34]
>> Core 8    [5, 25]    [15, 35]
>> Core 9    [6, 26]    [16, 36]
>> Core 10   [7, 27]    [17, 37]
>> Core 11   [8, 28]    [18, 38]
>> Core 12   [9, 29]    [19, 39]
>>
>> Thanks,
>> Raghu
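On the same theme, a small sketch of checking that the lcore and the port
resolve to the same socket before setting up the TX queue (the port id,
queue number, and descriptor count below are placeholder values, not
taken from the application):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Sketch only: warn if the calling lcore and the NIC sit on different
 * NUMA nodes, then place the TX ring on the NIC's node. */
static int
setup_tx_queue_numa_aware(uint8_t port_id)
{
    int port_socket  = rte_eth_dev_socket_id(port_id);  /* NIC's NUMA node */
    int lcore_socket = rte_socket_id();                 /* this lcore's node */

    if (port_socket < 0)
        port_socket = lcore_socket;   /* socket unknown: fall back to lcore's */

    if (port_socket != lcore_socket)
        printf("warning: port %d is on socket %d, lcore %u is on socket %d\n",
               port_id, port_socket, rte_lcore_id(), lcore_socket);

    /* Queue 0, 512 descriptors (placeholder), default TX configuration. */
    return rte_eth_tx_queue_setup(port_id, 0, 512, port_socket, NULL);
}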