From: Stephen Hemminger
To: users@dpdk.org
Date: Wed, 31 Oct 2018 09:45:14 -0700
Message-ID: <20181031094514.0ea9fa08@xeon-e3>
Subject: Re: [dpdk-users] Failed to allocate tx pool
List-Id: DPDK usage discussions

On Wed, 31 Oct 2018 09:33:34 -0700
Raghu Gangi wrote:

> Hi Cliff,
>
> But I want to allocate memory only on the NUMA node where my lcores and
> DPDK NICs are connected. I think this gives the best performance, since
> everything is attached to the same NUMA node.
>
> Thanks,
> Raghu
>
> -----Original Message-----
> From: Raghu Gangi
> Sent: Wednesday, October 31, 2018 9:31 AM
> To: Cliff Burdick
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] Failed to allocate tx pool
>
> Hi Cliff,
>
> Yes, I had tried it.
>
> When I set memory on both NUMA nodes it works without this issue.
> I set it using the following commands:
>
> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# echo 128 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# echo 128 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> 128
> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> 128
>
> Thanks,
> Raghu
>
> > Date: Tue, 30 Oct 2018 18:56:45 -0700
> > From: Cliff Burdick
> > To: Raghu Gangi
> > Cc: users
> > Subject: Re: [dpdk-users] Failed to allocate tx pool
> >
> > Have you tried allocating memory on both numa nodes to rule that out?
> >
> > On Tue, Oct 30, 2018, 16:40 Raghu Gangi wrote:
> >
> >> Hi,
> >>
> >> I am currently facing an issue in bringing up my DPDK application. It
> >> is failing with the following message, with rte_errno set to 12 (out
> >> of memory).
> >>
> >> It would be great if you could kindly point me to what I am doing
> >> incorrectly.
> >>
> >> I am using DPDK 2.2.0 on Ubuntu 16.04.
> >> EAL: PCI device 0000:02:00.0 on NUMA socket 0
> >> EAL:   probe driver: 8086:1521 rte_igb_pmd
> >> EAL:   Not managed by a supported kernel driver, skipped
> >> EAL: PCI device 0000:02:00.3 on NUMA socket 0
> >> EAL:   probe driver: 8086:1521 rte_igb_pmd
> >> EAL:   Not managed by a supported kernel driver, skipped
> >> EAL: PCI device 0000:82:00.0 on NUMA socket 1
> >> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> >> EAL:   PCI memory mapped at 0x7fd1a7600000
> >> EAL:   PCI memory mapped at 0x7fd1a7640000
> >> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 3
> >> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> >> EAL: PCI device 0000:82:00.1 on NUMA socket 1
> >> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> >> EAL:   PCI memory mapped at 0x7fd1a7644000
> >> EAL:   PCI memory mapped at 0x7fd1a7684000
> >> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 4
> >> PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
> >> RING: Cannot reserve memory
> >> dpdk_if_init:256: failed to allocate tx pool
> >>
> >> The DPDK-bound NIC cards are on NUMA socket 1.
> >> root@rg2-14053:/home/adara/raghu_2/dpdk-2.2.0# ./tools/dpdk_nic_bind.py --status
> >>
> >> Network devices using DPDK-compatible driver
> >> ============================================
> >> 0000:82:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe
> >> 0000:82:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe
> >>
> >> Network devices using kernel driver
> >> ===================================
> >> 0000:02:00.0 'I350 Gigabit Network Connection' if=eno1 drv=igb unused=igb_uio *Active*
> >> 0000:02:00.3 'I350 Gigabit Network Connection' if=eno2 drv=igb unused=igb_uio
> >>
> >> Other network devices
> >> =====================
> >>
> >>
> >> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/bus/pci/devices/0000\:82\:00.0/numa_node
> >> 1
> >> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/bus/pci/devices/0000\:82\:00.1/numa_node
> >> 1
> >>
> >> The DPDK huge pages are allocated on the same NUMA node 1, as shown below:
> >>
> >> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
> >> 0
> >> root@rg2-14053:/home/adara/raghu_2/run# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
> >> 128
> >>
> >> Output of the CPU layout tool:
> >>
> >> root@rg2-14053:/home/adara/raghu_2/run# ../dpdk-2.2.0/tools/cpu_layout.py
> >> ============================================================
> >> Core and Socket Information (as reported by '/proc/cpuinfo')
> >> ============================================================
> >> cores = [0, 1, 2, 3, 4, 8, 9, 10, 11, 12]
> >> sockets = [0, 1]
> >>
> >>           Socket 0   Socket 1
> >>           --------   --------
> >> Core 0    [0, 20]    [10, 30]
> >> Core 1    [1, 21]    [11, 31]
> >> Core 2    [2, 22]    [12, 32]
> >> Core 3    [3, 23]    [13, 33]
> >> Core 4    [4, 24]    [14, 34]
> >> Core 8    [5, 25]    [15, 35]
> >> Core 9    [6, 26]    [16, 36]
> >> Core 10   [7, 27]    [17, 37]
> >> Core 11   [8, 28]    [18, 38]
> >> Core 12   [9, 29]    [19, 39]

The DPDK email etiquette is to not top-post.

The DPDK drivers and libraries generally allocate memory on the same NUMA node as the device. However, some resources are allocated on socket 0, the default socket. In a NUMA environment you therefore need to have some memory on socket 0, with the bulk of it on the same node as your device. Not all nodes need to have the same amount of reserved memory.
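As a sketch of how an application can follow that advice (this is not from the original thread; `rte_eth_dev_socket_id()` and `rte_pktmbuf_pool_create()` are real DPDK calls, available since well before 2.2.0, but the pool sizes and the `create_tx_pool` helper name here are illustrative):

```c
/* Sketch: create the TX mbuf pool on the NUMA socket of the port it
 * will serve, falling back to any socket if that node has no free
 * hugepage memory.  NB_MBUF and MBUF_CACHE are example values. */
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_errno.h>

#define NB_MBUF    8192
#define MBUF_CACHE 256

static struct rte_mempool *
create_tx_pool(uint8_t port_id)
{
	/* NUMA node the NIC is attached to (1 in this thread). */
	int socket = rte_eth_dev_socket_id(port_id);
	struct rte_mempool *mp;

	mp = rte_pktmbuf_pool_create("tx_pool", NB_MBUF, MBUF_CACHE,
				     0, RTE_MBUF_DEFAULT_BUF_SIZE, socket);
	if (mp == NULL && rte_errno == ENOMEM)
		/* No hugepages free on that node: retry on any socket. */
		mp = rte_pktmbuf_pool_create("tx_pool", NB_MBUF, MBUF_CACHE,
					     0, RTE_MBUF_DEFAULT_BUF_SIZE,
					     SOCKET_ID_ANY);
	return mp;
}
```

Note that even with the pool placed on the device's node, EAL still creates some internal structures (rings, tailqs) on socket 0, which is consistent with the `RING: Cannot reserve memory` failure seen above when node 0 had zero hugepages reserved.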