To: Ferruh Yigit, dev-bounces@dpdk.org, dev@dpdk.org
Cc: bernard.iremonger@intel.com, gavin.hu@arm.com, stable@dpdk.org, phil.yang@arm.com
From: "Burakov, Anatoly"
Message-ID: <722734c3-7020-0a27-d50e-395b7cc2c59f@intel.com>
Date: Mon, 8 Oct 2018 12:35:30 +0100
References: <1535362398-6526-1-git-send-email-phil.yang@arm.com> <1536717266-6363-1-git-send-email-phil.yang@arm.com> <6f76decb-086b-3be9-0ed7-25a098e959c7@intel.com>
In-Reply-To: <6f76decb-086b-3be9-0ed7-25a098e959c7@intel.com>
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH v2] app/testpmd: optimize membuf pool allocation

On 08-Oct-18 12:33 PM, Ferruh Yigit wrote:
> On 9/12/2018 2:54 AM, dev-bounces@dpdk.org wrote:
>> By default, testpmd will create a membuf pool for all NUMA nodes and
>> ignore the EAL configuration.
>>
>> Count the number of available NUMA nodes according to the EAL core
>> mask or core list configuration, and create membuf pools only for
>> those nodes.
>>
>> Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")
>>
>> Signed-off-by: Phil Yang
>> Acked-by: Gavin Hu
>> ---
>>  app/test-pmd/testpmd.c | 4 ++--
>>  1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
>> index ee48db2..a56af2b 100644
>> --- a/app/test-pmd/testpmd.c
>> +++ b/app/test-pmd/testpmd.c
>> @@ -476,6 +476,8 @@ set_default_fwd_lcores_config(void)
>>
>>  	nb_lc = 0;
>>  	for (i = 0; i < RTE_MAX_LCORE; i++) {
>> +		if (!rte_lcore_is_enabled(i))
>> +			continue;
>>  		sock_num = rte_lcore_to_socket_id(i);
>>  		if (new_socket_id(sock_num)) {
>>  			if (num_sockets >= RTE_MAX_NUMA_NODES) {
>> @@ -485,8 +487,6 @@ set_default_fwd_lcores_config(void)
>>  			}
>>  			socket_ids[num_sockets++] = sock_num;
>>  		}
>> -		if (!rte_lcore_is_enabled(i))
>> -			continue;
>>  		if (i == rte_get_master_lcore())
>>  			continue;
>>  		fwd_lcores_cpuids[nb_lc++] = i;
>>
>
> This is causing testpmd to fail for the case where all cores are from
> socket 1 and a virtual device is added, which will try to allocate
> memory from socket 0.
>
> $ testpmd -l --vdev net_pcap0,iface=lo -- -i
> ...
> Failed to setup RX queue:No mempool allocation on the socket 0
> EAL: Error - exiting with code: 1
>   Cause: Start ports failed

It's an open question as to why the pcap driver tries to allocate on
socket 0 when everything is on socket 1, but perhaps a better
improvement would be to take into account not only the socket IDs of
lcores, but those of ethdev devices as well?

-- 
Thanks,
Anatoly