DPDK patches and discussions
From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: Ferruh Yigit <ferruh.yigit@intel.com>,
	dev-bounces@dpdk.org, dev@dpdk.org
Cc: bernard.iremonger@intel.com, gavin.hu@arm.com, stable@dpdk.org,
	phil.yang@arm.com
Subject: Re: [dpdk-dev] [PATCH v2] app/testpmd: optimize membuf pool allocation
Date: Mon, 8 Oct 2018 12:35:30 +0100
Message-ID: <722734c3-7020-0a27-d50e-395b7cc2c59f@intel.com>
In-Reply-To: <6f76decb-086b-3be9-0ed7-25a098e959c7@intel.com>

On 08-Oct-18 12:33 PM, Ferruh Yigit wrote:
> On 9/12/2018 2:54 AM, dev-bounces@dpdk.org wrote:
>> By default, testpmd creates an mbuf pool for every NUMA node and
>> ignores the EAL configuration.
>>
>> Count the number of available NUMA nodes according to the EAL core
>> mask or core list configuration, and optimize by creating mbuf pools
>> only for those nodes.
>>
>> Fixes: c9cafcc ("app/testpmd: fix mempool creation by socket id")
>>
>> Signed-off-by: Phil Yang <phil.yang@arm.com>
>> Acked-by: Gavin Hu <gavin.hu@arm.com>
>> ---
>>   app/test-pmd/testpmd.c | 4 ++--
>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
>> index ee48db2..a56af2b 100644
>> --- a/app/test-pmd/testpmd.c
>> +++ b/app/test-pmd/testpmd.c
>> @@ -476,6 +476,8 @@ set_default_fwd_lcores_config(void)
>>   
>>   	nb_lc = 0;
>>   	for (i = 0; i < RTE_MAX_LCORE; i++) {
>> +		if (!rte_lcore_is_enabled(i))
>> +			continue;
>>   		sock_num = rte_lcore_to_socket_id(i);
>>   		if (new_socket_id(sock_num)) {
>>   			if (num_sockets >= RTE_MAX_NUMA_NODES) {
>> @@ -485,8 +487,6 @@ set_default_fwd_lcores_config(void)
>>   			}
>>   			socket_ids[num_sockets++] = sock_num;
>>   		}
>> -		if (!rte_lcore_is_enabled(i))
>> -			continue;
>>   		if (i == rte_get_master_lcore())
>>   			continue;
>>   		fwd_lcores_cpuids[nb_lc++] = i;
>>
> 
> 
> This is causing testpmd to fail for the case where all cores are from
> socket 1 and a virtual device is added which tries to allocate memory
> from socket 0.
> 
> 
>   $ testpmd -l<cores from socket 1> --vdev net_pcap0,iface=lo -- -i
>   ...
>   Failed to setup RX queue:No mempool allocation on the socket 0
>   EAL: Error - exiting with code: 1
>     Cause: Start ports failed
> 
> 

It's an open question why the pcap driver tries to allocate on socket 
0 when everything is on socket 1, but perhaps a better improvement 
would be to take into account not only the socket IDs of lcores, but 
those of ethdev devices as well?
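
For illustration, here is a minimal sketch of that direction, reusing 
new_socket_id(), socket_ids[] and num_sockets from the diff above. The 
helper name collect_port_sockets() and the fallback from SOCKET_ID_ANY 
to socket 0 are assumptions for this sketch, not the committed fix:

#include <rte_ethdev.h>
#include <rte_memory.h>

/* Record the NUMA sockets of all probed ethdev ports as well, so that
 * an mbuf pool exists on every socket a device may allocate from. */
static void
collect_port_sockets(void)
{
	uint16_t port_id;

	RTE_ETH_FOREACH_DEV(port_id) {
		int sock = rte_eth_dev_socket_id(port_id);

		/* Virtual devices such as net_pcap may report
		 * SOCKET_ID_ANY (-1); assume socket 0 in that case. */
		if (sock == SOCKET_ID_ANY)
			sock = 0;
		if (new_socket_id((unsigned int)sock) &&
		    num_sockets < RTE_MAX_NUMA_NODES)
			socket_ids[num_sockets++] = (unsigned int)sock;
	}
}

Called after the lcore sockets have been collected (e.g. from 
set_default_fwd_lcores_config()'s caller), this would give the pcap 
vdev case above a pool on socket 0 even when all lcores sit on 
socket 1.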

-- 
Thanks,
Anatoly


Thread overview: 11+ messages
2018-08-27  9:33 [dpdk-dev] [PATCH] app/testpmd: Optimize " Phil Yang
2018-08-27  9:39 ` Gavin Hu
2018-09-11 16:23 ` Iremonger, Bernard
2018-09-12  1:59   ` Phil Yang (Arm Technology China)
2018-09-12  1:54 ` [dpdk-dev] [PATCH v2] app/testpmd: optimize " Phil Yang
2018-09-12 10:15   ` Iremonger, Bernard
2018-09-19 13:38     ` Thomas Monjalon
2018-10-08 11:33   ` Ferruh Yigit
2018-10-08 11:35     ` Burakov, Anatoly [this message]
2018-10-11  7:11       ` Phil Yang (Arm Technology China)
2018-10-11 10:37         ` Phil Yang (Arm Technology China)
