From: Ferruh Yigit <ferruh.yigit@intel.com>
To: "Phil Yang (Arm Technology China)" <Phil.Yang@arm.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: nd <nd@arm.com>, "anatoly.burakov@intel.com" <anatoly.burakov@intel.com>
Subject: Re: [dpdk-dev] [PATCH] app/testpmd: fix vdev socket initialization
Date: Mon, 15 Oct 2018 11:41:34 +0100 [thread overview]
Message-ID: <d70d23fa-a409-c4c9-64b2-6141f5bed024@intel.com> (raw)
In-Reply-To: <DB7PR08MB33857791EE43D4F7AA4A2733E9FD0@DB7PR08MB3385.eurprd08.prod.outlook.com>
On 10/15/2018 10:51 AM, Phil Yang (Arm Technology China) wrote:
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Saturday, October 13, 2018 1:13 AM
>> To: Phil Yang (Arm Technology China) <Phil.Yang@arm.com>; dev@dpdk.org
>> Cc: nd <nd@arm.com>; anatoly.burakov@intel.com
>> Subject: Re: [PATCH] app/testpmd: fix vdev socket initialization
>>
>> On 10/12/2018 10:34 AM, phil.yang@arm.com wrote:
>>> The cmdline settings of port-numa-config and rxring-numa-config have
>>> been flushed by the following init_config. If we don't configure the
>>> port-numa-config, the virtual device will allocate the device ports to
>>> socket 0. It will cause failure when the socket 0 is unavailable.
>>>
>>> eg:
>>> testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo
>>> --socket-mem=64 -- --numa --port-numa-config="(0,1)"
>>> --ring-numa-config="(0,1,1),(0,2,1)" -i
>>>
>>> ...
>>> Configuring Port 0 (socket 0)
>>> Failed to setup RX queue:No mempool allocation on the socket 0
>>> EAL: Error - exiting with code: 1
>>> Cause: Start ports failed
>>>
>>> Fix by allocating the device ports to the first available socket, or to
>>> the socket configured in port-numa-config.
>>
>> I confirm this fixes the issue, by making the vdev allocate from an available
>> socket instead of the hardcoded socket 0; overall this makes sense.
>>
>> But currently there is no way to request a mempool from "socket 0" if only
>> cores from "socket 1" are provided in "-l", even with "port-numa-config" and
>> "rxring-numa-config".
>> Both this behavior and the problem this patch fixes were introduced by:
>> Commit dbfb8ec7094c ("app/testpmd: optimize mbuf pool allocation")
>>
>> It is good to have optimized mempool allocation, but I think this shouldn't
>> limit the tool. If the user wants mempools from a specific socket, let them
>> have it.
>>
>> What about changing the default behavior to:
>> 1- Allocate mempools only from the sockets of the cores in the coremask
>> (current approach)
>> 2- Plus, allocate mempools from the sockets of attached devices (this is an
>> alternative solution to this patch; your solution seems better for virtual
>> devices, but for physical devices allocating from the socket the device is
>> attached to can be better)
>> 3- Plus, allocate mempools from the sockets provided in "port-numa-config"
>> and "rxring-numa-config"
>>
>> What do you think?
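The three rules could be combined into one helper that builds the candidate
socket set before mempool allocation. A minimal self-contained sketch (the stub
arrays are hypothetical stand-ins for testpmd's real sources:
rte_lcore_to_socket_id() for the coremask, rte_eth_dev_socket_id() for attached
devices, and the parsed port-numa-config/rxring-numa-config):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SOCKETS 8

/* Hypothetical stand-ins for testpmd's real data sources. */
static const int lcore_sockets[]    = { 1, 1 }; /* sockets of cores in -l      */
static const int dev_sockets[]      = { 0 };    /* sockets of attached devices */
static const int numa_cfg_sockets[] = { 1 };    /* sockets in *-numa-config    */

static int socket_ids[MAX_SOCKETS];
static size_t num_sockets;

/* Add a socket to the set if it is valid and not already present. */
static void add_socket(int socket_id)
{
	size_t i;

	if (socket_id < 0)
		return; /* unknown socket (e.g. a vdev), skip */
	for (i = 0; i < num_sockets; i++)
		if (socket_ids[i] == socket_id)
			return;
	assert(num_sockets < MAX_SOCKETS);
	socket_ids[num_sockets++] = socket_id;
}

static void collect_mempool_sockets(void)
{
	size_t i;

	/* 1) sockets of the cores provided in the coremask */
	for (i = 0; i < sizeof(lcore_sockets) / sizeof(lcore_sockets[0]); i++)
		add_socket(lcore_sockets[i]);
	/* 2) plus sockets of attached devices */
	for (i = 0; i < sizeof(dev_sockets) / sizeof(dev_sockets[0]); i++)
		add_socket(dev_sockets[i]);
	/* 3) plus sockets named in port-numa-config / rxring-numa-config */
	for (i = 0; i < sizeof(numa_cfg_sockets) / sizeof(numa_cfg_sockets[0]); i++)
		add_socket(numa_cfg_sockets[i]);
}
```

With cores only on socket 1 and a physical device on socket 0, this would end
up allocating mempools on both sockets, so the failure in the commit message's
example would be avoided.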
>
> Hi Ferruh,
>
> Totally agreed with your suggestion.
>
> As I understand it, allocating mempools from the sockets of attached devices will enable the cross-NUMA scenario for testpmd.
Yes it will.
>
> Below is my fix for the physical port mempool allocation issue. So, is it better to separate it into a new patch on top of this one, or to rework this one by adding the fix below? I prefer to add a new one, because the current patch already fixes two defects. Anyway, I will follow your comment.
+1 to separating it into a new patch; I will check the existing patch.
The code below looks good, only I am not sure whether it should be in
`set_default_fwd_ports_config`? Or perhaps `set_default_fwd_lcores_config`?
And port-numa-config and rxring-numa-config are still not covered.
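For reference, new_socket_id() takes a socket id, not a port id, which is why
the value passed to it should come from rte_eth_dev_socket_id(pt_id). A minimal
self-contained sketch of the duplicate check, with stub globals standing in for
testpmd's socket bookkeeping:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stub globals mirroring testpmd's socket bookkeeping; socket 1 is
 * assumed already discovered from the cores given in -l. */
static int socket_ids[8] = { 1 };
static size_t num_sockets = 1;

/* Return true if socket_id is not yet recorded in socket_ids[]. */
static bool new_socket_id(int socket_id)
{
	size_t i;

	for (i = 0; i < num_sockets; i++)
		if (socket_ids[i] == socket_id)
			return false;
	return true;
}
```

So a physical device on socket 0 would be reported as a new socket, while a
device on socket 1 would not add a duplicate entry.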
>
> static void
> set_default_fwd_ports_config(void)
> {
> 	portid_t pt_id;
> 	int i = 0;
>
> 	RTE_ETH_FOREACH_DEV(pt_id) {
> 		fwd_ports_ids[i++] = pt_id;
>
> +		/* Update sockets info according to the attached device */
> +		int socket_id = rte_eth_dev_socket_id(pt_id);
> +		if (socket_id >= 0 && new_socket_id(socket_id)) {
> +			if (num_sockets >= RTE_MAX_NUMA_NODES) {
> +				rte_exit(EXIT_FAILURE,
> +					 "Total sockets greater than %u\n",
> +					 RTE_MAX_NUMA_NODES);
> +			}
> +			socket_ids[num_sockets++] = socket_id;
> +		}
> +	}
> +
> 	nb_cfg_ports = nb_ports;
> 	nb_fwd_ports = nb_ports;
> }
>
> Thanks
> Phil Yang
>
>>
>>
>>>
>>> Fixes: 487f9a5 ("app/testpmd: fix NUMA structures initialization")
>>>
>>> Signed-off-by: Phil Yang <phil.yang@arm.com>
>>> Reviewed-by: Gavin Hu <Gavin.Hu@arm.com>
>>
>> <...>
Thread overview: 6+ messages
2018-10-12 9:34 Phil Yang
2018-10-12 17:13 ` Ferruh Yigit
2018-10-15 9:51 ` Phil Yang (Arm Technology China)
2018-10-15 10:41 ` Ferruh Yigit [this message]
2018-10-16 8:58 ` Phil Yang (Arm Technology China)
2018-10-15 10:58 ` Ferruh Yigit