DPDK patches and discussions
* [dpdk-dev] [PATCH] app/testpmd: fix vdev socket initialization
@ 2018-10-12  9:34 Phil Yang
  2018-10-12 17:13 ` Ferruh Yigit
  2018-10-15 10:58 ` Ferruh Yigit
  0 siblings, 2 replies; 6+ messages in thread
From: Phil Yang @ 2018-10-12  9:34 UTC (permalink / raw)
  To: dev; +Cc: nd, anatoly.burakov, ferruh.yigit

The command-line settings of port-numa-config and rxring-numa-config are
flushed by the subsequent init_config(). If port-numa-config is not
configured, the virtual device ports are allocated to socket 0, which
causes a failure when socket 0 is unavailable.

eg:
testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo
--socket-mem=64 -- --numa --port-numa-config="(0,1)"
--ring-numa-config="(0,1,1),(0,2,1)" -i

...
Configuring Port 0 (socket 0)
Failed to setup RX queue:No mempool allocation on the socket 0
EAL: Error - exiting with code: 1
  Cause: Start ports failed

Fix this by allocating the device ports to the first available socket, or
to the socket configured in port-numa-config.
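
For context, the relevant call order in testpmd's main() is roughly the
following (a simplified sketch, not part of the diff), which is why the
memsets move from init_config() to init_port() below:

  init_port();                   /* with this patch: set port_numa[]/rxring_numa[]/
                                  * txring_numa[] to NUMA_NO_CONFIG up front */
  launch_args_parse(argc, argv); /* fill those arrays from --port-numa-config
                                  * and --ring-numa-config */
  init_config();                 /* previously re-memset the arrays here,
                                  * discarding the command-line settings */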

Fixes: 487f9a5 ("app/testpmd: fix NUMA structures initialization")

Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Gavin Hu <Gavin.Hu@arm.com>
---
 app/test-pmd/testpmd.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index d550bda..1279cd5 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -1024,12 +1024,6 @@ init_config(void)
 
 	memset(port_per_socket,0,RTE_MAX_NUMA_NODES);
 
-	if (numa_support) {
-		memset(port_numa, NUMA_NO_CONFIG, RTE_MAX_ETHPORTS);
-		memset(rxring_numa, NUMA_NO_CONFIG, RTE_MAX_ETHPORTS);
-		memset(txring_numa, NUMA_NO_CONFIG, RTE_MAX_ETHPORTS);
-	}
-
 	/* Configuration of logical cores. */
 	fwd_lcores = rte_zmalloc("testpmd: fwd_lcores",
 				sizeof(struct fwd_lcore *) * nb_lcores,
@@ -1066,9 +1060,12 @@ init_config(void)
 			else {
 				uint32_t socket_id = rte_eth_dev_socket_id(pid);
 
-				/* if socket_id is invalid, set to 0 */
+				/*
+				 * if socket_id is invalid,
+				 * set to the first available socket.
+				 */
 				if (check_socket_id(socket_id) < 0)
-					socket_id = 0;
+					socket_id = socket_ids[0];
 				port_per_socket[socket_id]++;
 			}
 		}
@@ -1224,9 +1221,12 @@ init_fwd_streams(void)
 			else {
 				port->socket_id = rte_eth_dev_socket_id(pid);
 
-				/* if socket_id is invalid, set to 0 */
+				/*
+				 * if socket_id is invalid,
+				 * set to the first available socket.
+				 */
 				if (check_socket_id(port->socket_id) < 0)
-					port->socket_id = 0;
+					port->socket_id = socket_ids[0];
 			}
 		}
 		else {
@@ -2330,9 +2330,9 @@ attach_port(char *identifier)
 		return;
 
 	socket_id = (unsigned)rte_eth_dev_socket_id(pi);
-	/* if socket_id is invalid, set to 0 */
+	/* if socket_id is invalid, set to the first available socket. */
 	if (check_socket_id(socket_id) < 0)
-		socket_id = 0;
+		socket_id = socket_ids[0];
 	reconfig(pi, socket_id);
 	rte_eth_promiscuous_enable(pi);
 
@@ -2971,6 +2971,11 @@ init_port(void)
 				"rte_zmalloc(%d struct rte_port) failed\n",
 				RTE_MAX_ETHPORTS);
 	}
+
+	/* Initialize ports NUMA structures */
+	memset(port_numa, NUMA_NO_CONFIG, RTE_MAX_ETHPORTS);
+	memset(rxring_numa, NUMA_NO_CONFIG, RTE_MAX_ETHPORTS);
+	memset(txring_numa, NUMA_NO_CONFIG, RTE_MAX_ETHPORTS);
 }
 
 static void
-- 
2.7.4


* Re: [dpdk-dev] [PATCH] app/testpmd: fix vdev socket initialization
  2018-10-12  9:34 [dpdk-dev] [PATCH] app/testpmd: fix vdev socket initialization Phil Yang
@ 2018-10-12 17:13 ` Ferruh Yigit
  2018-10-15  9:51   ` Phil Yang (Arm Technology China)
  2018-10-15 10:58 ` Ferruh Yigit
  1 sibling, 1 reply; 6+ messages in thread
From: Ferruh Yigit @ 2018-10-12 17:13 UTC (permalink / raw)
  To: phil.yang, dev; +Cc: nd, anatoly.burakov

On 10/12/2018 10:34 AM, phil.yang@arm.com wrote:
> The cmdline settings of port-numa-config and rxring-numa-config have been
> flushed by the following init_config. If we don't configure the
> port-numa-config, the virtual device will allocate the device ports to
> socket 0. It will cause failure when the socket 0 is unavailable.
> 
> eg:
> testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo
> --socket-mem=64 -- --numa --port-numa-config="(0,1)"
> --ring-numa-config="(0,1,1),(0,2,1)" -i
> 
> ...
> Configuring Port 0 (socket 0)
> Failed to setup RX queue:No mempool allocation on the socket 0
> EAL: Error - exiting with code: 1
>   Cause: Start ports failed
> 
> Fix by allocate the devices port to the first available socket or the
> socket configured in port-numa-config.

I confirm this fixes the issue, by making the vdev allocate from an available
socket instead of the hardcoded socket 0; overall this makes sense.

But currently there is no way to request a mempool from "socket 0" if only
cores from "socket 1" are provided in "-l", even with "port-numa-config" and
"rxring-numa-config".
Both this behavior and the problem this patch fixes were caused by
commit dbfb8ec7094c ("app/testpmd: optimize mbuf pool allocation").

It is good to have optimized mempool allocation, but I think it shouldn't
limit the tool. If the user wants mempools from a specific socket, let them
have it.

What about changing the default behavior to:
1- Allocate mempools only from the sockets of the cores provided in the
coremask (current approach)
2- Plus, allocate mempools from the sockets of attached devices (this is an
alternative solution to this patch; your solution seems better for virtual
devices, but for physical devices allocating from the socket the device is
attached to can be better)
3- Plus, allocate mempools from the sockets provided in "port-numa-config"
and "rxring-numa-config"

What do you think?


> 
> Fixes: 487f9a5 ("app/testpmd: fix NUMA structures initialization")
> 
> Signed-off-by: Phil Yang <phil.yang@arm.com>
> Reviewed-by: Gavin Hu <Gavin.Hu@arm.com>

<...>


* Re: [dpdk-dev] [PATCH] app/testpmd: fix vdev socket initialization
  2018-10-12 17:13 ` Ferruh Yigit
@ 2018-10-15  9:51   ` Phil Yang (Arm Technology China)
  2018-10-15 10:41     ` Ferruh Yigit
  0 siblings, 1 reply; 6+ messages in thread
From: Phil Yang (Arm Technology China) @ 2018-10-15  9:51 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: nd, anatoly.burakov

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Saturday, October 13, 2018 1:13 AM
> To: Phil Yang (Arm Technology China) <Phil.Yang@arm.com>; dev@dpdk.org
> Cc: nd <nd@arm.com>; anatoly.burakov@intel.com
> Subject: Re: [PATCH] app/testpmd: fix vdev socket initialization
> 
> On 10/12/2018 10:34 AM, phil.yang@arm.com wrote:
> > The cmdline settings of port-numa-config and rxring-numa-config have
> > been flushed by the following init_config. If we don't configure the
> > port-numa-config, the virtual device will allocate the device ports to
> > socket 0. It will cause failure when the socket 0 is unavailable.
> >
> > eg:
> > testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo
> > --socket-mem=64 -- --numa --port-numa-config="(0,1)"
> > --ring-numa-config="(0,1,1),(0,2,1)" -i
> >
> > ...
> > Configuring Port 0 (socket 0)
> > Failed to setup RX queue:No mempool allocation on the socket 0
> > EAL: Error - exiting with code: 1
> >   Cause: Start ports failed
> >
> > Fix by allocate the devices port to the first available socket or the
> > socket configured in port-numa-config.
> 
> I confirm this fixes the issue, by making vdev to allocate from available socket
> instead of hardcoded socket 0, overall this make sense.
> 
> But currently there is no way to request mempool form "socket 0" if only cores
> from "socket 1" provided in "-l", even with "port-numa-config" and "rxring-
> numa-config".
> Both this behavior and the problem this patch fixes caused by patch:
> Commit dbfb8ec7094c ("app/testpmd: optimize mbuf pool allocation")
> 
> It is good to have optimized mempool allocation but I think this shouldn't limit
> the tool. If user wants mempools from specific socket, let it have.
> 
> What about changing the default behavior to:
> 1- Allocate mempool only from socket that coremask provided (current
> approach)
> 2- Plus, allocate mempool from sockets of attached devices (this is alternative
> solution to this patch, your solution seems better for virtual devices but for
> physical devices allocating from socket it connects can be better)
> 3- Plus, allocate mempool from sockets provided in "port-numa-config" and
> "rxring-numa-config"
> 
> What do you think?

Hi Ferruh,

Totally agree with your suggestion.

As I understand it, allocating mempools from the sockets of attached devices will enable the cross-NUMA scenario for testpmd.

Below is my fix for the physical port mempool allocation issue. Is it better to separate it into a new patch on top of this one, or to rework this one by adding the fix below? I prefer a new patch, because the current patch already fixes two defects. Anyway, I will follow your comment.

 static void
 set_default_fwd_ports_config(void)
 {
         portid_t pt_id;
         int i = 0;

         RTE_ETH_FOREACH_DEV(pt_id) {
                 fwd_ports_ids[i++] = pt_id;

+                /* Update sockets info according to the attached device */
+                int socket_id = rte_eth_dev_socket_id(pt_id);
+                if (socket_id >= 0 && new_socket_id(socket_id)) {
+                        if (num_sockets >= RTE_MAX_NUMA_NODES) {
+                                rte_exit(EXIT_FAILURE,
+                                         "Total sockets greater than %u\n",
+                                         RTE_MAX_NUMA_NODES);
+                        }
+                        socket_ids[num_sockets++] = socket_id;
+                }
+        }
+
         nb_cfg_ports = nb_ports;
         nb_fwd_ports = nb_ports;
 }

Thanks
Phil Yang

> 
> 
> >
> > Fixes: 487f9a5 ("app/testpmd: fix NUMA structures initialization")
> >
> > Signed-off-by: Phil Yang <phil.yang@arm.com>
> > Reviewed-by: Gavin Hu <Gavin.Hu@arm.com>
> 
> <...>


* Re: [dpdk-dev] [PATCH] app/testpmd: fix vdev socket initialization
  2018-10-15  9:51   ` Phil Yang (Arm Technology China)
@ 2018-10-15 10:41     ` Ferruh Yigit
  2018-10-16  8:58       ` Phil Yang (Arm Technology China)
  0 siblings, 1 reply; 6+ messages in thread
From: Ferruh Yigit @ 2018-10-15 10:41 UTC (permalink / raw)
  To: Phil Yang (Arm Technology China), dev; +Cc: nd, anatoly.burakov

On 10/15/2018 10:51 AM, Phil Yang (Arm Technology China) wrote:
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Saturday, October 13, 2018 1:13 AM
>> To: Phil Yang (Arm Technology China) <Phil.Yang@arm.com>; dev@dpdk.org
>> Cc: nd <nd@arm.com>; anatoly.burakov@intel.com
>> Subject: Re: [PATCH] app/testpmd: fix vdev socket initialization
>>
>> On 10/12/2018 10:34 AM, phil.yang@arm.com wrote:
>>> The cmdline settings of port-numa-config and rxring-numa-config have
>>> been flushed by the following init_config. If we don't configure the
>>> port-numa-config, the virtual device will allocate the device ports to
>>> socket 0. It will cause failure when the socket 0 is unavailable.
>>>
>>> eg:
>>> testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo
>>> --socket-mem=64 -- --numa --port-numa-config="(0,1)"
>>> --ring-numa-config="(0,1,1),(0,2,1)" -i
>>>
>>> ...
>>> Configuring Port 0 (socket 0)
>>> Failed to setup RX queue:No mempool allocation on the socket 0
>>> EAL: Error - exiting with code: 1
>>>   Cause: Start ports failed
>>>
>>> Fix by allocate the devices port to the first available socket or the
>>> socket configured in port-numa-config.
>>
>> I confirm this fixes the issue, by making vdev to allocate from available socket
>> instead of hardcoded socket 0, overall this make sense.
>>
>> But currently there is no way to request mempool form "socket 0" if only cores
>> from "socket 1" provided in "-l", even with "port-numa-config" and "rxring-
>> numa-config".
>> Both this behavior and the problem this patch fixes caused by patch:
>> Commit dbfb8ec7094c ("app/testpmd: optimize mbuf pool allocation")
>>
>> It is good to have optimized mempool allocation but I think this shouldn't limit
>> the tool. If user wants mempools from specific socket, let it have.
>>
>> What about changing the default behavior to:
>> 1- Allocate mempool only from socket that coremask provided (current
>> approach)
>> 2- Plus, allocate mempool from sockets of attached devices (this is alternative
>> solution to this patch, your solution seems better for virtual devices but for
>> physical devices allocating from socket it connects can be better)
>> 3- Plus, allocate mempool from sockets provided in "port-numa-config" and
>> "rxring-numa-config"
>>
>> What do you think?
> 
> Hi Ferruh,
> 
> Totally agreed with your suggestion. 
> 
> As I understand, allocating mempool from sockets of attached devices will enable the cross NUMA scenario for Testpmd.

Yes it will.

> 
> Below is my fix for physic port mempool allocate issue. So, is it better to separate it into a new patch on the top of this one or rework this one by adding below fix? I prefer to add a new one because the current patch has already fixed two defects. Anyway, I will follow your comment.

+1 to separating it into a new patch, so I will review the existing patch as is.

The code below looks good; I'm just not sure whether it should be in
`set_default_fwd_ports_config`, or perhaps `set_default_fwd_lcores_config`?

And port-numa-config and rxring-numa-config are still not covered.

> 
>    565 static void                                                                                                                 
>    566 set_default_fwd_ports_config(void)                                                                                          
>    567 {                                                                                                                           
>    568 ›   portid_t pt_id;                                                                                                         
>    569 ›   int i = 0;                                                                                                              
>    570 
>    571 ›   RTE_ETH_FOREACH_DEV(pt_id) {                                                                                            
>    572 ›   ›   fwd_ports_ids[i++] = pt_id;                                                                                         
>    573 
> +  574 ›   ›   /* Update sockets info according to the attached device */                                                          
> +  575 ›   ›   int socket_id = rte_eth_dev_socket_id(pt_id);
> +  576 ›   ›   if (socket_id >= 0 && new_socket_id(pt_id)) {                                                                       
> +  577 ›   ›   ›   if (num_sockets >= RTE_MAX_NUMA_NODES) {                                                                        
> +  578 ›   ›   ›   ›   rte_exit(EXIT_FAILURE,                                                                                      
> +  579 ›   ›   ›   ›   ›    "Total sockets greater than %u\n",                                                                     
> +  580 ›   ›   ›   ›   ›    RTE_MAX_NUMA_NODES);
> +  581 ›   ›   ›   }                                                                                                               
> +  582 ›   ›   ›   socket_ids[num_sockets++] = socket_id;
> +  583 ›   ›   }
> +  584 ›   }
> +  585 
>    586 ›   nb_cfg_ports = nb_ports;
>    587 ›   nb_fwd_ports = nb_ports;                                                                                                
>    588 }                               
> 
> Thanks
> Phil Yang
> 
>>
>>
>>>
>>> Fixes: 487f9a5 ("app/testpmd: fix NUMA structures initialization")
>>>
>>> Signed-off-by: Phil Yang <phil.yang@arm.com>
>>> Reviewed-by: Gavin Hu <Gavin.Hu@arm.com>
>>
>> <...>


* Re: [dpdk-dev] [PATCH] app/testpmd: fix vdev socket initialization
  2018-10-12  9:34 [dpdk-dev] [PATCH] app/testpmd: fix vdev socket initialization Phil Yang
  2018-10-12 17:13 ` Ferruh Yigit
@ 2018-10-15 10:58 ` Ferruh Yigit
  1 sibling, 0 replies; 6+ messages in thread
From: Ferruh Yigit @ 2018-10-15 10:58 UTC (permalink / raw)
  To: phil.yang, dev; +Cc: nd, anatoly.burakov

On 10/12/2018 10:34 AM, phil.yang@arm.com wrote:
> The cmdline settings of port-numa-config and rxring-numa-config have been
> flushed by the following init_config. If we don't configure the
> port-numa-config, the virtual device will allocate the device ports to
> socket 0. It will cause failure when the socket 0 is unavailable.
> 
> eg:
> testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo
> --socket-mem=64 -- --numa --port-numa-config="(0,1)"
> --ring-numa-config="(0,1,1),(0,2,1)" -i
> 
> ...
> Configuring Port 0 (socket 0)
> Failed to setup RX queue:No mempool allocation on the socket 0
> EAL: Error - exiting with code: 1
>   Cause: Start ports failed
> 
> Fix by allocate the devices port to the first available socket or the
> socket configured in port-numa-config.
> 
> Fixes: 487f9a5 ("app/testpmd: fix NUMA structures initialization")
> 
> Signed-off-by: Phil Yang <phil.yang@arm.com>
> Reviewed-by: Gavin Hu <Gavin.Hu@arm.com>

 Fixes: 487f9a592a27 ("app/testpmd: fix NUMA structures initialization")
 Fixes: 20a0286fd2c0 ("app/testpmd: check socket id validity")
 Cc: stable@dpdk.org

Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

Applied to dpdk-next-net/master, thanks.


* Re: [dpdk-dev] [PATCH] app/testpmd: fix vdev socket initialization
  2018-10-15 10:41     ` Ferruh Yigit
@ 2018-10-16  8:58       ` Phil Yang (Arm Technology China)
  0 siblings, 0 replies; 6+ messages in thread
From: Phil Yang (Arm Technology China) @ 2018-10-16  8:58 UTC (permalink / raw)
  To: Ferruh Yigit, dev; +Cc: nd, anatoly.burakov


> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Monday, October 15, 2018 6:42 PM
> To: Phil Yang (Arm Technology China) <Phil.Yang@arm.com>; dev@dpdk.org
> Cc: nd <nd@arm.com>; anatoly.burakov@intel.com
> Subject: Re: [PATCH] app/testpmd: fix vdev socket initialization
> 
> On 10/15/2018 10:51 AM, Phil Yang (Arm Technology China) wrote:
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >> Sent: Saturday, October 13, 2018 1:13 AM
> >> To: Phil Yang (Arm Technology China) <Phil.Yang@arm.com>;
> >> dev@dpdk.org
> >> Cc: nd <nd@arm.com>; anatoly.burakov@intel.com
> >> Subject: Re: [PATCH] app/testpmd: fix vdev socket initialization
> >>
> >> On 10/12/2018 10:34 AM, phil.yang@arm.com wrote:
> >>> The cmdline settings of port-numa-config and rxring-numa-config have
> >>> been flushed by the following init_config. If we don't configure the
> >>> port-numa-config, the virtual device will allocate the device ports
> >>> to socket 0. It will cause failure when the socket 0 is unavailable.
> >>>
> >>> eg:
> >>> testpmd -l <cores from socket 1> --vdev net_pcap0,iface=lo
> >>> --socket-mem=64 -- --numa --port-numa-config="(0,1)"
> >>> --ring-numa-config="(0,1,1),(0,2,1)" -i
> >>>
> >>> ...
> >>> Configuring Port 0 (socket 0)
> >>> Failed to setup RX queue:No mempool allocation on the socket 0
> >>> EAL: Error - exiting with code: 1
> >>>   Cause: Start ports failed
> >>>
> >>> Fix by allocate the devices port to the first available socket or
> >>> the socket configured in port-numa-config.
> >>
> >> I confirm this fixes the issue, by making vdev to allocate from
> >> available socket instead of hardcoded socket 0, overall this make sense.
> >>
> >> But currently there is no way to request mempool form "socket 0" if
> >> only cores from "socket 1" provided in "-l", even with
> >> "port-numa-config" and "rxring- numa-config".
> >> Both this behavior and the problem this patch fixes caused by patch:
> >> Commit dbfb8ec7094c ("app/testpmd: optimize mbuf pool allocation")
> >>
> >> It is good to have optimized mempool allocation but I think this
> >> shouldn't limit the tool. If user wants mempools from specific socket, let it
> have.
> >>
> >> What about changing the default behavior to:
> >> 1- Allocate mempool only from socket that coremask provided (current
> >> approach)
> >> 2- Plus, allocate mempool from sockets of attached devices (this is
> >> alternative solution to this patch, your solution seems better for
> >> virtual devices but for physical devices allocating from socket it
> >> connects can be better)
> >> 3- Plus, allocate mempool from sockets provided in "port-numa-config"
> >> and "rxring-numa-config"
> >>
> >> What do you think?
> >
> > Hi Ferruh,
> >
> > Totally agreed with your suggestion.
> >
> > As I understand, allocating mempool from sockets of attached devices will
> enable the cross NUMA scenario for Testpmd.
> 
> Yes it will.
> 
> >
> > Below is my fix for physic port mempool allocate issue. So, is it better to
> separate it into a new patch on the top of this one or rework this one by adding
> below fix? I prefer to add a new one because the current patch has already fixed
> two defects. Anyway, I will follow your comment.
> 
> +1 to separate it into a new patch, so I will check existing patch.
> 
> Below looks good only not sure if is should be in `set_default_fwd_ports_config`?
> Or perhaps `set_default_fwd_lcores_config`?
Hi Ferruh,

IMO, 'set_default_fwd_lcores_config' aims to update the socket info and core related info according to the -l <core list> or -c <core mask> input.
So going through the attached devices and updating the ports' socket info in 'set_default_fwd_ports_config' is reasonable.

I think the initialization process in testpmd goes through the steps below.

'set_def_fwd_config'
    1. 'set_default_fwd_lcores_config' - update core related info
    2. 'set_default_peer_eth_addrs'    - update port addresses
    3. 'set_default_fwd_ports_config'  - update port (or device) related info
        |
        V
'launch_args_parse' - update the port-numa-config settings
        |
        V
'init_config'
    1. allocate a mempool for each available socket recorded in socket_ids[]
    2. 'init_fwd_streams' - update port->socket_id info according to the port-numa-config
        |
        V
'start_port'
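
For reference, the mempool step in 'init_config' is roughly the following
(a simplified sketch; mbuf_pool_create(), mbuf_data_size and nb_mbuf_per_pool
are the existing testpmd.c names, and the non-NUMA branch is omitted):

        if (numa_support) {
                uint8_t i;

                /* one mbuf pool per socket recorded in socket_ids[] */
                for (i = 0; i < num_sockets; i++)
                        mbuf_pool_create(mbuf_data_size, nb_mbuf_per_pool,
                                         socket_ids[i]);
        }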

Once this patch is applied, the socket_ids[] update order will affect the default socket of mempool allocation.
e.g.:  socket_ids[0] = <socket of the -l core list>
       socket_ids[1] = <attached device's socket id>, when it is not a socket listed in <core list>
For virtual devices the default socket is socket_ids[0]; for physical devices the default socket will be socket_ids[1].

> 
> And port-numa-config and rxring-numa-config still not covered.
Those configurations are initialized in 'launch_args_parse' and applied to the forwarding streams in 'init_fwd_streams', so the patch covers them.
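
For reference, the per-port socket selection in 'init_fwd_streams' looks
roughly like this (a simplified sketch of the patched testpmd.c, trimmed to
the NUMA branch):

        RTE_ETH_FOREACH_DEV(pid) {
                port = &ports[pid];

                if (numa_support) {
                        if (port_numa[pid] != NUMA_NO_CONFIG)
                                /* socket requested via --port-numa-config */
                                port->socket_id = port_numa[pid];
                        else {
                                port->socket_id = rte_eth_dev_socket_id(pid);
                                /* if invalid, fall back to the first available socket */
                                if (check_socket_id(port->socket_id) < 0)
                                        port->socket_id = socket_ids[0];
                        }
                }
                /* non-NUMA branch omitted */
        }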

Thanks
Phil Yang
> 
> >
> >    565 static void
> >    566 set_default_fwd_ports_config(void)
> >    567 {
> >    568 ›   portid_t pt_id;
> >    569 ›   int i = 0;
> >    570
> >    571 ›   RTE_ETH_FOREACH_DEV(pt_id) {
> >    572 ›   ›   fwd_ports_ids[i++] = pt_id;
> >    573
> > +  574 ›   ›   /* Update sockets info according to the attached device */
> > +  575 ›   ›   int socket_id = rte_eth_dev_socket_id(pt_id);
> > +  576 ›   ›   if (socket_id >= 0 && new_socket_id(pt_id)) {
> > +  577 ›   ›   ›   if (num_sockets >= RTE_MAX_NUMA_NODES) {
> > +  578 ›   ›   ›   ›   rte_exit(EXIT_FAILURE,
> > +  579 ›   ›   ›   ›   ›    "Total sockets greater than %u\n",
> > +  580 ›   ›   ›   ›   ›    RTE_MAX_NUMA_NODES);
> > +  581 ›   ›   ›   }
> > +  582 ›   ›   ›   socket_ids[num_sockets++] = socket_id;
> > +  583 ›   ›   }
> > +  584 ›   }
> > +  585
> >    586 ›   nb_cfg_ports = nb_ports;
> >    587 ›   nb_fwd_ports = nb_ports;
> >    588 }
> >
> > Thanks
> > Phil Yang
> >
> >>
> >>
> >>>
> >>> Fixes: 487f9a5 ("app/testpmd: fix NUMA structures initialization")
> >>>
> >>> Signed-off-by: Phil Yang <phil.yang@arm.com>
> >>> Reviewed-by: Gavin Hu <Gavin.Hu@arm.com>
> >>
> >> <...>


