DPDK usage discussions
* [mlx5] (segmentation fault) rte_flow Template and Asynchronous APIs
@ 2022-09-20 12:12 Haider Ali
  2022-09-21 19:58 ` Asaf Penso
  0 siblings, 1 reply; 3+ messages in thread
From: Haider Ali @ 2022-09-20 12:12 UTC (permalink / raw)
  To: users


Hi,

I am trying to test the new rte_flow template and asynchronous APIs, but when I start the testpmd application I get a segmentation fault.

$ sudo ./app/dpdk-testpmd -a 04:00.1,dv_flow_en=2 -- -i --rxq=8 --txq=8

EAL: Detected CPU lcores: 48
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:04:00.1 (socket 0)
mlx5_common: DevX create q counter set failed errno=22 status=0 syndrome=0
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=523456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=523456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Segmentation fault


Secondly, I also tried to debug the code and found that some functions are missing from the mlx5 PMD ops table (mlx5_flow_hw_drv_ops):

Configuring Port 0 (socket 0)

Thread 1 "dpdk-testpmd" received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()


(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x000000000201fb7e in flow_drv_validate (dev=0x7ce4d80 <rte_eth_devices>, attr=0x7fffffffd150, items=0x7fffffffd0f0, actions=0x7fffffffd0a0, external=false, hairpin=0, error=0x7fffffffd080)
    at ../drivers/net/mlx5/mlx5_flow.c:3770
#2  0x00000000020302b1 in flow_list_create (dev=0x7ce4d80 <rte_eth_devices>, type=MLX5_FLOW_TYPE_CTL, attr=0x7fffffffd150, items=0x7fffffffd0f0, original_actions=0x7fffffffd0a0, external=false,
    error=0x7fffffffd080) at ../drivers/net/mlx5/mlx5_flow.c:6872
#3  0x0000000002031976 in mlx5_ctrl_flow_vlan (dev=0x7ce4d80 <rte_eth_devices>, eth_spec=0x7fffffffd2c0, eth_mask=0x7fffffffd2c0, vlan_spec=0x0, vlan_mask=0x0) at ../drivers/net/mlx5/mlx5_flow.c:7626
#4  0x00000000020319dd in mlx5_ctrl_flow (dev=0x7ce4d80 <rte_eth_devices>, eth_spec=0x7fffffffd2c0, eth_mask=0x7fffffffd2c0) at ../drivers/net/mlx5/mlx5_flow.c:7651
#5  0x00000000020c86c5 in mlx5_traffic_enable (dev=0x7ce4d80 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_trigger.c:1411
#6  0x00000000020c7b6d in mlx5_dev_start (dev=0x7ce4d80 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_trigger.c:1173
#7  0x0000000000ae286a in rte_eth_dev_start (port_id=0) at ../lib/ethdev/rte_ethdev.c:1474
#8  0x000000000066b0ff in eth_dev_start_mp (port_id=0) at ../app/test-pmd/testpmd.c:646
#9  0x0000000000670135 in start_port (pid=65535) at ../app/test-pmd/testpmd.c:3027
#10 0x00000000006731e7 in main (argc=4, argv=0x7fffffffe3d0) at ../app/test-pmd/testpmd.c:4398
(gdb)

Please correct me if I am doing anything wrong.

Regards,
Haider



* RE: [mlx5] (segmentation fault) rte_flow Template and Asynchronous APIs
  2022-09-20 12:12 [mlx5] (segmentation fault) rte_flow Template and Asynchronous APIs Haider Ali
@ 2022-09-21 19:58 ` Asaf Penso
  2022-09-22  6:14   ` Haider Ali
  0 siblings, 1 reply; 3+ messages in thread
From: Asaf Penso @ 2022-09-21 19:58 UTC (permalink / raw)
  To: Haider Ali, users; +Cc: Ori Kam, Lior Margalit


Hello Haider,

The full mlx5 PMD support for these APIs is planned for 22.11.
We'll start sending the patches soon.

Regards,
Asaf Penso




* Re: [mlx5] (segmentation fault) rte_flow Template and Asynchronous APIs
  2022-09-21 19:58 ` Asaf Penso
@ 2022-09-22  6:14   ` Haider Ali
  0 siblings, 0 replies; 3+ messages in thread
From: Haider Ali @ 2022-09-22  6:14 UTC (permalink / raw)
  To: Asaf Penso, users; +Cc: Ori Kam, Lior Margalit


Thanks, Asaf, for your reply.

We can wait until 22.11 to try the new rte_flow template and asynchronous APIs.

Regards,
Haider


