From: Haider Ali <haider@dreambigsemi.com>
To: Asaf Penso <asafp@nvidia.com>, users <users@dpdk.org>
Cc: Ori Kam <orika@nvidia.com>, Lior Margalit <lmargalit@nvidia.com>
Subject: Re: [mlx5] (segmentation fault) rte_flow Template and Asynchronous APIs
Date: Thu, 22 Sep 2022 06:14:14 +0000 [thread overview]
Message-ID: <MW5PR22MB3395203D43E90733ABECE9A9A74E9@MW5PR22MB3395.namprd22.prod.outlook.com> (raw)
In-Reply-To: <MWHPR1201MB25573DDA3DD43F7A64AE192BCD4F9@MWHPR1201MB2557.namprd12.prod.outlook.com>
Thanks Asaf for your reply.
We can wait until 22.11 to play with these new rte_flow template and asynchronous APIs.
Regards,
Haider
________________________________
From: Asaf Penso <asafp@nvidia.com>
Sent: Thursday, September 22, 2022 12:58 AM
To: Haider Ali <haider@dreambigsemi.com>; users <users@dpdk.org>
Cc: Ori Kam <orika@nvidia.com>; Lior Margalit <lmargalit@nvidia.com>
Subject: RE: [mlx5] (segmentation fault) rte_flow Template and Asynchronous APIs
Hello Haider,
Full mlx5 PMD support for these APIs is planned for 22.11.
We'll start sending the patches soon.
Regards,
Asaf Penso
From: Haider Ali <haider@dreambigsemi.com>
Sent: Tuesday, September 20, 2022 3:13 PM
To: users <users@dpdk.org>
Subject: [mlx5] (segmentation fault) rte_flow Template and Asynchronous APIs
Hi,
I am trying to test the new rte_flow template and asynchronous APIs, but when I start the testpmd application I get a segmentation fault.
$ sudo ./app/dpdk-testpmd -a 04:00.1,dv_flow_en=2 -- -i --rxq=8 --txq=8
EAL: Detected CPU lcores: 48
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:04:00.1 (socket 0)
mlx5_common: DevX create q counter set failed errno=22 status=0 syndrome=0
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=523456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=523456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
Segmentation fault
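For context, the sequence testpmd sets up here follows the template/asynchronous flow API introduced in DPDK 22.03. A minimal sketch of that call order (error handling elided; the attribute values, table size, and the pattern/action arrays are placeholders supplied by the application, not taken from testpmd):

```c
#include <rte_flow.h>

/* Hedged sketch of the template/async flow setup; assumes "port" and the
 * item/action/mask arrays are defined elsewhere by the application. */
static struct rte_flow_template_table *
setup_template_table(uint16_t port, const struct rte_flow_item pattern[],
                     const struct rte_flow_action actions[],
                     const struct rte_flow_action masks[])
{
    struct rte_flow_error err;
    const struct rte_flow_port_attr port_attr = { .nb_counters = 0 };
    const struct rte_flow_queue_attr q_attr = { .size = 64 };
    const struct rte_flow_queue_attr *q_attrs[] = { &q_attr };

    /* 1. Pre-allocate flow resources and one asynchronous flow queue. */
    rte_flow_configure(port, &port_attr, 1, q_attrs, &err);

    /* 2. Create the pattern and actions templates. */
    const struct rte_flow_pattern_template_attr pt_attr = { .ingress = 1 };
    struct rte_flow_pattern_template *pt =
        rte_flow_pattern_template_create(port, &pt_attr, pattern, &err);

    const struct rte_flow_actions_template_attr at_attr = { .ingress = 1 };
    struct rte_flow_actions_template *at =
        rte_flow_actions_template_create(port, &at_attr, actions, masks, &err);

    /* 3. Combine them into a table; rules are later inserted into it
     *    asynchronously via rte_flow_async_create() + rte_flow_push(). */
    const struct rte_flow_template_table_attr tbl_attr = {
        .flow_attr = { .ingress = 1 },
        .nb_flows = 1024,
    };
    return rte_flow_template_table_create(port, &tbl_attr,
                                          &pt, 1, &at, 1, &err);
}
```

The crash below happens before any of this runs, during port start, so the problem is in the PMD's control-flow path rather than in the new API calls themselves.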
Secondly, I also tried to debug the code and found that some callbacks are missing from the mlx5 PMD ops table (mlx5_flow_hw_drv_ops):
Configuring Port 0 (socket 0)
Thread 1 "dpdk-testpmd" received signal SIGSEGV, Segmentation fault.
0x0000000000000000 in ?? ()
(gdb) bt
#0 0x0000000000000000 in ?? ()
#1 0x000000000201fb7e in flow_drv_validate (dev=0x7ce4d80 <rte_eth_devices>, attr=0x7fffffffd150, items=0x7fffffffd0f0, actions=0x7fffffffd0a0, external=false, hairpin=0, error=0x7fffffffd080)
at ../drivers/net/mlx5/mlx5_flow.c:3770
#2 0x00000000020302b1 in flow_list_create (dev=0x7ce4d80 <rte_eth_devices>, type=MLX5_FLOW_TYPE_CTL, attr=0x7fffffffd150, items=0x7fffffffd0f0, original_actions=0x7fffffffd0a0, external=false,
error=0x7fffffffd080) at ../drivers/net/mlx5/mlx5_flow.c:6872
#3 0x0000000002031976 in mlx5_ctrl_flow_vlan (dev=0x7ce4d80 <rte_eth_devices>, eth_spec=0x7fffffffd2c0, eth_mask=0x7fffffffd2c0, vlan_spec=0x0, vlan_mask=0x0) at ../drivers/net/mlx5/mlx5_flow.c:7626
#4 0x00000000020319dd in mlx5_ctrl_flow (dev=0x7ce4d80 <rte_eth_devices>, eth_spec=0x7fffffffd2c0, eth_mask=0x7fffffffd2c0) at ../drivers/net/mlx5/mlx5_flow.c:7651
#5 0x00000000020c86c5 in mlx5_traffic_enable (dev=0x7ce4d80 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_trigger.c:1411
#6 0x00000000020c7b6d in mlx5_dev_start (dev=0x7ce4d80 <rte_eth_devices>) at ../drivers/net/mlx5/mlx5_trigger.c:1173
#7 0x0000000000ae286a in rte_eth_dev_start (port_id=0) at ../lib/ethdev/rte_ethdev.c:1474
#8 0x000000000066b0ff in eth_dev_start_mp (port_id=0) at ../app/test-pmd/testpmd.c:646
#9 0x0000000000670135 in start_port (pid=65535) at ../app/test-pmd/testpmd.c:3027
#10 0x00000000006731e7 in main (argc=4, argv=0x7fffffffe3d0) at ../app/test-pmd/testpmd.c:4398
(gdb)
Please correct me if I am doing anything wrong.
Regards,
Haider
Thread overview: 3+ messages
2022-09-20 12:12 Haider Ali
2022-09-21 19:58 ` Asaf Penso
2022-09-22 6:14 ` Haider Ali [this message]