From: "Benoit Ganne (bganne)" <bganne@cisco.com>
To: Matan Azrad <matan@mellanox.com>, "users@dpdk.org" <users@dpdk.org>
Cc: Shahaf Shuler <shahafs@mellanox.com>,
	Slava Ovsiienko <viacheslavo@mellanox.com>
Subject: Re: [dpdk-users] mlx5 pmd + rdma-core 28 init failure
Date: Thu, 2 Apr 2020 17:03:04 +0000
Message-ID: <CH2PR11MB432720C6D7FEF091A7265A23C1C60@CH2PR11MB4327.namprd11.prod.outlook.com>
In-Reply-To: <AM0PR0502MB40196B24C3501F0225224132D2C60@AM0PR0502MB4019.eurprd05.prod.outlook.com>

> Can you run with log level debug and send us the log?

Here it is:
~# sudo ./build/app/testpmd --log-level=8 --log-level=pmd.common.mlx5:8 --log-level=pmd.net.mlx5:8 -w 0000:5e:00.0 -w 0000:5e:00.1 -l 4,11,35 -- -a --forward-mode=rxonly
EAL: Detected 48 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:5e:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1013 net_mlx5
net_mlx5: mlx5.c:3040: mlx5_pci_probe(): checking device "mlx5_0"
net_mlx5: mlx5.c:3074: mlx5_pci_probe(): PCI information matches for device "mlx5_0"
net_mlx5: mlx5.c:3040: mlx5_pci_probe(): checking device "mlx5_1"
net_mlx5: mlx5.c:3305: mlx5_pci_probe(): no E-Switch support detected
net_mlx5: mlx5.c:2178: mlx5_dev_spawn(): naming Ethernet device "0000:5e:00.0"
net_mlx5: mlx5.c:548: mlx5_alloc_shared_ibctx(): DevX is NOT supported
net_mlx5: mlx5_mr.c:214: mlx5_mr_btree_init(): initialized B-tree 0x1003dcb20 with table 0x1003d9640
net_mlx5: mlx5.c:2251: mlx5_dev_spawn(): MPW isn't supported
net_mlx5: mlx5.c:2257: mlx5_dev_spawn(): SWP support: 7
net_mlx5: mlx5.c:2266: mlx5_dev_spawn(): 	min_single_stride_log_num_of_bytes: 0
net_mlx5: mlx5.c:2268: mlx5_dev_spawn(): 	max_single_stride_log_num_of_bytes: 0
net_mlx5: mlx5.c:2270: mlx5_dev_spawn(): 	min_single_wqe_log_num_of_strides: 0
net_mlx5: mlx5.c:2272: mlx5_dev_spawn(): 	max_single_wqe_log_num_of_strides: 0
net_mlx5: mlx5.c:2274: mlx5_dev_spawn(): 	supported_qpts: 0
net_mlx5: mlx5.c:2275: mlx5_dev_spawn(): device supports Multi-Packet RQ
net_mlx5: mlx5.c:2310: mlx5_dev_spawn(): tunnel offloading is supported
net_mlx5: mlx5.c:2322: mlx5_dev_spawn(): MPLS over GRE/UDP tunnel offloading is not supported
net_mlx5: mlx5.c:2473: mlx5_dev_spawn(): checksum offloading is supported
net_mlx5: mlx5.c:2493: mlx5_dev_spawn(): maximum Rx indirection table size is 512
net_mlx5: mlx5.c:2497: mlx5_dev_spawn(): VLAN stripping is supported
net_mlx5: mlx5.c:2501: mlx5_dev_spawn(): FCS stripping configuration is supported
net_mlx5: mlx5.c:2531: mlx5_dev_spawn(): MPS is disabled
net_mlx5: mlx5.c:2656: mlx5_dev_spawn(): port 0 MAC address is 24:8a:07:5b:14:14
net_mlx5: mlx5.c:2663: mlx5_dev_spawn(): port 0 ifname is "enp94s0f0"
net_mlx5: mlx5.c:2676: mlx5_dev_spawn(): port 0 MTU is 9216
net_mlx5: mlx5.c:2703: mlx5_dev_spawn(): port 0 forcing Ethernet interface up
net_mlx5: mlx5.c:1836: mlx5_set_min_inline(): min tx inline configured: 18
net_mlx5: mlx5_utils.c:41: mlx5_hlist_create(): Hash list with mlx5_0_flow_table size 0x1000 is created.

net_mlx5: mlx5_utils.c:41: mlx5_hlist_create(): Hash list with mlx5_0_tags size 0x2000 is created.

net_mlx5: mlx5_flow.c:550: mlx5_flow_discover_priorities(): port 0 flow maximum priority: 3
net_mlx5: mlx5.c:1887: mlx5_set_metadata_mask(): metadata mode 0
net_mlx5: mlx5.c:1888: mlx5_set_metadata_mask(): metadata MARK mask 00FFFFFF
net_mlx5: mlx5.c:1889: mlx5_set_metadata_mask(): metadata META mask FFFFFFFF
net_mlx5: mlx5.c:1890: mlx5_set_metadata_mask(): metadata reg_c0 mask FFFFFFFF
net_mlx5: mlx5.c:2771: mlx5_dev_spawn(): port 0 extensive metadata register is not supported
EAL: PCI device 0000:5e:00.1 on NUMA socket 0
EAL:   probe driver: 15b3:1013 net_mlx5
net_mlx5: mlx5.c:3040: mlx5_pci_probe(): checking device "mlx5_0"
net_mlx5: mlx5.c:3040: mlx5_pci_probe(): checking device "mlx5_1"
net_mlx5: mlx5.c:3074: mlx5_pci_probe(): PCI information matches for device "mlx5_1"
net_mlx5: mlx5.c:3305: mlx5_pci_probe(): no E-Switch support detected
net_mlx5: mlx5.c:2178: mlx5_dev_spawn(): naming Ethernet device "0000:5e:00.1"
net_mlx5: mlx5.c:548: mlx5_alloc_shared_ibctx(): DevX is NOT supported
net_mlx5: mlx5_mr.c:214: mlx5_mr_btree_init(): initialized B-tree 0x10037b420 with table 0x100377f40
net_mlx5: mlx5.c:2251: mlx5_dev_spawn(): MPW isn't supported
net_mlx5: mlx5.c:2257: mlx5_dev_spawn(): SWP support: 7
net_mlx5: mlx5.c:2266: mlx5_dev_spawn(): 	min_single_stride_log_num_of_bytes: 0
net_mlx5: mlx5.c:2268: mlx5_dev_spawn(): 	max_single_stride_log_num_of_bytes: 0
net_mlx5: mlx5.c:2270: mlx5_dev_spawn(): 	min_single_wqe_log_num_of_strides: 0
net_mlx5: mlx5.c:2272: mlx5_dev_spawn(): 	max_single_wqe_log_num_of_strides: 0
net_mlx5: mlx5.c:2274: mlx5_dev_spawn(): 	supported_qpts: 0
net_mlx5: mlx5.c:2275: mlx5_dev_spawn(): device supports Multi-Packet RQ
net_mlx5: mlx5.c:2310: mlx5_dev_spawn(): tunnel offloading is supported
net_mlx5: mlx5.c:2322: mlx5_dev_spawn(): MPLS over GRE/UDP tunnel offloading is not supported
net_mlx5: mlx5.c:2473: mlx5_dev_spawn(): checksum offloading is supported
net_mlx5: mlx5.c:2493: mlx5_dev_spawn(): maximum Rx indirection table size is 512
net_mlx5: mlx5.c:2497: mlx5_dev_spawn(): VLAN stripping is supported
net_mlx5: mlx5.c:2501: mlx5_dev_spawn(): FCS stripping configuration is supported
net_mlx5: mlx5.c:2531: mlx5_dev_spawn(): MPS is disabled
net_mlx5: mlx5.c:2656: mlx5_dev_spawn(): port 1 MAC address is 24:8a:07:5b:14:15
net_mlx5: mlx5.c:2663: mlx5_dev_spawn(): port 1 ifname is "enp94s0f1"
net_mlx5: mlx5.c:2676: mlx5_dev_spawn(): port 1 MTU is 9216
net_mlx5: mlx5.c:2703: mlx5_dev_spawn(): port 1 forcing Ethernet interface up
net_mlx5: mlx5.c:1836: mlx5_set_min_inline(): min tx inline configured: 18
net_mlx5: mlx5_utils.c:41: mlx5_hlist_create(): Hash list with mlx5_1_flow_table size 0x1000 is created.

net_mlx5: mlx5_utils.c:41: mlx5_hlist_create(): Hash list with mlx5_1_tags size 0x2000 is created.

net_mlx5: mlx5_flow.c:550: mlx5_flow_discover_priorities(): port 1 flow maximum priority: 3
net_mlx5: mlx5.c:1887: mlx5_set_metadata_mask(): metadata mode 0
net_mlx5: mlx5.c:1888: mlx5_set_metadata_mask(): metadata MARK mask 00FFFFFF
net_mlx5: mlx5.c:1889: mlx5_set_metadata_mask(): metadata META mask FFFFFFFF
net_mlx5: mlx5.c:1890: mlx5_set_metadata_mask(): metadata reg_c0 mask FFFFFFFF
net_mlx5: mlx5.c:2771: mlx5_dev_spawn(): port 1 extensive metadata register is not supported
Auto-start selected
Set rxonly packet forwarding mode
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
net_mlx5: mlx5_ethdev.c:424: mlx5_dev_configure(): port 0 Tx queues number update: 0 -> 1
net_mlx5: mlx5_ethdev.c:435: mlx5_dev_configure(): port 0 Rx queues number update: 0 -> 1
net_mlx5: mlx5_txq.c:172: mlx5_tx_queue_pre_setup(): port 0 configuring queue 0 for 256 descriptors
net_mlx5: mlx5_mr.c:214: mlx5_mr_btree_init(): initialized B-tree 0x101051c28 with table 0x1010505c0
net_mlx5: mlx5_txq.c:225: mlx5_tx_queue_setup(): port 0 adding Tx queue 0 to list
net_mlx5: mlx5_rxq.c:468: mlx5_rx_queue_pre_setup(): port 0 configuring Rx queue 0 for 256 descriptors
net_mlx5: mlx5_mr.c:214: mlx5_mr_btree_init(): initialized B-tree 0x10104fa6c with table 0x10104e500
net_mlx5: mlx5_rxq.c:1921: mlx5_rxq_new(): port 0 maximum number of segments per packet: 1
net_mlx5: mlx5_rxq.c:1759: mlx5_max_lro_msg_size_adjust(): port 0 Rx Queue 0 max LRO message size adjusted to 1280 bytes
net_mlx5: mlx5_rxq.c:1968: mlx5_rxq_new(): port 0 CRC stripping is enabled, 0 bytes will be subtracted from incoming frames to hide it
net_mlx5: mlx5_rxq.c:525: mlx5_rx_queue_setup(): port 0 adding Rx queue 0 to list
net_mlx5: mlx5_trigger.c:276: mlx5_dev_start(): port 0 starting device
net_mlx5: mlx5_ethdev.c:493: mlx5_dev_configure_rss_reta(): port 0 Rx queues number update: 1 -> 1
net_mlx5: mlx5_txq.c:55: txq_alloc_elts(): port 0 Tx queue 0 allocated and configured 256 WRs
net_mlx5: mlx5_txq.c:771: mlx5_txq_obj_new(): port 0: uar_mmap_offset 0x306000
net_mlx5: mlx5_trigger.c:145: mlx5_rxq_start(): port 0 Rx queue 0 registering mp mbuf_pool_socket_0 having 1 chunks
net_mlx5: mlx5_mr.c:600: mlx5_mr_create_primary(): port 0 creating a MR using address (0x10109b4c0)
net_mlx5: mlx5_mr.c:649: mlx5_mr_create_primary(): port 0 extending 0x10109b4c0 to [0x100200000, 0x118200000), page_sz=0x200000, ms_n=192
net_mlx5: mlx5_mr.c:786: mlx5_mr_create_primary(): port 0 MR CREATED (0x10104e140) for 0x10109b4c0:
  [0x100200000, 0x118200000), lkey=0x104b0100 base_idx=0 ms_n=192, ms_bmp_n=192
net_mlx5: mlx5_mr.c:345: mr_insert_dev_cache(): device mlx5_0 inserting MR(0x10104e140) to global cache
net_mlx5: mlx5_mr.c:173: mr_btree_insert(): inserted B-tree(0x1003dcb20)[1], [0x100200000, 0x118200000) lkey=0x104b0100
net_mlx5: mlx5_mr.c:173: mr_btree_insert(): inserted B-tree(0x10104fa6c)[1], [0x100200000, 0x118200000) lkey=0x104b0100
net_mlx5: mlx5_rxq.c:257: rxq_alloc_elts_sprq(): port 0 Rx queue 0 allocated and configured 256 segments (max 256 packets)
net_mlx5: mlx5_rxq.c:1402: mlx5_rxq_obj_new(): port 0 device_attr.max_qp_wr is 32768
net_mlx5: mlx5_rxq.c:1404: mlx5_rxq_obj_new(): port 0 device_attr.max_sge is 30
net_mlx5: mlx5_rxq.c:1478: mlx5_rxq_obj_new(): port 0 rxq 0 updated with 0x7ffda24f21f8
net_mlx5: mlx5_trigger.c:322: mlx5_dev_start(): port 0 failed to set defaults flows
net_mlx5: mlx5_rxq.c:333: rxq_free_elts_sprq(): port 0 Rx queue 0 freeing WRs
Fail to start port 0
Configuring Port 1 (socket 0)
net_mlx5: mlx5_ethdev.c:424: mlx5_dev_configure(): port 1 Tx queues number update: 0 -> 1
net_mlx5: mlx5_ethdev.c:435: mlx5_dev_configure(): port 1 Rx queues number update: 0 -> 1
net_mlx5: mlx5_txq.c:172: mlx5_tx_queue_pre_setup(): port 1 configuring queue 0 for 256 descriptors
net_mlx5: mlx5_mr.c:214: mlx5_mr_btree_init(): initialized B-tree 0x10104d6a8 with table 0x10104c040
net_mlx5: mlx5_txq.c:225: mlx5_tx_queue_setup(): port 1 adding Tx queue 0 to list
net_mlx5: mlx5_rxq.c:468: mlx5_rx_queue_pre_setup(): port 1 configuring Rx queue 0 for 256 descriptors
net_mlx5: mlx5_mr.c:214: mlx5_mr_btree_init(): initialized B-tree 0x10104b4ec with table 0x101049f80
net_mlx5: mlx5_rxq.c:1921: mlx5_rxq_new(): port 1 maximum number of segments per packet: 1
net_mlx5: mlx5_rxq.c:1759: mlx5_max_lro_msg_size_adjust(): port 1 Rx Queue 0 max LRO message size adjusted to 1280 bytes
net_mlx5: mlx5_rxq.c:1968: mlx5_rxq_new(): port 1 CRC stripping is enabled, 0 bytes will be subtracted from incoming frames to hide it
net_mlx5: mlx5_rxq.c:525: mlx5_rx_queue_setup(): port 1 adding Rx queue 0 to list
net_mlx5: mlx5_trigger.c:276: mlx5_dev_start(): port 1 starting device
net_mlx5: mlx5_ethdev.c:493: mlx5_dev_configure_rss_reta(): port 1 Rx queues number update: 1 -> 1
net_mlx5: mlx5_txq.c:55: txq_alloc_elts(): port 1 Tx queue 0 allocated and configured 256 WRs
net_mlx5: mlx5_txq.c:771: mlx5_txq_obj_new(): port 1: uar_mmap_offset 0x306000
net_mlx5: mlx5_trigger.c:145: mlx5_rxq_start(): port 1 Rx queue 0 registering mp mbuf_pool_socket_0 having 1 chunks
net_mlx5: mlx5_mr.c:600: mlx5_mr_create_primary(): port 1 creating a MR using address (0x10109b4c0)
net_mlx5: mlx5_mr.c:649: mlx5_mr_create_primary(): port 1 extending 0x10109b4c0 to [0x100200000, 0x118200000), page_sz=0x200000, ms_n=192
net_mlx5: mlx5_mr.c:786: mlx5_mr_create_primary(): port 1 MR CREATED (0x101045dc0) for 0x10109b4c0:
  [0x100200000, 0x118200000), lkey=0xcac80f00 base_idx=0 ms_n=192, ms_bmp_n=192
net_mlx5: mlx5_mr.c:345: mr_insert_dev_cache(): device mlx5_1 inserting MR(0x101045dc0) to global cache
net_mlx5: mlx5_mr.c:173: mr_btree_insert(): inserted B-tree(0x10037b420)[1], [0x100200000, 0x118200000) lkey=0xcac80f00
net_mlx5: mlx5_mr.c:173: mr_btree_insert(): inserted B-tree(0x10104b4ec)[1], [0x100200000, 0x118200000) lkey=0xcac80f00
net_mlx5: mlx5_rxq.c:257: rxq_alloc_elts_sprq(): port 1 Rx queue 0 allocated and configured 256 segments (max 256 packets)
net_mlx5: mlx5_rxq.c:1402: mlx5_rxq_obj_new(): port 1 device_attr.max_qp_wr is 32768
net_mlx5: mlx5_rxq.c:1404: mlx5_rxq_obj_new(): port 1 device_attr.max_sge is 30
net_mlx5: mlx5_rxq.c:1478: mlx5_rxq_obj_new(): port 1 rxq 0 updated with 0x7ffda24f21f8
net_mlx5: mlx5_trigger.c:322: mlx5_dev_start(): port 1 failed to set defaults flows
net_mlx5: mlx5_rxq.c:333: rxq_free_elts_sprq(): port 1 Rx queue 0 freeing WRs
Fail to start port 1
Please stop the ports first
Done
No commandline core given, start packet forwarding
Not all ports were started
Press enter to exit

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
net_mlx5: mlx5.c:1233: mlx5_dev_close(): port 0 closing device "mlx5_0"
net_mlx5: mlx5_mr.c:230: mlx5_mr_btree_free(): freeing B-tree 0x10104fa6c with table 0x10104e500
net_mlx5: mlx5_txq.c:77: txq_free_elts(): port 0 Tx queue 0 freeing WRs
net_mlx5: mlx5_mr.c:230: mlx5_mr_btree_free(): freeing B-tree 0x101051c28 with table 0x1010505c0
net_mlx5: mlx5_mr.c:1569: mlx5_mr_dump_dev(): device mlx5_0 MR[0], LKey = 0x104b0100, ms_n = 192, ms_bmp_n = 192
net_mlx5: mlx5_mr.c:1579: mlx5_mr_dump_dev():   chunk[0], [0x100200000, 0x118200000)
net_mlx5: mlx5_mr.c:1582: mlx5_mr_dump_dev(): device mlx5_0 dumping global cache
net_mlx5: mlx5_mr.c:256: mlx5_mr_btree_dump(): B-tree(0x1003dcb20)[0], [0x0, 0x0) lkey=0xffffffff
net_mlx5: mlx5_mr.c:256: mlx5_mr_btree_dump(): B-tree(0x1003dcb20)[1], [0x100200000, 0x118200000) lkey=0x104b0100
net_mlx5: mlx5_mr.c:230: mlx5_mr_btree_free(): freeing B-tree 0x1003dcb20 with table 0x1003d9640
net_mlx5: mlx5_mr.c:459: mr_free(): freeing MR(0x10104e140):
Done

Shutting down port 1...
Closing ports...
net_mlx5: mlx5.c:1233: mlx5_dev_close(): port 1 closing device "mlx5_1"
net_mlx5: mlx5_mr.c:230: mlx5_mr_btree_free(): freeing B-tree 0x10104b4ec with table 0x101049f80
net_mlx5: mlx5_txq.c:77: txq_free_elts(): port 1 Tx queue 0 freeing WRs
net_mlx5: mlx5_mr.c:230: mlx5_mr_btree_free(): freeing B-tree 0x10104d6a8 with table 0x10104c040
net_mlx5: mlx5_mr.c:1569: mlx5_mr_dump_dev(): device mlx5_1 MR[0], LKey = 0xcac80f00, ms_n = 192, ms_bmp_n = 192
net_mlx5: mlx5_mr.c:1579: mlx5_mr_dump_dev():   chunk[0], [0x100200000, 0x118200000)
net_mlx5: mlx5_mr.c:1582: mlx5_mr_dump_dev(): device mlx5_1 dumping global cache
net_mlx5: mlx5_mr.c:256: mlx5_mr_btree_dump(): B-tree(0x10037b420)[0], [0x0, 0x0) lkey=0xffffffff
net_mlx5: mlx5_mr.c:256: mlx5_mr_btree_dump(): B-tree(0x10037b420)[1], [0x100200000, 0x118200000) lkey=0xcac80f00
net_mlx5: mlx5_mr.c:230: mlx5_mr_btree_free(): freeing B-tree 0x10037b420 with table 0x100377f40
net_mlx5: mlx5_mr.c:459: mr_free(): freeing MR(0x101045dc0):
Done

Bye...

Best
ben

> From: Benoit Ganne (bganne) <bganne@cisco.com>
> Sent: Thursday, April 2, 2020 6:50:22 PM
> To: users@dpdk.org <users@dpdk.org>
> Cc: Matan Azrad <matan@mellanox.com>; Shahaf Shuler
> <shahafs@mellanox.com>; Slava Ovsiienko <viacheslavo@mellanox.com>
> Subject: RE: mlx5 pmd + rdma-core 28 init failure
> 
> Adding MLX5 PMD maintainers.
> 
> I also checked with the latest rdma-core master and the latest DPDK master,
> and it fails with the same issue.
> Any recommendation?
> 
> Best
> ben
> 
> > -----Original Message-----
> > From: Benoit Ganne (bganne)
> > Sent: Wednesday, 1 April 2020 18:52
> > To: users@dpdk.org
> > Subject: mlx5 pmd + rdma-core 28 init failure
> >
> > Hi all,
> >
> > I am having trouble making the DPDK v20.02 MLX5 PMD work with rdma-core
> > v28.0: it looks like the flow initialization performed during device init
> > fails in rdma-core providers/mlx5/dr_table.c:mlx5dv_dr_table_create()
> > because of unsupported parameters.
> > The issue is the following test in rdma-core
> > providers/mlx5/dr_table.c:mlx5dv_dr_table_create():
> >     if (level && !dmn->info.supp_sw_steering) {
> >         errno = EOPNOTSUPP;
> >         goto dec_ref;
> >     }
> > Here level == 65534 and dmn->info.supp_sw_steering == false, so the test is
> > true and the call fails with EOPNOTSUPP.
> >
> > Is this expected? It works fine with the ibv API instead of the dv API. Any
> > help appreciated.
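
(One possible way to exercise the Verbs flow path from testpmd itself, assuming
the mlx5 devarg dv_flow_en is honored by this DPDK version, is to disable the
DV flow engine on each device:

~# sudo ./build/app/testpmd -w 0000:5e:00.0,dv_flow_en=0 -w 0000:5e:00.1,dv_flow_en=0 -l 4,11,35 -- -a --forward-mode=rxonly

This is only a diagnostic sketch, not a fix for the missing SW steering support
reported by rdma-core.)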
> >
> > Here is what I do:
> >    1) checkout & compile rdma-core v28.0
> > ~# git clone https://github.com/linux-rdma/rdma-core
> > ~# cd rdma-core
> > ~# git checkout v28.0
> > ~# mkdir build
> > ~# cd build
> > ~# CFLAGS=-fPIC cmake -DIN_PLACE=1 -DENABLE_STATIC=1 -GNinja ..
> > ~# ninja
> >
> >    2) checkout & compile dpdk v20.02
> > ~# git clone git://dpdk.org/dpdk
> > ~# cd dpdk
> > ~# make config T=x86_64-native-linuxapp-gcc
> > ~# sed -ri 's,(MLX5_PMD=).*,\1y,' build/.config
> > ~# sed -ri 's,(IBVERBS_LINK_STATIC_PMD=).*,\1y,' build/.config
> > ~# make EXTRA_CFLAGS=-I/home/bganne/src/rdma-core/build/include \
> >     EXTRA_LDFLAGS=-L/home/bganne/src/rdma-core/build/lib \
> >     PKG_CONFIG_PATH=/home/bganne/src/rdma-core/build/lib/pkgconfig
> > ~# sudo gdb --args ./build/app/testpmd -w 0000:5e:00.0 -w 0000:5e:00.1 \
> >     -l 4,11,35 -- -a --forward-mode=rxonly -i
> >
> > The backtrace looks like this:
> > #0  mlx5dv_dr_table_create (dmn=0x555556c641b0, level=65534) at
> > ../providers/mlx5/dr_table.c:183
> > #1  0x0000555555dfaeaa in flow_dv_tbl_resource_get (dev=<optimized out>,
> > table_id=65534, egress=<optimized out>, transfer=<optimized out>,
> > error=0x7fffffffdca0) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5_flow_dv.c:6746
> > #2  0x0000555555e02b28 in __flow_dv_translate
> > (dev=dev@entry=0x555556bbcdc0 <rte_eth_devices>, dev_flow=0x100388300,
> > attr=<optimized out>, items=<optimized out>, actions=<optimized out>,
> > error=<optimized out>) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5_flow_dv.c:7503
> > #3  0x0000555555e04954 in flow_dv_translate (dev=0x555556bbcdc0
> > <rte_eth_devices>, dev_flow=<optimized out>, attr=<optimized out>,
> > items=<optimized out>, actions=<optimized out>, error=<optimized out>) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5_flow_dv.c:8841
> > #4  0x0000555555df152f in flow_drv_translate (error=0x7fffffffdca0,
> > actions=0x7fffffffdce0, items=0x7fffffffdcc0, attr=0x7fffffffbb88,
> > dev_flow=<optimized out>, dev=0x555556bbcdc0 <rte_eth_devices>) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5_flow.c:2571
> > #5  flow_create_split_inner (error=0x7fffffffdca0, external=false,
> > actions=0x7fffffffdce0, items=0x7fffffffdcc0, attr=0x7fffffffbb88,
> > prefix_layers=0, sub_flow=0x0, flow=0x1003885c0, dev=0x555556bbcdc0
> > <rte_eth_devices>) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5_flow.c:3490
> > #6  flow_create_split_metadata (error=0x7fffffffdca0, external=false,
> > actions=0x7fffffffdce0, items=0x7fffffffdcc0, attr=0x7fffffffbb88,
> > prefix_layers=0, flow=0x1003885c0, dev=0x555556bbcdc0 <rte_eth_devices>)
> > at /home/bganne/src/dpdk/drivers/net/mlx5/mlx5_flow.c:3865
> > #7  flow_create_split_meter (error=0x7fffffffdca0, external=false,
> > actions=0x7fffffffdce0, items=<optimized out>, attr=0x7fffffffdc94,
> > flow=0x1003885c0, dev=0x555556bbcdc0 <rte_eth_devices>) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5_flow.c:4121
> > #8  flow_create_split_outer (error=0x7fffffffdca0, external=false,
> > actions=0x7fffffffdce0, items=<optimized out>, attr=0x7fffffffdc94,
> > flow=0x1003885c0, dev=0x555556bbcdc0 <rte_eth_devices>) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5_flow.c:4178
> > #9  flow_list_create (dev=dev@entry=0x555556bbcdc0 <rte_eth_devices>,
> > list=list@entry=0x0, attr=attr@entry=0x7fffffffdc94,
> > items=items@entry=0x7fffffffdcc0, actions=actions@entry=0x7fffffffdce0,
> > external=external@entry=false, error=0x7fffffffdca0) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5_flow.c:4306
> > #10 0x0000555555df8587 in mlx5_flow_discover_mreg_c
> > (dev=dev@entry=0x555556bbcdc0 <rte_eth_devices>) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5_flow.c:5747
> > #11 0x0000555555d692a6 in mlx5_dev_spawn (config=..., spawn=0x1003e9e00,
> > dpdk_dev=0x555556dd6fe0) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5.c:2763
> > #12 mlx5_pci_probe (pci_drv=<optimized out>, pci_dev=<optimized out>) at
> > /home/bganne/src/dpdk/drivers/net/mlx5/mlx5.c:3363
> > #13 0x0000555555a411c8 in pci_probe_all_drivers ()
> > #14 0x0000555555a412f8 in rte_pci_probe ()
> > #15 0x0000555555a083da in rte_bus_probe ()
> > #16 0x00005555559f204d in rte_eal_init ()
> > #17 0x00005555556a0d45 in main ()
> >
> > Best
> > ben

