From: Jaeeun Ham <jaeeun.ham@ericsson.com>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: "users@dpdk.org" <users@dpdk.org>,
"alialnu@nvidia.com" <alialnu@nvidia.com>,
"rasland@nvidia.com" <rasland@nvidia.com>,
"asafp@nvidia.com" <asafp@nvidia.com>
Subject: RE: I need DPDK MLX5 Probe error support
Date: Sat, 2 Oct 2021 10:57:04 +0000 [thread overview]
Message-ID: <HE1PR07MB4220DC66DE361434F4D5A853F3AC9@HE1PR07MB4220.eurprd07.prod.outlook.com> (raw)
In-Reply-To: <1682223.0z2BAqYvtR@thomas>
Hi,
Could you teach me how to install dpdk-testpmd?
I have to run the application on the host server, not a development server,
so I don't know how to get dpdk-testpmd.
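For reference, dpdk-testpmd can usually be obtained either from the distribution packages or by building DPDK from source. A sketch for a Debian/Ubuntu host — the package name and release version below are assumptions and may differ on your system:

```shell
# Option 1: distribution package (the binary may be named
# dpdk-testpmd or testpmd depending on the DPDK version shipped)
sudo apt-get install dpdk

# Option 2: build from source with meson/ninja
wget https://fast.dpdk.org/rel/dpdk-21.08.tar.xz
tar xf dpdk-21.08.tar.xz && cd dpdk-21.08
meson setup build
ninja -C build        # the binary ends up in build/app/dpdk-testpmd
```

A source build has the advantage that the PMDs (mlx5, i40e) are compiled against the rdma-core headers present on that host.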
By the way, the testpmd run result is as below.
root@seroics05590:~/ejaeham# testpmd
EAL: Detected 64 lcore(s)
EAL: libmlx4.so.1: cannot open shared object file: No such file or directory
EAL: FATAL: Cannot init plugins
EAL: Cannot init plugins
PANIC in main():
Cannot init EAL
5: [testpmd(_start+0x2a) [0x55d301d98e1a]]
4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f5e044a4bf7]]
3: [testpmd(main+0x907) [0x55d301d98d07]]
2: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(__rte_panic+0xbd) [0x7f5e04ca3cfd]]
1: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(rte_dump_stack+0x2e) [0x7f5e04cac19e]]
Aborted
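The EAL failure above is a missing shared library rather than a NIC problem: a PMD plugin cannot load because libmlx4.so.1 is not found by the dynamic loader. A hedged diagnosis sketch for a Debian/Ubuntu host (the package names are assumptions; on other distributions the rdma-core providers are packaged differently):

```shell
# Check which mlx4/mlx5 provider libraries the loader can find
ldconfig -p | grep -E 'libmlx(4|5)'

# Install the user-space verbs libraries and providers
# (Debian/Ubuntu naming; libmlx4.so.1 ships in ibverbs-providers)
sudo apt-get install libibverbs1 ibverbs-providers
sudo ldconfig
```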
I added the options below when starting the process in the Docker container.
dv_flow_en=0 \
--log-level=pmd,8 \
< MLX5 log >
415a695ba348:/tmp/logs # cat epp.log
MIDHAUL_PCI_ADDR:0000:12:01.0, BACKHAUL_PCI_ADDR:0000:12:01.1
MIDHAUL_IP_ADDR:10.255.21.177, BACKHAUL_IP_ADDR:10.255.21.178
mlx5_pci: unable to recognize master/representors on the multiple IB devices
common_mlx5: Failed to load driver = mlx5_pci.
EAL: Requested device 0000:12:01.0 cannot be used
mlx5_pci: unable to recognize master/representors on the multiple IB devices
common_mlx5: Failed to load driver = mlx5_pci.
EAL: Requested device 0000:12:01.1 cannot be used
EAL: Bus (pci) probe failed.
EAL: Trying to obtain current memory policy.
EAL: Setting policy MPOL_PREFERRED for socket 1
Caught signal 15
EAL: Restoring previous memory policy: 0
EAL: Calling mem event callback 'MLX5_MEM_EVENT_CB:(nil)'
EAL: request: mp_malloc_sync
EAL: Heap on socket 1 was expanded by 5120MB
FATAL: epp_init.c::copy_mac_addr:130: Call to rte_eth_dev_get_port_by_name(src_dpdk_dev_name, &port_id) failed: -19 (Unknown error -19), rte_errno=0 (not set)
Caught signal 6
Obtained 7 stack frames, tid=713.
tid=713, /usr/local/bin/ericsson-packet-processor() [0x40a4a4]
tid=713, /lib64/libpthread.so.0(+0x13f80) [0x7f7e1eae8f80]
tid=713, /lib64/libc.so.6(gsignal+0x10b) [0x7f7e1c5f818b]
tid=713, /lib64/libc.so.6(abort+0x175) [0x7f7e1c5f9585]
tid=713, /usr/local/bin/ericsson-packet-processor(main+0x458) [0x406818]
tid=713, /lib64/libc.so.6(__libc_start_main+0xed) [0x7f7e1c5e334d]
tid=713, /usr/local/bin/ericsson-packet-processor(_start+0x2a) [0x4091ca]
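The "unable to recognize master/representors on the multiple IB devices" error in the MLX5 log above generally means the mlx5 PMD cannot match the PCI VFs to their IB devices inside the container. A few commands that may help inspect the mapping (the sysfs paths are standard; ibdev2netdev ships with Mellanox OFED and may not be present):

```shell
# List IB devices and the PCI address each one belongs to
for d in /sys/class/infiniband/*; do
    echo "$(basename "$d") -> $(basename "$(readlink -f "$d/device")")"
done

# With Mellanox OFED installed, map IB devices to netdevs
ibdev2netdev -v

# Check that the RDMA device nodes are visible inside the container
ls -l /dev/infiniband
```

If /dev/infiniband is empty inside the container, the mlx5 PMD cannot open the devices even though the PCI addresses are visible.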
< i40e log >
cat epp.log
MIDHAUL_PCI_ADDR:0000:3b:0d.5, BACKHAUL_PCI_ADDR:0000:3b:0d.4
MIDHAUL_IP_ADDR:10.51.21.112, BACKHAUL_IP_ADDR:10.51.21.113
EAL: Trying to obtain current memory policy.
EAL: Setting policy MPOL_PREFERRED for socket 1
EAL: Restoring previous memory policy: 0
EAL: Calling mem event callback 'vfio_mem_event_clb:(nil)'
EAL: request: mp_malloc_sync
EAL: Heap on socket 1 was expanded by 5120MB
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 28
i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
i40e_set_tx_function_flag(): Neither simple nor vector Tx enabled on Tx queue 0
i40evf_dev_start(): >>
i40evf_config_rss(): No hash flag is set
i40e_set_rx_function(): Vector Rx path will be used on port=0.
i40e_set_tx_function(): Xmit tx finally be used.
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 6
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 7
i40evf_add_del_all_mac_addr(): add/rm mac:62:64:21:84:83:b0
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 10
i40evf_dev_rx_queue_start(): >>
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 8
i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
i40evf_dev_tx_queue_start(): >>
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 8
i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 14
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 14
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 14
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 28
i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
i40e_set_tx_function_flag(): Neither simple nor vector Tx enabled on Tx queue 0
i40evf_dev_start(): >>
i40evf_config_rss(): No hash flag is set
i40e_set_rx_function(): Vector Rx path will be used on port=1.
i40e_set_tx_function(): Xmit tx finally be used.
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 6
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 7
i40evf_add_del_all_mac_addr(): add/rm mac:c2:88:5c:a9:a2:ef
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 10
i40evf_dev_rx_queue_start(): >>
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 8
i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
i40evf_dev_tx_queue_start(): >>
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 8
i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 14
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 14
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 14
USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
(the line above is repeated 32 times in the log)
i40evf_dev_mtu_set(): port 1 must be stopped before configuration
i40evf_dev_mtu_set(): port 0 must be stopped before configuration
Caught signal 10
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 15
(the two lines above are repeated 6 times in the log)
Caught signal 10
i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
i40evf_handle_aq_msg(): adminq response is received, opcode = 15
(the two lines above are repeated 6 times in the log)
The process start options, passed in by a shell script, are as below.
< start-epp.sh >
exec /usr/local/bin/ericsson-packet-processor \
$(get_dpdk_core_list_parameter) \
$(get_dpdk_mem_parameter) \
$(get_dpdk_hugepage_parameters) \
-d /usr/local/lib/librte_mempool_ring.so \
-d /usr/local/lib/librte_mempool_stack.so \
-d /usr/local/lib/librte_net_pcap.so \
-d /usr/local/lib/librte_net_i40e.so \
-d /usr/local/lib/librte_net_mlx5.so \
-d /usr/local/lib/librte_event_dsw.so \
$DPDK_PCI_OPTIONS \
--vdev=event_dsw0 \
--vdev=eth_pcap0,iface=midhaul_edk \
--vdev=eth_pcap1,iface=backhaul_edk \
--file-prefix=container \
--log-level lib.eal:debug \
dv_flow_en=0 \
--log-level=pmd,8 \
-- \
$(get_epp_mempool_parameter) \
"--neighbor-discovery-interface=midhaul_ker,${MIDHAUL_IP_ADDR},mac_addr_dev=${MIDHAUL_MAC_ADDR_DEV},vr_id=0" \
"--neighbor-discovery-interface=backhaul_ker,${BACKHAUL_IP_ADDR},mac_addr_dev=${BACKHAUL_MAC_ADDR_DEV},vr_id=1"
BR/Jaeeun
-----Original Message-----
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Wednesday, September 29, 2021 8:16 PM
To: Jaeeun Ham <jaeeun.ham@ericsson.com>
Cc: users@dpdk.org; alialnu@nvidia.com; rasland@nvidia.com; asafp@nvidia.com
Subject: Re: I need DPDK MLX5 Probe error support
27/09/2021 02:18, Jaeeun Ham:
> Hi,
>
> I hope you are well.
> My name is Jaeeun Ham and I have been working for Ericsson.
>
> I am struggling to enable the MLX5 NIC, so could you take a look at how to run it?
> There are two PCI addresses for the SR-IOV (vfio) mlx5 NIC support, but it
> doesn't run correctly. (12:01.0, 12:01.1)
>
> I started one process running inside a Docker container on the host server with MLX5 NIC support.
> The process was started with the following option:
> -d /usr/local/lib/librte_net_mlx5.so
> And the Docker container has the mlx5 libraries below.
Did you try on the host outside of any container?
Please could you try following commands (variables to be replaced)?
dpdk-hugepages.py --reserve 1G
ip link set $netdev netns $container
docker run --cap-add SYS_NICE --cap-add IPC_LOCK --cap-add NET_ADMIN \
--device /dev/infiniband/ $image
echo show port summary all | dpdk-testpmd --in-memory -- -i
> 706a37a35d29:/usr/local/lib # ls -1 | grep mlx
> librte_common_mlx5.so
> librte_common_mlx5.so.21
> librte_common_mlx5.so.21.0
> librte_net_mlx5.so
> librte_net_mlx5.so.21
> librte_net_mlx5.so.21.0
>
> But I failed to run the process with following error.
> (MIDHAUL_PCI_ADDR:0000:12:01.0, BACKHAUL_PCI_ADDR:0000:12:01.1)
>
> ---
>
> mlx5_pci: unable to recognize master/representors on the multiple IB
> devices
> common_mlx5: Failed to load driver = mlx5_pci.
> EAL: Requested device 0000:12:01.0 cannot be used
> mlx5_pci: unable to recognize master/representors on the multiple IB
> devices
> common_mlx5: Failed to load driver = mlx5_pci.
> EAL: Requested device 0000:12:01.1 cannot be used
> EAL: Bus (pci) probe failed.
>
> ---
>
> For the success case of pci address 12:01.2, it showed following messages.
>
> ---
>
> EAL: Detected 64 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/nah2/mp_socket
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:12:01.2 on NUMA socket 0
> EAL: probe driver: 15b3:1016 net_mlx5
> net_mlx5: MPLS over GRE/UDP tunnel offloading disabled due to old
> OFED/rdma-core version or firmware configuration
> net_mlx5: port 0 the requested maximum Rx packet size (2056) is larger
> than a single mbuf (2048) and scattered mode has not been requested
> USER1: rte_ip_frag_table_create: allocated of 6291584 bytes at socket
> 0
>
> ---
>
> BR/Jaeeun