From: Thomas Monjalon
To: Jaeeun Ham
Cc: users@dpdk.org, alialnu@nvidia.com, rasland@nvidia.com, asafp@nvidia.com
Subject: Re: I need DPDK MLX5 Probe error support
Date: Sun, 03 Oct 2021 09:51:19 +0200

Hi,

I think you need to read the documentation.

For DPDK install on Linux:
https://doc.dpdk.org/guides/linux_gsg/build_dpdk.html#compiling-and-installing-dpdk-system-wide

For mlx5 specific dependencies, install the rdma-core package:
https://doc.dpdk.org/guides/nics/mlx5.html#linux-prerequisites
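As a rough sketch of what those pages boil down to (package names, the build
directory and the install prefix are only examples, adjust them to your
distribution and DPDK version):

# mlx5 user-space dependencies (libibverbs and the mlx5 provider come with rdma-core)
apt-get install rdma-core libibverbs-dev

# build and install DPDK system-wide, run from the extracted DPDK source tree
meson setup build
ninja -C build
meson install -C build
ldconfig

# quick sanity check that the mlx5 ports are probed
echo show port summary all | dpdk-testpmd --in-memory -- -i

With the default prefix, dpdk-testpmd ends up in /usr/local/bin.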
02/10/2021 12:57, Jaeeun Ham:
> Hi,
>
> Could you teach me how to install dpdk-testpmd?
> I have to run the application on the host server, not a development server.
> So, I don't know how to get dpdk-testpmd.
>
> By the way, the testpmd run result is as below.
> root@seroics05590:~/ejaeham# testpmd
> EAL: Detected 64 lcore(s)
> EAL: libmlx4.so.1: cannot open shared object file: No such file or directory
> EAL: FATAL: Cannot init plugins
>
> EAL: Cannot init plugins
>
> PANIC in main():
> Cannot init EAL
> 5: [testpmd(_start+0x2a) [0x55d301d98e1a]]
> 4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f5e044a4bf7]]
> 3: [testpmd(main+0x907) [0x55d301d98d07]]
> 2: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(__rte_panic+0xbd) [0x7f5e04ca3cfd]]
> 1: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(rte_dump_stack+0x2e) [0x7f5e04cac19e]]
> Aborted
>
>
> I added the options below when the process starts in the docker container.
> dv_flow_en=0 \
> --log-level=pmd,8 \
> < MLX5 log >
> 415a695ba348:/tmp/logs # cat epp.log
> MIDHAUL_PCI_ADDR:0000:12:01.0, BACKHAUL_PCI_ADDR:0000:12:01.1
> MIDHAUL_IP_ADDR:10.255.21.177, BACKHAUL_IP_ADDR:10.255.21.178
> mlx5_pci: unable to recognize master/representors on the multiple IB devices
> common_mlx5: Failed to load driver = mlx5_pci.
>
> EAL: Requested device 0000:12:01.0 cannot be used
> mlx5_pci: unable to recognize master/representors on the multiple IB devices
> common_mlx5: Failed to load driver = mlx5_pci.
>
> EAL: Requested device 0000:12:01.1 cannot be used
> EAL: Bus (pci) probe failed.
> EAL: Trying to obtain current memory policy.
> EAL: Setting policy MPOL_PREFERRED for socket 1
> Caught signal 15
> EAL: Restoring previous memory policy: 0
> EAL: Calling mem event callback 'MLX5_MEM_EVENT_CB:(nil)'
> EAL: request: mp_malloc_sync
> EAL: Heap on socket 1 was expanded by 5120MB
> FATAL: epp_init.c::copy_mac_addr:130: Call to rte_eth_dev_get_port_by_name(src_dpdk_dev_name, &port_id) failed: -19 (Unknown error -19), rte_errno=0 (not set)
>
> Caught signal 6
> Obtained 7 stack frames, tid=713.
> tid=713, /usr/local/bin/ericsson-packet-processor() [0x40a4a4]
> tid=713, /lib64/libpthread.so.0(+0x13f80) [0x7f7e1eae8f80]
> tid=713, /lib64/libc.so.6(gsignal+0x10b) [0x7f7e1c5f818b]
> tid=713, /lib64/libc.so.6(abort+0x175) [0x7f7e1c5f9585]
> tid=713, /usr/local/bin/ericsson-packet-processor(main+0x458) [0x406818]
> tid=713, /lib64/libc.so.6(__libc_start_main+0xed) [0x7f7e1c5e334d]
> tid=713, /usr/local/bin/ericsson-packet-processor(_start+0x2a) [0x4091ca]
>
> < i40e log >
> cat epp.log
> MIDHAUL_PCI_ADDR:0000:3b:0d.5, BACKHAUL_PCI_ADDR:0000:3b:0d.4
> MIDHAUL_IP_ADDR:10.51.21.112, BACKHAUL_IP_ADDR:10.51.21.113
> EAL: Trying to obtain current memory policy.
> EAL: Setting policy MPOL_PREFERRED for socket 1
> EAL: Restoring previous memory policy: 0
> EAL: Calling mem event callback 'vfio_mem_event_clb:(nil)'
> EAL: request: mp_malloc_sync
> EAL: Heap on socket 1 was expanded by 5120MB
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 28
> i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> i40e_set_tx_function_flag(): Neither simple nor vector Tx enabled on Tx queue 0
>
> i40evf_dev_start(): >>
> i40evf_config_rss(): No hash flag is set
> i40e_set_rx_function(): Vector Rx path will be used on port=0.
> i40e_set_tx_function(): Xmit tx finally be used.
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 6
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 7
> i40evf_add_del_all_mac_addr(): add/rm mac:62:64:21:84:83:b0
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 10
> i40evf_dev_rx_queue_start(): >>
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> i40evf_dev_tx_queue_start(): >>
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 28
> i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
> i40e_set_tx_function_flag(): Neither simple nor vector Tx enabled on Tx queue 0
>
> i40evf_dev_start(): >>
> i40evf_config_rss(): No hash flag is set
> i40e_set_rx_function(): Vector Rx path will be used on port=1.
> i40e_set_tx_function(): Xmit tx finally be used.
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 6
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 7
> i40evf_add_del_all_mac_addr(): add/rm mac:c2:88:5c:a9:a2:ef
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 10
> i40evf_dev_rx_queue_start(): >>
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> i40evf_dev_tx_queue_start(): >>
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> i40evf_dev_mtu_set(): port 1 must be stopped before configuration
> i40evf_dev_mtu_set(): port 0 must be stopped before configuration
> Caught signal 10
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> Caught signal 10
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> i40evf_handle_aq_msg(): adminq response is received, opcode = 15
>
>
> The process start options, which are set by a shell script, are as below.
>
> < start-epp.sh >
> exec /usr/local/bin/ericsson-packet-processor \
> $(get_dpdk_core_list_parameter) \
> $(get_dpdk_mem_parameter) \
> $(get_dpdk_hugepage_parameters) \
> -d /usr/local/lib/librte_mempool_ring.so \
> -d /usr/local/lib/librte_mempool_stack.so \
> -d /usr/local/lib/librte_net_pcap.so \
> -d /usr/local/lib/librte_net_i40e.so \
> -d /usr/local/lib/librte_net_mlx5.so \
> -d /usr/local/lib/librte_event_dsw.so \
> $DPDK_PCI_OPTIONS \
> --vdev=event_dsw0 \
> --vdev=eth_pcap0,iface=midhaul_edk \
> --vdev=eth_pcap1,iface=backhaul_edk \
> --file-prefix=container \
> --log-level lib.eal:debug \
> dv_flow_en=0 \
> --log-level=pmd,8 \
> -- \
> $(get_epp_mempool_parameter) \
> "--neighbor-discovery-interface=midhaul_ker,${MIDHAUL_IP_ADDR},mac_addr_dev=${MIDHAUL_MAC_ADDR_DEV},vr_id=0" \
> "--neighbor-discovery-interface=backhaul_ker,${BACKHAUL_IP_ADDR},mac_addr_dev=${BACKHAUL_MAC_ADDR_DEV},vr_id=1"
>
> BR/Jaeeun
>
> -----Original Message-----
> From: Thomas Monjalon
> Sent: Wednesday, September 29, 2021 8:16 PM
> To: Jaeeun Ham
> Cc: users@dpdk.org; alialnu@nvidia.com; rasland@nvidia.com; asafp@nvidia.com
> Subject: Re: I need DPDK MLX5 Probe error support
>
> 27/09/2021 02:18, Jaeeun Ham:
> > Hi,
> >
> > I hope you are well.
> > My name is Jaeeun Ham and I have been working for Ericsson.
> >
> > I am suffering from enabling the MLX5 NIC, so could you take a look at how to run it?
> > There are two PCI addresses for the SRIOV (VFIO) mlx5 NIC support but it doesn't run correctly. (12:01.0, 12:01.1)
> >
> > I started one process which is running inside a docker container on the MLX5 NIC support host server.
> > The process started to run with the following option:
> > -d /usr/local/lib/librte_net_mlx5.so
> > And the docker container has mlx5 libraries as below.
>
> Did you try on the host outside of any container?
>
> Please could you try the following commands (variables to be replaced)?
>
> dpdk-hugepages.py --reserve 1G
> ip link set $netdev netns $container
> docker run --cap-add SYS_NICE --cap-add IPC_LOCK --cap-add NET_ADMIN \
>     --device /dev/infiniband/ $image
> echo show port summary all | dpdk-testpmd --in-memory -- -i
>
> >
> > 706a37a35d29:/usr/local/lib # ls -1 | grep mlx
> > librte_common_mlx5.so
> > librte_common_mlx5.so.21
> > librte_common_mlx5.so.21.0
> > librte_net_mlx5.so
> > librte_net_mlx5.so.21
> > librte_net_mlx5.so.21.0
> >
> > But I failed to run the process with the following error.
> > (MIDHAUL_PCI_ADDR:0000:12:01.0, BACKHAUL_PCI_ADDR:0000:12:01.1)
> >
> > ---
> >
> > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> > EAL: Requested device 0000:12:01.0 cannot be used
> > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> > EAL: Requested device 0000:12:01.1 cannot be used
> > EAL: Bus (pci) probe failed.
> >
> > ---
> >
> > For the success case of PCI address 12:01.2, it showed the following messages.
> >
> > ---
> >
> > EAL: Detected 64 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/nah2/mp_socket
> > EAL: Probing VFIO support...
> > EAL: VFIO support initialized
> > EAL: PCI device 0000:12:01.2 on NUMA socket 0
> > EAL: probe driver: 15b3:1016 net_mlx5
> > net_mlx5: MPLS over GRE/UDP tunnel offloading disabled due to old OFED/rdma-core version or firmware configuration
> > net_mlx5: port 0 the requested maximum Rx packet size (2056) is larger than a single mbuf (2048) and scattered mode has not been requested
> > USER1: rte_ip_frag_table_create: allocated of 6291584 bytes at socket 0
> >
> > ---
> >
> > BR/Jaeeun