From: Thomas Monjalon
To: Jaeeun Ham
Cc: "users@dpdk.org", "alialnu@nvidia.com", "rasland@nvidia.com", "asafp@nvidia.com"
Subject: Re: I need DPDK MLX5 Probe error support
Date: Wed, 06 Oct 2021 12:58:46 +0200
Message-ID: <6007812.z5yfQvPEFT@thomas>
References: <10888350.v8Z1xNktaK@thomas>
List-Id: DPDK usage discussions

Installing dependencies is not an issue.
I don't understand which support you need.

06/10/2021 11:57, Jaeeun Ham:
> Hi Thomas,
>
> Could you take a look at the attached file?
> My engineer managed to compile DPDK 20.11 to support MLX5. Please find
> the output from the dpdk-testpmd command in the attached file. As you
> can see, testpmd was able to probe the mlx5_pci driver and get MAC
> addresses.
> The key issue in his case for enabling MLX5 support was to export the
> rdma-core lib path to the shared libs for the meson/ninja commands, as
> the new build system automatically enables MLX5 support if the needed
> dependencies are available.
>
> BR/Jaeeun
>
> -----Original Message-----
> From: Thomas Monjalon
> Sent: Sunday, October 3, 2021 4:51 PM
> To: Jaeeun Ham
> Cc: users@dpdk.org; alialnu@nvidia.com; rasland@nvidia.com; asafp@nvidia.com
> Subject: Re: I need DPDK MLX5 Probe error support
>
> Hi,
>
> I think you need to read the documentation.
> For the DPDK install on Linux:
> https://doc.dpdk.org/guides/linux_gsg/build_dpdk.html#compiling-and-installing-dpdk-system-wide
> For mlx5-specific dependencies, install the rdma-core package:
> https://doc.dpdk.org/guides/nics/mlx5.html#linux-prerequisites
>
> 02/10/2021 12:57, Jaeeun Ham:
> > Hi,
> >
> > Could you teach me how to install dpdk-testpmd?
> > I have to run the application on the host server, not a development server.
> > So, I don't know how to get dpdk-testpmd.
> >
> > By the way, the testpmd run result is as below.
> > root@seroics05590:~/ejaeham# testpmd
> > EAL: Detected 64 lcore(s)
> > EAL: libmlx4.so.1: cannot open shared object file: No such file or directory
> > EAL: FATAL: Cannot init plugins
> >
> > EAL: Cannot init plugins
> >
> > PANIC in main():
> > Cannot init EAL
> > 5: [testpmd(_start+0x2a) [0x55d301d98e1a]]
> > 4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f5e044a4bf7]]
> > 3: [testpmd(main+0x907) [0x55d301d98d07]]
> > 2: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(__rte_panic+0xbd) [0x7f5e04ca3cfd]]
> > 1: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(rte_dump_stack+0x2e) [0x7f5e04cac19e]]
> > Aborted
> >
> > I added the options below when starting the process in the docker container.
> > dv_flow_en=0 \
> > --log-level=pmd,8 \
> >
> > < MLX5 log >
> > 415a695ba348:/tmp/logs # cat epp.log
> > MIDHAUL_PCI_ADDR:0000:12:01.0, BACKHAUL_PCI_ADDR:0000:12:01.1
> > MIDHAUL_IP_ADDR:10.255.21.177, BACKHAUL_IP_ADDR:10.255.21.178
> > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> >
> > EAL: Requested device 0000:12:01.0 cannot be used
> > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> >
> > EAL: Requested device 0000:12:01.1 cannot be used
> > EAL: Bus (pci) probe failed.
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1
> > Caught signal 15
> > EAL: Restoring previous memory policy: 0
> > EAL: Calling mem event callback 'MLX5_MEM_EVENT_CB:(nil)'
> > EAL: request: mp_malloc_sync
> > EAL: Heap on socket 1 was expanded by 5120MB
> > FATAL: epp_init.c::copy_mac_addr:130: Call to rte_eth_dev_get_port_by_name(src_dpdk_dev_name, &port_id) failed: -19 (Unknown error -19), rte_errno=0 (not set)
> >
> > Caught signal 6
> > Obtained 7 stack frames, tid=713.
> > tid=713, /usr/local/bin/ericsson-packet-processor() [0x40a4a4]
> > tid=713, /lib64/libpthread.so.0(+0x13f80) [0x7f7e1eae8f80]
> > tid=713, /lib64/libc.so.6(gsignal+0x10b) [0x7f7e1c5f818b]
> > tid=713, /lib64/libc.so.6(abort+0x175) [0x7f7e1c5f9585]
> > tid=713, /usr/local/bin/ericsson-packet-processor(main+0x458) [0x406818]
> > tid=713, /lib64/libc.so.6(__libc_start_main+0xed) [0x7f7e1c5e334d]
> > tid=713, /usr/local/bin/ericsson-packet-processor(_start+0x2a) [0x4091ca]
> >
> > < i40e log >
> > cat epp.log
> > MIDHAUL_PCI_ADDR:0000:3b:0d.5, BACKHAUL_PCI_ADDR:0000:3b:0d.4
> > MIDHAUL_IP_ADDR:10.51.21.112, BACKHAUL_IP_ADDR:10.51.21.113
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1
> > EAL: Restoring previous memory policy: 0
> > EAL: Calling mem event callback 'vfio_mem_event_clb:(nil)'
> > EAL: request: mp_malloc_sync
> > EAL: Heap on socket 1 was expanded by 5120MB
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 28
> > i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> > i40e_set_tx_function_flag(): Neither simple nor vector Tx enabled on Tx queue 0
> >
> > i40evf_dev_start(): >>
> > i40evf_config_rss(): No hash flag is set
> > i40e_set_rx_function(): Vector Rx path will be used on port=0.
> > i40e_set_tx_function(): Xmit tx finally be used.
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 6
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 7
> > i40evf_add_del_all_mac_addr(): add/rm mac:62:64:21:84:83:b0
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 10
> > i40evf_dev_rx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_tx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 28
> > i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
> > i40e_set_tx_function_flag(): Neither simple nor vector Tx enabled on Tx queue 0
> >
> > i40evf_dev_start(): >>
> > i40evf_config_rss(): No hash flag is set
> > i40e_set_rx_function(): Vector Rx path will be used on port=1.
> > i40e_set_tx_function(): Xmit tx finally be used.
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 6
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 7
> > i40evf_add_del_all_mac_addr(): add/rm mac:c2:88:5c:a9:a2:ef
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 10
> > i40evf_dev_rx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_tx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > [the line above is repeated many more times in the original log]
> > i40evf_dev_mtu_set(): port 1 must be stopped before configuration
> > i40evf_dev_mtu_set(): port 0 must be stopped before configuration
> > Caught signal 10
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > Caught signal 10
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> >
> > The process start options, which are set by a shell script, are as below.
> >
> > < start-epp.sh >
> > exec /usr/local/bin/ericsson-packet-processor \
> > $(get_dpdk_core_list_parameter) \
> > $(get_dpdk_mem_parameter) \
> > $(get_dpdk_hugepage_parameters) \
> > -d /usr/local/lib/librte_mempool_ring.so \
> > -d /usr/local/lib/librte_mempool_stack.so \
> > -d /usr/local/lib/librte_net_pcap.so \
> > -d /usr/local/lib/librte_net_i40e.so \
> > -d /usr/local/lib/librte_net_mlx5.so \
> > -d /usr/local/lib/librte_event_dsw.so \
> > $DPDK_PCI_OPTIONS \
> > --vdev=event_dsw0 \
> > --vdev=eth_pcap0,iface=midhaul_edk \
> > --vdev=eth_pcap1,iface=backhaul_edk \
> > --file-prefix=container \
> > --log-level lib.eal:debug \
> > dv_flow_en=0 \
> > --log-level=pmd,8 \
> > -- \
> > $(get_epp_mempool_parameter) \
> > "--neighbor-discovery-interface=midhaul_ker,${MIDHAUL_IP_ADDR},mac_addr_dev=${MIDHAUL_MAC_ADDR_DEV},vr_id=0" \
> > "--neighbor-discovery-interface=backhaul_ker,${BACKHAUL_IP_ADDR},mac_addr_dev=${BACKHAUL_MAC_ADDR_DEV},vr_id=1"
> >
> > BR/Jaeeun
> >
> > -----Original Message-----
> > From: Thomas Monjalon
> > Sent: Wednesday, September 29, 2021 8:16 PM
> > To: Jaeeun Ham
> > Cc: users@dpdk.org; alialnu@nvidia.com; rasland@nvidia.com; asafp@nvidia.com
> > Subject: Re: I need DPDK MLX5 Probe error support
> >
> > 27/09/2021 02:18, Jaeeun Ham:
> > > Hi,
> > >
> > > I hope you are well.
> > > My name is Jaeeun Ham and I have been working for Ericsson.
> > >
> > > I am suffering from enabling the MLX5 NIC, so could you take a look at how to run it?
> > > There are two PCI addresses for the SR-IOV (vfio) mlx5 NIC support, but it doesn't run correctly. (12:01.0, 12:01.1)
> > >
> > > I started one process, running inside a docker container, on the host server with MLX5 NIC support.
> > > The process started with the following option:
> > > -d /usr/local/lib/librte_net_mlx5.so
> > > And the docker process has the mlx5 libraries as below.
> >
> > Did you try on the host outside of any container?
> >
> > Please could you try the following commands (variables to be replaced)?
> >
> > dpdk-hugepages.py --reserve 1G
> > ip link set $netdev netns $container
> > docker run --cap-add SYS_NICE --cap-add IPC_LOCK --cap-add NET_ADMIN \
> >     --device /dev/infiniband/ $image
> > echo show port summary all | dpdk-testpmd --in-memory -- -i
> >
> > > 706a37a35d29:/usr/local/lib # ls -1 | grep mlx
> > > librte_common_mlx5.so
> > > librte_common_mlx5.so.21
> > > librte_common_mlx5.so.21.0
> > > librte_net_mlx5.so
> > > librte_net_mlx5.so.21
> > > librte_net_mlx5.so.21.0
> > >
> > > But I failed to run the process with the following error.
> > > (MIDHAUL_PCI_ADDR:0000:12:01.0, BACKHAUL_PCI_ADDR:0000:12:01.1)
> > >
> > > ---
> > >
> > > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > > common_mlx5: Failed to load driver = mlx5_pci.
> > > EAL: Requested device 0000:12:01.0 cannot be used
> > > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > > common_mlx5: Failed to load driver = mlx5_pci.
> > > EAL: Requested device 0000:12:01.1 cannot be used
> > > EAL: Bus (pci) probe failed.
> > >
> > > ---
> > >
> > > For the success case of PCI address 12:01.2, it showed the following messages.
> > >
> > > ---
> > >
> > > EAL: Detected 64 lcore(s)
> > > EAL: Detected 2 NUMA nodes
> > > EAL: Multi-process socket /var/run/dpdk/nah2/mp_socket
> > > EAL: Probing VFIO support...
> > > EAL: VFIO support initialized
> > > EAL: PCI device 0000:12:01.2 on NUMA socket 0
> > > EAL: probe driver: 15b3:1016 net_mlx5
> > > net_mlx5: MPLS over GRE/UDP tunnel offloading disabled due to old OFED/rdma-core version or firmware configuration
> > > net_mlx5: port 0 the requested maximum Rx packet size (2056) is larger than a single mbuf (2048) and scattered mode has not been requested
> > > USER1: rte_ip_frag_table_create: allocated of 6291584 bytes at socket 0
> > >
> > > ---
> > >
> > > BR/Jaeeun
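The resolution reported at the top of the thread is that the meson/ninja build auto-enables the mlx5 PMD once rdma-core is visible to it. A minimal sketch of that build flow follows; the rdma-core prefix and library directory below are hypothetical examples (they depend on how and where rdma-core was installed), not values taken from the thread.

```shell
# Sketch: build DPDK 20.11 with the mlx5 PMD enabled via rdma-core.
# RDMA_CORE_PREFIX is a hypothetical install location; on many distros
# the system rdma-core package suffices and no exports are needed.
RDMA_CORE_PREFIX=/opt/rdma-core

# Let pkg-config find libibverbs/libmlx5, and the loader find their .so
# files (lib vs lib64 varies by distro).
export PKG_CONFIG_PATH="$RDMA_CORE_PREFIX/lib64/pkgconfig:$PKG_CONFIG_PATH"
export LD_LIBRARY_PATH="$RDMA_CORE_PREFIX/lib64:$LD_LIBRARY_PATH"

# Meson detects the rdma-core dependencies and enables net/mlx5
# automatically; check the "drivers enabled" summary in its output.
meson setup build
ninja -C build
sudo ninja -C build install
sudo ldconfig

# Quick sanity check that the PMD probes the ports:
echo "show port summary all" | sudo ./build/app/dpdk-testpmd --in-memory -- -i
```

If the summary still shows mlx5 as disabled, the usual cause is that pkg-config cannot see `libibverbs.pc`/`libmlx5.pc`, which is exactly the shared-lib-path issue described above.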