From: Thomas Monjalon
To: Jaeeun Ham
Cc: users@dpdk.org, alialnu@nvidia.com, rasland@nvidia.com, asafp@nvidia.com
Subject: Re: I need DPDK MLX5 Probe error support
Date: Tue, 05 Oct 2021 08:00:06 +0200
Message-ID: <90747432.bxWCIcx659@thomas>
References: <10888350.v8Z1xNktaK@thomas>
List-Id: DPDK usage discussions

05/10/2021 03:17, Jaeeun Ham:
> Hi Thomas,
>
> I attached the testpmd result, which was gathered on the host server.
> Could you please take a look at the mlx5_core PCI issue?

I see no real issue in the log.
For doing more tests, I recommend using the latest DPDK version.

> Thank you in advance.
>
> BR/Jaeeun
>
> -----Original Message-----
> From: Thomas Monjalon
> Sent: Sunday, October 3, 2021 4:51 PM
> To: Jaeeun Ham
> Cc: users@dpdk.org; alialnu@nvidia.com; rasland@nvidia.com; asafp@nvidia.com
> Subject: Re: I need DPDK MLX5 Probe error support
>
> Hi,
>
> I think you need to read the documentation.
> For DPDK install on Linux:
> https://doc.dpdk.org/guides/linux_gsg/build_dpdk.html#compiling-and-installing-dpdk-system-wide
> For mlx5 specific dependencies, install the rdma-core package:
> https://doc.dpdk.org/guides/nics/mlx5.html#linux-prerequisites
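
For reference, a minimal sketch of what those two guides describe, assuming a
Debian/Ubuntu host and a recent DPDK release (package names vary by distro);
this also provides dpdk-testpmd, which is asked about below:

  # mlx5 prerequisites: libibverbs and the mlx5 provider come from rdma-core
  sudo apt-get install -y rdma-core ibverbs-providers libibverbs-dev \
      build-essential meson ninja-build pkg-config python3-pyelftools

  # build and install DPDK system-wide, from an extracted DPDK source tree
  meson setup build
  ninja -C build
  sudo ninja -C build install
  sudo ldconfig    # refresh the loader cache so the new librte_*.so are found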
>
>
> 02/10/2021 12:57, Jaeeun Ham:
> > Hi,
> >
> > Could you teach me how to install dpdk-testpmd?
> > I have to run the application on the host server, not a development server.
> > So, I don't know how to get dpdk-testpmd.
> >
> > By the way, the testpmd run result is as below.
> > root@seroics05590:~/ejaeham# testpmd
> > EAL: Detected 64 lcore(s)
> > EAL: libmlx4.so.1: cannot open shared object file: No such file or directory
> > EAL: FATAL: Cannot init plugins
> >
> > EAL: Cannot init plugins
> >
> > PANIC in main():
> > Cannot init EAL
> > 5: [testpmd(_start+0x2a) [0x55d301d98e1a]]
> > 4: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x7f5e044a4bf7]]
> > 3: [testpmd(main+0x907) [0x55d301d98d07]]
> > 2: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(__rte_panic+0xbd) [0x7f5e04ca3cfd]]
> > 1: [/usr/lib/x86_64-linux-gnu/librte_eal.so.17.11(rte_dump_stack+0x2e) [0x7f5e04cac19e]]
> > Aborted
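
A note on the panic above: "libmlx4.so.1: cannot open shared object file"
means the mlx4 PMD plugin was found but its userspace verbs library was not.
A quick check, sketched under the assumption that the PMD shared objects live
under /usr/lib/x86_64-linux-gnu as in the backtrace:

  # list unresolved shared-library dependencies of the mlx4/mlx5 PMDs
  ldd /usr/lib/x86_64-linux-gnu/librte_*mlx*.so* | grep "not found"

The missing library normally comes with the distro's rdma-core packages
(rdma-core / ibverbs-providers on Debian/Ubuntu) or with MLNX_OFED on older
setups.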
> >
> >
> > I added the options below when the process is started in the docker.
> > dv_flow_en=0 \
> > --log-level=pmd,8 \
> >
> > < MLX5 log >
> > 415a695ba348:/tmp/logs # cat epp.log
> > MIDHAUL_PCI_ADDR:0000:12:01.0, BACKHAUL_PCI_ADDR:0000:12:01.1
> > MIDHAUL_IP_ADDR:10.255.21.177, BACKHAUL_IP_ADDR:10.255.21.178
> > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> >
> > EAL: Requested device 0000:12:01.0 cannot be used
> > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > common_mlx5: Failed to load driver = mlx5_pci.
> >
> > EAL: Requested device 0000:12:01.1 cannot be used
> > EAL: Bus (pci) probe failed.
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1
> > Caught signal 15
> > EAL: Restoring previous memory policy: 0
> > EAL: Calling mem event callback 'MLX5_MEM_EVENT_CB:(nil)'
> > EAL: request: mp_malloc_sync
> > EAL: Heap on socket 1 was expanded by 5120MB
> > FATAL: epp_init.c::copy_mac_addr:130: Call to rte_eth_dev_get_port_by_name(src_dpdk_dev_name, &port_id) failed: -19 (Unknown error -19), rte_errno=0 (not set)
> >
> > Caught signal 6
> > Obtained 7 stack frames, tid=713.
> > tid=713, /usr/local/bin/ericsson-packet-processor() [0x40a4a4]
> > tid=713, /lib64/libpthread.so.0(+0x13f80) [0x7f7e1eae8f80]
> > tid=713, /lib64/libc.so.6(gsignal+0x10b) [0x7f7e1c5f818b]
> > tid=713, /lib64/libc.so.6(abort+0x175) [0x7f7e1c5f9585]
> > tid=713, /usr/local/bin/ericsson-packet-processor(main+0x458) [0x406818]
> > tid=713, /lib64/libc.so.6(__libc_start_main+0xed) [0x7f7e1c5e334d]
> > tid=713, /usr/local/bin/ericsson-packet-processor(_start+0x2a) [0x4091ca]
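
On the "unable to recognize master/representors on the multiple IB devices"
error above: the PMD is failing to match the PCI VFs to their RDMA devices
from inside the container. A few sanity checks that can narrow this down,
a sketch assuming sysfs and the rdma-core tools are visible in the container:

  # each VF should show up as an RDMA device
  ls /sys/class/infiniband/

  # map every RDMA device back to its PCI address
  for d in /sys/class/infiniband/*; do
      echo "$(basename "$d") -> $(readlink -f "$d/device")"
  done

  # check that the verbs provider can actually open the devices
  ibv_devinfo | grep -E 'hca_id|state'

Given the "old OFED/rdma-core version" warning in the success log further
down, an outdated rdma-core inside the image is worth ruling out here.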
> >
> > < i40e log >
> > cat epp.log
> > MIDHAUL_PCI_ADDR:0000:3b:0d.5, BACKHAUL_PCI_ADDR:0000:3b:0d.4
> > MIDHAUL_IP_ADDR:10.51.21.112, BACKHAUL_IP_ADDR:10.51.21.113
> > EAL: Trying to obtain current memory policy.
> > EAL: Setting policy MPOL_PREFERRED for socket 1
> > EAL: Restoring previous memory policy: 0
> > EAL: Calling mem event callback 'vfio_mem_event_clb:(nil)'
> > EAL: request: mp_malloc_sync
> > EAL: Heap on socket 1 was expanded by 5120MB
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 28
> > i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> > i40e_set_tx_function_flag(): Neither simple nor vector Tx enabled on Tx queue 0
> >
> > i40evf_dev_start(): >>
> > i40evf_config_rss(): No hash flag is set
> > i40e_set_rx_function(): Vector Rx path will be used on port=0.
> > i40e_set_tx_function(): Xmit tx finally be used.
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 6
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 7
> > i40evf_add_del_all_mac_addr(): add/rm mac:62:64:21:84:83:b0
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 10
> > i40evf_dev_rx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_tx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 28
> > i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
> > i40e_set_tx_function_flag(): Neither simple nor vector Tx enabled on Tx queue 0
> >
> > i40evf_dev_start(): >>
> > i40evf_config_rss(): No hash flag is set
> > i40e_set_rx_function(): Vector Rx path will be used on port=1.
> > i40e_set_tx_function(): Xmit tx finally be used.
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 6
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 7
> > i40evf_add_del_all_mac_addr(): add/rm mac:c2:88:5c:a9:a2:ef
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 10
> > i40evf_dev_rx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_tx_queue_start(): >>
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 8
> > i40evf_handle_pf_event(): VIRTCHNL_EVENT_LINK_CHANGE event
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 14
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > USER1: rte_ip_frag_table_create: allocated of 12583040 bytes at socket -1
> > i40evf_dev_mtu_set(): port 1 must be stopped before configuration
> > i40evf_dev_mtu_set(): port 0 must be stopped before configuration
> > Caught signal 10
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > Caught signal 10
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> > i40evf_dev_alarm_handler(): ICR01_ADMINQ is reported
> > i40evf_handle_aq_msg(): adminq response is received, opcode = 15
> >
> >
> > The process start options, which are set by a shell script, are as below.
> >
> > < start-epp.sh >
> > exec /usr/local/bin/ericsson-packet-processor \
> > $(get_dpdk_core_list_parameter) \
> > $(get_dpdk_mem_parameter) \
> > $(get_dpdk_hugepage_parameters) \
> > -d /usr/local/lib/librte_mempool_ring.so \
> > -d /usr/local/lib/librte_mempool_stack.so \
> > -d /usr/local/lib/librte_net_pcap.so \
> > -d /usr/local/lib/librte_net_i40e.so \
> > -d /usr/local/lib/librte_net_mlx5.so \
> > -d /usr/local/lib/librte_event_dsw.so \
> > $DPDK_PCI_OPTIONS \
> > --vdev=event_dsw0 \
> > --vdev=eth_pcap0,iface=midhaul_edk \
> > --vdev=eth_pcap1,iface=backhaul_edk \
> > --file-prefix=container \
> > --log-level lib.eal:debug \
> > dv_flow_en=0 \
> > --log-level=pmd,8 \
> > -- \
> > $(get_epp_mempool_parameter) \
> > "--neighbor-discovery-interface=midhaul_ker,${MIDHAUL_IP_ADDR},mac_addr_dev=${MIDHAUL_MAC_ADDR_DEV},vr_id=0" \
> > "--neighbor-discovery-interface=backhaul_ker,${BACKHAUL_IP_ADDR},mac_addr_dev=${BACKHAUL_MAC_ADDR_DEV},vr_id=1"
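
A note on the command line above: dv_flow_en=0 is an mlx5 device argument,
not a standalone EAL flag, so passed bare like this it is most likely ignored
or rejected by the EAL option parsing. With DPDK 20.11 or newer it would
normally be attached to the device itself, for example (a sketch using the
PCI addresses from the log, assuming $DPDK_PCI_OPTIONS is where the devices
are passed in):

  # attach the devarg to each mlx5 device instead of passing it bare
  -a 0000:12:01.0,dv_flow_en=0 \
  -a 0000:12:01.1,dv_flow_en=0 \

On releases before 20.11 the option is spelled -w instead of -a.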
> >
> > BR/Jaeeun
> >
> > -----Original Message-----
> > From: Thomas Monjalon
> > Sent: Wednesday, September 29, 2021 8:16 PM
> > To: Jaeeun Ham
> > Cc: users@dpdk.org; alialnu@nvidia.com; rasland@nvidia.com; asafp@nvidia.com
> > Subject: Re: I need DPDK MLX5 Probe error support
> >
> > 27/09/2021 02:18, Jaeeun Ham:
> > > Hi,
> > >
> > > I hope you are well.
> > > My name is Jaeeun Ham and I have been working for Ericsson.
> > >
> > > I am suffering from enabling the MLX5 NIC, so could you take a look at how to run it?
> > > There are two PCI addresses for the SR-IOV (vfio) mlx5 NIC support, but it doesn't run correctly. (12:01.0, 12:01.1)
> > >
> > > I started one process, running inside a docker container, on the host server with MLX5 NIC support.
> > > The process started to run with the following option.
> > > -d /usr/local/lib/librte_net_mlx5.so
> > > And the docker container has mlx5 libraries as below.
> >
> > Did you try on the host outside of any container?
> >
> > Please could you try the following commands (variables to be replaced)?
> >
> > dpdk-hugepages.py --reserve 1G
> > ip link set $netdev netns $container
> > docker run --cap-add SYS_NICE --cap-add IPC_LOCK --cap-add NET_ADMIN \
> >         --device /dev/infiniband/ $image
> > echo show port summary all | dpdk-testpmd --in-memory -- -i
> >
> >
> > > 706a37a35d29:/usr/local/lib # ls -1 | grep mlx
> > > librte_common_mlx5.so
> > > librte_common_mlx5.so.21
> > > librte_common_mlx5.so.21.0
> > > librte_net_mlx5.so
> > > librte_net_mlx5.so.21
> > > librte_net_mlx5.so.21.0
> > >
> > > But I failed to run the process with the following error.
> > > (MIDHAUL_PCI_ADDR:0000:12:01.0, BACKHAUL_PCI_ADDR:0000:12:01.1)
> > >
> > > ---
> > >
> > > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > > common_mlx5: Failed to load driver = mlx5_pci.
> > > EAL: Requested device 0000:12:01.0 cannot be used
> > > mlx5_pci: unable to recognize master/representors on the multiple IB devices
> > > common_mlx5: Failed to load driver = mlx5_pci.
> > > EAL: Requested device 0000:12:01.1 cannot be used
> > > EAL: Bus (pci) probe failed.
> > >
> > > ---
> > >
> > > For the success case of PCI address 12:01.2, it showed the following messages.
> > >
> > > ---
> > >
> > > EAL: Detected 64 lcore(s)
> > > EAL: Detected 2 NUMA nodes
> > > EAL: Multi-process socket /var/run/dpdk/nah2/mp_socket
> > > EAL: Probing VFIO support...
> > > EAL: VFIO support initialized
> > > EAL: PCI device 0000:12:01.2 on NUMA socket 0
> > > EAL: probe driver: 15b3:1016 net_mlx5
> > > net_mlx5: MPLS over GRE/UDP tunnel offloading disabled due to old OFED/rdma-core version or firmware configuration
> > > net_mlx5: port 0 the requested maximum Rx packet size (2056) is larger than a single mbuf (2048) and scattered mode has not been requested
> > > USER1: rte_ip_frag_table_create: allocated of 6291584 bytes at socket 0
> > >
> > > ---
> > >
> > > BR/Jaeeun
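
As a quick pre-check before the container test suggested above, a sketch
assuming rdma-core and the iproute2 rdma tool are installed in the image:

  # the RDMA device nodes must be visible inside the container
  ls -l /dev/infiniband/

  # each VF should appear as an rdma link
  rdma link show

  # then probe the ports interactively, as suggested above
  echo "show port summary all" | dpdk-testpmd --in-memory -- -i

This confirms whether the VFs are usable as RDMA devices in the container
before starting the full application.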