* [dpdk-users] DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
@ 2018-12-30 12:18 hfli
0 siblings, 0 replies; 9+ messages in thread
From: hfli @ 2018-12-30 12:18 UTC (permalink / raw)
To: users, azuredpdk
Hi Admin,
Our DPDK application is meant to be deployed on Hyper-V and Azure Cloud, so we are
trying it on Hyper-V first. It needs to start two processes: a client process using
port1 and a server process using port2, with port1 and port2 on one internal subnet
of a virtual switch.
But only one process can be started successfully; the other fails with the error
"PMD: Could not add multiq qdisc (17): File exists". Our application runs well
on VMware/KVM/AWS. Is there any help for this?
Below is our environment:
OS: Windows 10 with Hyper-V
Guest OS: CentOS 7.6 (kernel upgraded to 4.20.0)
DPDK version: 18.02.2
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
root:/# ifconfig -a
bond0: flags=5122<BROADCAST,MASTER,MULTICAST> mtu 1500
ether 46:28:ec:c8:7a:74 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 75 bytes 6284 (6.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 6284 (6.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.16.130 netmask 255.255.255.0 broadcast 192.168.16.255
inet6 fe80::78e3:1af8:3333:ff45 prefixlen 64 scopeid 0x20<link>
ether 00:15:5d:10:85:14 txqueuelen 1000 (Ethernet)
RX packets 5494 bytes 706042 (689.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2163 bytes 438205 (427.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:15:5d:10:85:15 txqueuelen 1000 (Ethernet)
RX packets 3131 bytes 518243 (506.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:16 txqueuelen 1000 (Ethernet)
RX packets 1707 bytes 163778 (159.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 693 bytes 70666 (69.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:17 txqueuelen 1000 (Ethernet)
RX packets 900 bytes 112256 (109.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1504 bytes 122428 (119.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root:/# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/#
Starting the server process succeeds:
# ./Tester -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem 1500 --file-prefix server
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.server_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc1_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
Concurrently starting the client process fails:
# ./Tester -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem 1500 --file-prefix client
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.client_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc0_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9 device
EAL: Bus (vdev) probe failed.
Thanks and Regards,
Jack
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [dpdk-users] DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
2019-01-09 0:38 ` [dpdk-users] Re: " hfli
@ 2019-01-09 19:55 ` Stephen Hemminger
0 siblings, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2019-01-09 19:55 UTC (permalink / raw)
To: hfli, matan, KY Srinivasan, Haiyang Zhang; +Cc: users
I am testing a set of patches to the TAP device that eliminate the tun_unit variable.
Basically, the kernel has a way to assign the "next available" tun device, and that is preferable.
________________________________
From: hfli@netitest.com <hfli@netitest.com>
Sent: Tuesday, January 8, 2019 4:38 PM
To: Stephen Hemminger; matan@mellanox.com; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi Stephen,
Got it. I have removed the "_" from "tun_name", which ensures the length stays under 15 bytes.
Thanks and Regards,
Jack
From: Stephen Hemminger <sthemmin@microsoft.com>
Sent: January 9, 2019 3:35
To: hfli@netitest.com; matan@mellanox.com; KY Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Part of the problem with tap_name is that it is limited to 15 characters so this might break if hugefile_prefix was long.
________________________________
From: Stephen Hemminger
Sent: Tuesday, January 8, 2019 11:32 AM
To: hfli@netitest.com; matan@mellanox.com; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
So the tun device name needs to be unique, and the tun_unit variable is located in the per-process part of DPDK.
Another solution would be to move the tun unit into the common memory area, or maybe put the pid into the tun name.
________________________________
From: hfli@netitest.com <hfli@netitest.com>
Sent: Monday, January 7, 2019 5:36 PM
To: Stephen Hemminger; matan@mellanox.com; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi Stephen,
Thanks for your response. I have found the cause: it is due to the duplicated "tun_name" in the two processes. I added a few lines of code that get the hugepage prefix and append it to tun_name, and now it runs well. I suggest appending another unique ID, such as getpid(), to tun_name so that more DPDK instances can run concurrently.
# svn diff -r 4977:5088
Index: lib/librte_eal/linuxapp/eal/eal.c
===================================================================
--- lib/librte_eal/linuxapp/eal/eal.c (revision 4977)
+++ lib/librte_eal/linuxapp/eal/eal.c (revision 5088)
@@ -1146,6 +1146,12 @@
return rte_config.process_type;
}
+const char *
+rte_eal_get_huge_prefix(void)
+{
+ return internal_config.hugefile_prefix;
+}
+
int rte_eal_has_hugepages(void)
{
return ! internal_config.no_hugetlbfs;
Index: lib/librte_eal/common/include/rte_eal.h
===================================================================
--- lib/librte_eal/common/include/rte_eal.h (revision 4977)
+++ lib/librte_eal/common/include/rte_eal.h (revision 5088)
@@ -102,6 +102,8 @@
*/
enum rte_proc_type_t rte_eal_process_type(void);
+const char* rte_eal_get_huge_prefix(void);
+
/**
* Request iopl privilege for all RPL.
*
Index: drivers/net/tap/rte_eth_tap.c
===================================================================
--- drivers/net/tap/rte_eth_tap.c (revision 4977)
+++ drivers/net/tap/rte_eth_tap.c (revision 5088)
@@ -18,6 +18,7 @@
#include <rte_string_fns.h>
#include <rte_ethdev.h>
#include <rte_errno.h>
+#include <rte_eal.h>
#include <assert.h>
#include <sys/types.h>
@@ -1989,8 +1990,9 @@
}
snprintf(tun_name, sizeof(tun_name), "%s%u",
- DEFAULT_TUN_NAME, tun_unit++);
-
+ DEFAULT_TUN_NAME, tun_unit++);
+ printf("tun_name %s in %s ...\n", tun_name, __func__);
+
if (params && (params[0] != '\0')) {
TAP_LOG(DEBUG, "parameters (%s)", params);
@@ -2175,8 +2177,9 @@
}
speed = ETH_SPEED_NUM_10G;
- snprintf(tap_name, sizeof(tap_name), "%s%u",
- DEFAULT_TAP_NAME, tap_unit++);
+ snprintf(tap_name, sizeof(tap_name), "%s_%s_%u",
+ DEFAULT_TAP_NAME, rte_eal_get_huge_prefix(), tap_unit++);
+ printf("tap_name %s in %s ...\n", tap_name, __func__);
memset(remote_iface, 0, RTE_ETH_NAME_MAX_LEN);
if (params && (params[0] != '\0')) {
#
Thanks and Regards,
Jack
From: Stephen Hemminger <sthemmin@microsoft.com>
Sent: January 8, 2019 5:27
To: hfli@netitest.com; matan@mellanox.com; KY Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Which Linux distribution are you using? Recently Ubuntu changed to put the multiq queue discipline in linux-modules-extra package which is not normally installed. They changed the packaging and it broke TAP DPDK usage.
________________________________
From: hfli@netitest.com <hfli@netitest.com>
Sent: Thursday, January 3, 2019 9:27 PM
To: matan@mellanox.com; Stephen Hemminger; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi All,
I just tried DPDK 18.11 and testpmd; it outputs the same error. Is there any help for this?
Run the server process:
# ./build/app/testpmd -l 2,3 -n3 --vdev="net_vdev_netvsc1,iface=port2" --file-prefix server -- --port-topology=chained -i --nb-cores=1 --nb-ports=1 --total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/server/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:15:5D:10:85:17
Checking link statuses...
Done
testpmd>
Run the client process:
# ./build/app/testpmd -l 0,1 -n3 --vdev="net_vdev_netvsc0,iface=port1" --file-prefix client -- --port-topology=chained -i --nb-cores=1 --nb-ports=1 --total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/client/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
qdisc_create_multiq(): Could not add multiq qdisc (17): File exists
eth_dev_tap_create(): dtap0: failed to create multiq qdisc.
eth_dev_tap_create(): Disabling rte flow support: File exists(17)
eth_dev_tap_create(): Remote feature requires flow support.
eth_dev_tap_create(): TAP Unable to initialize net_tap_vsc0
EAL: Driver cannot attach the device (net_tap_vsc0)
EAL: Failed to attach device on primary process
net_failsafe: sub_device 1 probe failed (File exists)
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap1
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Configuring Port 0 (socket 0)
Port 0: 06:64:AA:64:57:9D
Checking link statuses...
Done
testpmd> Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
#
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Thanks and Regards,
Jack
From: hfli@netitest.com <hfli@netitest.com>
Sent: January 4, 2019 11:47
To: 'matan@mellanox.com' <matan@mellanox.com>
Cc: 'users@dpdk.org' <users@dpdk.org>; 'azuredpdk@microsoft.com' <azuredpdk@microsoft.com>
Subject: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi Matan,
Could you help us with the error below?
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9 device
EAL: Bus (vdev) probe failed.
Thanks and Regards,
Jack
^ permalink raw reply [flat|nested] 9+ messages in thread
* [dpdk-users] 答复: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
2019-01-08 19:35 ` Stephen Hemminger
@ 2019-01-09 0:38 ` hfli
2019-01-09 19:55 ` [dpdk-users] " Stephen Hemminger
0 siblings, 1 reply; 9+ messages in thread
From: hfli @ 2019-01-09 0:38 UTC (permalink / raw)
To: 'Stephen Hemminger', matan, 'KY Srinivasan',
'Haiyang Zhang'
Cc: users
Hi Stephen,
Got it, I have removed the “_“ from “tun_name”, so it ensure the length
less than 15 bytes.
Thanks and Regards,
Jack
发件人: Stephen Hemminger <sthemmin@microsoft.com>
发送时间: 2019年1月9日 3:35
收件人: hfli@netitest.com; matan@mellanox.com; KY Srinivasan <kys@microsoft.
com>; Haiyang Zhang <haiyangz@microsoft.com>
抄送: users@dpdk.org
主题: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17):
File exists"
Part of the problem with tap_name is that it is limited to 15 characters so
this might break if hugefile_prefix was long.
_____
From: Stephen Hemminger
Sent: Tuesday, January 8, 2019 11:32 AM
To: hfli@netitest.com <mailto:hfli@netitest.com> ; matan@mellanox.com
<mailto:matan@mellanox.com> ; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org <mailto:users@dpdk.org>
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc
(17): File exists"
So the tun device name needs to be unique, and tun_unit variable is located
in per-process part of DPDK.
Another solution would be to move the tun unit into the common memory area,
or maybe put the pid into the tun name.
_____
From: hfli@netitest.com <mailto:hfli@netitest.com> <hfli@netitest.com
<mailto:hfli@netitest.com> >
Sent: Monday, January 7, 2019 5:36 PM
To: Stephen Hemminger; matan@mellanox.com <mailto:matan@mellanox.com> ; KY
Srinivasan; Haiyang Zhang
Cc: users@dpdk.org <mailto:users@dpdk.org>
Subject: 答复: DPDK procedure start Error: "PMD: Could not add multiq qdisc
(17): File exists"
Hi Stephen,
Thanks for your response, I have found the reason, it due to the reduplicate
“tun_name” in two processes, I added several lines code which get huge
prefix and append to tun_name, it is running well, so I suggest the tun_name
is appended other ID like gepid(), then we can run more dpdk instance
concurrently.
# svn diff -r 4977:5088
Index: lib/librte_eal/linuxapp/eal/eal.c
===================================================================
--- lib/librte_eal/linuxapp/eal/eal.c (版本 4977)
+++ lib/librte_eal/linuxapp/eal/eal.c (版本 5088)
@@ -1146,6 +1146,12 @@
return rte_config.process_type;
}
+const char *
+rte_eal_get_huge_prefix(void)
+{
+ return internal_config.hugefile_prefix;
+}
+
int rte_eal_has_hugepages(void)
{
return ! internal_config.no_hugetlbfs;
Index: lib/librte_eal/common/include/rte_eal.h
===================================================================
--- lib/librte_eal/common/include/rte_eal.h (版本 4977)
+++ lib/librte_eal/common/include/rte_eal.h (版本 5088)
@@ -102,6 +102,8 @@
*/
enum rte_proc_type_t rte_eal_process_type(void);
+const char* rte_eal_get_huge_prefix(void);
+
/**
* Request iopl privilege for all RPL.
*
Index: drivers/net/tap/rte_eth_tap.c
===================================================================
--- drivers/net/tap/rte_eth_tap.c (版本 4977)
+++ drivers/net/tap/rte_eth_tap.c (版本 5088)
@@ -18,6 +18,7 @@
#include <rte_string_fns.h>
#include <rte_ethdev.h>
#include <rte_errno.h>
+#include <rte_eal.h>
#include <assert.h>
#include <sys/types.h>
@@ -1989,8 +1990,9 @@
}
snprintf(tun_name, sizeof(tun_name), "%s%u",
- DEFAULT_TUN_NAME, tun_unit++);
-
+ DEFAULT_TUN_NAME, tun_unit++);
+ printf("tun_name %s in %s ...\n", tun_name, __func__);
+
if (params && (params[0] != '\0')) {
TAP_LOG(DEBUG, "parameters (%s)", params);
@@ -2175,8 +2177,9 @@
}
speed = ETH_SPEED_NUM_10G;
- snprintf(tap_name, sizeof(tap_name), "%s%u",
- DEFAULT_TAP_NAME, tap_unit++);
+ snprintf(tap_name, sizeof(tap_name), "%s_%s_%u",
+ DEFAULT_TAP_NAME, rte_eal_get_huge_prefix(), tap_unit++);
+ printf("tap_name %s in %s ...\n", tap_name, __func__);
memset(remote_iface, 0, RTE_ETH_NAME_MAX_LEN);
if (params && (params[0] != '\0')) {
#
Thanks and Regards,
Jack
发件人: Stephen Hemminger <sthemmin@microsoft.com
<mailto:sthemmin@microsoft.com> >
发送时间: 2019年1月8日 5:27
收件人: hfli@netitest.com <mailto:hfli@netitest.com> ; matan@mellanox.com
<mailto:matan@mellanox.com> ; KY Srinivasan <kys@microsoft.com
<mailto:kys@microsoft.com> >; Haiyang Zhang <haiyangz@microsoft.com
<mailto:haiyangz@microsoft.com> >
抄送: users@dpdk.org <mailto:users@dpdk.org>
主题: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17):
File exists"
Which Linux distribution are you using? Recently Ubuntu changed to put the
multiq queue discipline in linux-modules-extra package which is not normally
installed. They changed the packaging and it broke TAP DPDK usage.
_____
From: hfli@netitest.com <mailto:hfli@netitest.com> <hfli@netitest.com
<mailto:hfli@netitest.com> >
Sent: Thursday, January 3, 2019 9:27 PM
To: matan@mellanox.com <mailto:matan@mellanox.com> ; Stephen Hemminger; KY
Srinivasan; Haiyang Zhang
Cc: users@dpdk.org <mailto:users@dpdk.org>
Subject: 答复: DPDK procedure start Error: "PMD: Could not add multiq qdisc
(17): File exists"
Hi All,
I just tried DPDK 18.11 and test_pmd, it output same error, any help for
this?.
Run Server Process
# ./build/app/testpmd -l 2,3 -n3 --vdev="net_vdev_netvsc1,iface=port2"
--file-prefix server -- --port-topology=chained -i --nb-cores=1 --nb-ports=1
--total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/server/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:15:5D:10:85:17
Checking link statuses...
Done
testpmd>
Run Client process
# ./build/app/testpmd -l 0,1 -n3 --vdev="net_vdev_netvsc0,iface=port1"
--file-prefix client -- --port-topology=chained -i --nb-cores=1 --nb-ports=1
--total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/client/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
qdisc_create_multiq(): Could not add multiq qdisc (17): File exists
eth_dev_tap_create(): dtap0: failed to create multiq qdisc.
eth_dev_tap_create(): Disabling rte flow support: File exists(17)
eth_dev_tap_create(): Remote feature requires flow support.
eth_dev_tap_create(): TAP Unable to initialize net_tap_vsc0
EAL: Driver cannot attach the device (net_tap_vsc0)
EAL: Failed to attach device on primary process
net_failsafe: sub_device 1 probe failed (File exists)
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap1
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Configuring Port 0 (socket 0)
Port 0: 06:64:AA:64:57:9D
Checking link statuses...
Done
testpmd> Cannot set owner to port 1 already owned by
Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
#
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Thanks and Regards,
Jack
发件人: hfli@netitest.com <mailto:hfli@netitest.com> <hfli@netitest.com
<mailto:hfli@netitest.com> >
发送时间: 2019年1月4日 11:47
收件人: 'matan@mellanox.com' <matan@mellanox.com <mailto:matan@mellanox.com>
>
抄送: 'users@dpdk.org' <users@dpdk.org <mailto:users@dpdk.org> >;
'azuredpdk@microsoft.com' <azuredpdk@microsoft.com
<mailto:azuredpdk@microsoft.com> >
主题: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17):
File exists"
Hi Matan,
Could you help us for below error?
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Our DPDK procedure want to deploy on Hyper-v and Azure Cloud, so we try it
on Hyper-v firstly. It need start 2 processes, a client process use port1, a
server process use port2, port1 and port2 in one internal subnet on a
virtual switch,
But only one process can be started successfully, the other said error,
“PMD: Could not add multiq qdisc (17): File exists”, our procedure is
running well on Vmware/KVM/ASW, is there any help for this?
Below is our env:
OS: Windows 10 and Hyper-V on it
Guest OS: CentOS7.6(Upgrade kernel to 4.20.0)
DPDK version: 18.02.2
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
root:/# ifconfig -a
bond0: flags=5122<BROADCAST,MASTER,MULTICAST> mtu 1500
ether 46:28:ec:c8:7a:74 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 75 bytes 6284 (6.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 6284 (6.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.16.130 netmask 255.255.255.0 broadcast 192.168.16.255
inet6 fe80::78e3:1af8:3333:ff45 prefixlen 64 scopeid 0x20<link>
ether 00:15:5d:10:85:14 txqueuelen 1000 (Ethernet)
RX packets 5494 bytes 706042 (689.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2163 bytes 438205 (427.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:15:5d:10:85:15 txqueuelen 1000 (Ethernet)
RX packets 3131 bytes 518243 (506.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:16 txqueuelen 1000 (Ethernet)
RX packets 1707 bytes 163778 (159.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 693 bytes 70666 (69.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:17 txqueuelen 1000 (Ethernet)
RX packets 900 bytes 112256 (109.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1504 bytes 122428 (119.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root:/# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/#
Start server process successfully
# ./VM_DPDK -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem
1500 --file-prefix server
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.server_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc1_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
Concurrently starting the client process failed
# ./VM_DPDK -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem
1500 --file-prefix client
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.client_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc0_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Thanks and Regards,
Jack
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [dpdk-users] DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
2019-01-08 19:32 ` [dpdk-users] " Stephen Hemminger
@ 2019-01-08 19:35 ` Stephen Hemminger
2019-01-09 0:38 ` [dpdk-users] Re: " hfli
0 siblings, 1 reply; 9+ messages in thread
From: Stephen Hemminger @ 2019-01-08 19:35 UTC (permalink / raw)
To: hfli, matan, KY Srinivasan, Haiyang Zhang; +Cc: users
Part of the problem with tap_name is that it is limited to 15 characters so this might break if hugefile_prefix was long.
________________________________
From: Stephen Hemminger
Sent: Tuesday, January 8, 2019 11:32 AM
To: hfli@netitest.com; matan@mellanox.com; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
So the tun device name needs to be unique, and the tun_unit variable is located in the per-process part of DPDK.
Another solution would be to move the tun unit into the common memory area, or maybe put the pid into the tun name.
________________________________
From: hfli@netitest.com <hfli@netitest.com>
Sent: Monday, January 7, 2019 5:36 PM
To: Stephen Hemminger; matan@mellanox.com; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi Stephen,
Thanks for your response. I found the cause: it is due to the duplicate “tun_name” in the two processes. I added a few lines of code that get the hugepage prefix and append it to the name, and it now runs well. I suggest appending another unique ID such as getpid() to the name, so that more DPDK instances can run concurrently.
# svn diff -r 4977:5088
Index: lib/librte_eal/linuxapp/eal/eal.c
===================================================================
--- lib/librte_eal/linuxapp/eal/eal.c (revision 4977)
+++ lib/librte_eal/linuxapp/eal/eal.c (revision 5088)
@@ -1146,6 +1146,12 @@
return rte_config.process_type;
}
+const char *
+rte_eal_get_huge_prefix(void)
+{
+ return internal_config.hugefile_prefix;
+}
+
int rte_eal_has_hugepages(void)
{
return ! internal_config.no_hugetlbfs;
Index: lib/librte_eal/common/include/rte_eal.h
===================================================================
--- lib/librte_eal/common/include/rte_eal.h (revision 4977)
+++ lib/librte_eal/common/include/rte_eal.h (revision 5088)
@@ -102,6 +102,8 @@
*/
enum rte_proc_type_t rte_eal_process_type(void);
+const char* rte_eal_get_huge_prefix(void);
+
/**
* Request iopl privilege for all RPL.
*
Index: drivers/net/tap/rte_eth_tap.c
===================================================================
--- drivers/net/tap/rte_eth_tap.c (revision 4977)
+++ drivers/net/tap/rte_eth_tap.c (revision 5088)
@@ -18,6 +18,7 @@
#include <rte_string_fns.h>
#include <rte_ethdev.h>
#include <rte_errno.h>
+#include <rte_eal.h>
#include <assert.h>
#include <sys/types.h>
@@ -1989,8 +1990,9 @@
}
snprintf(tun_name, sizeof(tun_name), "%s%u",
- DEFAULT_TUN_NAME, tun_unit++);
-
+ DEFAULT_TUN_NAME, tun_unit++);
+ printf("tun_name %s in %s ...\n", tun_name, __func__);
+
if (params && (params[0] != '\0')) {
TAP_LOG(DEBUG, "parameters (%s)", params);
@@ -2175,8 +2177,9 @@
}
speed = ETH_SPEED_NUM_10G;
- snprintf(tap_name, sizeof(tap_name), "%s%u",
- DEFAULT_TAP_NAME, tap_unit++);
+ snprintf(tap_name, sizeof(tap_name), "%s_%s_%u",
+ DEFAULT_TAP_NAME, rte_eal_get_huge_prefix(), tap_unit++);
+ printf("tap_name %s in %s ...\n", tap_name, __func__);
memset(remote_iface, 0, RTE_ETH_NAME_MAX_LEN);
if (params && (params[0] != '\0')) {
#
Thanks and Regards,
Jack
From: Stephen Hemminger <sthemmin@microsoft.com>
Sent: January 8, 2019 5:27
To: hfli@netitest.com; matan@mellanox.com; KY Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Which Linux distribution are you using? Recently Ubuntu changed to put the multiq queue discipline in linux-modules-extra package which is not normally installed. They changed the packaging and it broke TAP DPDK usage.
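If the qdisc failure comes from a missing kernel module rather than a name collision, that is quick to check from the guest. A minimal diagnostic sketch (the module name sch_multiq and the Ubuntu package name are the usual ones; adjust for your distribution, and "dtap0" is just the TAP name from the logs above):

```shell
# Check whether the multiq qdisc module exists for the running kernel;
# on recent Ubuntu it ships in linux-modules-extra-$(uname -r).
if modinfo sch_multiq >/dev/null 2>&1; then
    echo "sch_multiq available"
else
    echo "sch_multiq missing - try installing linux-modules-extra-$(uname -r)"
fi

# EEXIST (17) can also mean a multiq qdisc is left over from an earlier
# run; listing it (and deleting it if stale) is harmless:
tc qdisc show dev dtap0 2>/dev/null
# tc qdisc del dev dtap0 root    # uncomment to clear a stale qdisc
```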
________________________________
From: hfli@netitest.com<mailto:hfli@netitest.com> <hfli@netitest.com<mailto:hfli@netitest.com>>
Sent: Thursday, January 3, 2019 9:27 PM
To: matan@mellanox.com<mailto:matan@mellanox.com>; Stephen Hemminger; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi All,
I just tried DPDK 18.11 with testpmd; it outputs the same error. Any help with this?
Run Server Process
# ./build/app/testpmd -l 2,3 -n3 --vdev="net_vdev_netvsc1,iface=port2" --file-prefix server -- --port-topology=chained -i --nb-cores=1 --nb-ports=1 --total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/server/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:15:5D:10:85:17
Checking link statuses...
Done
testpmd>
Run Client process
# ./build/app/testpmd -l 0,1 -n3 --vdev="net_vdev_netvsc0,iface=port1" --file-prefix client -- --port-topology=chained -i --nb-cores=1 --nb-ports=1 --total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/client/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
qdisc_create_multiq(): Could not add multiq qdisc (17): File exists
eth_dev_tap_create(): dtap0: failed to create multiq qdisc.
eth_dev_tap_create(): Disabling rte flow support: File exists(17)
eth_dev_tap_create(): Remote feature requires flow support.
eth_dev_tap_create(): TAP Unable to initialize net_tap_vsc0
EAL: Driver cannot attach the device (net_tap_vsc0)
EAL: Failed to attach device on primary process
net_failsafe: sub_device 1 probe failed (File exists)
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap1
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Configuring Port 0 (socket 0)
Port 0: 06:64:AA:64:57:9D
Checking link statuses...
Done
testpmd> Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
#
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Thanks and Regards,
Jack
From: hfli@netitest.com<mailto:hfli@netitest.com> <hfli@netitest.com<mailto:hfli@netitest.com>>
Sent: January 4, 2019 11:47
To: 'matan@mellanox.com' <matan@mellanox.com<mailto:matan@mellanox.com>>
Cc: 'users@dpdk.org' <users@dpdk.org<mailto:users@dpdk.org>>; 'azuredpdk@microsoft.com' <azuredpdk@microsoft.com<mailto:azuredpdk@microsoft.com>>
Subject: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi Matan,
Could you help us for below error?
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
We want to deploy our DPDK application on Hyper-V and Azure Cloud, so we tried it on Hyper-V first. It needs to start two processes: a client process using port1 and a server process using port2, with port1 and port2 on one internal subnet of a virtual switch.
But only one process starts successfully; the other reports the error “PMD: Could not add multiq qdisc (17): File exists”. Our application runs fine on VMware/KVM/AWS; is there any help for this?
Below is our env:
OS: Windows 10 and Hyper-V on it
Guest OS: CentOS 7.6 (kernel upgraded to 4.20.0)
DPDK version: 18.02.2
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
root:/# ifconfig -a
bond0: flags=5122<BROADCAST,MASTER,MULTICAST> mtu 1500
ether 46:28:ec:c8:7a:74 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 75 bytes 6284 (6.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 6284 (6.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.16.130 netmask 255.255.255.0 broadcast 192.168.16.255
inet6 fe80::78e3:1af8:3333:ff45 prefixlen 64 scopeid 0x20<link>
ether 00:15:5d:10:85:14 txqueuelen 1000 (Ethernet)
RX packets 5494 bytes 706042 (689.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2163 bytes 438205 (427.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:15:5d:10:85:15 txqueuelen 1000 (Ethernet)
RX packets 3131 bytes 518243 (506.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:16 txqueuelen 1000 (Ethernet)
RX packets 1707 bytes 163778 (159.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 693 bytes 70666 (69.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:17 txqueuelen 1000 (Ethernet)
RX packets 900 bytes 112256 (109.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1504 bytes 122428 (119.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root:/# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/#
Start server process successfully
# ./VM_DPDK -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem 1500 --file-prefix server
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.server_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc1_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
Concurrently starting the client process failed
# ./VM_DPDK -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem 1500 --file-prefix client
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.client_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc0_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Thanks and Regards,
Jack
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [dpdk-users] DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
2019-01-08 1:36 ` [dpdk-users] Re: " hfli
@ 2019-01-08 19:32 ` Stephen Hemminger
2019-01-08 19:35 ` Stephen Hemminger
0 siblings, 1 reply; 9+ messages in thread
From: Stephen Hemminger @ 2019-01-08 19:32 UTC (permalink / raw)
To: hfli, matan, KY Srinivasan, Haiyang Zhang; +Cc: users
So the tun device name needs to be unique, and the tun_unit variable is located in the per-process part of DPDK.
Another solution would be to move the tun unit into the common memory area, or maybe put the pid into the tun name.
________________________________
From: hfli@netitest.com <hfli@netitest.com>
Sent: Monday, January 7, 2019 5:36 PM
To: Stephen Hemminger; matan@mellanox.com; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi Stephen,
Thanks for your response. I found the cause: it is due to the duplicate “tun_name” in the two processes. I added a few lines of code that get the hugepage prefix and append it to the name, and it now runs well. I suggest appending another unique ID such as getpid() to the name, so that more DPDK instances can run concurrently.
# svn diff -r 4977:5088
Index: lib/librte_eal/linuxapp/eal/eal.c
===================================================================
--- lib/librte_eal/linuxapp/eal/eal.c (revision 4977)
+++ lib/librte_eal/linuxapp/eal/eal.c (revision 5088)
@@ -1146,6 +1146,12 @@
return rte_config.process_type;
}
+const char *
+rte_eal_get_huge_prefix(void)
+{
+ return internal_config.hugefile_prefix;
+}
+
int rte_eal_has_hugepages(void)
{
return ! internal_config.no_hugetlbfs;
Index: lib/librte_eal/common/include/rte_eal.h
===================================================================
--- lib/librte_eal/common/include/rte_eal.h (revision 4977)
+++ lib/librte_eal/common/include/rte_eal.h (revision 5088)
@@ -102,6 +102,8 @@
*/
enum rte_proc_type_t rte_eal_process_type(void);
+const char* rte_eal_get_huge_prefix(void);
+
/**
* Request iopl privilege for all RPL.
*
Index: drivers/net/tap/rte_eth_tap.c
===================================================================
--- drivers/net/tap/rte_eth_tap.c (revision 4977)
+++ drivers/net/tap/rte_eth_tap.c (revision 5088)
@@ -18,6 +18,7 @@
#include <rte_string_fns.h>
#include <rte_ethdev.h>
#include <rte_errno.h>
+#include <rte_eal.h>
#include <assert.h>
#include <sys/types.h>
@@ -1989,8 +1990,9 @@
}
snprintf(tun_name, sizeof(tun_name), "%s%u",
- DEFAULT_TUN_NAME, tun_unit++);
-
+ DEFAULT_TUN_NAME, tun_unit++);
+ printf("tun_name %s in %s ...\n", tun_name, __func__);
+
if (params && (params[0] != '\0')) {
TAP_LOG(DEBUG, "parameters (%s)", params);
@@ -2175,8 +2177,9 @@
}
speed = ETH_SPEED_NUM_10G;
- snprintf(tap_name, sizeof(tap_name), "%s%u",
- DEFAULT_TAP_NAME, tap_unit++);
+ snprintf(tap_name, sizeof(tap_name), "%s_%s_%u",
+ DEFAULT_TAP_NAME, rte_eal_get_huge_prefix(), tap_unit++);
+ printf("tap_name %s in %s ...\n", tap_name, __func__);
memset(remote_iface, 0, RTE_ETH_NAME_MAX_LEN);
if (params && (params[0] != '\0')) {
#
Thanks and Regards,
Jack
From: Stephen Hemminger <sthemmin@microsoft.com>
Sent: January 8, 2019 5:27
To: hfli@netitest.com; matan@mellanox.com; KY Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Which Linux distribution are you using? Recently Ubuntu changed to put the multiq queue discipline in linux-modules-extra package which is not normally installed. They changed the packaging and it broke TAP DPDK usage.
________________________________
From: hfli@netitest.com<mailto:hfli@netitest.com> <hfli@netitest.com<mailto:hfli@netitest.com>>
Sent: Thursday, January 3, 2019 9:27 PM
To: matan@mellanox.com<mailto:matan@mellanox.com>; Stephen Hemminger; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi All,
I just tried DPDK 18.11 with testpmd; it outputs the same error. Any help with this?
Run Server Process
# ./build/app/testpmd -l 2,3 -n3 --vdev="net_vdev_netvsc1,iface=port2" --file-prefix server -- --port-topology=chained -i --nb-cores=1 --nb-ports=1 --total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/server/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:15:5D:10:85:17
Checking link statuses...
Done
testpmd>
Run Client process
# ./build/app/testpmd -l 0,1 -n3 --vdev="net_vdev_netvsc0,iface=port1" --file-prefix client -- --port-topology=chained -i --nb-cores=1 --nb-ports=1 --total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/client/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
qdisc_create_multiq(): Could not add multiq qdisc (17): File exists
eth_dev_tap_create(): dtap0: failed to create multiq qdisc.
eth_dev_tap_create(): Disabling rte flow support: File exists(17)
eth_dev_tap_create(): Remote feature requires flow support.
eth_dev_tap_create(): TAP Unable to initialize net_tap_vsc0
EAL: Driver cannot attach the device (net_tap_vsc0)
EAL: Failed to attach device on primary process
net_failsafe: sub_device 1 probe failed (File exists)
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap1
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Configuring Port 0 (socket 0)
Port 0: 06:64:AA:64:57:9D
Checking link statuses...
Done
testpmd> Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
#
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Thanks and Regards,
Jack
From: hfli@netitest.com<mailto:hfli@netitest.com> <hfli@netitest.com<mailto:hfli@netitest.com>>
Sent: January 4, 2019 11:47
To: 'matan@mellanox.com' <matan@mellanox.com<mailto:matan@mellanox.com>>
Cc: 'users@dpdk.org' <users@dpdk.org<mailto:users@dpdk.org>>; 'azuredpdk@microsoft.com' <azuredpdk@microsoft.com<mailto:azuredpdk@microsoft.com>>
Subject: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi Matan,
Could you help us for below error?
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
We want to deploy our DPDK application on Hyper-V and Azure Cloud, so we tried it on Hyper-V first. It needs to start two processes: a client process using port1 and a server process using port2, with port1 and port2 on one internal subnet of a virtual switch.
But only one process starts successfully; the other reports the error “PMD: Could not add multiq qdisc (17): File exists”. Our application runs fine on VMware/KVM/AWS; is there any help for this?
Below is our env:
OS: Windows 10 and Hyper-V on it
Guest OS: CentOS 7.6 (kernel upgraded to 4.20.0)
DPDK version: 18.02.2
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
root:/# ifconfig -a
bond0: flags=5122<BROADCAST,MASTER,MULTICAST> mtu 1500
ether 46:28:ec:c8:7a:74 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 75 bytes 6284 (6.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 6284 (6.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.16.130 netmask 255.255.255.0 broadcast 192.168.16.255
inet6 fe80::78e3:1af8:3333:ff45 prefixlen 64 scopeid 0x20<link>
ether 00:15:5d:10:85:14 txqueuelen 1000 (Ethernet)
RX packets 5494 bytes 706042 (689.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2163 bytes 438205 (427.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:15:5d:10:85:15 txqueuelen 1000 (Ethernet)
RX packets 3131 bytes 518243 (506.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:16 txqueuelen 1000 (Ethernet)
RX packets 1707 bytes 163778 (159.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 693 bytes 70666 (69.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:17 txqueuelen 1000 (Ethernet)
RX packets 900 bytes 112256 (109.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1504 bytes 122428 (119.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root:/# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/#
Start server process successfully
# ./VM_DPDK -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem 1500 --file-prefix server
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.server_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc1_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
Concurrently starting the client process failed
# ./VM_DPDK -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem 1500 --file-prefix client
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.client_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc0_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Thanks and Regards,
Jack
^ permalink raw reply [flat|nested] 9+ messages in thread
* [dpdk-users] Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
2019-01-07 21:26 ` [dpdk-users] " Stephen Hemminger
@ 2019-01-08 1:36 ` hfli
2019-01-08 19:32 ` [dpdk-users] " Stephen Hemminger
0 siblings, 1 reply; 9+ messages in thread
From: hfli @ 2019-01-08 1:36 UTC (permalink / raw)
To: 'Stephen Hemminger', matan, 'KY Srinivasan',
'Haiyang Zhang'
Cc: users
Hi Stephen,
Thanks for your response. I found the cause: it is due to the duplicate
“tun_name” in the two processes. I added a few lines of code that get the
hugepage prefix and append it to the name, and it now runs well. I suggest
appending another unique ID such as getpid() to the name, so that more DPDK
instances can run concurrently.
# svn diff -r 4977:5088
Index: lib/librte_eal/linuxapp/eal/eal.c
===================================================================
--- lib/librte_eal/linuxapp/eal/eal.c (revision 4977)
+++ lib/librte_eal/linuxapp/eal/eal.c (revision 5088)
@@ -1146,6 +1146,12 @@
return rte_config.process_type;
}
+const char *
+rte_eal_get_huge_prefix(void)
+{
+ return internal_config.hugefile_prefix;
+}
+
int rte_eal_has_hugepages(void)
{
return ! internal_config.no_hugetlbfs;
Index: lib/librte_eal/common/include/rte_eal.h
===================================================================
--- lib/librte_eal/common/include/rte_eal.h (revision 4977)
+++ lib/librte_eal/common/include/rte_eal.h (revision 5088)
@@ -102,6 +102,8 @@
*/
enum rte_proc_type_t rte_eal_process_type(void);
+const char* rte_eal_get_huge_prefix(void);
+
/**
* Request iopl privilege for all RPL.
*
Index: drivers/net/tap/rte_eth_tap.c
===================================================================
--- drivers/net/tap/rte_eth_tap.c (revision 4977)
+++ drivers/net/tap/rte_eth_tap.c (revision 5088)
@@ -18,6 +18,7 @@
#include <rte_string_fns.h>
#include <rte_ethdev.h>
#include <rte_errno.h>
+#include <rte_eal.h>
#include <assert.h>
#include <sys/types.h>
@@ -1989,8 +1990,9 @@
}
snprintf(tun_name, sizeof(tun_name), "%s%u",
- DEFAULT_TUN_NAME, tun_unit++);
-
+ DEFAULT_TUN_NAME, tun_unit++);
+ printf("tun_name %s in %s ...\n", tun_name, __func__);
+
if (params && (params[0] != '\0')) {
TAP_LOG(DEBUG, "parameters (%s)", params);
@@ -2175,8 +2177,9 @@
}
speed = ETH_SPEED_NUM_10G;
- snprintf(tap_name, sizeof(tap_name), "%s%u",
- DEFAULT_TAP_NAME, tap_unit++);
+ snprintf(tap_name, sizeof(tap_name), "%s_%s_%u",
+ DEFAULT_TAP_NAME, rte_eal_get_huge_prefix(), tap_unit++);
+ printf("tap_name %s in %s ...\n", tap_name, __func__);
memset(remote_iface, 0, RTE_ETH_NAME_MAX_LEN);
if (params && (params[0] != '\0')) {
#
Thanks and Regards,
Jack
From: Stephen Hemminger <sthemmin@microsoft.com>
Sent: January 8, 2019 5:27
To: hfli@netitest.com; matan@mellanox.com; KY Srinivasan <kys@microsoft.com>; Haiyang Zhang <haiyangz@microsoft.com>
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17):
File exists"
Which Linux distribution are you using? Recently Ubuntu changed to put the
multiq queue discipline in linux-modules-extra package which is not normally
installed. They changed the packaging and it broke TAP DPDK usage.
_____
From: hfli@netitest.com <hfli@netitest.com>
Sent: Thursday, January 3, 2019 9:27 PM
To: matan@mellanox.com; Stephen Hemminger; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc
(17): File exists"
Hi All,
I just tried DPDK 18.11 with testpmd; it outputs the same error. Any help
with this?
Run Server Process
# ./build/app/testpmd -l 2,3 -n3 --vdev="net_vdev_netvsc1,iface=port2"
--file-prefix server -- --port-topology=chained -i --nb-cores=1 --nb-ports=1
--total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/server/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:15:5D:10:85:17
Checking link statuses...
Done
testpmd>
Run Client process
# ./build/app/testpmd -l 0,1 -n3 --vdev="net_vdev_netvsc0,iface=port1"
--file-prefix client -- --port-topology=chained -i --nb-cores=1 --nb-ports=1
--total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/client/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
qdisc_create_multiq(): Could not add multiq qdisc (17): File exists
eth_dev_tap_create(): dtap0: failed to create multiq qdisc.
eth_dev_tap_create(): Disabling rte flow support: File exists(17)
eth_dev_tap_create(): Remote feature requires flow support.
eth_dev_tap_create(): TAP Unable to initialize net_tap_vsc0
EAL: Driver cannot attach the device (net_tap_vsc0)
EAL: Failed to attach device on primary process
net_failsafe: sub_device 1 probe failed (File exists)
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap1
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Configuring Port 0 (socket 0)
Port 0: 06:64:AA:64:57:9D
Checking link statuses...
Done
testpmd> Cannot set owner to port 1 already owned by
Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
#
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Thanks and Regards,
Jack
From: hfli@netitest.com <hfli@netitest.com>
Sent: January 4, 2019 11:47
To: 'matan@mellanox.com' <matan@mellanox.com>
Cc: 'users@dpdk.org' <users@dpdk.org>; 'azuredpdk@microsoft.com'
<azuredpdk@microsoft.com>
Subject: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17):
File exists"
Hi Matan,
Could you help us with the error below?
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Our DPDK application needs to deploy on Hyper-V and Azure Cloud, so we tried
it on Hyper-V first. It starts two processes: a client process using port1
and a server process using port2; port1 and port2 are in one internal subnet
on a virtual switch.
But only one process can start successfully; the other reports the error
"PMD: Could not add multiq qdisc (17): File exists". Our application runs
well on VMware/KVM/AWS. Is there any help for this?
Below is our env:
OS: Windows 10 and Hyper-V on it
Guest OS: CentOS 7.6 (kernel upgraded to 4.20.0)
DPDK version: 18.02.2
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
root:/# ifconfig -a
bond0: flags=5122<BROADCAST,MASTER,MULTICAST> mtu 1500
ether 46:28:ec:c8:7a:74 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 75 bytes 6284 (6.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 6284 (6.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.16.130 netmask 255.255.255.0 broadcast 192.168.16.255
inet6 fe80::78e3:1af8:3333:ff45 prefixlen 64 scopeid 0x20<link>
ether 00:15:5d:10:85:14 txqueuelen 1000 (Ethernet)
RX packets 5494 bytes 706042 (689.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2163 bytes 438205 (427.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:15:5d:10:85:15 txqueuelen 1000 (Ethernet)
RX packets 3131 bytes 518243 (506.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:16 txqueuelen 1000 (Ethernet)
RX packets 1707 bytes 163778 (159.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 693 bytes 70666 (69.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:17 txqueuelen 1000 (Ethernet)
RX packets 900 bytes 112256 (109.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1504 bytes 122428 (119.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root:/# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/#
Starting the server process succeeds
# ./VM_DPDK -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem
1500 --file-prefix server
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.server_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc1_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
Concurrently starting the client process fails
# ./VM_DPDK -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem
1500 --file-prefix client
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.client_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc0_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Thanks and Regards,
Jack
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [dpdk-users] DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
2019-01-04 5:27 [dpdk-users] Re: " hfli
@ 2019-01-07 21:26 ` Stephen Hemminger
2019-01-08 1:36 ` [dpdk-users] Re: " hfli
0 siblings, 1 reply; 9+ messages in thread
From: Stephen Hemminger @ 2019-01-07 21:26 UTC (permalink / raw)
To: hfli, matan, KY Srinivasan, Haiyang Zhang; +Cc: users
Which Linux distribution are you using? Recently Ubuntu changed to put the multiq queue discipline in linux-modules-extra package which is not normally installed. They changed the packaging and it broke TAP DPDK usage.
________________________________
From: hfli@netitest.com <hfli@netitest.com>
Sent: Thursday, January 3, 2019 9:27 PM
To: matan@mellanox.com; Stephen Hemminger; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi All,
I just tried DPDK 18.11 with testpmd; it outputs the same error. Any help with this?
Run Server Process
# ./build/app/testpmd -l 2,3 -n3 --vdev="net_vdev_netvsc1,iface=port2" --file-prefix server -- --port-topology=chained -i --nb-cores=1 --nb-ports=1 --total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/server/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:15:5D:10:85:17
Checking link statuses...
Done
testpmd>
Run Client process
# ./build/app/testpmd -l 0,1 -n3 --vdev="net_vdev_netvsc0,iface=port1" --file-prefix client -- --port-topology=chained -i --nb-cores=1 --nb-ports=1 --total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/client/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
qdisc_create_multiq(): Could not add multiq qdisc (17): File exists
eth_dev_tap_create(): dtap0: failed to create multiq qdisc.
eth_dev_tap_create(): Disabling rte flow support: File exists(17)
eth_dev_tap_create(): Remote feature requires flow support.
eth_dev_tap_create(): TAP Unable to initialize net_tap_vsc0
EAL: Driver cannot attach the device (net_tap_vsc0)
EAL: Failed to attach device on primary process
net_failsafe: sub_device 1 probe failed (File exists)
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap1
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Configuring Port 0 (socket 0)
Port 0: 06:64:AA:64:57:9D
Checking link statuses...
Done
testpmd> Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
#
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Thanks and Regards,
Jack
From: hfli@netitest.com <hfli@netitest.com>
Sent: January 4, 2019 11:47
To: 'matan@mellanox.com' <matan@mellanox.com>
Cc: 'users@dpdk.org' <users@dpdk.org>; 'azuredpdk@microsoft.com' <azuredpdk@microsoft.com>
Subject: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Hi Matan,
Could you help us with the error below?
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Our DPDK application needs to deploy on Hyper-V and Azure Cloud, so we tried it on Hyper-V first. It starts two processes: a client process using port1 and a server process using port2; port1 and port2 are in one internal subnet on a virtual switch.
But only one process can start successfully; the other reports the error "PMD: Could not add multiq qdisc (17): File exists". Our application runs well on VMware/KVM/AWS. Is there any help for this?
Below is our env:
OS: Windows 10 and Hyper-V on it
Guest OS: CentOS 7.6 (kernel upgraded to 4.20.0)
DPDK version: 18.02.2
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
root:/# ifconfig -a
bond0: flags=5122<BROADCAST,MASTER,MULTICAST> mtu 1500
ether 46:28:ec:c8:7a:74 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 75 bytes 6284 (6.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 6284 (6.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.16.130 netmask 255.255.255.0 broadcast 192.168.16.255
inet6 fe80::78e3:1af8:3333:ff45 prefixlen 64 scopeid 0x20<link>
ether 00:15:5d:10:85:14 txqueuelen 1000 (Ethernet)
RX packets 5494 bytes 706042 (689.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2163 bytes 438205 (427.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:15:5d:10:85:15 txqueuelen 1000 (Ethernet)
RX packets 3131 bytes 518243 (506.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:16 txqueuelen 1000 (Ethernet)
RX packets 1707 bytes 163778 (159.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 693 bytes 70666 (69.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:17 txqueuelen 1000 (Ethernet)
RX packets 900 bytes 112256 (109.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1504 bytes 122428 (119.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root:/# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/#
Starting the server process succeeds
# ./VM_DPDK -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem 1500 --file-prefix server
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.server_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc1_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
Concurrently starting the client process fails
# ./VM_DPDK -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem 1500 --file-prefix client
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.client_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc0_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Thanks and Regards,
Jack
^ permalink raw reply [flat|nested] 9+ messages in thread
* [dpdk-users] Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
@ 2019-01-04 5:27 hfli
2019-01-07 21:26 ` [dpdk-users] " Stephen Hemminger
0 siblings, 1 reply; 9+ messages in thread
From: hfli @ 2019-01-04 5:27 UTC (permalink / raw)
To: matan, sthemmin, kys, haiyangz; +Cc: users
Hi All,
I just tried DPDK 18.11 with testpmd; it outputs the same error. Any help
with this?
Run Server Process
# ./build/app/testpmd -l 2,3 -n3 --vdev="net_vdev_netvsc1,iface=port2"
--file-prefix server -- --port-topology=chained -i --nb-cores=1 --nb-ports=1
--total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/server/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 00:15:5D:10:85:17
Checking link statuses...
Done
testpmd>
Run Client process
# ./build/app/testpmd -l 0,1 -n3 --vdev="net_vdev_netvsc0,iface=port1"
--file-prefix client -- --port-topology=chained -i --nb-cores=1 --nb-ports=1
--total-num-mbufs=2048
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/client/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0
qdisc_create_multiq(): Could not add multiq qdisc (17): File exists
eth_dev_tap_create(): dtap0: failed to create multiq qdisc.
eth_dev_tap_create(): Disabling rte flow support: File exists(17)
eth_dev_tap_create(): Remote feature requires flow support.
eth_dev_tap_create(): TAP Unable to initialize net_tap_vsc0
EAL: Driver cannot attach the device (net_tap_vsc0)
EAL: Failed to attach device on primary process
net_failsafe: sub_device 1 probe failed (File exists)
rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap1
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and
--ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176,
socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Configuring Port 0 (socket 0)
Port 0: 06:64:AA:64:57:9D
Checking link statuses...
Done
testpmd> Cannot set owner to port 1 already owned by
Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
Cannot set owner to port 1 already owned by Fail-safe_0000000000000001
# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
#
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
Thanks and Regards,
Jack
From: hfli@netitest.com <hfli@netitest.com>
Sent: January 4, 2019 11:47
To: 'matan@mellanox.com' <matan@mellanox.com>
Cc: 'users@dpdk.org' <users@dpdk.org>; 'azuredpdk@microsoft.com'
<azuredpdk@microsoft.com>
Subject: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17):
File exists"
Hi Matan,
Could you help us with the error below?
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Our DPDK application needs to deploy on Hyper-V and Azure Cloud, so we tried
it on Hyper-V first. It starts two processes: a client process using port1
and a server process using port2; port1 and port2 are in one internal subnet
on a virtual switch.
But only one process can start successfully; the other reports the error
"PMD: Could not add multiq qdisc (17): File exists". Our application runs
well on VMware/KVM/AWS. Is there any help for this?
Below is our env:
OS: Windows 10 and Hyper-V on it
Guest OS: CentOS 7.6 (kernel upgraded to 4.20.0)
DPDK version: 18.02.2
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
root:/# ifconfig -a
bond0: flags=5122<BROADCAST,MASTER,MULTICAST> mtu 1500
ether 46:28:ec:c8:7a:74 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 75 bytes 6284 (6.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 6284 (6.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.16.130 netmask 255.255.255.0 broadcast 192.168.16.255
inet6 fe80::78e3:1af8:3333:ff45 prefixlen 64 scopeid 0x20<link>
ether 00:15:5d:10:85:14 txqueuelen 1000 (Ethernet)
RX packets 5494 bytes 706042 (689.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2163 bytes 438205 (427.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:15:5d:10:85:15 txqueuelen 1000 (Ethernet)
RX packets 3131 bytes 518243 (506.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:16 txqueuelen 1000 (Ethernet)
RX packets 1707 bytes 163778 (159.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 693 bytes 70666 (69.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:17 txqueuelen 1000 (Ethernet)
RX packets 900 bytes 112256 (109.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1504 bytes 122428 (119.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root:/# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/#
Starting the server process succeeds
# ./VM_DPDK -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem
1500 --file-prefix server
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.server_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc1_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
Concurrently starting the client process fails
# ./VM_DPDK -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem
1500 --file-prefix client
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.client_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc0_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Thanks and Regards,
Jack
^ permalink raw reply [flat|nested] 9+ messages in thread
* [dpdk-users] DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
@ 2019-01-04 3:46 hfli
0 siblings, 0 replies; 9+ messages in thread
From: hfli @ 2019-01-04 3:46 UTC (permalink / raw)
To: matan; +Cc: users, azuredpdk
Hi Matan,
Could you help us with the error below?
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
Our DPDK application needs to deploy on Hyper-V and Azure Cloud, so we tried
it on Hyper-V first. It starts two processes: a client process using port1
and a server process using port2; port1 and port2 are in one internal subnet
on a virtual switch.
But only one process can start successfully; the other reports the error
"PMD: Could not add multiq qdisc (17): File exists". Our application runs
well on VMware/KVM/AWS. Is there any help for this?
Below is our env:
OS: Windows 10 and Hyper-V on it
Guest OS: CentOS 7.6 (kernel upgraded to 4.20.0)
DPDK version: 18.02.2
# uname -a
Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
root:/# ifconfig -a
bond0: flags=5122<BROADCAST,MASTER,MULTICAST> mtu 1500
ether 46:28:ec:c8:7a:74 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 75 bytes 6284 (6.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 75 bytes 6284 (6.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.16.130 netmask 255.255.255.0 broadcast 192.168.16.255
inet6 fe80::78e3:1af8:3333:ff45 prefixlen 64 scopeid 0x20<link>
ether 00:15:5d:10:85:14 txqueuelen 1000 (Ethernet)
RX packets 5494 bytes 706042 (689.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2163 bytes 438205 (427.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
ether 00:15:5d:10:85:15 txqueuelen 1000 (Ethernet)
RX packets 3131 bytes 518243 (506.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:16 txqueuelen 1000 (Ethernet)
RX packets 1707 bytes 163778 (159.9 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 693 bytes 70666 (69.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST> mtu 1500
ether 00:15:5d:10:85:17 txqueuelen 1000 (Ethernet)
RX packets 900 bytes 112256 (109.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1504 bytes 122428 (119.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root:/# ethtool -i port1
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/# ethtool -i port2
driver: hv_netvsc
version:
firmware-version: N/A
expansion-rom-version:
bus-info:
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
root:/#
Starting the server process succeeds:
# ./VM_DPDK -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem 1500 --file-prefix server
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.server_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc1_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
Concurrently starting the client process fails:
# ./VM_DPDK -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem 1500 --file-prefix client
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Multi-process socket /var/log/.client_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable
clock cycles !
PMD: net_failsafe: Initializing Fail-safe PMD for
net_failsafe_net_vdev_netvsc0_id0
PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
PMD: Could not add multiq qdisc (17): File exists
PMD: dtap0: failed to create multiq qdisc.
PMD: Disabling rte flow support: File exists(17)
PMD: Remote feature requires flow support.
PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
06:5f:a6:0a:b4:f9
device
EAL: Bus (vdev) probe failed.
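[Editor's aside, not from the original thread: since errno 17 is EEXIST, a hypothetical first diagnostic step is to check whether a multiq root qdisc is already attached to the TAP device named in the log (dtap0 here) and, if it is stale, remove it so the next PMD instance can recreate it. This sketch assumes iproute2's tc is available; adjust DEV to the TAP name your log reports.]

```shell
# Hypothetical cleanup sketch for the EEXIST on the tap PMD's device.
DEV=dtap0   # TAP interface name from the PMD log; adjust as needed

# Show any qdisc already attached to the TAP device.
QDISC=$(tc qdisc show dev "$DEV" 2>/dev/null || true)
if [ -n "$QDISC" ]; then
    echo "existing qdisc: $QDISC"
    # Deleting the stale root qdisc lets the next PMD instance recreate it.
    tc qdisc del dev "$DEV" root 2>/dev/null || true
else
    echo "no qdisc found on $DEV (device absent or already clean)"
fi
```

Note that this only clears a leftover qdisc; two live processes sharing the same TAP name will still collide, so running them concurrently against one dtap0 remains the underlying conflict.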
Thanks and Regards,
Jack
end of thread, other threads:[~2019-01-09 19:55 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-12-30 12:18 [dpdk-users] DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists" hfli
2019-01-04 3:46 hfli
2019-01-04 5:27 [dpdk-users] Re: " hfli
2019-01-07 21:26 ` [dpdk-users] " Stephen Hemminger
2019-01-08 1:36 ` [dpdk-users] Re: " hfli
2019-01-08 19:32 ` [dpdk-users] " Stephen Hemminger
2019-01-08 19:35 ` Stephen Hemminger
2019-01-09 0:38 ` [dpdk-users] Re: " hfli
2019-01-09 19:55 ` [dpdk-users] " Stephen Hemminger