DPDK usage discussions
From: Stephen Hemminger <sthemmin@microsoft.com>
To: "hfli@netitest.com" <hfli@netitest.com>,
	"matan@mellanox.com" <matan@mellanox.com>,
	KY Srinivasan <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"
Date: Mon, 7 Jan 2019 21:26:40 +0000
Message-ID: <SN6PR2101MB0912A6CF4D0D65B4DBE2E58ACC890@SN6PR2101MB0912.namprd21.prod.outlook.com>
In-Reply-To: <001401d4a3ee$279bc190$76d344b0$@netitest.com>

Which Linux distribution are you using? Ubuntu recently moved the multiq queue discipline (sch_multiq) into the linux-modules-extra package, which is not installed by default. That packaging change broke DPDK's TAP PMD.
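
If that is the cause, a minimal check from inside the guest (the apt package name below assumes an Ubuntu kernel; adjust for your distribution):

  # does the running kernel have the multiq qdisc module at all?
  modinfo sch_multiq

  # on Ubuntu, it now ships in linux-modules-extra for the running kernel
  sudo apt-get install linux-modules-extra-"$(uname -r)"

  # verify it loads; the TAP PMD needs it to create its qdisc
  sudo modprobe sch_multiq
  lsmod | grep sch_multiq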
________________________________
From: hfli@netitest.com <hfli@netitest.com>
Sent: Thursday, January 3, 2019 9:27 PM
To: matan@mellanox.com; Stephen Hemminger; KY Srinivasan; Haiyang Zhang
Cc: users@dpdk.org
Subject: Re: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"


Hi All,



I just tried DPDK 18.11 with testpmd, and it outputs the same error. Any help with this?



Run the server process

# ./build/app/testpmd -l 2,3 -n3 --vdev="net_vdev_netvsc1,iface=port2" --file-prefix server -- --port-topology=chained -i --nb-cores=1 --nb-ports=1 --total-num-mbufs=2048

EAL: Detected 4 lcore(s)

EAL: Detected 1 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/server/mp_socket

EAL: No free hugepages reported in hugepages-1048576kB

EAL: Probing VFIO support...

EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !

rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0

Interactive-mode selected

Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc

Configuring Port 0 (socket 0)

Port 0: 00:15:5D:10:85:17

Checking link statuses...

Done

testpmd>
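
At this point the server has created its TAP device (dtap0, per the rte_pmd_tap_probe line above). Assuming the standard iproute2 tools, this can be confirmed before starting the client:

  # the TAP device backing the failsafe port should now exist
  ip link show dtap0

  # and the TAP PMD's multiq qdisc should be attached at its root
  tc qdisc show dev dtap0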



Run the client process

# ./build/app/testpmd -l 0,1 -n3 --vdev="net_vdev_netvsc0,iface=port1" --file-prefix client -- --port-topology=chained -i --nb-cores=1 --nb-ports=1 --total-num-mbufs=2048

EAL: Detected 4 lcore(s)

EAL: Detected 1 NUMA nodes

EAL: Multi-process socket /var/run/dpdk/client/mp_socket

EAL: No free hugepages reported in hugepages-1048576kB

EAL: Probing VFIO support...

EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !

rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap0

qdisc_create_multiq(): Could not add multiq qdisc (17): File exists

eth_dev_tap_create(): dtap0: failed to create multiq qdisc.

eth_dev_tap_create():  Disabling rte flow support: File exists(17)

eth_dev_tap_create(): Remote feature requires flow support.

eth_dev_tap_create(): TAP Unable to initialize net_tap_vsc0

EAL: Driver cannot attach the device (net_tap_vsc0)

EAL: Failed to attach device on primary process

net_failsafe: sub_device 1 probe failed (File exists)

rte_pmd_tap_probe(): Initializing pmd_tap for net_tap_vsc0 as dtap1

Interactive-mode selected

Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.

testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=2048, size=2176, socket=0

testpmd: preferred mempool ops selected: ring_mp_mc

Cannot set owner to port 1 already owned by Fail-safe_0000000000000001

Configuring Port 0 (socket 0)

Port 0: 06:64:AA:64:57:9D

Checking link statuses...

Done

testpmd> Cannot set owner to port 1 already owned by Fail-safe_0000000000000001

Cannot set owner to port 1 already owned by Fail-safe_0000000000000001

Cannot set owner to port 1 already owned by Fail-safe_0000000000000001



# ethtool -i port1

driver: hv_netvsc

version:

firmware-version: N/A

expansion-rom-version:

bus-info:

supports-statistics: yes

supports-test: no

supports-eeprom-access: no

supports-register-dump: no

supports-priv-flags: no

# ethtool -i port2

driver: hv_netvsc

version:

firmware-version: N/A

expansion-rom-version:

bus-info:

supports-statistics: yes

supports-test: no

supports-eeprom-access: no

supports-register-dump: no

supports-priv-flags: no

#



# uname -a

Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux



Thanks and Regards,

Jack

From: hfli@netitest.com <hfli@netitest.com>
Sent: January 4, 2019 11:47
To: 'matan@mellanox.com' <matan@mellanox.com>
Cc: 'users@dpdk.org' <users@dpdk.org>; 'azuredpdk@microsoft.com' <azuredpdk@microsoft.com>
Subject: DPDK procedure start Error: "PMD: Could not add multiq qdisc (17): File exists"



Hi Matan,



Could you help us with the error below?



PMD: Could not add multiq qdisc (17): File exists

PMD: dtap0: failed to create multiq qdisc.

PMD:  Disabling rte flow support: File exists(17)

PMD: Remote feature requires flow support.

PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0

EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)

PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)

PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9

vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9

device

EAL: Bus (vdev) probe failed.





Our DPDK application is meant to be deployed on Hyper-V and the Azure cloud, so we are trying it on Hyper-V first. It needs to start two processes: a client process using port1 and a server process using port2, with port1 and port2 on one internal subnet of a virtual switch.



But only one process can be started successfully; the other reports the error "PMD: Could not add multiq qdisc (17): File exists". Our application runs fine on VMware/KVM/AWS. Is there any help for this?
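
For reference, the logs suggest both vdev_netvsc instances back onto a TAP device with the same default name (dtap0 above), so the second process finds a multiq qdisc already installed at the root, hence errno 17 (EEXIST). A diagnostic sketch, assuming standard iproute2 tools and the dtap0 name taken from the logs:

  # list TAP/TUN devices left behind by a previous or concurrent run
  ip link show type tun

  # "File exists(17)" means a multiq qdisc is already attached here
  tc qdisc show dev dtap0

  # if a stale dtap0 survived a crashed run, remove it so the next
  # process can recreate it cleanly
  ip link delete dev dtap0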



Below is our environment:



Host OS: Windows 10 with Hyper-V

Guest OS: CentOS 7.6 (kernel upgraded to 4.20.0)

DPDK version: 18.02.2



# uname -a

Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux



root:/# ifconfig -a

bond0: flags=5122<BROADCAST,MASTER,MULTICAST>  mtu 1500

        ether 46:28:ec:c8:7a:74  txqueuelen 1000  (Ethernet)

        RX packets 0  bytes 0 (0.0 B)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 0  bytes 0 (0.0 B)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536

        inet 127.0.0.1  netmask 255.0.0.0

        inet6 ::1  prefixlen 128  scopeid 0x10<host>

        loop  txqueuelen 1000  (Local Loopback)

        RX packets 75  bytes 6284 (6.1 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 75  bytes 6284 (6.1 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        inet 192.168.16.130  netmask 255.255.255.0  broadcast 192.168.16.255

        inet6 fe80::78e3:1af8:3333:ff45  prefixlen 64  scopeid 0x20<link>

        ether 00:15:5d:10:85:14  txqueuelen 1000  (Ethernet)

        RX packets 5494  bytes 706042 (689.4 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 2163  bytes 438205 (427.9 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        ether 00:15:5d:10:85:15  txqueuelen 1000  (Ethernet)

        RX packets 3131  bytes 518243 (506.0 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 0  bytes 0 (0.0 B)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST>  mtu 1500

        ether 00:15:5d:10:85:16  txqueuelen 1000  (Ethernet)

        RX packets 1707  bytes 163778 (159.9 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 693  bytes 70666 (69.0 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST>  mtu 1500

        ether 00:15:5d:10:85:17  txqueuelen 1000  (Ethernet)

        RX packets 900  bytes 112256 (109.6 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 1504  bytes 122428 (119.5 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



root:/# ethtool -i port1

driver: hv_netvsc

version:

firmware-version: N/A

expansion-rom-version:

bus-info:

supports-statistics: yes

supports-test: no

supports-eeprom-access: no

supports-register-dump: no

supports-priv-flags: no

root:/# ethtool -i port2

driver: hv_netvsc

version:

firmware-version: N/A

expansion-rom-version:

bus-info:

supports-statistics: yes

supports-test: no

supports-eeprom-access: no

supports-register-dump: no

supports-priv-flags: no

root:/#



Starting the server process succeeds

# ./VM_DPDK -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem 1500 --file-prefix server

EAL: Detected 4 lcore(s)

EAL: No free hugepages reported in hugepages-1048576kB

EAL: Multi-process socket /var/log/.server_unix

EAL: Probing VFIO support...

EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !

PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc1_id0

PMD: net_failsafe: Creating fail-safe device on NUMA socket 0

PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0

PMD: net_failsafe: MAC address is 00:15:5d:10:85:17



Concurrently starting the client process fails

# ./VM_DPDK -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem 1500 --file-prefix client

EAL: Detected 4 lcore(s)

EAL: No free hugepages reported in hugepages-1048576kB

EAL: Multi-process socket /var/log/.client_unix

EAL: Probing VFIO support...

EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !

PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc0_id0

PMD: net_failsafe: Creating fail-safe device on NUMA socket 0

PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0

PMD: Could not add multiq qdisc (17): File exists

PMD: dtap0: failed to create multiq qdisc.

PMD:  Disabling rte flow support: File exists(17)

PMD: Remote feature requires flow support.

PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0

EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)

PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)

PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9

vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9

device

EAL: Bus (vdev) probe failed.







Thanks and Regards,

Jack

