From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
To: "Gavin Hu (Arm Technology China)", users@dpdk.org, pierre.laurent@emutex.com
Date: Fri, 11 Jan 2019 21:29:22 +0000
Subject: Re: [dpdk-users] users Digest, Vol 168, Issue 3

> > Send users mailing list submissions to
> >         users@dpdk.org
> >
> > To subscribe or unsubscribe via the World Wide Web, visit
> >         https://mails.dpdk.org/listinfo/users
> > or, via email, send a message with subject or body 'help' to
> >         users-request@dpdk.org
> >
> > You can reach the person managing the list at
> >         users-owner@dpdk.org
> >
> > When replying, please edit your Subject line so it is more specific
> > than "Re: Contents of users digest..."
> >
> >
> > Today's Topics:
> >
> >    1. rte flow does not work on X550 - "Not supported by L2 tunnel
> >       filter" (Uiu Uioreanu)
> >    2. Re: HOW performance to run DPDK at ARM64 arch? (Pierre Laurent)
> >    3. DPDK procedure start Error: "PMD: Could not add multiq qdisc
> >       (17): File exists" (hfli@netitest.com)
> >    4. mempool: creating pool out of an already allocated memory
> >       (Pradeep Kumar Nalla)
> >    5. DPDK procedure start Error: "PMD: Could not add multiq qdisc
> >       (17): File exists" (hfli@netitest.com)
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Wed, 9 Jan 2019 16:36:23 +0200
> > From: Uiu Uioreanu
> > To: users@dpdk.org
> > Subject: [dpdk-users] rte flow does not work on X550 - "Not supported
> >         by L2 tunnel filter"
> > Content-Type: text/plain; charset="UTF-8"
> >
> > Hi,
> >
> > I am trying to use rte flow on a X550 (ixgbe) and it does not work.
> >
> > For example, I used testpmd (sudo ./testpmd -c 0xf -n 1 -- -i) to
> > validate some simple flow rules:
> > - flow validate 0 ingress pattern eth / ipv4 / udp / end actions drop / end
> > - flow validate 0 ingress pattern eth / ipv4 / udp / end actions end
> > - flow validate 0 ingress pattern eth / end actions drop end
> > - etc
> >
> > Every time, I receive "caught error type 9 (specific pattern item):
> > cause: 0x7ffddca42768, Not supported by L2 tunnel filter".
> > I also tried to write a sample application using rte flow, but it also
> > gives error 9 and the "Not supported by L2 tunnel filter" message.
> >
> > I don't understand what the problem is. Is the pattern from the flow
> > validate command wrong? Does the port need any additional
> > configuration (apart from binding the X550 NIC to
> > igb_uio/uio_pci_generic)?
> >
> > I am using DPDK 17.11.
> >
> > Thanks,
> > Uiu
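A note on the report above: ixgbe tries each of its hardware filter parsers in turn, so "Not supported by L2 tunnel filter" is only the message from the last parser that rejected the pattern; the real cause may be an earlier one (the ntuple parser, for instance, generally wants an explicit spec/mask on the IP/UDP items). For reference, a minimal, untested C sketch of the first testpmd rule expressed through the rte_flow API (DPDK 17.11-era signatures; port 0 assumed):

#include <rte_flow.h>

/* Validate "ingress pattern eth / ipv4 / udp / end actions drop / end".
 * Items carry no spec/mask here (match-any); some ixgbe parsers may
 * reject that, which is part of what the question is probing. */
static int
validate_udp_drop(uint16_t port_id)
{
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                { .type = RTE_FLOW_ITEM_TYPE_UDP },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_DROP },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error err;

        /* Returns 0 if the rule could be created; on failure err.message
         * holds the PMD's reason, e.g. "Not supported by L2 tunnel filter". */
        return rte_flow_validate(port_id, &attr, pattern, actions, &err);
}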
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Thu, 27 Dec 2018 16:41:57 +0000
> > From: Pierre Laurent
> > To: "users@dpdk.org"
> > Subject: Re: [dpdk-users] HOW performance to run DPDK at ARM64 arch?
> > Message-ID: <2736c618-8a28-a267-d05f-93021a3d5004@emutex.com>
> > Content-Type: text/plain; charset="utf-8"
> >
> > Hi,
> >
> > Regarding your question 2, the Tx+Rx numbers you get look strangely
> > like you are trying to use full-duplex traffic on a PCIe x4 slot.
> >
> > The real bandwidth needed by an interface is approximately
> > ((pkt size + 48) * pps).
> >
> > 48 bytes is the approximate per-packet overhead for NIC descriptors
> > and PCIe framing. This is an undocumented heuristic.
> >
> > I guess you are using the default DPDK options, so the Ethernet FCS is
> > not in the PCIe bandwidth (stripped by the NIC on RX, generated by the
> > NIC on TX). Same for the 20 bytes of Ethernet preamble and inter-frame gap.
> >
> > If I assume you are using 60-byte packets: (60 + 48) * (14 + 6) * 8 =
> > approx 17 Gbps == more or less the bandwidth of a bidirectional x4
> > interface.
> >
> > Tools like "lspci" and "dmidecode" will help you investigate the real
> > capabilities of the PCIe slots where your 82599 cards are plugged in.
> >
> > The output of dmidecode looks like the following example, and x2, x4,
> > x8, x16 indicate the number of lanes an interface will be able to use.
> > The more lanes, the faster.
> >
> >     System Slot Information
> >             Designation: System Slot 1
> >             Type: x8 PCI Express
> >             Current Usage: Available
> >             Length: Long
> >             ID: 1
> >             Characteristics:
> >                     3.3 V is provided
> >
> > To use a 82599 at full bidirectional rate, you need at least a x8
> > interface (1 port) or a x16 interface (2 ports).
> >
> > Regards,
> >
> > Pierre
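To make Pierre's arithmetic easy to rerun with other frame sizes and rates, here is a small self-contained sketch of the same heuristic (the 48-byte per-packet overhead is his stated approximation, not a specified value):

#include <stdio.h>

/* Heuristic from above: PCIe bandwidth ~= (pkt_size + 48 B) * pps.
 * FCS and preamble are excluded since they never cross PCIe. */
static double
pcie_gbps_needed(double pkt_bytes, double mpps)
{
        /* bytes * Mpps * 8 bits / 1000 = Gbps */
        return (pkt_bytes + 48.0) * mpps * 8.0 / 1000.0;
}

int
main(void)
{
        /* 60-byte frames at 14 + 6 = 20 Mpps total across both directions */
        printf("%.1f Gbps\n", pcie_gbps_needed(60.0, 14.0 + 6.0)); /* ~17.3 */
        return 0;
}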
> > On 27/12/2018 09:24, ????? wrote:
> >
> > > recently, I have been debugging DPDK 18.08 on my arm64 machine, with
> > > DPDK-pktgen 3.5.2, but the performance is very low for bidirectional
> > > traffic with an x86 machine.
> > >
> > > here is my data:
> > > hardware conditions:
> > >     arm64: CPU - 64 cores, CPU freq: 1.5 GHz
> > >            MEM - 64 GiB
> > >            NIC - 82599ES dual port
> > >     x86:   CPU - 4 cores, CPU freq: 3.2 GHz
> > >            MEM - 4 GiB
> > >            NIC - 82599ES dual port
> > > software conditions:
> > >     system kernel:
> > >         arm64: linux-4.4.58
> > >         x86: ubuntu16.04-4.4.0-generic
> > >     tools:
> > >         DPDK 18.08, DPDK-pktgen 3.5.2
> > >
> > > test:
> > >     |---------|   bi-directional   |---------|
> > >     |  arm64  | port0 <-->  port0  |   x86   |
> > >     |---------|                    |---------|
> > >
> > > result:
> > >                        arm64              x86
> > >     Pkts/s (Rx/Tx)     10.2/6.0 Mpps      6.0/14.8 Mpps
> > >     MBits/s (Rx/Tx)    7000/3300 MBits/s  3300/9989 MBits/s
> > >
> > > Questions:
> > > 1. Why is DPDK performance so much worse on the arm64 machine than
> > >    on the x86 machine?

Appreciate your efforts trying to run DPDK on arm64. Depending on the
micro-architecture you might not see similar performance. This is due to the
positioning of that micro-architecture. Some micro-architectures bring
smaller cores but higher density (a large number of small cores). In these
cases it is better to look at the performance of the complete socket rather
than a single core.

> One reason is 1.5 GHz vs. 3.2 GHz; other possible reasons may include CPU
> affinity, crossing NUMA nodes, and hugepage sizes. Could you check these
> settings?

> > > 2. Above, the Tx direction does not reach full rate. Why do Rx and
> > >    Tx affect each other?

You might have to tune the RX and TX buffer depths.
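A minimal sketch of what that tuning looks like at queue-setup time (the depths are illustrative, not a tuned recommendation; rte_eth_dev_adjust_nb_rx_tx_desc() clamps the request to what the PMD supports, 4096 descriptors max on 82599):

#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Request deeper RX/TX rings for one port/queue pair. */
static int
setup_deep_rings(uint16_t port_id, unsigned int socket_id,
                 struct rte_mempool *mb_pool)
{
        uint16_t nb_rxd = 4096, nb_txd = 4096; /* requested depths */
        int ret;

        /* Clamp the requested depths to the PMD's supported range */
        ret = rte_eth_dev_adjust_nb_rx_tx_desc(port_id, &nb_rxd, &nb_txd);
        if (ret != 0)
                return ret;

        ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id,
                                     NULL, mb_pool);
        if (ret != 0)
                return ret;
        return rte_eth_tx_queue_setup(port_id, 0, nb_txd, socket_id, NULL);
}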
> >
> > ------------------------------
> >
> > Message: 3
> > Date: Sun, 30 Dec 2018 20:18:46 +0800
> > From: hfli@netitest.com
> > To: users@dpdk.org
> > Subject: [dpdk-users] DPDK procedure start Error: "PMD: Could not add
> >         multiq qdisc (17): File exists"
> > Content-Type: text/plain; charset="us-ascii"
> >
> > Hi Admin,
> >
> > We want to deploy our DPDK application on Hyper-V and the Azure cloud,
> > so we tried it on Hyper-V first. It needs to start 2 processes: a
> > client process using port1 and a server process using port2, with
> > port1 and port2 in one internal subnet on a virtual switch.
> >
> > But only one process can be started successfully; the other reports
> > the error "PMD: Could not add multiq qdisc (17): File exists". Our
> > application runs well on VMware/KVM/AWS. Is there any help for this?
> >
> > Below is our env:
> >
> > OS: Windows 10 and Hyper-V on it
> > Guest OS: CentOS 7.6 (kernel upgraded to 4.20.0)
> > DPDK version: 18.02.2
> >
> > # uname -a
> > Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
> > 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
> >
> > root:/# ifconfig -a
> > bond0: flags=5122 mtu 1500
> >         ether 46:28:ec:c8:7a:74 txqueuelen 1000 (Ethernet)
> >         RX packets 0 bytes 0 (0.0 B)
> >         RX errors 0 dropped 0 overruns 0 frame 0
> >         TX packets 0 bytes 0 (0.0 B)
> >         TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >
> > lo: flags=73 mtu 65536
> >         inet 127.0.0.1 netmask 255.0.0.0
> >         inet6 ::1 prefixlen 128 scopeid 0x10
> >         loop txqueuelen 1000 (Local Loopback)
> >         RX packets 75 bytes 6284 (6.1 KiB)
> >         RX errors 0 dropped 0 overruns 0 frame 0
> >         TX packets 75 bytes 6284 (6.1 KiB)
> >         TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >
> > mgmt1: flags=4163 mtu 1500
> >         inet 192.168.16.130 netmask 255.255.255.0 broadcast 192.168.16.255
> >         inet6 fe80::78e3:1af8:3333:ff45 prefixlen 64 scopeid 0x20
> >         ether 00:15:5d:10:85:14 txqueuelen 1000 (Ethernet)
> >         RX packets 5494 bytes 706042 (689.4 KiB)
> >         RX errors 0 dropped 0 overruns 0 frame 0
> >         TX packets 2163 bytes 438205 (427.9 KiB)
> >         TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >
> > mgmt2: flags=4163 mtu 1500
> >         ether 00:15:5d:10:85:15 txqueuelen 1000 (Ethernet)
> >         RX packets 3131 bytes 518243 (506.0 KiB)
> >         RX errors 0 dropped 0 overruns 0 frame 0
> >         TX packets 0 bytes 0 (0.0 B)
> >         TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >
> > port1: flags=4675 mtu 1500
> >         ether 00:15:5d:10:85:16 txqueuelen 1000 (Ethernet)
> >         RX packets 1707 bytes 163778 (159.9 KiB)
> >         RX errors 0 dropped 0 overruns 0 frame 0
> >         TX packets 693 bytes 70666 (69.0 KiB)
> >         TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >
> > port2: flags=4675 mtu 1500
> >         ether 00:15:5d:10:85:17 txqueuelen 1000 (Ethernet)
> >         RX packets 900 bytes 112256 (109.6 KiB)
> >         RX errors 0 dropped 0 overruns 0 frame 0
> >         TX packets 1504 bytes 122428 (119.5 KiB)
> >         TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
> >
> > root:/# ethtool -i port1
> > driver: hv_netvsc
> > version:
> > firmware-version: N/A
> > expansion-rom-version:
> > bus-info:
> > supports-statistics: yes
> > supports-test: no
> > supports-eeprom-access: no
> > supports-register-dump: no
> > supports-priv-flags: no
> >
> > root:/# ethtool -i port2
> > driver: hv_netvsc
> > version:
> > firmware-version: N/A
> > expansion-rom-version:
> > bus-info:
> > supports-statistics: yes
> > supports-test: no
> > supports-eeprom-access: no
> > supports-register-dump: no
> > supports-priv-flags: no
> >
> > The server process starts successfully:
> >
> > # ./Tester -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" \
> >     --socket-mem 1500 --file-prefix server
> > EAL: Detected 4 lcore(s)
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Multi-process socket /var/log/.server_unix
> > EAL: Probing VFIO support...
> > EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
> > PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc1_id0
> > PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
> > PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
> > PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
> >
> > Concurrently starting the client process fails:
> >
> > # ./Tester -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" \
> >     --socket-mem 1500 --file-prefix client
> > EAL: Detected 4 lcore(s)
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Multi-process socket /var/log/.client_unix
> > EAL: Probing VFIO support...
> > EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
> > PMD: net_failsafe: Initializing Fail-safe PMD for net_failsafe_net_vdev_netvsc0_id0
> > PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
> > PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
> > PMD: Could not add multiq qdisc (17): File exists
> > PMD: dtap0: failed to create multiq qdisc.
> > PMD: Disabling rte flow support: File exists(17)
> > PMD: Remote feature requires flow support.
> > PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
> > EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
> > PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
> > PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
> > vdev_probe(): failed to initialize device
> > EAL: Bus (vdev) probe failed.
> >
> > Thanks and Regards,
> >
> > Jack
> >
> > ------------------------------
> >
> > Message: 4
> > Date: Wed, 2 Jan 2019 11:29:52 +0000
> > From: Pradeep Kumar Nalla
> > To: "users@dpdk.org"
> > Subject: [dpdk-users] mempool: creating pool out of an already
> >         allocated memory
> > Content-Type: text/plain; charset="us-ascii"
> >
> > Hello
> >
> > Is there a way or API to create a mempool out of already allocated
> > memory?
> >
> > Thanks
> > Pradeep.
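One possible answer, sketched under DPDK 18.x-era APIs: create an empty pool and populate it from the caller's buffer with rte_mempool_populate_virt(). The helper name and parameters below are illustrative, and size/alignment checks are omitted; rte_mempool_obj_iter() can then run a per-object initializer if needed.

#include <rte_lcore.h>
#include <rte_mempool.h>

/* Illustrative helper: wrap memory the caller already owns in a mempool. */
static struct rte_mempool *
mempool_from_buffer(char *buf, size_t buf_len, unsigned int n,
                    unsigned int elt_size, size_t pg_sz)
{
        struct rte_mempool *mp;

        /* Pool shell only: no element memory is allocated here */
        mp = rte_mempool_create_empty("ext_pool", n, elt_size,
                                      0 /* cache */, 0 /* priv size */,
                                      rte_socket_id(), 0 /* flags */);
        if (mp == NULL)
                return NULL;

        rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL);

        /* Carve the elements out of the provided buffer; pg_sz tells the
         * library how to split it when it is not IOVA-contiguous */
        if (rte_mempool_populate_virt(mp, buf, buf_len, pg_sz,
                                      NULL, NULL) < 0) {
                rte_mempool_free(mp);
                return NULL;
        }
        return mp;
}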
> >
> > ------------------------------
> >
> > Message: 5
> > Date: Fri, 4 Jan 2019 11:46:48 +0800
> > From: hfli@netitest.com
> > Cc: users@dpdk.org
> > Subject: [dpdk-users] DPDK procedure start Error: "PMD: Could not add
> >         multiq qdisc (17): File exists"
> > Content-Type: text/plain; charset="us-ascii"
> >
> > Hi Matan,
> >
> > Could you help us with the error below?
> >
> > PMD: Could not add multiq qdisc (17): File exists
> > PMD: dtap0: failed to create multiq qdisc.
> > PMD: Disabling rte flow support: File exists(17)
> > PMD: Remote feature requires flow support.
> > PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
> > EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
> > PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
> > PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
> > vdev_probe(): failed to initialize device
> > EAL: Bus (vdev) probe failed.
> >
> > [The remainder of this message repeats the problem description,
> > environment details, and logs from Message 3 above verbatim, with the
> > binary named ./VM_DPDK instead of ./Tester.]
> >
> > Thanks and Regards,
> >
> > Jack
> >
> > End of users Digest, Vol 168, Issue 3
> > *************************************