DPDK usage discussions
* Re: [dpdk-users] users Digest, Vol 168, Issue 3
       [not found] <mailman.13312.1547052929.7586.users@dpdk.org>
@ 2019-01-11 13:45 ` Gavin Hu (Arm Technology China)
  2019-01-11 21:29   ` Honnappa Nagarahalli
  0 siblings, 1 reply; 2+ messages in thread
From: Gavin Hu (Arm Technology China) @ 2019-01-11 13:45 UTC (permalink / raw)
  To: users, pierre.laurent
  Cc: Honnappa Nagarahalli, Gavin Hu (Arm Technology China)



> -----Original Message-----
> From: users <users-bounces@dpdk.org> On Behalf Of users-
> request@dpdk.org
> Sent: Thursday, January 10, 2019 12:55 AM
> To: users@dpdk.org
> Subject: users Digest, Vol 168, Issue 3
>
> Send users mailing list submissions to
> users@dpdk.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://mails.dpdk.org/listinfo/users
> or, via email, send a message with subject or body 'help' to
> users-request@dpdk.org
>
> You can reach the person managing the list at
> users-owner@dpdk.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of users digest..."
>
>
> Today's Topics:
>
>    1. rte flow does not work on X550 - "Not supported by L2 tunnel
>       filter" (Uiu Uioreanu)
>    2. Re: HOW performance to run DPDK at ARM64 arch? (Pierre Laurent)
>    3. DPDK procedure start Error: "PMD: Could not add multiq qdisc
>       (17): File exists" (hfli@netitest.com)
>    4. mempool: creating pool out of an already allocated memory
>       (Pradeep Kumar Nalla)
>    5. DPDK procedure start Error: "PMD: Could not add multiq qdisc
>       (17): File exists" (hfli@netitest.com)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 9 Jan 2019 16:36:23 +0200
> From: Uiu Uioreanu <uiuoreanu@gmail.com>
> To: users@dpdk.org
> Subject: [dpdk-users] rte flow does not work on X550 - "Not supported
> by L2 tunnel filter"
> Message-ID:
> <CAFAKT1ykLqJnpXiPBpc9+v=hOhwowXJoxkb_e=qDkN-
> q_fpWhg@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi,
>
> I am trying to use rte flow on an X550 (ixgbe) and it does not work.
>
> For example, I used testpmd (sudo ./testpmd -c 0xf -n 1 -- -i) to
> validate some simple flow rules:
> - flow validate 0 ingress pattern eth / ipv4 / udp / end actions drop / end
> - flow validate 0 ingress pattern eth / ipv4 / udp / end actions end
> - flow validate 0 ingress pattern eth / end actions drop end
> - etc
>
> Every time, I receive "caught error type 9 (specific pattern item):
> cause: 0x7ffddca42768, Not supported by L2 tunnel filter".
> I also tried to make a sample application to use rte flow, but it also
> gives the error 9 and the "Not supported by L2 tunnel filter" message.
>
> I don't understand what the problem is. Is the pattern from the flow
> validate command wrong? Does the port need any additional configuration
> (apart from binding the X550 NIC to igb_uio/uio_pci_generic)?
>
> I am using DPDK 17.11.
>
> Thanks,
> Uiu
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 27 Dec 2018 16:41:57 +0000
> From: Pierre Laurent <pierre.laurent@emutex.com>
> To: "users@dpdk.org" <users@dpdk.org>
> Subject: Re: [dpdk-users] HOW performance to run DPDK at ARM64 arch?
> Message-ID: <2736c618-8a28-a267-d05f-93021a3d5004@emutex.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> Regarding your question 2, the TX+RX numbers you get look very much as if
> you are trying to run full-duplex traffic over a PCIe x4 link.
>
> The real PCIe bandwidth needed by an interface is approximately
> ((packet size + 48) * pps).
>
> The 48 bytes are the approximate per-packet cost of NIC descriptors and
> PCIe overheads. This is an undocumented heuristic.
>
> I guess you are using the default DPDK options, so the Ethernet FCS does
> not consume PCIe bandwidth (it is stripped by the NIC on RX and generated
> by the NIC on TX). The same goes for the 20 bytes of Ethernet preamble and
> inter-frame gap.
>
>
> If I assume you are using 60-byte packets: (60 + 48) bytes * (14 + 6) Mpps
> * 8 bits = approx 17 Gbps, which is more or less the bandwidth of a
> bidirectional PCIe x4 interface.
>
>
> Tools like "lspci" and "dmidecode" will help you to investigate the real
> capabilities of the PCIe slots where your 82599 cards are plugged in.
>
> The output of dmidecode looks like the following example; x2, x4, x8 and
> x16 indicate the number of lanes an interface will be able to use. The
> more lanes, the faster.
>
> System Slot Information
>     Designation: System Slot 1
>     Type: x8 PCI Express
>     Current Usage: Available
>     Length: Long
>     ID: 1
>     Characteristics:
>         3.3 V is provided
>
>
> To use an 82599 at full bidirectional rate, you need at least an x8
> interface (1 port) or an x16 interface (2 ports).
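
As a quick cross-check of the advice above, the supported and negotiated link width of the slot can be read on the running system. A minimal sketch, where 03:00.0 is only a placeholder for the real PCI address of the 82599 port:

# lspci | grep -i 82599                    # find the NIC's PCI address
# lspci -vv -s 03:00.0 | grep -i lnk       # LnkCap = supported width, LnkSta = negotiated width
# dmidecode -t slot                        # slot capabilities reported by the BIOS

If LnkSta reports a narrower width than LnkCap, the card has trained at a narrower link than it supports, and the roughly 17 Gbps ceiling described above is the likely result.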
>
> Regards,
>
> Pierre
>
>
> On 27/12/2018 09:24, ????? wrote:
>
> Recently, I have been testing DPDK 18.08 on my arm64 machine with
> DPDK-pktgen 3.5.2, but the performance is very low for bidirectional
> traffic against an x86 machine.
>
> here is my data:
> hardware Conditions:
>         arm64:    CPU - 64 cores, CPUfreq: 1.5GHz
>                        MEM - 64 GiB
>                        NIC - 82599ES dual port
>         x86:        CPU - 4 cores, CPUfreq: 3.2GHz
>                        MEM - 4GiB
>                        NIC - 82599ES dual port
> software Conditions:
>          system kernel:
>                   arm64: linux-4.4.58
>                   x86: ubuntu16.04-4.4.0-generic
>          tools:
>                   DPDK18.08, DPDK-pktgen3.5.2
>
> test:
>        |----------|                bi-directional                |-----------|
>        | arm64 | port0 |           < - >            | port0 |     x86   |
>        |----------|                                                     |----------|
>
> result
>                                   arm64                                   x86
> Pkts/s (Rx/Tx)            10.2/6.0Mpps                       6.0/14.80Mpps
> MBits/s(Rx/Tx)             7000/3300 MBits/s             3300/9989 MBits/s
>
> Questions:
> 1. Why is DPDK performance so much worse on the arm64 architecture than
> on x86?

One reason is the clock speed (1.5 GHz vs. 3.2 GHz); other possible causes may include CPU affinity, crossing NUMA nodes, and hugepage sizes.
Could you check these settings?
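
For example (a minimal sketch; 0000:03:00.0 is only a placeholder for the 82599's real PCI address), the NUMA topology, the hugepage setup and the NIC's local NUMA node can be checked with:

# lscpu | grep -i numa                              # cores per NUMA node
# grep -i huge /proc/meminfo                        # hugepage size and counts
# cat /sys/bus/pci/devices/0000:03:00.0/numa_node   # NUMA node the NIC is attached to

The lcores passed to pktgen with -l and the memory reserved with --socket-mem should then be taken from the NUMA node reported for the NIC.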

> 2. As shown above, the TX direction does not reach full rate. Why do RX
> and TX affect each other?
>
>
>
>
>
> ------------------------------
>
> Message: 3
> Date: Sun, 30 Dec 2018 20:18:46 +0800
> From: <hfli@netitest.com>
> To: <users@dpdk.org>,<azuredpdk@microsoft.com>
> Subject: [dpdk-users] DPDK procedure start Error: "PMD: Could not add
> multiq qdisc (17): File exists"
> Message-ID: <000a01d4a039$d0755f00$71601d00$@netitest.com>
> Content-Type: text/plain;charset="us-ascii"
>
> Hi Admin,
>
>
>
> We want to deploy our DPDK application on Hyper-V and the Azure cloud, so
> we are trying it on Hyper-V first. It needs to start two processes: a
> client process using port1 and a server process using port2, with port1
> and port2 in one internal subnet on a virtual switch.
>
>
>
> But only one process can be started successfully; the other reports the
> error "PMD: Could not add multiq qdisc (17): File exists". Our application
> runs well on VMware/KVM/ASW. Is there any help for this?
>
>
>
> Below is our env:
>
>
>
> OS: Windows 10 and Hyper-V on it
>
> Guest OS: CentOS7.6(Upgrade kernel to 4.20.0)
>
> DPDK version: 18.02.2
>
>
>
> # uname -a
>
> Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
> 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
>
>
>
> root:/# ifconfig -a
>
> bond0: flags=5122<BROADCAST,MASTER,MULTICAST>  mtu 1500
>
>         ether 46:28:ec:c8:7a:74  txqueuelen 1000  (Ethernet)
>
>         RX packets 0  bytes 0 (0.0 B)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 0  bytes 0 (0.0 B)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
>
>         inet 127.0.0.1  netmask 255.0.0.0
>
>         inet6 ::1  prefixlen 128  scopeid 0x10<host>
>
>         loop  txqueuelen 1000  (Local Loopback)
>
>         RX packets 75  bytes 6284 (6.1 KiB)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 75  bytes 6284 (6.1 KiB)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>
>         inet 192.168.16.130  netmask 255.255.255.0  broadcast 192.168.16.255
>
>         inet6 fe80::78e3:1af8:3333:ff45  prefixlen 64  scopeid 0x20<link>
>
>         ether 00:15:5d:10:85:14  txqueuelen 1000  (Ethernet)
>
>         RX packets 5494  bytes 706042 (689.4 KiB)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 2163  bytes 438205 (427.9 KiB)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>
>         ether 00:15:5d:10:85:15  txqueuelen 1000  (Ethernet)
>
>         RX packets 3131  bytes 518243 (506.0 KiB)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 0  bytes 0 (0.0 B)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST>  mtu
> 1500
>
>         ether 00:15:5d:10:85:16  txqueuelen 1000  (Ethernet)
>
>         RX packets 1707  bytes 163778 (159.9 KiB)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 693  bytes 70666 (69.0 KiB)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST>  mtu
> 1500
>
>         ether 00:15:5d:10:85:17  txqueuelen 1000  (Ethernet)
>
>         RX packets 900  bytes 112256 (109.6 KiB)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 1504  bytes 122428 (119.5 KiB)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> root:/# ethtool -i port1
>
> driver: hv_netvsc
>
> version:
>
> firmware-version: N/A
>
> expansion-rom-version:
>
> bus-info:
>
> supports-statistics: yes
>
> supports-test: no
>
> supports-eeprom-access: no
>
> supports-register-dump: no
>
> supports-priv-flags: no
>
> root:/# ethtool -i port2
>
> driver: hv_netvsc
>
> version:
>
> firmware-version: N/A
>
> expansion-rom-version:
>
> bus-info:
>
> supports-statistics: yes
>
> supports-test: no
>
> supports-eeprom-access: no
>
> supports-register-dump: no
>
> supports-priv-flags: no
>
> root:/#
>
>
>
> Start server process successfully
>
> # ./Tester -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem
> 1500
> --file-prefix server
>
> EAL: Detected 4 lcore(s)
>
> EAL: No free hugepages reported in hugepages-1048576kB
>
> EAL: Multi-process socket /var/log/.server_unix
>
> EAL: Probing VFIO support...
>
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
> unreliable
> clock cycles !
>
> PMD: net_failsafe: Initializing Fail-safe PMD for
> net_failsafe_net_vdev_netvsc1_id0
>
> PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
>
> PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
>
> PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
>
>
>
> Concurrently starting the client process failed
>
> # ./Tester -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem
> 1500
> --file-prefix client
>
> EAL: Detected 4 lcore(s)
>
> EAL: No free hugepages reported in hugepages-1048576kB
>
> EAL: Multi-process socket /var/log/.client_unix
>
> EAL: Probing VFIO support...
>
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
> unreliable
> clock cycles !
>
> PMD: net_failsafe: Initializing Fail-safe PMD for
> net_failsafe_net_vdev_netvsc0_id0
>
> PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
>
> PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
>
> PMD: Could not add multiq qdisc (17): File exists
>
> PMD: dtap0: failed to create multiq qdisc.
>
> PMD:  Disabling rte flow support: File exists(17)
>
> PMD: Remote feature requires flow support.
>
> PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
>
> EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
>
> PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
>
> PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
>
> vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
> 06:5f:a6:0a:b4:f9
>
> device
>
> EAL: Bus (vdev) probe failed.
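
Both logs show the TAP sub-device being initialized as dtap0, and errno 17 is EEXIST, so the second process appears to collide with the tap interface and multiq qdisc already created by the first one. One way to inspect the existing state (assuming the interface name from the log) is:

# ip link show | grep dtap          # tap interfaces created by the TAP PMD
# tc qdisc show dev dtap0           # the qdisc that already exists on dtap0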
>
>
>
>
>
>
>
> Thanks and Regards,
>
> Jack
>
>
>
> ------------------------------
>
> Message: 4
> Date: Wed, 2 Jan 2019 11:29:52 +0000
> From: Pradeep Kumar Nalla <pnalla@marvell.com>
> To: "users@dpdk.org" <users@dpdk.org>
> Subject: [dpdk-users] mempool: creating pool out of an already
> allocated memory
> Message-ID:
> <BN8PR18MB2755E25C423D078FD7EFFE08C68C0@BN8PR18MB275
> 5.namprd18.prod.outlook.com>
>
> Content-Type: text/plain; charset="us-ascii"
>
> Hello
>
> Is there a way or API to create a mempool out of already allocated
> memory?
>
> Thanks
> Pradeep.
>
>
> ------------------------------
>
> Message: 5
> Date: Fri, 4 Jan 2019 11:46:48 +0800
> From: <hfli@netitest.com>
> To: <matan@mellanox.com>
> Cc: <users@dpdk.org>,<azuredpdk@microsoft.com>
> Subject: [dpdk-users] DPDK procedure start Error: "PMD: Could not add
> multiq qdisc (17): File exists"
> Message-ID: <000f01d4a3e0$1f6bb060$5e431120$@netitest.com>
> Content-Type: text/plain;charset="us-ascii"
>
> Hi Matan,
>
>
>
> Could you help us with the error below?
>
>
>
> PMD: Could not add multiq qdisc (17): File exists
>
> PMD: dtap0: failed to create multiq qdisc.
>
> PMD:  Disabling rte flow support: File exists(17)
>
> PMD: Remote feature requires flow support.
>
> PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
>
> EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
>
> PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
>
> PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
>
> vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
> 06:5f:a6:0a:b4:f9
>
> device
>
> EAL: Bus (vdev) probe failed.
>
>
>
>
>
> We want to deploy our DPDK application on Hyper-V and the Azure cloud, so
> we are trying it on Hyper-V first. It needs to start two processes: a
> client process using port1 and a server process using port2, with port1
> and port2 in one internal subnet on a virtual switch.
>
>
>
> But only one process can be started successfully; the other reports the
> error "PMD: Could not add multiq qdisc (17): File exists". Our application
> runs well on VMware/KVM/ASW. Is there any help for this?
>
>
>
> Below is our env:
>
>
>
> OS: Windows 10 and Hyper-V on it
>
> Guest OS: CentOS7.6(Upgrade kernel to 4.20.0)
>
> DPDK version: 18.02.2
>
>
>
> # uname -a
>
> Linux localhost.localdomain 4.20.0-1.el7.elrepo.x86_64 #1 SMP Sun Dec 23
> 20:11:51 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
>
>
>
> root:/# ifconfig -a
>
> bond0: flags=5122<BROADCAST,MASTER,MULTICAST>  mtu 1500
>
>         ether 46:28:ec:c8:7a:74  txqueuelen 1000  (Ethernet)
>
>         RX packets 0  bytes 0 (0.0 B)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 0  bytes 0 (0.0 B)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
>
>         inet 127.0.0.1  netmask 255.0.0.0
>
>         inet6 ::1  prefixlen 128  scopeid 0x10<host>
>
>         loop  txqueuelen 1000  (Local Loopback)
>
>         RX packets 75  bytes 6284 (6.1 KiB)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 75  bytes 6284 (6.1 KiB)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> mgmt1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>
>         inet 192.168.16.130  netmask 255.255.255.0  broadcast 192.168.16.255
>
>         inet6 fe80::78e3:1af8:3333:ff45  prefixlen 64  scopeid 0x20<link>
>
>         ether 00:15:5d:10:85:14  txqueuelen 1000  (Ethernet)
>
>         RX packets 5494  bytes 706042 (689.4 KiB)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 2163  bytes 438205 (427.9 KiB)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> mgmt2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>
>         ether 00:15:5d:10:85:15  txqueuelen 1000  (Ethernet)
>
>         RX packets 3131  bytes 518243 (506.0 KiB)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 0  bytes 0 (0.0 B)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> port1: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST>  mtu
> 1500
>
>         ether 00:15:5d:10:85:16  txqueuelen 1000  (Ethernet)
>
>         RX packets 1707  bytes 163778 (159.9 KiB)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 693  bytes 70666 (69.0 KiB)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> port2: flags=4675<UP,BROADCAST,RUNNING,ALLMULTI,MULTICAST>  mtu
> 1500
>
>         ether 00:15:5d:10:85:17  txqueuelen 1000  (Ethernet)
>
>         RX packets 900  bytes 112256 (109.6 KiB)
>
>         RX errors 0  dropped 0  overruns 0  frame 0
>
>         TX packets 1504  bytes 122428 (119.5 KiB)
>
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>
>
>
> root:/# ethtool -i port1
>
> driver: hv_netvsc
>
> version:
>
> firmware-version: N/A
>
> expansion-rom-version:
>
> bus-info:
>
> supports-statistics: yes
>
> supports-test: no
>
> supports-eeprom-access: no
>
> supports-register-dump: no
>
> supports-priv-flags: no
>
> root:/# ethtool -i port2
>
> driver: hv_netvsc
>
> version:
>
> firmware-version: N/A
>
> expansion-rom-version:
>
> bus-info:
>
> supports-statistics: yes
>
> supports-test: no
>
> supports-eeprom-access: no
>
> supports-register-dump: no
>
> supports-priv-flags: no
>
> root:/#
>
>
>
> Start server process successfully
>
> # ./VM_DPDK -l 3 -n 4 --vdev="net_vdev_netvsc1,iface=port2" --socket-mem
> 1500 --file-prefix server
>
> EAL: Detected 4 lcore(s)
>
> EAL: No free hugepages reported in hugepages-1048576kB
>
> EAL: Multi-process socket /var/log/.server_unix
>
> EAL: Probing VFIO support...
>
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
> unreliable
> clock cycles !
>
> PMD: net_failsafe: Initializing Fail-safe PMD for
> net_failsafe_net_vdev_netvsc1_id0
>
> PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
>
> PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc1_id0 as dtap0
>
> PMD: net_failsafe: MAC address is 00:15:5d:10:85:17
>
>
>
> Concurrently starting the client process failed
>
> # ./VM_DPDK -l 2 -n 4 --vdev="net_vdev_netvsc0,iface=port1" --socket-mem
> 1500 --file-prefix client
>
> EAL: Detected 4 lcore(s)
>
> EAL: No free hugepages reported in hugepages-1048576kB
>
> EAL: Multi-process socket /var/log/.client_unix
>
> EAL: Probing VFIO support...
>
> EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using
> unreliable
> clock cycles !
>
> PMD: net_failsafe: Initializing Fail-safe PMD for
> net_failsafe_net_vdev_netvsc0_id0
>
> PMD: net_failsafe: Creating fail-safe device on NUMA socket 0
>
> PMD: Initializing pmd_tap for net_tap_net_vdev_netvsc0_id0 as dtap0
>
> PMD: Could not add multiq qdisc (17): File exists
>
> PMD: dtap0: failed to create multiq qdisc.
>
> PMD:  Disabling rte flow support: File exists(17)
>
> PMD: Remote feature requires flow support.
>
> PMD: TAP Unable to initialize net_tap_net_vdev_netvsc0_id0
>
> EAL: Driver cannot attach the device (net_tap_net_vdev_netvsc0_id0)
>
> PMD: net_failsafe: sub_device 1 probe failed (No such file or directory)
>
> PMD: net_failsafe: MAC address is 06:5f:a6:0a:b4:f9
>
> vdev_probe(): failed to initialize : PMD: net_failsafe: MAC address is
> 06:5f:a6:0a:b4:f9
>
> device
>
> EAL: Bus (vdev) probe failed.
>
>
>
>
>
>
>
> Thanks and Regards,
>
> Jack
>
>
>
> End of users Digest, Vol 168, Issue 3
> *************************************


* Re: [dpdk-users] users Digest, Vol 168, Issue 3
  2019-01-11 13:45 ` [dpdk-users] users Digest, Vol 168, Issue 3 Gavin Hu (Arm Technology China)
@ 2019-01-11 21:29   ` Honnappa Nagarahalli
  0 siblings, 0 replies; 2+ messages in thread
From: Honnappa Nagarahalli @ 2019-01-11 21:29 UTC (permalink / raw)
  To: Gavin Hu (Arm Technology China), users, pierre.laurent

> > Message: 2
> > Date: Thu, 27 Dec 2018 16:41:57 +0000
> > From: Pierre Laurent <pierre.laurent@emutex.com>
> > To: "users@dpdk.org" <users@dpdk.org>
> > Subject: Re: [dpdk-users] HOW performance to run DPDK at ARM64 arch?
> > Message-ID: <2736c618-8a28-a267-d05f-93021a3d5004@emutex.com>
> > Content-Type: text/plain; charset="utf-8"
> >
> > Hi,
> >
> > Regarding your question 2, the TX+RX numbers you get look very much as
> > if you are trying to run full-duplex traffic over a PCIe x4 link.
> >
> > The real PCIe bandwidth needed by an interface is approximately
> > ((packet size + 48) * pps).
> >
> > The 48 bytes are the approximate per-packet cost of NIC descriptors and
> > PCIe overheads. This is an undocumented heuristic.
> >
> > I guess you are using the default DPDK options, so the Ethernet FCS does
> > not consume PCIe bandwidth (it is stripped by the NIC on RX and generated
> > by the NIC on TX). The same goes for the 20 bytes of Ethernet preamble
> > and inter-frame gap.
> >
> >
> > If I assume you are using 60-byte packets: (60 + 48) bytes * (14 + 6)
> > Mpps * 8 bits = approx 17 Gbps, which is more or less the bandwidth of a
> > bidirectional PCIe x4 interface.
> >
> >
> > Tools like "lspci" and "dmidecode" will help you to investigate the
> > real capabilities of the PCIe slots where your 82599 cards are plugged in.
> >
> > The output of dmidecode looks like the following example; x2, x4, x8 and
> > x16 indicate the number of lanes an interface will be able to use. The
> > more lanes, the faster.
> >
> > System Slot Information
> >     Designation: System Slot 1
> >     Type: x8 PCI Express
> >     Current Usage: Available
> >     Length: Long
> >     ID: 1
> >     Characteristics:
> >         3.3 V is provided
> >
> >
> > To use an 82599 at full bidirectional rate, you need at least an x8
> > interface (1 port) or an x16 interface (2 ports).
> >
> > Regards,
> >
> > Pierre
> >
> >
> > On 27/12/2018 09:24, ????? wrote:
> >
> > Recently, I have been testing DPDK 18.08 on my arm64 machine with
> > DPDK-pktgen 3.5.2, but the performance is very low for bidirectional
> > traffic against an x86 machine.
> >
> > here is my data:
> > hardware Conditions:
> >         arm64:    CPU - 64 cores, CPUfreq: 1.5GHz
> >                        MEM - 64 GiB
> >                        NIC - 82599ES dual port
> >         x86:        CPU - 4 cores, CPUfreq: 3.2GHz
> >                        MEM - 4GiB
> >                        NIC - 82599ES dual port
> > software Conditions:
> >          system kernel:
> >                   arm64: linux-4.4.58
> >                   x86: ubuntu16.04-4.4.0-generic
> >          tools:
> >                   DPDK18.08, DPDK-pktgen3.5.2
> >
> > test:
> >        |----------|                bi-directional                |-----------|
> >        | arm64 | port0 |           < - >            | port0 |     x86   |
> >        |----------|                                                     |----------|
> >
> > result
> >                                   arm64                                   x86
> > Pkts/s (Rx/Tx)            10.2/6.0Mpps                       6.0/14.80Mpps
> > MBits/s(Rx/Tx)             7000/3300 MBits/s             3300/9989 MBits/s
> >
> > Questions:
> > 1. Why is DPDK performance so much worse on the arm64 architecture than
> > on x86?
Appreciate your efforts trying to run DPDK on arm64. Depending on the micro-architecture, you might not see similar performance; this is due to the positioning of that micro-architecture. Some micro-architectures bring smaller cores at higher density (a large number of small cores). In these cases it is better to look at the performance of the complete socket rather than that of a single core.

>
> One reason is the clock speed (1.5 GHz vs. 3.2 GHz); other possible causes
> may include CPU affinity, crossing NUMA nodes, and hugepage sizes.
> Could you check these settings?
>
> > 2. As shown above, the TX direction does not reach full rate. Why do RX
> > and TX affect each other?
You might have to tune the RX and TX buffer depths.
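
For instance, with testpmd the descriptor ring depths can be raised from their defaults while experimenting (a sketch only; the values are examples and must stay within the NIC's limits):

# ./testpmd -l 1-4 -n 4 -- -i --rxd=2048 --txd=2048

In application code the same depths are the nb_rx_desc/nb_tx_desc arguments passed to rte_eth_rx_queue_setup() and rte_eth_tx_queue_setup().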



Thread overview: 2+ messages
     [not found] <mailman.13312.1547052929.7586.users@dpdk.org>
2019-01-11 13:45 ` [dpdk-users] users Digest, Vol 168, Issue 3 Gavin Hu (Arm Technology China)
2019-01-11 21:29   ` Honnappa Nagarahalli
