* [dpdk-users] dpdk vhost-user ovs unexpected low performance
@ 2019-10-14  8:12 Christian Graf
From: Christian Graf @ 2019-10-14  8:12 UTC
  To: users

Dear all,

kindly asking for help.

I am observing packet loss with dpdk-enabled OVS running vhost-user interfaces. As of now I am unclear whether the dpdk-enabled VNF attached to OVS is unable to consume the received packets, or whether OVS does not properly send out the packets.
I guess/hope that I am just missing some proper tuning of the VNF, so any further help or guidance would be very much welcome.

As my troubleshooting skills with such a setup are limited, any pointers on how to narrow this down are much appreciated.

many thanks

christian


Topology
-----------
vnf (ge-0/0/4) ---- [ vhost-user-vmx2 -- ovs bridge br-dpdk -- vhost-user-vmx1 ] ---- vnf (ge-0/0/3)

Issue
--------
The OVS interface vhost-user-vmx1 claims to have egressed (TX) 1960035 packets towards the attached vnf interface ge-0/0/3.
But the vnf (ge-0/0/3) reports only 1599414 packets received (RX) - so roughly 360,000 packets get lost somewhere.
The packet rate is only about 200kpps.
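
For reference, these are the counters that can be checked on the OVS side for the missing packets (just a sketch; counter names may differ between OVS versions):

# per-interface drop counters of the vhost-user port
sudo ovs-vsctl get Interface vhost-user-vmx1 statistics:tx_dropped
sudo ovs-vsctl get Interface vhost-user-vmx1 statistics:rx_dropped

# coverage counters sometimes hint at vhost tx retries / ring-full events
sudo ovs-appctl coverage/show | grep -iE 'drop|vhost'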


some general notes
-----------------------
The VNF as well as OVS run on NUMA node 0.
The 4 PMD threads and the lcore thread are pinned to dedicated cores.
The VNF's vCPUs are pinned to dedicated cores as well.
The relevant cores are isolated, hugepages are enabled and hyper-threading (HT) is enabled.
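
For completeness, the isolation and pinning can be verified on the host like this (sketch; domain name as in the libvirt config below):

# cores isolated from the kernel scheduler
cat /sys/devices/system/cpu/isolated
cat /proc/cmdline

# which cores the OVS threads (PMD, lcore) actually run on
ps -T -p $(pidof ovs-vswitchd) -o pid,tid,psr,comm

# vCPU and emulator pinning of the VNF
virsh vcpupin vfp-vmx1
virsh emulatorpin vfp-vmx1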


ovs version
-------------
root@bn831x1a-node6-lab:~# sudo ovs-vsctl get Open_vSwitch . dpdk_initialized
true

root@bn831x1a-node6-lab:~# ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.11.0
DPDK 18.11.0

root@bn831x1a-node6-lab:~#  sudo ovs-vsctl get Open_vSwitch . dpdk_version
"DPDK 18.11.0"

root@bn831x1a-node6-lab:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 19.04
Release:        19.04
Codename:       disco

hugepages enabled
-----------------------
dcso@bn831x1a-node6-lab:~$ cat /sys/devices/system/node/node*/meminfo|grep Huge
Node 0 AnonHugePages:   1157120 kB
Node 0 ShmemHugePages:        0 kB
Node 0 HugePages_Total:    40
Node 0 HugePages_Free:     23
Node 0 HugePages_Surp:      0
Node 1 AnonHugePages:         0 kB
Node 1 ShmemHugePages:        0 kB
Node 1 HugePages_Total:    40
Node 1 HugePages_Free:     39
Node 1 HugePages_Surp:      0
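
The per-node meminfo does not show the page size, so for completeness (the second path assumes 1G pages, which the numbers above suggest):

grep Hugepagesize /proc/meminfo
cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages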


vnf vhost-config
-------------------
virsh edit vfp-vmx1
...
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
...
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='16' threads='1'/>
    <numa>
      <cell id='0' cpus='0' memory='16777216' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
...
    <interface type='vhostuser'>
      <mac address='02:06:0a:00:00:01'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user-vmx1' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='02:06:0a:00:00:02'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user-vmx2' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' tx_queue_size='1024'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </interface>
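
The cputune block with the actual vCPU pinning is not shown above; for reference, such a pinning block in libvirt follows this pattern (example cpuset values only, not the real ones):

  <cputune>
    <!-- example only: each vCPU pinned to a dedicated, isolated host core -->
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <!-- ... one vcpupin entry per vCPU ... -->
    <emulatorpin cpuset='1'/>
  </cputune>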


ovs config
-------------

export PATH=$PATH:/usr/share/openvswitch/scripts
export DB_SOCK=/var/run/openvswitch/db.sock
db=/var/lib/openvswitch/conf.db
schema="/usr/share/openvswitch/vswitch.ovsschema"

ovsdb-tool create $db $schema

sudo ovs-vsctl set Open_vSwitch . other_config:vhost-iommu-support=true

sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x100000
sudo ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2

bridge="br-dpdk"
ovs-vsctl del-br $bridge
ovs-vsctl add-br $bridge -- set bridge $bridge datapath_type=netdev
ovs-vsctl add-port $bridge vhost-user-vmx1 -- set Interface vhost-user-vmx1 type=dpdkvhostuser
ovs-vsctl add-port $bridge vhost-user-vmx2 -- set Interface vhost-user-vmx2 type=dpdkvhostuser

ovs-vsctl set interface vhost-user-vmx1 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:8,1:28"

ovs-vsctl set interface vhost-user-vmx2 options:n_rxq=2 \
    other_config:pmd-rxq-affinity="0:9,1:29"

ovs-vsctl set Interface vhost-user-vmx1 options:n_txq_desc=2
ovs-vsctl set Interface vhost-user-vmx2 options:n_txq_desc=2
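
The pmd-cpu-mask is not shown in the snippet above; for reference it can be checked and set like this (0x30000300 being just the example mask for cores 8, 9, 28 and 29):

# show all other_config keys currently set
sudo ovs-vsctl get Open_vSwitch . other_config
# example mask for PMD threads on cores 8, 9, 28 and 29
sudo ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x30000300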

some show cmd
--------------------
root@bn831x1a-node6-lab:~# sudo ovs-vsctl show
fff633a5-c11e-4b78-b1cb-dbd1ace240a1
    Bridge br-dpdk
        Port br-dpdk
            Interface br-dpdk
                type: internal
        Port "vhost-user-vmx2"
            trunks: [10, 11, 12]
            Interface "vhost-user-vmx2"
                type: dpdkvhostuser
                options: {n_rxq="2", n_txq_desc="2"}
        Port "vhost-user-vmx1"
            trunks: [10, 11, 12]
            Interface "vhost-user-vmx1"
                type: dpdkvhostuser
                options: {n_rxq="2", n_txq_desc="2"}
    ovs_version: "2.11.0"


port stats
------------
root@bn831x1a-node6-lab:~# sudo ovs-ofctl dump-ports br-dpdk
OFPST_PORT reply (xid=0x2): 3 ports
  port  "vhost-user-vmx2": rx pkts=1960035, bytes=678163420, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=1960963, bytes=678484021, drop=0, errs=?, coll=?
  port  "vhost-user-vmx1": rx pkts=1960964, bytes=678484111, drop=0, errs=0, frame=?, over=?, crc=?
           tx pkts=1960035, bytes=678163420, drop=0, errs=?, coll=?
  port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
           tx pkts=0, bytes=0, drop=5, errs=0, coll=0
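
The same counters can also be cross-checked at datapath level (sketch):

# datapath-level port statistics
sudo ovs-appctl dpctl/show -s
# and the OpenFlow flows actually forwarding the traffic
sudo ovs-ofctl dump-flows br-dpdk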

interface-stats
--------------------
root@bn831x1a-node6-lab:~# ovs-vsctl list interface vhost-user-vmx1
_uuid               : 9e7d2537-b200-486f-bda1-d04d371c9765
..
mac                 : []
mac_in_use          : "00:00:00:00:00:00"
mtu                 : 1500
mtu_request         : []
name                : "vhost-user-vmx1"
..
options             : {n_rxq="2", n_txq_desc="2"}
other_config        : {pmd-rxq-affinity="0:8,1:28"}
statistics          : {"rx_1024_to_1522_packets"=0, "rx_128_to_255_packets"=1, "rx_1523_to_max_packets"=0, "rx_1_to_64_packets"=4, "rx_256_to_511_packets"=1960929, "rx_512_to_1023_packets"=0, "rx_65_to_127_packets"=30, rx_bytes=678484111, rx_dropped=0, rx_errors=0, rx_packets=1960964, tx_bytes=678163420, tx_dropped=0, tx_packets=1960035}
status              : {features="0x0000000050008000", mode=server, num_of_vrings="2", numa="0", socket="/var/run/openvswitch/vhost-user-vmx1", status=connected, "vring_0_size"="1024", "vring_1_size"="1024"}
type                : dpdkvhostuser
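
To see whether tx_dropped ever increments while the guest is missing packets, the counters can be sampled in a loop like this (crude sketch):

# sample the OVS-side counters once per second while traffic runs
while true; do
    sudo ovs-vsctl get Interface vhost-user-vmx1 statistics:tx_packets statistics:tx_dropped
    sleep 1
done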


PMD-stats look good to me
-------------------------------
root@bn831x1a-node6-lab:~# sudo ovs-appctl dpif-netdev/pmd-stats-show
pmd thread numa_id 0 core_id 8:
  packets received: 1960964
  packet recirculations: 0
  avg. datapath passes per packet: 1.00
  emc hits: 1960909
  smc hits: 0
  megaflow hits: 50
  avg. subtable lookups per megaflow hit: 1.00
  miss with success upcall: 5
  miss with failed upcall: 0
  avg. packets per output batch: 1.03
  idle cycles: 2112185253324 (99.75%)
  processing cycles: 5244829908 (0.25%)
  avg cycles per packet: 1079790.39 (2117430083232/1960964)
  avg processing cycles per packet: 2674.62 (5244829908/1960964)
pmd thread numa_id 0 core_id 9:
  packets received: 1960035
  packet recirculations: 0
  avg. datapath passes per packet: 1.00
  emc hits: 1959755
  smc hits: 0
  megaflow hits: 276
  avg. subtable lookups per megaflow hit: 1.92
  miss with success upcall: 4
  miss with failed upcall: 0
  avg. packets per output batch: 1.08
  idle cycles: 2112413001990 (99.76%)
  processing cycles: 4990062087 (0.24%)
  avg cycles per packet: 1080288.39 (2117403064077/1960035)
  avg processing cycles per packet: 2545.90 (4990062087/1960035)
pmd thread numa_id 0 core_id 28:
  packets received: 0
  packet recirculations: 0
  avg. datapath passes per packet: 0.00
  emc hits: 0
  smc hits: 0
  megaflow hits: 0
  avg. subtable lookups per megaflow hit: 0.00
  miss with success upcall: 0
  miss with failed upcall: 0
  avg. packets per output batch: 0.00
pmd thread numa_id 0 core_id 29:
  packets received: 0
  packet recirculations: 0
  avg. datapath passes per packet: 0.00
  emc hits: 0
  smc hits: 0
  megaflow hits: 0
  avg. subtable lookups per megaflow hit: 0.00
  miss with success upcall: 0
  miss with failed upcall: 0
  avg. packets per output batch: 0.00
main thread:
  packets received: 0
  packet recirculations: 0
  avg. datapath passes per packet: 0.00
  emc hits: 0
  smc hits: 0
  megaflow hits: 0
  avg. subtable lookups per megaflow hit: 0.00
  miss with success upcall: 0
  miss with failed upcall: 0
  avg. packets per output batch: 0.00
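
The counters above are cumulative since vswitchd start, so clearing them and re-sampling during a traffic run gives a cleaner picture (sketch):

sudo ovs-appctl dpif-netdev/pmd-stats-clear
# ... let traffic run for a while ...
sudo ovs-appctl dpif-netdev/pmd-stats-show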


PMD-cpu utilization looks rather low
-----------------------------------------
root@bn831x1a-node6-lab:~# ovs-appctl dpif-netdev/pmd-rxq-show
pmd thread numa_id 0 core_id 8:
  isolated : true
  port: vhost-user-vmx1   queue-id:  0  pmd usage:  2 %
pmd thread numa_id 0 core_id 9:
  isolated : true
  port: vhost-user-vmx2   queue-id:  0  pmd usage:  2 %
pmd thread numa_id 0 core_id 28:
  isolated : false
pmd thread numa_id 0 core_id 29:
  isolated : false
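
If this OVS build supports it (I believe it is available since OVS 2.10), the more detailed per-PMD performance view can be dumped as well:

sudo ovs-appctl dpif-netdev/pmd-perf-show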


As said initially, OVS claims to have egressed 1.96 million packets on its interface vhost-user-vmx1, however the vnf attached to this interface sees only about 1.6 million packets, so somewhere we are losing roughly 360,000 packets.
As the packet rate is just about 200kpps, both OVS and the VNF should be able to process packets at the given rate.
How can I investigate further where the packets are getting lost?
And what further tuning would you suggest?

thanks

christian
