From: Vipul Ujawane <vipul999ujawane@gmail.com>
To: ovs-discuss@openvswitch.org
Subject: [dpdk-users] Poor performance when using OVS with DPDK
Date: Wed, 24 Jun 2020 18:56:43 +0800
Message-ID: <CABgxuK41VEPQSK9WiZdybRud4Mm_eS2SWFNYzD50CUWq6mcpFA@mail.gmail.com>

Dear all,

I am observing very low performance when running OVS-DPDK compared to OVS
running with the kernel datapath.
I have OvS version 2.13.90 compiled from source with the latest stable DPDK
v19.11.3 on a stable Debian system running kernel 4.19.0-9-amd64 (real
version: 4.19.118).

I have also tried the latest released OvS (2.12) with the same LTS DPDK. As
a last resort, I have tried an older kernel (4.19.0-8-amd64, real version:
4.19.98) to check whether the kernel version was the problem.

I have not been able to troubleshoot the problem, and kindly request your
help regarding the same.

HW configuration
================
We have two identical servers (Debian stable, Intel(R) Xeon(R) Gold 6230
CPU, 96 GB memory), each running a KVM virtual machine. On the hypervisor
layer, we have OvS for traffic routing. The servers are connected directly
via a Mellanox ConnectX-5 (1x100G) link.
The OVS forwarding tables are configured for simple port forwarding only, to
rule out any packet-processing-related issues.
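For illustration, the flows amount to plain cross-connect rules of the
following form (port numbers here are placeholders, not necessarily the
exact ones on our bridges):

$ ovs-ofctl del-flows ovsbr
$ ovs-ofctl add-flow ovsbr "in_port=1,actions=output:2"
$ ovs-ofctl add-flow ovsbr "in_port=2,actions=output:1"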

Problem
=======
When both servers run the OVS kernel datapath at the hypervisor layer and
the VMs are connected to it via libvirt and virtio interfaces, the
VM->Server1->Server2->VM throughput is around 16-18 Gbps.
However, when using OVS-DPDK with the same setup, the throughput drops to
4-6 Gbps.
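(For reference, the throughput is measured VM-to-VM with a bulk-transfer
test; an iperf3 invocation of roughly the following form is representative,
though the tool and flags shown here are illustrative rather than the
literal command we ran:)

$ iperf3 -s                           # on the receiving VM
$ iperf3 -c <peer-VM-IP> -P 4 -t 60   # on the sending VM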


SW/driver configurations
========================
DPDK
----
In config/common_base, besides the defaults, I have enabled the following
extra drivers/features:
CONFIG_RTE_LIBRTE_MLX5_PMD=y
CONFIG_RTE_LIBRTE_VHOST=y
CONFIG_RTE_LIBRTE_VHOST_NUMA=y
CONFIG_RTE_LIBRTE_PMD_VHOST=y
CONFIG_RTE_VIRTIO_USER=n
CONFIG_RTE_EAL_VFIO=y
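For reference, DPDK and OvS were built following the standard documented
procedure, roughly along these lines (legacy make build system of DPDK
19.11; paths are illustrative, not the exact ones used):

$ cd dpdk-stable-19.11.3
$ make install T=x86_64-native-linuxapp-gcc DESTDIR=install
$ cd ../ovs
$ ./boot.sh
$ ./configure --with-dpdk=../dpdk-stable-19.11.3/x86_64-native-linuxapp-gcc
$ make -j && sudo make install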


OVS
---
$ovs-vswitchd --version
ovs-vswitchd (Open vSwitch) 2.13.90

$sudo ovs-vsctl get Open_vSwitch . dpdk_initialized
true

$sudo ovs-vsctl get Open_vSwitch . dpdk_version
"DPDK 19.11.3"

OS settings
-----------
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 10 (buster)
Release: 10
Codename: buster


$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.19.0-9-amd64 root=/dev/mapper/Volume0-debian--stable
ro default_hugepagesz=1G hugepagesz=1G hugepages=16 intel_iommu=on iommu=pt
quiet

$ ./usertools/dpdk-devbind.py --status
Network devices using kernel driver
===================================
0000:b3:00.0 'MT27800 Family [ConnectX-5] 1017' if=ens2 drv=mlx5_core
unused=igb_uio,vfio-pci

Due to the way Mellanox cards and their driver work, I have not bound
igb_uio to the interface; however, the uio, igb_uio and vfio-pci kernel
modules are loaded.
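(The mlx5 PMD uses the bifurcated driver model, so the port stays bound to
mlx5_core even while DPDK uses it. As a sanity check that DPDK can drive the
port outside of OVS, a testpmd run of roughly this form can be used; the
core/memory arguments below are illustrative:)

$ ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -w 0000:b3:00.0 -- -i
testpmd> start tx_first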


Relevant part of the VM-config for Qemu/KVM
-------------------------------------------
  <cputune>
    <shares>4096</shares>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <emulatorpin cpuset='4-5'/>
  </cputune>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
    <topology sockets='2' cores='1' threads='1'/>
    <numa>
      <cell id='0' cpus='0-1' memory='4194304' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>
    <interface type='vhostuser'>
      <mac address='00:00:00:00:00:aa'/>
      <source type='unix' path='/usr/local/var/run/openvswitch/vhostuser' mo$
      <model type='virtio'/>
      <driver queues='2'>
        <host mrg_rxbuf='on'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'$
    </interface>
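(Not shown in the snippet above: vhost-user requires the guest memory to be
hugepage-backed and shared with the host. In libvirt this is usually
expressed with something like the following in the domain XML; shown here
only as the usual form, not copied from our exact config:)

  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB'/>
    </hugepages>
  </memoryBacking>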

-----------------------------------
OVS Start Config
-----------------------------------
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,0"
ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xff
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0e
ovs-vsctl add-port ovsbr dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:b3:00.0
ovs-vsctl set interface dpdk0 options:n_rxq=2
ovs-vsctl add-port ovsbr vhost-vm -- set Interface vhostuser type=dpdkvhostuser
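(The resulting queue-to-PMD-thread assignment and per-PMD load can be
inspected with the following standard OVS commands; output omitted here:)

$ ovs-appctl dpif-netdev/pmd-rxq-show
$ ovs-appctl dpif-netdev/pmd-stats-show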



Is there anything I should be aware of regarding the versions and settings I
am using? Did I compile DPDK and/or OvS in the wrong way?

Thank you for your kind help ;)

-- 

Vipul Ujawane
