From: Vipul Ujawane <vipul999ujawane@gmail.com>
To: David Christensen <drc@linux.vnet.ibm.com>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Poor performance when using OVS with DPDK
Date: Fri, 26 Jun 2020 17:39:23 +0800
Message-ID: <CABgxuK4Fkvb8rtN0eKV5ha7-tDDQGd8QNLXa3bZnbdAZHp=Rtg@mail.gmail.com>
In-Reply-To: <f06f3d0a-2306-03a1-f368-aa5cb312a52d@linux.vnet.ibm.com>

On Fri, 2020-06-26 at 02:08 +0800, Vipul Ujawane wrote:
>
>
> ---------- Forwarded message ---------
> From: David Christensen <drc@linux.vnet.ibm.com>
> Date: Fri, Jun 26, 2020, 02:03
> Subject: Re: [dpdk-users] Poor performance when using OVS with DPDK
> To: Vipul Ujawane <vipul999ujawane@gmail.com>, <users@dpdk.org>
>
>
>
>
> On 6/24/20 4:03 AM, Vipul Ujawane wrote:
> > Dear all,
> >
> > I am observing very low performance when running OVS-DPDK when
> > compared to OVS running with the Kernel Datapath.
> > I have OvS version 2.13.90 compiled from source with the latest
> > stable DPDK v19.11.3 on a stable Debian system running kernel
> > 4.19.0-9-amd64 (real version: 4.19.118).
> >
> > I have also tried the latest released OvS (2.12) with the same
> > LTS DPDK. As a last resort, I have tried an older kernel
> > (4.19.0-8-amd64, real version: 4.19.98) to see whether the kernel
> > itself was the problem.
> >
> > I have not been able to troubleshoot the problem, and kindly
> > request your help regarding the same.
> >
> > HW configuration
> > ================
> > We have two totally identical servers (Debian stable, Intel(R)
> > Xeon(R) Gold 6230 CPU, 96G Mem), each running a KVM virtual
> > machine. On the hypervisor layer, we have OvS for traffic routing.
> > The servers are connected directly via a Mellanox ConnectX-5
> > (1x100G).
> > OVS forwarding tables are configured for simple port-forwarding
> > only, to avoid any packet-processing-related issues.
> >
> > Problem
> > =======
> > When both servers are running OVS-Kernel at the hypervisor layer
> > and VMs are connected to it via libvirt and virtio interfaces, the
> > VM->Server1->Server2->VM throughput is around 16-18 Gbps.
> > However, when using OVS-DPDK with the same setup, the throughput
> > drops to 4-6 Gbps.
>
> You don't mention the traffic profile. I assume 64-byte frames, but
> best to be explicit.

Sure, sorry about that! We used iperf with MTU-sized packets, and the
measured throughput was 4-6 Gbps.
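
(For reference, the test was a plain TCP iperf run between the VMs,
along these lines; the peer address is a placeholder, and we may have
used iperf2 rather than iperf3, so treat this as a sketch:

  # 4 parallel TCP streams for 60 seconds towards the other VM;
  # 10.0.0.2 is a hypothetical address standing in for the real peer
  iperf3 -c 10.0.0.2 -t 60 -P 4
)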


> >
> > SW/driver configurations:
> > ==================
> > DPDK
> > ----
> > In config/common_base, besides the defaults, I have enabled the
> > following extra drivers/features to be compiled/enabled:
> > CONFIG_RTE_LIBRTE_MLX5_PMD=y
> > CONFIG_RTE_LIBRTE_VHOST=y
> > CONFIG_RTE_LIBRTE_VHOST_NUMA=y
> > CONFIG_RTE_LIBRTE_PMD_VHOST=y
> > CONFIG_RTE_VIRTIO_USER=n
> > CONFIG_RTE_EAL_VFIO=y
> >
> >
> > OVS
> > ---
> > $ovs-vswitchd --version
> > ovs-vswitchd (Open vSwitch) 2.13.90
> >
> > $sudo ovs-vsctl get Open_vSwitch . dpdk_initialized
> > true
> >
> > $sudo ovs-vsctl get Open_vSwitch . dpdk_version
> > "DPDK 19.11.3"
> >
> > OS settings
> > -----------
> > $ lsb_release -a
> > No LSB modules are available.
> > Distributor ID: Debian
> > Description: Debian GNU/Linux 10 (buster)
> > Release: 10
> > Codename: buster
> >
> >
> > $ cat /proc/cmdline
> > BOOT_IMAGE=/vmlinuz-4.19.0-9-amd64 root=/dev/mapper/Volume0-debian--stable
> > ro default_hugepagesz=1G hugepagesz=1G hugepages=16 intel_iommu=on
> > iommu=pt quiet
>
> Why don't you reserve any CPUs for OVS/DPDK or VM usage? All
> published performance white papers recommend settings for CPU
> isolation, like this Mellanox DPDK performance report:
>
> https://fast.dpdk.org/doc/perf/DPDK_19_08_Mellanox_NIC_performance_report.pdf
>
> For their test system:
>
> isolcpus=24-47 intel_idle.max_cstate=0 processor.max_cstate=0
> intel_pstate=disable nohz_full=24-47
> rcu_nocbs=24-47 rcu_nocb_poll default_hugepagesz=1G hugepagesz=1G
> hugepages=64 audit=0
> nosoftlockup
>
> Using the tuned service (CPU partitioning profile) makes this
> process easier:
>
> https://tuned-project.org/
>
Nice tutorial, thanks for sharing. I have checked it and configured our
server like this:

isolcpus=12-19 intel_idle.max_cstate=0 processor.max_cstate=0
nohz_full=12-19 rcu_nocbs=12-19 intel_pstate=disable
default_hugepagesz=1G hugepagesz=1G hugepages=24 audit=0 nosoftlockup
intel_iommu=on iommu=pt rcu_nocb_poll


Even though our servers are NUMA-capable and NUMA-aware, we only have
one CPU installed in one socket.
The CPU has 20 physical cores (40 threads), so I decided to use the
"top-most" cores for DPDK/OVS; that is the reason for isolcpus=12-19.
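
(In case it helps, this is roughly how the cmdline was applied on our
Debian host; the GRUB variable name is the stock Debian one, so this
is a sketch of our setup rather than a verified recipe:

  # /etc/default/grub -- append the isolation flags, e.g.:
  # GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=12-19 nohz_full=12-19 rcu_nocbs=12-19 default_hugepagesz=1G hugepagesz=1G hugepages=24 ..."
  sudo update-grub   # regenerate grub.cfg (Debian)
  sudo reboot        # the new cmdline only takes effect after reboot
)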

> >
> > ./usertools/dpdk-devbind.py --status
> > Network devices using kernel driver
> > ===================================
> > 0000:b3:00.0 'MT27800 Family [ConnectX-5] 1017' if=ens2
> > drv=mlx5_core unused=igb_uio,vfio-pci
> >
> > Due to the way Mellanox cards and their driver work, I have not
> > bound igb_uio to the interface; however, the uio, igb_uio and
> > vfio-pci kernel modules are loaded.
> >
> >
> > Relevant part of the VM-config for Qemu/KVM
> > -------------------------------------------
> >    <cputune>
> >      <shares>4096</shares>
> >      <vcpupin vcpu='0' cpuset='4'/>
> >      <vcpupin vcpu='1' cpuset='5'/>
>
> Where did you get these CPU mapping values? x86 systems typically
> map even-numbered CPUs to one NUMA node and odd-numbered CPUs to a
> different NUMA node. You generally want to select CPUs from the same
> NUMA node as the mlx5 NIC you're using for DPDK.
>
> You should have at least 4 CPUs in the VM, selected according to the
> NUMA topology of the system.
As per my answer above, our system has no second NUMA node; all
mappings are to the same socket/CPU.
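
(For completeness, this can be double-checked from sysfs; ens2 is the
ConnectX-5 interface from the dpdk-devbind output above:

  lscpu | grep -i numa                       # CPU-to-node layout
  cat /sys/class/net/ens2/device/numa_node   # node of the NIC; -1 or 0 on single-socket boxes
)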

>
> Take a look at this bash script written for Red Hat:
>
> https://github.com/ctrautma/RHEL_NIC_QUALIFICATION/blob/ansible/ansible/get_cpulist.sh
>
> It gives you a good starting reference for which CPUs to select for
> the OVS/DPDK and VM configurations on your particular system. Also
> review the Ansible script pvp_ovsdpdk.yml; it provides a lot of
> other useful steps you might be able to apply to your Debian OS.
>
> >      <emulatorpin cpuset='4-5'/>
> >    </cputune>
> >    <cpu mode='host-model' check='partial'>
> >      <model fallback='allow'/>
> >      <topology sockets='2' cores='1' threads='1'/>
> >      <numa>
> >        <cell id='0' cpus='0-1' memory='4194304' unit='KiB'
> > memAccess='shared'/>
> >      </numa>
> >    </cpu>
> >      <interface type='vhostuser'>
> >        <mac address='00:00:00:00:00:aa'/>
> >        <source type='unix'
> >                path='/usr/local/var/run/openvswitch/vhostuser' mo$
> >        <model type='virtio'/>
> >        <driver queues='2'>
> >          <host mrg_rxbuf='on'/>
>
> Is there a requirement for mergeable RX buffers? Some PMDs like
> mlx5 can take advantage of SSE instructions when this is disabled,
> yielding better performance.
Good point, there is no requirement; I just took an example config and
thought it was necessary for the driver queues setting.
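
I will try disabling it while keeping the queues, i.e. something like
this in the libvirt XML (an untested sketch on our side):

       <driver queues='2'>
         <host mrg_rxbuf='off'/>
       </driver>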
>
> >        </driver>
> >        <address type='pci' domain='0x0000' bus='0x07' slot='0x00'
> > function='0x0'$
> >      </interface>
> >
>
> I don't see hugepage usage in the libvirt XML.  Something similar to:
>
>    <memory unit='KiB'>8388608</memory>
>    <currentMemory unit='KiB'>8388608</currentMemory>
>    <memoryBacking>
>      <hugepages>
>        <page size='1048576' unit='KiB' nodeset='0'/>
>      </hugepages>
>    </memoryBacking>
I did not copy this part of the XML, but we have hugepages configured
properly.
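
(For completeness, the host-side allocation can be confirmed with:

  grep -i huge /proc/meminfo   # HugePages_Total/HugePages_Free and the page size
)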
>
>
> > -----------------------------------
> > OVS Start Config
> > -----------------------------------
> > ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> > ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="4096,0"
> > ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0xff
> > ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0e
>
> These two masks shouldn't overlap:
>
> https://developers.redhat.com/blog/2017/06/28/ovs-dpdk-parameters-dealing-with-multi-numa/
>
Thanks, this really helped me understand the order in which these
commands should be issued.

So, the problem now is the following.
I applied all the changes you shared, started OVS/DPDK in the proper
order, and set these options:

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem="8192,0"

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-lcore-mask=0x01000

ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true

and, finally, this:
ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=0x0e000

The documentation you shared says this last one can even be set at
runtime, so I was playing with it to see whether it changes anything.
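
(For clarity: 0x01000 has bit 12 set, so the lcore thread runs on core
12, and 0x0e000 has bits 13-15 set, so the PMD threads run on cores
13-15, all inside the isolated 12-19 range. The resulting PMD/rxq
assignment can be inspected at runtime with:

  ovs-appctl dpif-netdev/pmd-rxq-show     # which PMD polls which rx queue
  ovs-appctl dpif-netdev/pmd-stats-show   # per-PMD cycle and packet stats
)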

I did not start any VM on top of OVS/DPDK; I just set up a
port-forward rule (in_port=1,actions=output:IN_PORT), since I only
have one physical port on each Mellanox card.
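
(The rule was added roughly like this; "br0" is a placeholder for our
actual bridge name:

  ovs-ofctl add-flow br0 "in_port=1,actions=in_port"   # hairpin frames back out the ingress port
)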
Then, I generated traffic from the other server towards OVS.
Using a 64B packet size, the maximum throughput Pktgen reports is 8 Gbps.
In particular, I got these metrics:
Size     Sent (pps)   Recv (pps)   Recv (Gbps)
64B      93M          11M          ~8
128B     65M          12.5M        ~15
256B     42.5M        12.3M        ~27
512B     23.5M        11.9M        ~51
1024B    11.9M        10M          ~83
1280B    9.6M         8.3M         ~86
1500B    8.3M         6.7M         ~82

It is quite interesting that the received pps for 64B is lower than
for larger sizes, even though pps should be the practical limit, with
the Gbps figure simply following from the packet size.

Anyway, OVS-DPDK has 3 cores to use, but only one rx queue is assigned
to the port (so, basically --- as `top` also shows --- this is
single-core performance).

Increasing the number of cores did not help; the performance remained
the same. Is this performance normal for OVS/DPDK?
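
(I guess the next thing to try is requesting several rx queues on the
port so that more than one PMD core can poll it; a sketch, with
"dpdk-p0" standing in for our actual port name:

  ovs-vsctl set Interface dpdk-p0 options:n_rxq=4   # ask the PMD for 4 rx queues
  ovs-appctl dpif-netdev/pmd-rxq-show               # verify the queues spread across PMD cores
)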
