DPDK usage discussions
From: "Mooney, Sean K" <sean.k.mooney@intel.com>
To: abhishek jain <ashujain9727@gmail.com>,
	"users@dpdk.org" <users@dpdk.org>
Cc: "Mooney, Sean K" <sean.k.mooney@intel.com>
Subject: Re: [dpdk-users] Poor OVS-DPDK performance
Date: Fri, 8 Dec 2017 12:41:44 +0000	[thread overview]
Message-ID: <4B1BB321037C0849AAE171801564DFA68898FED4@IRSMSX107.ger.corp.intel.com> (raw)
In-Reply-To: <CA+9-LvsqoTMQ_EBJ_2Qegb1ebp5e3REGM5rN4VFiYFiwig5oTA@mail.gmail.com>

Hi, can you provide the qemu command line you are using for the VM,
as well as the output of the following commands:

ovs-vsctl show
ovs-vsctl list bridge
ovs-vsctl list interface
ovs-vsctl list port

Basically, with the topology info above I want to confirm the following (see the quick checks below):

- all bridges are interconnected by patch ports, not veth pairs
- all bridges are datapath type netdev
- the VM is connected by vhost-user interfaces, not kernel vhost (kernel vhost still works with OVS-DPDK but is really slow)
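A couple of quick checks for the last two points (a sketch; I'm assuming br-int is the integration bridge as in your setup, and <vm-iface> stands in for the name of the VM's port, which I don't know):

ovs-vsctl get bridge br-int datapath_type      # should print netdev
ovs-vsctl get interface <vm-iface> type        # should print dpdkvhostuser

If the interface type comes back empty or as anything other than dpdkvhostuser, the VM is not on a vhost-user port.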

Looking at your core masks, the vswitch is incorrectly tuned.

{dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024", pmd-cpu-mask="1800"}

You have configured 10 lcores; lcores are not used for packet processing, and only the first of them will actually be used by OVS-DPDK, so the other nine are wasted.
Meanwhile your pmd-cpu-mask gives the PMD threads, which do all the packet forwarding, only cores 11 and 12.
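To spell out how those hex masks decode (each set bit selects one CPU core):

dpdk-lcore-mask=0x7fe  -> binary 0111 1111 1110    -> cores 1-10
pmd-cpu-mask=0x1800    -> binary 1 1000 0000 0000  -> cores 11 and 12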

You have not said whether this is a multi-socket or single-socket system, but assuming it is single socket, try this instead:

{dpdk-init="true", dpdk-lcore-mask="0x2", dpdk-socket-mem="1024", pmd-cpu-mask="0xC"}
Here I'm assuming core 0 is used by the OS, core 1 will be used for the lcore thread, and cores 2 and 3 will be used for the PMD threads, which do all the packet forwarding.
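Since your ovs-vsctl get output shows these keys already live in other_config, your build reads them from the database, so applying the change would look something like this (a sketch; the service name may differ on your install, and dpdk-lcore-mask and dpdk-socket-mem are only read when ovs-vswitchd initialises DPDK, so plan on restarting it):

ovs-vsctl set Open_vSwitch . other_config:dpdk-lcore-mask=0x2
ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem=1024
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xC
service openvswitch-switch restart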

If you have hyper-threading turned on on your host, add the hyperthread siblings of cores 2 and 3 to the pmd-cpu-mask.
For example, on a 16-core CPU with 32 threads the siblings of cores 2 and 3 are typically cores 18 and 19, so the pmd-cpu-mask should be "0xC000C".

On the kernel cmdline, change isolcpus to isolcpus=2,3 (or isolcpus=2,3,18,19 if hyper-threading is enabled).
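You can confirm the sibling numbering on your box rather than taking my guess for it, for example:

cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list
cat /sys/devices/system/cpu/cpu3/topology/thread_siblings_list

Each prints the logical CPUs sharing that physical core (e.g. "2,18"); those are the numbers that should go into both pmd-cpu-mask and isolcpus.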

With the HT config you should be able to handle up to 30 Mpps on a 2.2GHz CPU, assuming you compiled OVS and DPDK with "-fPIC -O2 -march=native" and linked statically.
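For illustration only, since I don't know how your packages were built, a build along these lines is what I have in mind ($DPDK_DIR and $OVS_DIR are placeholders for your source trees):

cd $DPDK_DIR && make install T=x86_64-native-linuxapp-gcc EXTRA_CFLAGS="-O2 -fPIC -march=native"
cd $OVS_DIR && ./configure --with-dpdk=$DPDK_DIR/x86_64-native-linuxapp-gcc CFLAGS="-O2 -march=native" && make && make install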
If you have a faster CPU you should get more, but as a rough estimate, when using OVS 2.4 and DPDK 2.0 you should
expect between 6.5-8 Mpps phy-to-phy per physical core, plus an additional 70-80% if hyper-threading is used.
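To put numbers on that: with the two physical PMD cores above plus their hyperthread siblings, that is roughly 2 x 6.5-8 Mpps x 1.7-1.8, i.e. about 22-29 Mpps phy-to-phy, which is where the up-to-30 Mpps figure comes from.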

Your phy-VM-phy numbers will be a little lower, as the vhost-user PMD takes more clock cycles to process packets than
the physical NIC drivers do, but that should help set your expectations.

OVS 2.4 and DPDK 2.0 are quite old at this point, but they should still give a significant performance increase over kernel OVS.




From: abhishek jain [mailto:ashujain9727@gmail.com]
Sent: Friday, December 8, 2017 9:34 AM
To: users@dpdk.org; Mooney, Sean K <sean.k.mooney@intel.com>
Subject: Re: Poor OVS-DPDK performance

Hi Team,
Below is my OVS configuration:

root@node-3:~# ovs-vsctl get Open_vSwitch . other_config
{dpdk-init="true", dpdk-lcore-mask="7fe", dpdk-socket-mem="1024", pmd-cpu-mask="1800"}
root@node-3:~#

root@node-3:~#
root@node-3:~# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.13.0-137-generic root=/dev/mapper/os-root ro console=tty0 net.ifnames=0 biosdevname=0 rootdelay=90 nomodeset root=UUID=2949f531-bedc-47a0-a2f2-6ebf8e1d1edb iommu=pt intel_iommu=on isolcpus=11,12,13,14,15,16

root@node-3:~#
root@node-3:~# ovs-appctl dpif-netdev/pmd-stats-show
main thread:
        emc hits:2
        megaflow hits:0
        miss:2
        lost:0
        polling cycles:99607459 (98.80%)
        processing cycles:1207437 (1.20%)
        avg cycles per packet: 25203724.00 (100814896/4)
        avg processing cycles per packet: 301859.25 (1207437/4)
pmd thread numa_id 0 core_id 11:
        emc hits:0
        megaflow hits:0
        miss:0
        lost:0
        polling cycles:272926895316 (100.00%)
        processing cycles:0 (0.00%)
pmd thread numa_id 0 core_id 12:
        emc hits:0
        megaflow hits:0
        miss:0
        lost:0
        polling cycles:240339950037 (100.00%)
        processing cycles:0 (0.00%)
root@node-3:~#

root@node-3:~#
root@node-3:~# grep -r Huge /proc/meminfo
AnonHugePages:     59392 kB
HugePages_Total:    5126
HugePages_Free:     4870
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
root@node-3:~#
I'm using OVS version 2.4.1
Regards
Abhishek Jain

On Fri, Dec 8, 2017 at 11:18 AM, abhishek jain <ashujain9727@gmail.com> wrote:
Hi Team

Currently I have an OVS-DPDK setup configured on Ubuntu 14.04.5 LTS. I also have one VNF with vhost interfaces mapped to the OVS bridge br-int.
However, when I run throughput tests with the same VNF, I'm getting very low throughput.

Please provide me with some pointers to boost the performance of the VNF with the OVS-DPDK configuration.
Regards
Abhishek Jain



Thread overview: 6+ messages
2017-12-08  5:48 abhishek jain
2017-12-08  9:33 ` abhishek jain
2017-12-08 12:41   ` Mooney, Sean K [this message]
2017-12-11  6:55     ` abhishek jain
2017-12-11 21:28       ` [dpdk-users] Fwd: " abhishek jain
2017-12-12 12:55       ` [dpdk-users] " Mooney, Sean K
