DPDK usage discussions
From: "Avi Cohen (A)" <avi.cohen@huawei.com>
To: "Wiles, Keith" <keith.wiles@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] OVS vs OVS-DPDK
Date: Thu, 25 May 2017 09:03:14 +0000
Message-ID: <B84047ECBD981D4B93EAE5A6245AA361013BCF41@FRAEML521-MBX.china.huawei.com>
In-Reply-To: <365623D9-223D-4A37-ACB7-73599B4E163C@intel.com>

I found this article very relevant to this issue:
http://porto.polito.it/2616822/1/2015_Chain_performance.pdf


In particular, it says the following about the vhost-net interface used by standard OVS: "the transmission of a batch of packets
from a VM causes a VM exit; this means that the CPU stops to execute the guest (i.e., the vCPU thread), and run a piece
of code in the hypervisor, which performs the I/O operation on behalf of the guest. The same happens when an interrupt
has to be "inserted" in the VM, e.g., because vhost has to inform the guest that there are packets to be received. These
VM exits (and the subsequent VM entries) are one of the main causes of overhead in network I/O of VMs"
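
A quick way to see this effect (a minimal sketch, assuming an x86 KVM host
with debugfs mounted at /sys/kernel/debug; the stat file layout is
kernel-dependent) is to sample the aggregate VM-exit counter around a
traffic burst:

    #include <stdio.h>

    /* Read the cumulative VM-exit count that KVM exposes via debugfs. */
    static long long read_exits(void)
    {
        FILE *f = fopen("/sys/kernel/debug/kvm/exits", "r");
        long long v = -1;
        if (f) {
            if (fscanf(f, "%lld", &v) != 1)
                v = -1;
            fclose(f);
        }
        return v;
    }

    int main(void)
    {
        long long before = read_exits();
        /* ... run the VM-to-VM traffic burst here ... */
        long long after = read_exits();
        printf("VM exits during burst: %lld\n", after - before);
        return 0;
    }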

This is not the case with the vhost-user interface, which gives OVS-DPDK direct access to the rings shared with the VM
and minimizes context switches; see the sketch below.
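
As a rough sketch of the vhost-user side (assuming the librte_vhost API of
recent DPDK releases; the socket path and callbacks shown are illustrative,
not the actual OVS-DPDK source):

    #include <rte_vhost.h>

    /* Called once QEMU has connected on the socket, negotiated features
     * and shared the guest memory; from here the switch can poll the
     * vrings directly, without a VM exit per batch. */
    static int new_device(int vid)
    {
        (void)vid;
        return 0;
    }

    static void destroy_device(int vid)
    {
        (void)vid;  /* VM disconnected; stop touching this device. */
    }

    static const struct rte_vhost_device_ops ops = {
        .new_device     = new_device,
        .destroy_device = destroy_device,
    };

    int setup_vhost_user_port(void)
    {
        const char *sock = "/tmp/vhost-user0.sock";  /* hypothetical path */

        if (rte_vhost_driver_register(sock, 0) != 0)
            return -1;
        if (rte_vhost_driver_callback_register(sock, &ops) != 0)
            return -1;
        return rte_vhost_driver_start(sock);  /* services the socket */
    }
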
Best Regards
avi



> -----Original Message-----
> From: Avi Cohen (A)
> Sent: Wednesday, 24 May, 2017 4:52 PM
> To: 'Wiles, Keith'
> Cc: users@dpdk.org
> Subject: RE: [dpdk-users] OVS vs OVS-DPDK
> 
> Thanks Keith for your reply
> 
> I found out that the bottleneck is the VMs, not the OVS/OVS-DPDK
> running in the host.
> The VMs in both setups are unaware of OVS/OVS-DPDK and use their Linux
> IP stacks.
> I found that the performance (e.g. throughput) of VMa - OVS-DPDK -
> network - OVS-DPDK - VMb is much better than with standard OVS.
> 
> I use vhost-user (virtio) to connect to the VM in the OVS-DPDK setup,
> and vhost-net in the standard OVS setup.
> 
> The reasons for standard OVS's poorer performance may include, for example:
> 
> 1. the number of packet copies on the path NIC - OVS - guest-OS virtio -
> application on the guest
> 
> 2. an interrupt upon receiving each packet (contrast this with the
> poll-mode loop sketched after this list)
> 
> 3. the number of context switches / VM exits
> etc.
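> 
> A minimal sketch of point 2, using the plain DPDK ethdev API (this is
> not the OVS-DPDK source; the port/queue ids and burst size are
> illustrative, and rte_eal_init() plus port setup are assumed to have
> run already): the host core polls the NIC instead of taking an
> interrupt per packet.
> 
>     #include <rte_ethdev.h>
>     #include <rte_mbuf.h>
> 
>     #define BURST 32
> 
>     /* Poll-mode receive: the core spins on the RX queue, so no IRQ,
>      * softirq or context switch happens per packet on the host side. */
>     void rx_loop(uint16_t port_id)
>     {
>         struct rte_mbuf *pkts[BURST];
> 
>         for (;;) {
>             uint16_t n = rte_eth_rx_burst(port_id, 0, pkts, BURST);
>             for (uint16_t i = 0; i < n; i++) {
>                 /* ... look up and forward pkts[i] here ... */
>                 rte_pktmbuf_free(pkts[i]);
>             }
>         }
>     }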
> 
> I didn't see any information about these potential causes in the docs.
> 
> Best Regards
> avi
> 
> > -----Original Message-----
> > From: Wiles, Keith [mailto:keith.wiles@intel.com]
> > Sent: Wednesday, 24 May, 2017 4:23 PM
> > To: Avi Cohen (A)
> > Cc: users@dpdk.org
> > Subject: Re: [dpdk-users] OVS vs OVS-DPDK
> >
> >
> > > On May 24, 2017, at 3:29 AM, Avi Cohen (A) <avi.cohen@huawei.com> wrote:
> > >
> > > Hello
> > > Let me ask it in a different way:
> > > I want to understand the reasons for the performance differences
> > > between OVS-DPDK and standard OVS. My setup is: OVS/OVS-DPDK running
> > > on the host, communicating with a VM.
> > >
> > > OVS-DPDK
> > > 1. The packet is received on a physical port of the device.
> > >
> > > 2. It is DMA-transferred into mempools on hugepages allocated by
> > > OVS-DPDK in user space.
> > >
> > > 3. OVS-DPDK copies the packet into the shared vring of the associated
> > > guest (the ring is shared between the OVS-DPDK user-space process and
> > > the guest); a sketch of steps 2-3 follows this list.
> > >
> > > 4. The guest OS copies the packet to the user-space application in the VM.
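> > >
> > > A minimal sketch of steps 2-3 using DPDK's public API (illustrative
> > > only, not the actual OVS-DPDK source; the pool sizes and queue id
> > > are assumptions):
> > >
> > >     #include <rte_lcore.h>
> > >     #include <rte_mbuf.h>
> > >     #include <rte_vhost.h>
> > >
> > >     /* Step 2: mbuf pool carved out of hugepage memory; the NIC
> > >      * DMAs received frames straight into these user-space buffers. */
> > >     struct rte_mempool *make_pool(void)
> > >     {
> > >         return rte_pktmbuf_pool_create("mbuf_pool", 8192, 256, 0,
> > >                                        RTE_MBUF_DEFAULT_BUF_SIZE,
> > >                                        rte_socket_id());
> > >     }
> > >
> > >     /* Step 3: copy a burst of received frames into the guest's RX
> > >      * vring; vid identifies the vhost-user device, queue 0 is the
> > >      * guest RX ring. */
> > >     uint16_t to_guest(int vid, struct rte_mbuf **pkts, uint16_t n)
> > >     {
> > >         return rte_vhost_enqueue_burst(vid, 0, pkts, n);
> > >     }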
> > >
> > > Standard OVS
> > >
> > > 1. The packet is received on a physical port of the device.
> > >
> > > 2. The packet is processed by OVS and transferred to a virtio device
> > > connected to the VM - what is the additional overhead here? QEMU
> > > processing/translation, VM exits, something else?
> > >
> > > 3. The guest OS copies the packet to the user-space application in the VM.
> > >
> > >
> > > Question: what is the additional overhead in the standard OVS setup
> > > that causes its poor performance relative to the OVS-DPDK setup?
> > > I'm not talking about the PMD improvements (OVS-DPDK) running on the
> > > host, but about overhead in the VM context in the standard OVS setup.
> >
> > The primary reasons are that OVS is not using DPDK and that OVS is
> > going through the Linux kernel as well :-)
> >
> > >
> > > Best Regards
> > > avi
> >
> > Regards,
> > Keith
