DPDK usage discussions
* [dpdk-users] OVS vs OVS-DPDK
@ 2017-05-24  8:29 Avi Cohen (A)
  2017-05-24 13:23 ` Wiles, Keith
  0 siblings, 1 reply; 4+ messages in thread
From: Avi Cohen (A) @ 2017-05-24  8:29 UTC (permalink / raw)
  To: users

Hello
Let me ask it in a different way:
I want to understand the reasons for the differences in performance between OVS-DPDK and standard OVS. My setup: OVS/OVS-DPDK runs on the host, communicating with a VM.

OVS-DPDK
1. The packet is received on a physical port of the device.

2. DMA transfer to mempools on huge pages allocated by OVS-DPDK, in user space.

3. OVS-DPDK copies the packet to the shared vring of the associated guest (shared between the OVS-DPDK userspace process and the guest).

4. The guest OS copies the packet to the userspace application in the VM.

Standard OVS

1. The packet is received on a physical port of the device.

2. The packet is processed by OVS and transferred to a virtio device connected to the VM. What is the additional overhead here? QEMU processing (translation)? VM exits? Something else?

3. The guest OS copies the packet to the userspace application in the VM.
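As a rough sanity check on the two packet walks above, here is a toy tally of the per-packet CPU copies and VM exits implied by the steps as listed. The counts are assumptions for illustration (e.g. that a polled vhost-user path needs no exit per packet, while a vhost-net path injects an interrupt), not measurements:

```python
# Toy model: tally per-packet copies and VM exits for the two datapaths
# described above. All counts are illustrative assumptions, not measurements.

OVS_DPDK_PATH = [
    # (step, cpu_copies, vm_exits)
    ("NIC DMA into hugepage mempool (user space)", 0, 0),  # DMA, no CPU copy
    ("OVS-DPDK copies packet into shared vring",   1, 0),  # PMD polls, no exit
    ("guest copies packet to userspace app",       1, 0),
]

STANDARD_OVS_PATH = [
    ("NIC DMA into kernel sk_buff",                0, 0),
    ("kernel OVS -> vhost-net -> virtio ring",     1, 0),
    ("interrupt injected into the guest",          0, 1),  # VM exit/entry pair
    ("guest copies packet to userspace app",       1, 0),
]

def tally(path):
    """Sum copies and exits over all steps of a path."""
    copies = sum(c for _, c, _ in path)
    exits = sum(e for _, _, e in path)
    return copies, exits

if __name__ == "__main__":
    for name, path in (("OVS-DPDK", OVS_DPDK_PATH),
                       ("standard OVS", STANDARD_OVS_PATH)):
        copies, exits = tally(path)
        print(f"{name}: {copies} CPU copies, {exits} VM exits per packet")
```

Under these assumptions the copy counts are similar; the paths differ mainly in whether the data path triggers exits/interrupts at all.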


Question: what is the additional overhead in standard OVS that causes its poorer performance relative to the OVS-DPDK setup?
I'm not talking about the PMD improvements (OVS-DPDK) running on the host, but about the overhead in the VM context in the standard OVS setup.

Best Regards
avi


* Re: [dpdk-users] OVS vs OVS-DPDK
  2017-05-24  8:29 [dpdk-users] OVS vs OVS-DPDK Avi Cohen (A)
@ 2017-05-24 13:23 ` Wiles, Keith
  2017-05-24 13:51   ` Avi Cohen (A)
  2017-05-25  9:03   ` Avi Cohen (A)
  0 siblings, 2 replies; 4+ messages in thread
From: Wiles, Keith @ 2017-05-24 13:23 UTC (permalink / raw)
  To: Avi Cohen (A); +Cc: users


> On May 24, 2017, at 3:29 AM, Avi Cohen (A) <avi.cohen@huawei.com> wrote:
> [...]

The primary reasons are that standard OVS is not using DPDK and is also going through the Linux kernel :-)

> 
> Best Regards
> avi

Regards,
Keith


* Re: [dpdk-users] OVS vs OVS-DPDK
  2017-05-24 13:23 ` Wiles, Keith
@ 2017-05-24 13:51   ` Avi Cohen (A)
  2017-05-25  9:03   ` Avi Cohen (A)
  1 sibling, 0 replies; 4+ messages in thread
From: Avi Cohen (A) @ 2017-05-24 13:51 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

Thanks Keith for your reply.

I found out that the bottleneck is the VMs, not the OVS/OVS-DPDK running on the host.
The VMs in both setups are unaware of OVS/OVS-DPDK and use their Linux IP stacks.
I found that the performance (e.g. throughput) of VMa - OVS-DPDK - network - OVS-DPDK - VMb is much better than with standard OVS.

I use vhost-user virtio for the OVS-DPDK setup to connect to the VM, and vhost-net for standard OVS.

The reasons for standard OVS's poorer performance could be, for example:

1. The number of packet copies on the path NIC - OVS - guest OS virtio - application in the guest.

2. An interrupt on every received packet.

3. The number of context switches / VM exits.
etc.

I didn't see any information about these potential causes in the docs.

Best Regards
avi

> -----Original Message-----
> From: Wiles, Keith [mailto:keith.wiles@intel.com]
> Sent: Wednesday, 24 May, 2017 4:23 PM
> To: Avi Cohen (A)
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] OVS vs OVS-DPDK
> 
> 
> > [...]
> 
> The primary reasons are OVS is not using DPDK and OVS is using the Linux
> kernel as well :-)
> 
> Regards,
> Keith


* Re: [dpdk-users] OVS vs OVS-DPDK
  2017-05-24 13:23 ` Wiles, Keith
  2017-05-24 13:51   ` Avi Cohen (A)
@ 2017-05-25  9:03   ` Avi Cohen (A)
  1 sibling, 0 replies; 4+ messages in thread
From: Avi Cohen (A) @ 2017-05-25  9:03 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

I found this article very relevant to this issue:
http://porto.polito.it/2616822/1/2015_Chain_performance.pdf

In particular, it says about the vhost-net interface used with standard OVS: "the transmission of a batch of packets
from a VM causes a VM exit; this means that the CPU stops to execute the guest (i.e., the vCPU thread), and run a piece
of code in the hypervisor, which performs the I/O operation on behalf of the guest. The same happens when an interrupt
has to be "inserted" in the VM, e.g., because vhost has to inform the guest that there are packets to be received. These
VM exits (and the subsequent VM entries) are one of the main causes of overhead in network I/O of VMs"

This is not the case with the vhost-user interface, which allows direct access between the VM and OVS-DPDK and minimizes context switches.
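To make the "direct access" point concrete, here is a minimal single-producer/single-consumer ring in the spirit of a vring: once the memory is shared, enqueue and dequeue are plain loads and stores, so neither side needs a syscall or a VM exit on the data path. This is a simplification; the real virtio ring (descriptor/avail/used rings, kick and interrupt suppression) is more involved:

```python
# Minimal SPSC ring buffer, sketching how a vring-style shared-memory queue
# lets two parties exchange packets with plain memory reads/writes.
# Simplified model, not the actual virtio ring layout.

class Ring:
    def __init__(self, size):
        assert size > 1
        self.buf = [None] * size
        self.head = 0  # next slot the producer writes
        self.tail = 0  # next slot the consumer reads

    def push(self, pkt):
        nxt = (self.head + 1) % len(self.buf)
        if nxt == self.tail:
            return False          # ring full; producer retries later
        self.buf[self.head] = pkt
        self.head = nxt           # publish: consumer now sees the packet
        return True

    def pop(self):
        if self.tail == self.head:
            return None           # ring empty; a polling consumer just retries
        pkt = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        return pkt

ring = Ring(4)
ring.push(b"pkt0")
ring.push(b"pkt1")
print(ring.pop(), ring.pop(), ring.pop())  # b'pkt0' b'pkt1' None
```

With vhost-net, the equivalent of `push` ends in a "kick" that traps to the hypervisor (a VM exit); with vhost-user, the OVS-DPDK PMD simply polls the ring, which is why the exits disappear from the hot path.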
Best Regards
avi



> -----Original Message-----
> From: Avi Cohen (A)
> Sent: Wednesday, 24 May, 2017 4:52 PM
> To: 'Wiles, Keith'
> Cc: users@dpdk.org
> Subject: RE: [dpdk-users] OVS vs OVS-DPDK
> 
> [...]


