Subject: Re: [dpdk-users] OVS vs OVS-DPDK
From: "Avi Cohen (A)"
To: "Wiles, Keith"
CC: "users@dpdk.org"
Date: Thu, 25 May 2017 09:03:14 +0000

I found this article very relevant to this issue:
http://porto.polito.it/2616822/1/2015_Chain_performance.pdf

In particular, it says the following about the vhost-net interface used with standard OVS:

"the transmission of a batch of packets from a VM causes a VM exit; this means that the CPU stops to execute the guest (i.e., the vCPU thread), and run a piece of code in the hypervisor, which performs the I/O operation on behalf of the guest. The same happens when an interrupt has to be "inserted" in the VM, e.g., because vhost has to inform the guest that there are packets to be received. These VM exits (and the subsequent VM entries) are one of the main causes of overhead in network I/O of VMs"

This is not the case with the vhost-user interface, which allows direct access between the VM and OVS-DPDK and minimizes context switches.
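To make that concrete, below is a rough C sketch (not the actual OVS-DPDK code; 'port', 'vid' and the queue index are placeholders for an already-configured ethdev port and an already-connected vhost-user device, and header/signature names differ slightly between DPDK releases) of the kind of user-space hand-off the DPDK vhost library performs for a vhost-user port:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_vhost.h>   /* rte_virtio_net.h on older DPDK releases */

#define BURST_SIZE 32

void
forward_nic_to_guest(uint16_t port, int vid)
{
    struct rte_mbuf *pkts[BURST_SIZE];

    for (;;) {
        /* Poll the NIC: the mbufs point into hugepage memory that the
         * NIC DMAs into, so no interrupt and no kernel copy here. */
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, pkts, BURST_SIZE);
        if (nb_rx == 0)
            continue;

        /* Copy the payloads into the guest's shared vring
         * (virtqueue 0 = the guest's first RX queue). */
        rte_vhost_enqueue_burst(vid, 0, pkts, nb_rx);

        /* The data was copied into the ring, so the mbufs go back
         * to the pool. */
        for (uint16_t i = 0; i < nb_rx; i++)
            rte_pktmbuf_free(pkts[i]);
    }
}

Both the NIC poll and the copy into the guest's vring happen in the same user-space PMD thread, so no syscall and no VM exit is needed on the host side of this path; with vhost-net the equivalent hand-off goes through the kernel and the KVM notification machinery.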
Best Regards
avi

> -----Original Message-----
> From: Avi Cohen (A)
> Sent: Wednesday, 24 May, 2017 4:52 PM
> To: 'Wiles, Keith'
> Cc: users@dpdk.org
> Subject: RE: [dpdk-users] OVS vs OVS-DPDK
>
> Thanks Keith for your reply.
>
> I found out that the bottleneck is the VMs, not the OVS/OVS-DPDK running in the host.
> The VMs in both setups are unaware of OVS/OVS-DPDK and use their Linux IP stack.
> I found that the performance (e.g. throughput) of VMa - OVS-DPDK - network - OVS-DPDK - VMb is much better than with standard OVS.
>
> I use vhost-user virtio in the OVS-DPDK setup to connect to the VM, and vhost-net for the standard OVS.
>
> The reasons for the poor performance of standard OVS can be, for example:
>
> 1. The number of packet copies on the path NIC - OVS - guest OS virtio - application on the guest
>
> 2. An interrupt upon receiving a packet
>
> 3. The number of context switches / VM exits
> etc.
>
> I didn't see any info regarding these potential reasons in the docs.
>
> Best Regards
> avi
>
> > -----Original Message-----
> > From: Wiles, Keith [mailto:keith.wiles@intel.com]
> > Sent: Wednesday, 24 May, 2017 4:23 PM
> > To: Avi Cohen (A)
> > Cc: users@dpdk.org
> > Subject: Re: [dpdk-users] OVS vs OVS-DPDK
> >
> >
> > > On May 24, 2017, at 3:29 AM, Avi Cohen (A) wrote:
> > >
> > > Hello
> > > Let me ask it in a different way:
> > > I want to understand the reasons for the differences in performance between OVS-DPDK and standard OVS. My setup is: OVS/OVS-DPDK running on the host, communicating with a VM.
> > >
> > > OVS-DPDK
> > > 1. The packet is received via a physical port on the device.
> > >
> > > 2. DMA transfer to mempools on huge pages allocated by OVS-DPDK - in user space.
> > >
> > > 3. OVS-DPDK copies this packet to the shared vring of the associated guest (shared between the OVS-DPDK userspace process and the guest).
> > >
> > > 4. The guest OS copies the packet to the userspace application in the VM.
> > >
> > > Standard OVS
> > >
> > > 1. The packet is received via a physical port on the device.
> > >
> > > 2. The packet is processed by OVS and transferred to a virtio device connected to the VM - what is the additional overhead here? QEMU processing - translation, VM exit? Other?
> > >
> > > 3. The guest OS copies the packet to the userspace application in the VM.
> > >
> > >
> > > Question: what is the additional overhead in the standard OVS setup that causes its poor performance relative to the OVS-DPDK setup?
> > > I'm not talking about the PMD improvements (OVS-DPDK) running on the host - but about overhead in the VM context in the standard OVS setup.
> >
> > The primary reasons are OVS is not using DPDK and OVS is using the Linux kernel as well :-)
> >
> > >
> > > Best Regards
> > > avi
> >
> > Regards,
> > Keith
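For the "DMA transfer to mempools on huge pages" step in the OVS-DPDK list quoted above, a simplified initialization sketch is shown below (again illustrative only, not OVS's code; the port number, pool sizes and names are made up, and exact signatures vary a bit across DPDK releases). The point is that the receive descriptors are fed from a user-space hugepage pool, so frames land directly in the process that switches them, instead of in kernel sk_buffs as in the standard OVS path:

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define NB_MBUFS   8191
#define CACHE_SIZE 250
#define RING_SIZE  1024

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)           /* maps hugepages, probes NICs */
        return -1;

    /* Packet buffers are carved out of hugepage memory owned by this
     * user-space process. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("rx_pool", NB_MBUFS,
            CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        return -1;

    uint16_t port = 0;                          /* first DPDK-bound NIC port */
    struct rte_eth_conf conf = { 0 };

    /* The RX queue's descriptors point into the hugepage pool, so the
     * NIC DMAs received frames straight into user-space mbufs. */
    if (rte_eth_dev_configure(port, 1, 1, &conf) < 0 ||
        rte_eth_rx_queue_setup(port, 0, RING_SIZE,
                               rte_eth_dev_socket_id(port), NULL, pool) < 0 ||
        rte_eth_tx_queue_setup(port, 0, RING_SIZE,
                               rte_eth_dev_socket_id(port), NULL) < 0 ||
        rte_eth_dev_start(port) < 0)
        return -1;

    /* ... a PMD thread would now poll rte_eth_rx_burst() on this port ... */
    return 0;
}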