From: "Avi Cohen (A)"
To: users@dpdk.org
Subject: [dpdk-users] OVS vs OVS-DPDK
Date: Wed, 24 May 2017 08:29:43 +0000
List-Id: DPDK usage discussions

Hello,

Let me ask it in a different way: I want to understand the reasons for the performance difference between OVS-DPDK and standard OVS.

My setup: OVS/OVS-DPDK is running on the host, communicating with a VM.

OVS-DPDK
1. The packet is received on the physical port of the device.
2. DMA transfer into mempools on huge pages allocated by OVS-DPDK, in user space.
3. OVS-DPDK copies the packet into the shared vring of the associated guest (the vring is shared between the OVS-DPDK userspace process and the guest).
4. The guest OS copies the packet to the userspace application in the VM.

Standard OVS
1. The packet is received on the physical port of the device.
2. The packet is processed by OVS and transferred to a virtio device connected to the VM. What are the additional overheads here? QEMU processing, address translation, VM exits? Something else?
3. The guest OS copies the packet to the userspace application in the VM.

Question: what are the additional overheads in the standard OVS path that cause its poor performance relative to the OVS-DPDK setup?
I'm not talking about the PMD improvements (OVS-DPDK) running on the host, but about the overhead in the VM context in the standard OVS setup.

Best Regards,
avi
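For reference, the OVS-DPDK path in my question (steps 2 and 3) corresponds to a setup along these lines — a minimal sketch, where the bridge and port names (br0, dpdk0, vhost-user1) are just examples and the exact interface options vary with the OVS/DPDK version:

```shell
# Create a userspace (netdev) bridge so the DPDK datapath is used
# instead of the kernel datapath.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# Attach the physical NIC as a DPDK port; received packets are
# DMA-transferred into hugepage-backed mempools in user space.
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk

# Add a vhost-user port; QEMU connects to its socket, and the guest's
# virtio rings end up in memory shared with the ovs-vswitchd process,
# so the switch can copy packets directly into the guest's vring.
ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
```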