From: "Patel, Rashmin N"
To: Vincent JARDIN, dev@dpdk.org
Date: Sat, 5 Oct 2013 07:40:33 +0000
Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine

Vincent, I agree with your explanation: when you move from a regular interrupt-based driver to a PMD, you definitely get better performance in a Linux guest until you reach the bottleneck of the vSwitch. But I said "optimal performance benefit having PMD", which is only possible if the vmxnet3 backend driver has support for a vmxnet3-PMD frontend driver inside the guest; you never know whether VMware will add support for that, but the option remains OPEN.

The motive for having a PMD for para-virtual devices is to get performance close to a shared-memory solution while still supporting standard devices, I believe. Being at Intel, we strive for the optimal solution; pardon me if I created any confusion.

Thanks,
Rashmin

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Vincent JARDIN
Sent: Friday, October 04, 2013 11:33 PM
To: dev@dpdk.org
Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine

I disagree, Rashmin. We did measurements with 64-byte packets: the Linux kernel of the guest is the bottleneck, so the vmxnet3 PMD helps to increase the packet rate of the Linux guests.

The PMD helps the packet rate within the guest until you reach (of course) the bottleneck of the host's vSwitch.

In order to accelerate the host's vSwitch, you have to run a fast-path-based vSwitch on the host too.

Best regards,
Vincent

On 04/10/2013 23:36, Selvaganapathy Chidambaram wrote:
> Thanks Rashmin for your time and help!
>
> So it looks like with the given hardware config, we could probably
> only achieve around 8 Gbps in the VM without using SR-IOV. Once DPDK is
> used in the vSwitch design, we could gain more performance.
>
> Thanks,
> Selvaganapathy.C.
>
> On Fri, Oct 4, 2013 at 11:02 AM, Patel, Rashmin N wrote:
>
>> Correction: "you would NOT get optimal performance benefit having PMD"
>>
>> Thanks,
>> Rashmin
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Patel, Rashmin N
>> Sent: Friday, October 04, 2013 10:47 AM
>> To: Selvaganapathy Chidambaram
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine
>>
>> Hi,
>>
>> If you are not using SR-IOV or direct device assignment to the VM, your
>> traffic hits the vSwitch (via the VMware native ixgbe driver and network
>> stack) in ESX and is switched to the E1000/VMXNET3 interface
>> connected to your VM. The vSwitch is not optimized for PMD at present, so
>> you would get optimal performance benefit having PMD, I believe.
>>
>> On the RSS front, I would say you won't see much difference with RSS
>> enabled for 1500-byte frames. In fact, a core is capable of handling
>> such traffic in the VM, but the bottleneck is in the ESXi software
>> switching layer; that's what my initial research shows across multiple
>> hypervisors.
>>
>> Thanks,
>> Rashmin
>>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Selvaganapathy
>> Chidambaram
>> Sent: Thursday, October 03, 2013 2:39 PM
>> To: dev@dpdk.org
>> Subject: [dpdk-dev] L2fwd Performance issue with Virtual Machine
>>
>> Hello Everyone,
>>
>> I have tried to run the DPDK sample application l2fwd (modified to support
>> multiple queues) in my ESX virtual machine. I see that performance is
>> not scaling with cores. [My apologies for the long email]
>>
>> *Setup:*
>>
>> Connected the VM to two ports of a Spirent with a 10 Gig link. Sent 10 Gbps
>> of L3 traffic with 1500-byte packets (in four different flows) from the
>> Spirent through one port and received it on the second port. Also sent
>> traffic in the reverse direction so that net traffic is 20 Gbps. Haven't
>> enabled SR-IOV or Direct Path I/O.
>>
>> *Emulated Driver:*
>>
>> With the default emulated driver, I got 7.3 Gbps for 1 core. Adding
>> multiple cores did not improve the performance. On debugging I
>> noticed that the function eth_em_infos_get() says RSS is not supported.
>>
>> *vmxnet3_usermap:*
>>
>> Then I tried the vmxnet3_usermap extension and got 8.7 Gbps for 1 core.
>> Again, adding another core did not help. On debugging, I noticed that
>> in the vmxnet3 kernel driver (in the function vmxnet3_probe_device), RSS is
>> disabled if *adapter->is_shm* is non-zero. In our case it is set to
>> VMXNET3_SHM_USERMAP_DRIVER, which is non-zero.
>>
>> Before trying to enable it, I would like to know if there is any
>> known limitation why RSS is not enabled in either driver. Please
>> help me understand.
>>
>> *Hardware Configuration:*
>> Hardware : Intel Xeon 2.4 GHz, 4 CPUs
>> Hyperthreading : No
>> RAM : 16 GB
>> Hypervisor : ESXi 5.1
>> Ethernet : Intel 82599EB 10 Gig SFP
>>
>> Guest VM : 2 vCPU, 2 GB RAM
>> Guest OS : CentOS 6.2 32-bit
>>
>> Thanks in advance for your time and help!!!
>>
>> Thanks,
>> Selva.
>>
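
For reference, the multi-queue question above comes down to how the port is configured through the DPDK ethdev API. Below is a minimal sketch, assuming the DPDK 1.x-era API that was current at the time; the queue counts and the helper name configure_port_with_rss are illustrative and not taken from the thread. If the underlying PMD (for example the emulated e1000 "em" driver) does not implement RSS, this request has no effect or the configure call fails, which matches what eth_em_infos_get() reports.

    /* Sketch only: request RSS so received flows are hashed across
     * multiple RX queues, one per polling lcore. Assumes DPDK 1.x-era
     * ethdev API; names below are illustrative. */
    #include <string.h>
    #include <rte_ethdev.h>

    #define NB_RX_QUEUES 2   /* illustrative: one RX queue per lcore */
    #define NB_TX_QUEUES 2

    static int
    configure_port_with_rss(uint8_t port_id)
    {
        struct rte_eth_conf port_conf;

        memset(&port_conf, 0, sizeof(port_conf));

        /* Ask the PMD to spread incoming IP traffic across the RX queues. */
        port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
        port_conf.rx_adv_conf.rss_conf.rss_key = NULL;  /* default hash key */
        port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IPV4 | ETH_RSS_IPV6;

        /* Returns an error if the driver cannot honor the configuration. */
        return rte_eth_dev_configure(port_id, NB_RX_QUEUES, NB_TX_QUEUES,
                                     &port_conf);
    }

Each RX queue would then be polled from its own lcore in the l2fwd loop; throughput can only scale with cores once the driver actually spreads the flows this way.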