From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Patel, Rashmin N"
To: Selvaganapathy Chidambaram
Cc: "dev@dpdk.org"
Date: Fri, 4 Oct 2013 17:46:39 +0000
Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine
List-Id: patches and discussions about DPDK

Hi,

If you are not using SR-IOV or direct device assignment to the VM, your traffic hits the vSwitch (via the VMware native ixgbe driver and network stack) in the ESX host and is then switched to the E1000/VMXNET3 interface connected to your VM. The vSwitch is not optimized for a PMD at present, so I believe you would not see the full performance benefit of the PMD.

On the RSS front, I would say you won't see much difference with RSS enabled for 1500-byte frames. The core is capable of handling such traffic in a VM; the bottleneck is the ESXi software switching layer. That is what my initial research across multiple hypervisors shows.

Thanks,
Rashmin

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Selvaganapathy Chidambaram
Sent: Thursday, October 03, 2013 2:39 PM
To: dev@dpdk.org
Subject: [dpdk-dev] L2fwd Performance issue with Virtual Machine

Hello Everyone,

I have tried to run the DPDK sample application l2fwd (modified to support multiple queues; the gist of the change is sketched further below) in my ESX virtual machine, and I see that performance does not scale with cores. [My apologies for the long email.]

*Setup:*
Connected the VM to two ports of a Spirent tester over 10 Gig links. Sent 10 Gbps of L3 traffic with 1500-byte packets (four different flows) from the Spirent through one port and received it on the second port. Also sent traffic in the reverse direction, so the net traffic is 20 Gbps. SR-IOV and DirectPath I/O are not enabled.

*Emulated Driver:*
With the default emulated driver, I got 7.3 Gbps with 1 core. Adding more cores did not improve performance. On debugging, I noticed that the function eth_em_infos_get() reports that RSS is not supported.

*vmxnet3_usermap:*
Then I tried the vmxnet3_usermap extension and got 8.7 Gbps with 1 core. Again, adding another core did not help. On debugging, I noticed that in the vmxnet3 kernel driver (in the function vmxnet3_probe_device), RSS is disabled if *adapter->is_shm* is non-zero. In our case, is_shm is set to VMXNET3_SHM_USERMAP_DRIVER, which is non-zero.
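In other words, the probe path effectively does something like this (my paraphrase of the logic described above, not verbatim driver source):

    /* Paraphrase of the vmxnet3_probe_device() behavior described above;
     * field and helper names other than is_shm are placeholders. */
    if (adapter->is_shm) {
            /* Shared-memory mode: is_shm was set to
             * VMXNET3_SHM_USERMAP_DRIVER (non-zero), so RSS is
             * unconditionally turned off for the device. */
            disable_rss(adapter);   /* hypothetical helper */
    }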
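For completeness, here is roughly what my multi-queue change looks like on the DPDK side: a minimal sketch assuming DPDK-1.x-era structure and macro names, not the exact l2fwd patch.

    #include <string.h>
    #include <rte_ethdev.h>

    /* Configure a port with nb_rxq RX queues and ask the NIC to
     * spread flows across them via RSS. A sketch, not the exact
     * l2fwd modification. */
    static int
    configure_port_rss(uint8_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_conf port_conf;

            /* Ask the PMD how many RX queues it supports; a PMD
             * without RSS support (such as the emulated e1000 "em"
             * PMD) reports a single queue here, so extra cores
             * cannot help. */
            rte_eth_dev_info_get(port_id, &dev_info);
            if (dev_info.max_rx_queues < nb_rxq)
                    return -1;

            memset(&port_conf, 0, sizeof(port_conf));
            port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
            port_conf.rx_adv_conf.rss_conf.rss_key = NULL; /* default key */
            port_conf.rx_adv_conf.rss_conf.rss_hf =
                    ETH_RSS_IPV4 | ETH_RSS_IPV6;   /* hash on IP fields */

            return rte_eth_dev_configure(port_id, nb_rxq, nb_txq,
                                         &port_conf);
    }

Each of those RX queues is then set up with rte_eth_rx_queue_setup() and polled from its own lcore.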
Before trying to enable it, I would like to know whether there is any known limitation that keeps RSS disabled in both drivers. Please help me understand.

*Hardware Configuration:*
Hardware       : Intel Xeon 2.4 GHz, 4 CPUs
Hyperthreading : No
RAM            : 16 GB
Hypervisor     : ESXi 5.1
Ethernet       : Intel 82599EB 10 Gig SFP
Guest VM       : 2 vCPU, 2 GB RAM
Guest OS       : CentOS 6.2, 32-bit

Thanks in advance for your time and help!

Thanks,
Selva.