Message-ID: <524FB293.4080209@6wind.com>
Date: Sat, 05 Oct 2013 08:32:51 +0200
From: Vincent JARDIN
Organization: www.6wind.com
To: dev@dpdk.org
Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine

I disagree, Rashmin.

We did measurements with 64-byte packets: the Linux kernel of the guest is
the bottleneck, so the vmxnet3 PMD helps to increase the packet rate of the
Linux guests. The PMD helps the guest's packet rate until you reach (of
course) the bottleneck of the host's vSwitch. In order to accelerate the
host's vSwitch, you have to run a fast-path-based vSwitch on the host too.

Best regards,
Vincent

On 04/10/2013 23:36, Selvaganapathy Chidambaram wrote:
> Thanks Rashmin for your time and help!
>
> So it looks like with the given hardware config, we could probably only
> achieve around 8 Gbps in the VM without using SR-IOV. Once DPDK is used in
> the vSwitch design, we could gain more performance.
>
> Thanks,
> Selvaganapathy.C.
>
> On Fri, Oct 4, 2013 at 11:02 AM, Patel, Rashmin N wrote:
>
>> Correction: "you would NOT get optimal performance benefit having PMD"
>>
>> Thanks,
>> Rashmin
>>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Patel, Rashmin N
>> Sent: Friday, October 04, 2013 10:47 AM
>> To: Selvaganapathy Chidambaram
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine
>>
>> Hi,
>>
>> If you are not using SR-IOV or direct device assignment to the VM, your
>> traffic hits the vSwitch (via the VMware native ixgbe driver and network
>> stack) in ESX and is switched to your E1000/VMXNET3 interface connected
>> to the VM.
>> The vSwitch is not optimized for PMD at present, so you would get optimal
>> performance benefit having PMD, I believe.
>>
>> On the RSS front, I would say you won't see much difference with RSS
>> enabled for 1500-byte frames. In fact, a single core is capable of
>> handling such traffic in the VM, but the bottleneck is in the ESXi
>> software switching layer; that's what my initial research shows across
>> multiple hypervisors.
>>
>> Thanks,
>> Rashmin
>>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Selvaganapathy
>> Chidambaram
>> Sent: Thursday, October 03, 2013 2:39 PM
>> To: dev@dpdk.org
>> Subject: [dpdk-dev] L2fwd Performance issue with Virtual Machine
>>
>> Hello Everyone,
>>
>> I have tried to run the DPDK sample application l2fwd (modified to support
>> multiple queues) in my ESX virtual machine. I see that performance does not
>> scale with cores. [My apologies for the long email]
>>
>> *Setup:*
>>
>> Connected the VM to two ports of a Spirent tester with 10 Gig links. Sent
>> 10 Gig traffic of L3 packets of length 1500 bytes (with four different
>> flows) from the Spirent through one port and received it at the second
>> port. Also sent traffic in the reverse direction so that the net traffic
>> is 20 Gbps. Haven't enabled SR-IOV or DirectPath I/O.
>>
>> *Emulated Driver:*
>>
>> With the default emulated driver, I got 7.3 Gbps for 1 core. Adding more
>> cores did not improve the performance. On debugging I noticed that the
>> function eth_em_infos_get() says RSS is not supported.
>>
>> *vmxnet3_usermap:*
>>
>> Then I tried the vmxnet3_usermap extension and got 8.7 Gbps for 1 core.
>> Again, adding another core did not help. On debugging, I noticed that in
>> the vmxnet3 kernel driver (in the function vmxnet3_probe_device), RSS is
>> disabled if *adapter->is_shm* is non-zero. In our case it is set to
>> VMXNET3_SHM_USERMAP_DRIVER, which is non-zero.
>>
>> Before trying to enable it, I would like to know if there is any known
>> limitation why RSS is not enabled in either driver. Please help me
>> understand.
>>
>> *Hardware Configuration:*
>> Hardware       : Intel Xeon 2.4 GHz, 4 CPUs
>> Hyperthreading : No
>> RAM            : 16 GB
>> Hypervisor     : ESXi 5.1
>> Ethernet       : Intel 82599EB 10 Gig SFP
>>
>> Guest VM : 2 vCPU, 2 GB RAM
>> Guest OS : CentOS 6.2, 32-bit
>>
>> Thanks in advance for your time and help!!!
>>
>> Thanks,
>> Selva.
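
[Editor's note] Since the thread centers on whether RSS can spread the guest's
receive traffic across multiple queues and cores, a short illustration of how
an application asks the DPDK ethdev layer for RSS may help. This is only a
minimal sketch of the generic API, not the actual l2fwd modification discussed
above: the function name setup_rss_port, the port id, queue counts, and
descriptor sizes are placeholders, and older releases (such as the DPDK 1.x
in use in 2013) spell some of these fields and constants slightly differently.
If the PMD reports RSS as unsupported (as eth_em_infos_get() does for the
emulated e1000 device), this request will not spread flows across queues.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Sketch: configure one port with several RX queues and request RSS so that
 * distinct flows (e.g. the four test flows mentioned above) land on
 * different queues, each polled by its own lcore. */
static int
setup_rss_port(uint16_t port_id, uint16_t nb_rx_queues,
               struct rte_mempool *mb_pool)
{
	struct rte_eth_conf port_conf;
	struct rte_eth_dev_info dev_info;
	uint16_t q;
	int ret;

	/* Ask the driver what it supports before configuring. */
	rte_eth_dev_info_get(port_id, &dev_info);
	if (nb_rx_queues > dev_info.max_rx_queues)
		nb_rx_queues = dev_info.max_rx_queues;

	/* Request RSS and hash on IP addresses. */
	memset(&port_conf, 0, sizeof(port_conf));
	port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
	port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;

	ret = rte_eth_dev_configure(port_id, nb_rx_queues, 1, &port_conf);
	if (ret < 0)
		return ret;

	/* One RX queue per worker core, all fed from the same mempool. */
	for (q = 0; q < nb_rx_queues; q++) {
		ret = rte_eth_rx_queue_setup(port_id, q, 128,
					     rte_eth_dev_socket_id(port_id),
					     NULL, mb_pool);
		if (ret < 0)
			return ret;
	}

	ret = rte_eth_tx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;

	return rte_eth_dev_start(port_id);
}

Even when such a configuration succeeds in the guest, the point made by
Vincent and Rashmin above still applies: once the guest consumes packets
faster than the host's vSwitch can deliver them, the bottleneck moves to the
hypervisor's switching layer, and only a faster (fast-path) vSwitch on the
host raises the ceiling further.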