From: Selvaganapathy Chidambaram
To: "Patel, Rashmin N"
Cc: "dev@dpdk.org"
Date: Fri, 4 Oct 2013 14:36:44 -0700
Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine

Thanks Rashmin for your time and help!

So it looks like, with the given hardware config, we could probably only
achieve around 8 Gbps in the VM without using SR-IOV. Once DPDK is used in
the vSwitch design, we could gain more performance.

Thanks,
Selvaganapathy.C.

On Fri, Oct 4, 2013 at 11:02 AM, Patel, Rashmin N wrote:

> Correction: "you would NOT get optimal performance benefit having PMD"
>
> Thanks,
> Rashmin
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Patel, Rashmin N
> Sent: Friday, October 04, 2013 10:47 AM
> To: Selvaganapathy Chidambaram
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] L2fwd Performance issue with Virtual Machine
>
> Hi,
>
> If you are not using SR-IOV or direct device assignment to the VM, your
> traffic hits the vSwitch (via the VMware native ixgbe driver and network
> stack) in ESX and is then switched to the E1000/VMXNET3 interface
> connected to your VM. The vSwitch is not optimized for PMD at present,
> so you would not get the optimal performance benefit of having a PMD, I
> believe.
>
> On the RSS front, I would say you won't see much difference with RSS
> enabled for 1500-byte frames. In fact, a core is capable of handling
> such traffic in a VM; the bottleneck is the ESXi software switching
> layer. That is what my initial research shows across multiple
> hypervisors.
>
> Thanks,
> Rashmin
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Selvaganapathy
> Chidambaram
> Sent: Thursday, October 03, 2013 2:39 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] L2fwd Performance issue with Virtual Machine
>
> Hello Everyone,
>
> I have tried to run the DPDK sample application l2fwd (modified to
> support multiple queues) in my ESX virtual machine, and I see that the
> performance does not scale with cores. [My apologies for the long email.]
>
> *Setup:*
>
> Connected the VM to two ports of a Spirent tester with 10 Gig links.
> Sent 10 Gig traffic of L3 packets of length 1500 bytes (with four
> different flows) from the Spirent through one port and received it on
> the second port. Also sent traffic in the reverse direction, so that the
> net traffic is 20 Gbps. Haven't enabled SR-IOV or DirectPath I/O.
>
> *Emulated Driver:*
>
> With the default emulated driver I got 7.3 Gbps with 1 core. Adding
> multiple cores did not improve the performance. On debugging I noticed
> that the function eth_em_infos_get() reports that RSS is not supported.
>
> *vmxnet3_usermap:*
>
> Then I tried the vmxnet3_usermap extension and got 8.7 Gbps with 1 core.
> Again, adding another core did not help. On debugging I noticed that in
> the vmxnet3 kernel driver (in the function vmxnet3_probe_device), RSS is
> disabled if *adapter->is_shm* is non-zero. In our case it is set to
> VMXNET3_SHM_USERMAP_DRIVER, which is non-zero.
>
> Before trying to enable it, I would like to know whether there is any
> known limitation that explains why RSS is not enabled in either driver.
> Please help me understand.
>
> *Hardware Configuration:*
> Hardware      : Intel Xeon 2.4 GHz, 4 CPUs
> Hyperthreading: No
> RAM           : 16 GB
> Hypervisor    : ESXi 5.1
> Ethernet      : Intel 82599EB 10 Gig SFP
>
> Guest VM : 2 vCPU, 2 GB RAM
> Guest OS : CentOS 6.2, 32-bit
>
> Thanks in advance for your time and help!!!
>
> Thanks,
> Selva.
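
For reference, a minimal sketch of the kind of multi-queue/RSS port setup
discussed in this thread, assuming the standard DPDK ethdev API. This is
not the actual modified l2fwd code: configure_port_with_rss() is just an
illustrative helper name, and the exact macro and field names vary between
DPDK releases.

    /*
     * Minimal sketch: configure one port with nb_queues RX/TX queues and
     * RSS, after checking whether the PMD reports multi-queue support at
     * all. Illustrative only, not taken from the patch in this thread.
     */
    #include <string.h>
    #include <rte_ethdev.h>

    static int
    configure_port_with_rss(uint16_t port_id, uint16_t nb_queues)
    {
            struct rte_eth_dev_info dev_info;
            struct rte_eth_conf port_conf;

            memset(&port_conf, 0, sizeof(port_conf));
            /* Distribute received packets across the RX queues by hashing
             * the IPv4 header; NULL means use the PMD's default RSS key. */
            port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
            port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
            port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IPV4;

            /* A PMD without RSS support (e.g. the emulated e1000 "em"
             * driver) typically reports max_rx_queues == 1 here, so any
             * additional queues and cores would never see traffic. */
            rte_eth_dev_info_get(port_id, &dev_info);
            if (dev_info.max_rx_queues < nb_queues)
                    return -1;

            return rte_eth_dev_configure(port_id, nb_queues, nb_queues,
                                         &port_conf);
    }

Each RX queue would then be polled from its own lcore; throughput can only
scale with cores when both the PMD and, in a VM, the underlying virtual
device actually support RSS, which is what the checks above (and the
observations in this thread) are about.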