From: Selvaganapathy Chidambaram
To: dev@dpdk.org
Date: Thu, 3 Oct 2013 14:39:15 -0700
Subject: [dpdk-dev] L2fwd Performance issue with Virtual Machine

Hello Everyone,

I have tried to run the DPDK sample application l2fwd (modified to support multiple queues) in my ESX virtual machine, and I see that performance does not scale with cores. [My apologies for the long email]

*Setup:*
Connected the VM to two ports of a Spirent traffic generator over 10 Gig links.
Sent 10 Gbps of traffic, L3 packets of length 1500 bytes (with four different flows), from the Spirent through one port and received it on the second port. Also sent traffic in the reverse direction, so the net traffic is 20 Gbps. SR-IOV and DirectPath I/O are not enabled.

*Emulated Driver:*
With the default emulated driver, I got 7.3 Gbps with 1 core. Adding more cores did not improve performance. On debugging, I noticed that eth_em_infos_get() reports that RSS is not supported.

*vmxnet3_usermap:*
I then tried the vmxnet3_usermap extension and got 8.7 Gbps with 1 core. Again, adding another core did not help. On debugging, I noticed that the vmxnet3 kernel driver (in function vmxnet3_probe_device) disables RSS if adapter->is_shm is non-zero; in our case it is VMXNET3_SHM_USERMAP_DRIVER, which is non-zero.

Before trying to enable it, I would like to know whether there is any known limitation that keeps RSS disabled in both drivers. Please help me understand.

*Hardware Configuration:*
Hardware: Intel Xeon 2.4 GHz, 4 CPUs
Hyperthreading: No
RAM: 16 GB
Hypervisor: ESXi 5.1
Ethernet: Intel 82599EB 10 Gig SFP
Guest VM: 2 vCPUs, 2 GB RAM
Guest OS: CentOS 6.2, 32-bit

Thanks in advance for your time and help!

Thanks,
Selva.
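For context on the multi-queue question above: in DPDK applications of this era, RSS is requested through the rte_eth_conf structure passed to rte_eth_dev_configure(). Below is a minimal sketch of such a configuration; the macro and field names (ETH_MQ_RX_RSS, ETH_RSS_IPV4, and the rx_adv_conf layout) are assumptions based on the 1.x-era API and may differ in your exact DPDK tree. If the PMD does not advertise RSS support (as eth_em_infos_get() indicates for the emulated driver), this configuration has no effect, which would explain why adding cores gives no speedup.

```c
/* Sketch only: RSS-enabled port configuration for a multi-queue DPDK
 * app, assuming the 1.x-era rte_ethdev API. Verify the field and macro
 * names against the headers in your tree before relying on this. */
#include <rte_ethdev.h>

static const struct rte_eth_conf port_conf = {
    .rxmode = {
        .mq_mode = ETH_MQ_RX_RSS,    /* spread incoming flows across RX queues */
    },
    .rx_adv_conf = {
        .rss_conf = {
            .rss_key = NULL,         /* NULL: let the PMD use its default key */
            .rss_hf  = ETH_RSS_IPV4, /* hash on IPv4 header fields, so the
                                      * four test flows land on different queues */
        },
    },
};
```

With four distinct flows and a hash over the IPv4 header, one would expect the flows to be distributed across up to four RX queues, each serviced by its own lcore; a driver that silently disables RSS collapses everything back onto queue 0.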