From: "Patel, Rashmin N"
To: Matthew Hall, "Vass, Sandor (Nokia - HU/Budapest)"
Cc: "dev@dpdk.org"
Date: Thu, 25 Jun 2015 20:56:16 +0000
Subject: Re: [dpdk-dev] VMXNET3 on vmware, ping delay

For tuning ESXi and the vSwitch for latency-sensitive workloads, I remember the following paper published by VMware, which you can try out: https://www.vmware.com/files/pdf/techpaper/VMW-Tuning-Latency-Sensitive-Workloads.pdf

The overall latency in this setup (VMware and a DPDK VM using VMXNET3) mostly resides in the vmware-native-driver/vmkernel/vmxnet3-backend/vmx-emulation threads in ESXi. So you can tune ESXi (as explained in the white paper above) and/or make sure these important threads are not starved, which improves latency and, in some cases, throughput for this setup.

Thanks,
Rashmin

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Matthew Hall
Sent: Thursday, June 25, 2015 8:19 AM
To: Vass, Sandor (Nokia - HU/Budapest)
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] VMXNET3 on vmware, ping delay

On Thu, Jun 25, 2015 at 09:14:53AM +0000, Vass, Sandor (Nokia - HU/Budapest) wrote:
> According to my understanding each packet should go through BR as fast
> as possible, but it seems that the rte_eth_rx_burst retrieves packets
> only when there are at least 2 packets on the RX queue of the NIC. At
> least most of the times as there are cases (rarely - according to my
> console log) when it can retrieve 1 packet also and sometimes only 3
> packets can be retrieved...

By default DPDK is optimized for throughput, not latency. Try a test with heavier traffic.

There is also some work going on now for DPDK interrupt-driven mode, which will work more like traditional Ethernet drivers instead of polling-mode Ethernet drivers.
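To make the polling behaviour being discussed concrete, here is a minimal sketch of a DPDK receive/forward loop. It is not the original poster's bridge code: PORT_ID, QUEUE_ID and BURST_SIZE are placeholder values, and EAL/mempool/port setup is omitted. The point is that rte_eth_rx_burst() never blocks; it simply returns however many completed descriptors the vmxnet3 backend has posted at that instant, which can be 0, 1, 2 or more packets per call.

/*
 * Minimal sketch of a DPDK polling loop (assumed names, setup omitted).
 * rte_eth_rx_burst() is non-blocking: it returns whatever the NIC
 * (here the vmxnet3 backend) has completed, from 0 up to BURST_SIZE.
 */
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define PORT_ID    0
#define QUEUE_ID   0
#define BURST_SIZE 32

static void
forward_loop(void)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, nb_tx, i;

	for (;;) {
		/* Non-blocking poll; returns immediately even when empty. */
		nb_rx = rte_eth_rx_burst(PORT_ID, QUEUE_ID, pkts, BURST_SIZE);
		if (nb_rx == 0)
			continue;	/* nothing pending, poll again */

		/* Send the burst back out (same port in this sketch). */
		nb_tx = rte_eth_tx_burst(PORT_ID, QUEUE_ID, pkts, nb_rx);

		/* Drop anything the TX ring could not accept. */
		for (i = nb_tx; i < nb_rx; i++)
			rte_pktmbuf_free(pkts[i]);
	}
}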
Though I'm not an expert on it, there are also a number of ways to optimize for latency, which hopefully some others could discuss... or maybe search the archives / web site / Intel tuning documentation.

Matthew.
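On the ESXi side, the VMware paper referenced earlier in the thread walks through host and per-VM tuning in detail. As one hedged example of the kind of per-VM setting it discusses, virtual interrupt coalescing on the vmxnet3 adapter can be disabled via advanced VM configuration; the option names below are illustrative, assume ethernet0 is the vmxnet3 adapter in question, and should be checked against your ESXi/vSphere version and the paper itself:

# Advanced VM configuration (.vmx) entries, per the VMware latency-tuning guidance;
# verify availability for your ESXi version before relying on them.
ethernet0.coalescingScheme = "disabled"     # disable virtual interrupt coalescing on the vNIC
sched.cpu.latencySensitivity = "high"       # vSphere 5.5+ "Latency Sensitivity = High" setting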