From: "O'driscoll, Tim"
To: Matthew Hall
Cc: "dev@dpdk.org"
Date: Fri, 24 Oct 2014 08:10:40 +0000
Subject: Re: [dpdk-dev] [dpdk-announce] DPDK Features for Q1 2015
Message-ID: <26FA93C7ED1EAA44AB77D62FBE1D27BA54C42AEE@IRSMSX102.ger.corp.intel.com>
In-Reply-To: <20141022192219.GB10424@mhcomputing.net>

> From: Matthew Hall [mailto:mhall@mhcomputing.net]
>
> On Wed, Oct
22, 2014 at 01:48:36PM +0000, O'driscoll, Tim wrote:
> > Single Virtio Driver: Merge existing Virtio drivers into a single
> > implementation, incorporating the best features from each of the
> > existing drivers.
>
> Specifically, in the virtio-net case above, I have discovered, and Sergio at Intel
> just reproduced today, that neither virtio PMD works at all inside of
> VirtualBox. One can't init, and the other gets into an infinite loop. Yet it's
> claiming support for VBox on the DPDK Supported NICs page, though it
> doesn't seem it ever could have worked.

At the moment, within Intel we test with KVM, Xen and ESXi. We've never tested with VirtualBox. So maybe this is an error on the Supported NICs page, or maybe somebody else is testing that configuration.

> So I'd like to request an initiative alongside any virtio-net and/or vmxnet3
> type of changes, to make some kind of a Virtualization Test Lab, where we
> support VMware ESXi, QEMU, Xen, VBox, and the other popular VM
> systems.
>
> Otherwise it's hard for us community / app developers to make the DPDK
> available to end users in simple, elegant ways, such as packaging it into
> Vagrant VMs, Amazon AMIs etc. which are prebaked and ready-to-run.

Expanding the scope of virtualization testing is a good idea, especially given industry trends like NFV. We're in the process of getting our DPDK Test Suite ready to push to dpdk.org soon. The hope is that others will use it to validate changes they're making to DPDK, and contribute test cases so that we can build up a more comprehensive set over time.

One area where this does need further work is virtualization. At the moment, our virtualization tests are manual, so they won't be included in the initial DPDK Test Suite release. We will look into automating our current virtualization tests and adding them to the test suite in future.
> Another thing which would help in this area would be additional
> improvements to the NUMA / socket / core / number of NICs / number of
> queues autodetection. To write a single app which can run on a virtual card,
> a hardware card without RSS available, and a hardware card with RSS
> available, in a thread-safe, flow-safe way, is somewhat complex at the
> present time.
>
> I'm running into this in the VM-based environments because most VNICs
> don't have RSS, and it complicates the process of keeping consistent state of
> the flows among the cores.

This is interesting. Do you have more details on what you're thinking here, that perhaps could be used as the basis for an RFC?

Tim