DPDK patches and discussions
From: Matthew Hall <mhall@mhcomputing.net>
To: "O'driscoll, Tim" <tim.o'driscoll@intel.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [dpdk-announce] DPDK Features for Q1 2015
Date: Wed, 22 Oct 2014 12:22:19 -0700	[thread overview]
Message-ID: <20141022192219.GB10424@mhcomputing.net> (raw)
In-Reply-To: <26FA93C7ED1EAA44AB77D62FBE1D27BA54C361DF@IRSMSX102.ger.corp.intel.com>

On Wed, Oct 22, 2014 at 01:48:36PM +0000, O'driscoll, Tim wrote:
> Single Virtio Driver: Merge existing Virtio drivers into a single 
> implementation, incorporating the best features from each of the existing 
> drivers.

Tim,

There is a lot of good stuff in there.

Specifically, in the virtio-net case above, I have discovered, and Sergio at 
Intel reproduced today, that neither virtio PMD works at all inside 
VirtualBox: one fails to initialize, and the other gets stuck in an infinite 
loop. Yet the DPDK Supported NICs page claims VirtualBox support, even though 
it doesn't seem it could ever have worked.

So I'd like to request an initiative, alongside any virtio-net and/or vmxnet3 
changes, to create some kind of virtualization test lab where we support 
VMware ESXi, QEMU, Xen, VirtualBox, and the other popular VM systems.

Otherwise it's hard for us community / app developers to make DPDK available 
to end users in simple, elegant ways, such as packaging it into Vagrant VMs, 
Amazon AMIs, etc. that are prebaked and ready to run.

Note that personally, of course, I prefer using hardware like the 82599... but 
that hardware only comes into play after customers have begun to adopt and 
test in a virtual environment, decided they like it, and want to scale up to 
bigger boxes.

Another thing which would help in this area would be further improvements to 
the autodetection of NUMA nodes, sockets, cores, number of NICs, and number of 
queues. Writing a single app which can run on a virtual NIC, a hardware NIC 
without RSS available, and a hardware NIC with RSS available, in a 
thread-safe, flow-safe way, is somewhat complex at present.

I'm running into this in VM-based environments because most vNICs don't 
support RSS, which complicates keeping the flow state consistent among the 
cores.
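
To make the shape of the problem concrete, here is a rough sketch (untested; 
the function name and queue-count policy are just placeholders) of the kind of 
capability probing an app ends up doing today: query the device, clamp the 
queue count to what it reports, and only enable RSS when more than one RX 
queue is actually available:

#include <string.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Hypothetical helper: decide how many RX queues to use on a port by
 * asking the device what it supports, and only turn on RSS when the
 * device actually offers more than one RX queue.  On RSS-less vNICs
 * this falls back to a single queue, and the app has to distribute
 * flows to cores in software. */
static int
configure_port_rx(uint8_t port_id, uint16_t wanted_rx_queues)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf;
	uint16_t nb_rx_queues;

	memset(&port_conf, 0, sizeof(port_conf));
	rte_eth_dev_info_get(port_id, &dev_info);

	/* Many vNICs report max_rx_queues == 1. */
	nb_rx_queues = RTE_MIN(wanted_rx_queues, dev_info.max_rx_queues);

	if (nb_rx_queues > 1) {
		/* Hardware can spread flows across queues with RSS. */
		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
		port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
	} else {
		/* No usable RSS: single queue, software distribution. */
		port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
		nb_rx_queues = 1;
	}

	return rte_eth_dev_configure(port_id, nb_rx_queues,
				     nb_rx_queues, &port_conf);
}

In the single-queue case, each worker core still needs its own way of claiming 
flows (e.g. hashing the 5-tuple in software), which is exactly the per-core 
consistency problem described above.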

Thanks,
Matthew.

