DPDK patches and discussions
From: Matthew Hall <mhall@mhcomputing.net>
To: "O'driscoll, Tim" <tim.o'driscoll@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [dpdk-announce] DPDK Features for Q1 2015
Date: Fri, 24 Oct 2014 12:01:26 -0700
Message-ID: <20141024190126.GB29024@mhcomputing.net> (raw)
In-Reply-To: <26FA93C7ED1EAA44AB77D62FBE1D27BA54C42AEE@IRSMSX102.ger.corp.intel.com>

On Fri, Oct 24, 2014 at 08:10:40AM +0000, O'driscoll, Tim wrote:
> At the moment, within Intel we test with KVM, Xen and ESXi. We've never 
> tested with VirtualBox. So, maybe this is an error on the Supported NICs 
> page, or maybe somebody else is testing that configuration.

So, one of the most popular ways developers test out new code these days is 
with Vagrant or Docker. Vagrant creates machines using VirtualBox by default, 
and VirtualBox runs on nearly everything out there (Linux, Windows, OS X, and 
more). Docker uses Linux LXC, so it isn't multiplatform. There is also a 
system called CoreOS, still under development, which requires bare metal with 
a custom Linux on top.

https://www.vagrantup.com/
https://www.docker.com/
https://coreos.com/

As an open source DPDK app developer who previously used DPDK successfully in 
some commercial big-iron projects, I'm now trying to drive adoption of the 
technology among security programmers. I'm doing it because I think DPDK is 
better than everything else I've seen for packet processing.

So it would help drive adoption if there were a multiplatform virtualization 
environment that worked with the best-performing DPDK drivers. Then I could 
make it easy for developers to download, install, and run, so they'll get 
excited, learn more about all the great work you guys did, and use it to 
build more DPDK apps.

I don't care if it's VBox necessarily. But we should support at least one 
end-developer-friendly virtualization environment, so I can make it easy to 
deploy and run an app and get people excited to work with DPDK. A low 
barrier to entry is important.

> One area where this does need further work is in virtualization. At the 
> moment, our virtualization tests are manual, so they won't be included in 
> the initial DPDK Test Suite release. We will look into automating our 
> current virtualization tests and adding these to the test suite in future.

Sounds good. Then we could help you make it work and keep it working on more 
platforms.

> > Another thing which would help in this area would be additional
> > improvements to the NUMA / socket / core / number of NICs / number of
> > queues autodetections. To write a single app which can run on a virtual card,
> > a hardware card without RSS available, and a hardware card with RSS
> > available, in a thread-safe, flow-safe way, is somewhat complex at the
> > present time.
> > 
> > I'm running into this in the VM based environments because most VNIC's
> > don't have RSS and it complicates the process of keeping consistent state of
> > the flows among the cores.
> 
> This is interesting. Do you have more details on what you're thinking here, 
> that perhaps could be used as the basis for an RFC?

It's something I am still trying to figure out how to deal with, actually, 
hence all the virtio-net and PCI bus questions I've been asking on the list 
the last few weeks. It would be good if you had a contact for the virtual 
DPDK work at Intel or 6WIND who could help me figure out the solution 
pattern.

I think it might involve making an app or some DPDK helper code which has 
something like this algorithm:

At load-time, app autodetects if RSS is available or not, and if NUMA is 
present or not.

If RSS is available, and NUMA is not available, enable RSS and create 1 RX 
queue for each lcore.

If RSS is available, and NUMA is available, find the NUMA socket of the NIC, 
and make 1 RX queue for each connected lcore on that NUMA socket.

If RSS is not available, and NUMA is not available, then configure the 
distributor framework. (I never used it, so I'm not sure if this part is 
right.) Run 1 load balancer on the master lcore that does RX from all NICs, 
and hashes up and distributes packets to every other lcore.

If RSS is not available, and NUMA is available, then configure the 
distributor framework. (Again, this might not be right.) Run 1 load balancer 
on the first lcore of each socket that does RX from all NUMA-connected NICs, 
and hashes up and distributes packets to the other NUMA-connected lcores.

> Tim

Thanks,
Matthew.


Thread overview: 19+ messages
2014-10-22 13:48 O'driscoll, Tim
2014-10-22 14:20 ` [dpdk-dev] " Thomas Monjalon
2014-10-22 14:44   ` Zhou, Danny
2014-10-22 15:05     ` Liang, Cunming
2014-10-22 16:10 ` [dpdk-dev] [dpdk-announce] " Luke Gorrie
2014-10-23 12:29   ` O'driscoll, Tim
2014-10-22 19:22 ` Matthew Hall
2014-10-24  8:10   ` O'driscoll, Tim
2014-10-24 10:10     ` Thomas Monjalon
2014-10-24 19:02       ` Matthew Hall
2014-10-24 19:01     ` Matthew Hall [this message]
2014-10-23  3:06 ` Tetsuya Mukawa
2014-10-23 10:04   ` O'driscoll, Tim
2014-10-23  3:17 ` Tetsuya Mukawa
2014-10-23 11:27   ` O'driscoll, Tim
2014-10-31 22:05   ` Xie, Huawei
2014-11-02 12:50     ` Tetsuya Mukawa
2014-10-23 14:18 ` [dpdk-dev] Fwd: " Jay Rolette
2014-10-23 14:52   ` Alex Markuze
