From: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
To: "John Joyce (joycej)" <joycej@cisco.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Testing memnic for VM to VM transfer
Date: Wed, 18 Jun 2014 04:02:20 +0000
Message-ID: <7F861DC0615E0C47A872E6F3C5FCDDBD0111A162@BPXM14GP.gisp.nec.co.jp>
In-Reply-To: <7E47E681F1539348840E7A8F1E8AA02E214A68E5@xmb-aln-x09.cisco.com>
Hi,
> Subject: [dpdk-dev] Testing memnic for VM to VM transfer
>
> Hi everyone:
> We are interested in testing the performance of the memnic driver posted at http://dpdk.org/browse/memnic/refs/.
> We want to compare its performance against other techniques for transferring packets between the guest and the kernel,
> predominantly for VM to VM transfers.
>
> We have downloaded the memnic components and have got it running in a guest VM.
>
> The question we hope this group might be able to help with is: what would be the best way to process the packets in the
> kernel to achieve a VM to VM transfer?
I think there is no kernel code that works with MEMNIC.
The recommended switching software on the host is Intel DPDK vSwitch, hosted on 01.org and GitHub:
https://github.com/01org/dpdk-ovs/tree/development
Intel DPDK vSwitch runs in userspace, not in the kernel.
I introduced this mechanism into DPDK vSwitch, and the guest drivers are maintained at dpdk.org.
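
To give a feel for the guest side: the MEMNIC PMD shows up as an ordinary DPDK port, so the
application only uses the generic ethdev API. Below is just a rough sketch of the setup and a
receive loop; the pool/queue sizes, the port number and the burst size are placeholder values
for illustration, and the exact mbuf-pool setup helpers differ between DPDK releases, so please
follow the example apps shipped with your DPDK version.

  #include <string.h>
  #include <rte_eal.h>
  #include <rte_lcore.h>
  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define NB_MBUF 8192
  #define BURST   32

  int main(int argc, char **argv)
  {
      struct rte_eth_conf conf;
      struct rte_mempool *pool;
      struct rte_mbuf *pkts[BURST];
      uint16_t port = 0;                  /* assume the memnic port is port 0 */

      if (rte_eal_init(argc, argv) < 0)
          return -1;

      /* mbuf pool backing the RX/TX queues */
      pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF, 32, 0,
                                     RTE_MBUF_DEFAULT_BUF_SIZE,
                                     rte_socket_id());
      if (pool == NULL)
          return -1;

      /* default port configuration, one RX and one TX queue */
      memset(&conf, 0, sizeof(conf));
      rte_eth_dev_configure(port, 1, 1, &conf);
      rte_eth_rx_queue_setup(port, 0, 128, rte_eth_dev_socket_id(port),
                             NULL, pool);
      rte_eth_tx_queue_setup(port, 0, 128, rte_eth_dev_socket_id(port),
                             NULL);
      rte_eth_dev_start(port);

      for (;;) {
          uint16_t i, n = rte_eth_rx_burst(port, 0, pkts, BURST);

          for (i = 0; i < n; i++) {
              /* process pkts[i] here, then release it */
              rte_pktmbuf_free(pkts[i]);
          }
      }
      return 0;
  }

The same application runs unchanged whether the port is backed by memnic or by a physical NIC,
which should make the comparison you describe straightforward.
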
thanks,
Hiroshi
>
> A couple options might be possible
>
>
> 1. A common shared buffer between the two VMs, with some utility/code to switch the TX & RX rings between them.
>
> VM1 application --- memnic --- common shared memory buffer on the host --- memnic --- VM2 application
>
> 2. Special purpose Kernel switching module
>
> VM1 application --- memnic --- shared memory VM1 --- Kernel switching module --- shared memory VM2 --- memnic ---
> VM2 application
>
> 3. Existing Kernel switching module
>
> VM1 application --- memnic --- shared memory VM1 --- existing kernel switching module (e.g. OVS / Linux bridge / veth pair)
> --- shared memory VM2 --- memnic --- VM2 application
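
Regarding option 1: if you want to try that purely in userspace without DPDK vSwitch, the
"utility to switch TX & RX rings" is essentially a small forwarding loop between two DPDK ports
on the host, where each port is the host end of one VM's shared memory. Just a rough sketch
with the generic ethdev burst API (the two port numbers and the single queue are assumptions
for illustration, not memnic-specific code):

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define FWD_BURST 32

  /* Drain port_a's RX ring and push the packets to port_b's TX ring.
   * Call this in a tight loop for both directions (a->b and b->a). */
  static void forward_once(uint16_t port_a, uint16_t port_b)
  {
      struct rte_mbuf *bufs[FWD_BURST];
      uint16_t nb_rx, nb_tx;

      nb_rx = rte_eth_rx_burst(port_a, 0, bufs, FWD_BURST);
      if (nb_rx == 0)
          return;

      nb_tx = rte_eth_tx_burst(port_b, 0, bufs, nb_rx);

      /* Drop whatever the TX ring could not accept. */
      while (nb_tx < nb_rx)
          rte_pktmbuf_free(bufs[nb_tx++]);
  }

DPDK vSwitch already does this (plus the actual switching logic), which is why I would start
there rather than writing a new kernel module.
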
>
> Can anyone recommend which approach might be best or easiest? We would like to avoid writing much (or any) kernel code,
> so if there is already any open-source code or test utility that provides one of these options, or would be a good
> starting point, a pointer would be much appreciated.
>
> Thanks in advance
>
>
> John Joyce