DPDK patches and discussions
From: Zhuangyanying <ann.zhuangyanying@huawei.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Cc: gaoxiaoqiu <gaoxiaoqiu@huawei.com>,
	"Zhangbo \(Oscar\)" <oscar.zhangbo@huawei.com>,
	Zhbzg <zhbzg@huawei.com>, Guohongzhen <guohongzhen@huawei.com>,
	Zhoujingbin <zhoujingbin@huawei.com>
Subject: [dpdk-dev] [RFC] combining dpdk with ovs via vhost_net
Date: Thu, 20 Aug 2015 07:49:54 +0000
Message-ID: <EC9759BC1E3E98429B5DE9A03DF86D8B592D8A38@SZXEMA502-MBX.china.huawei.com>

Hi all:
   AFAIK, there is currently only one way to use DPDK inside Docker containers: passing a physical NIC through to the application.
   I'm now working on another solution that connects DPDK to OVS via vhost-net; I call it the "vhost_net PMD driver".
   The detailed solution is as follows:
   1. As in the qemu<->vhost_net setup, a series of ioctl commands makes the virtqueue visible to both vhost_net and the vhost_net PMD driver.
   2. In KVM guests, the tx/rx queues hold GPAs (guest physical addresses), which vhost_net translates into HVAs (host virtual addresses) so that the tap device can copy the packets. A container, however, has no GPAs to put into the tx/rx queues. We therefore fill the queues with HVAs directly and pass an identity (HVA, HVA) mapping table to vhost_net via the VHOST_SET_MEM_TABLE ioctl during initialization; the GPA-to-HVA translation becomes a no-op, and *the vhost_net code stays completely untouched* (see the sketch after this list).
   3. The packet transmit/receive path is exactly the same as in the virtio PMD.
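   To make steps 1 and 2 concrete, below is a minimal, hypothetical C sketch of the initialization. It is not the actual PMD code: the function name vhost_net_share_vring, the single-region memory layout, and the error handling are simplifications for illustration. It claims a /dev/vhost-net instance, installs one identity (HVA, HVA) memory region, and publishes a vring that lives entirely in the container's address space, using the same ioctl sequence QEMU performs for a guest.

#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>
#include <linux/virtio_ring.h>

/* Hand one virtqueue (index `idx`) to vhost_net.  `mem_base`/`mem_len`
 * describe the hugepage region holding the ring and packet buffers;
 * `tap_fd` is the tap device already attached to the OVS bridge.
 * Returns the kick eventfd on success, -1 on error. */
static int vhost_net_share_vring(int idx, struct vring *vr,
                                 void *mem_base, uint64_t mem_len,
                                 int tap_fd)
{
    int vhost_fd = open("/dev/vhost-net", O_RDWR);
    if (vhost_fd < 0)
        return -1;
    if (ioctl(vhost_fd, VHOST_SET_OWNER) < 0)
        goto err;

    /* Accept whatever features the kernel offers (a real driver
     * would negotiate only the bits it supports). */
    uint64_t feats;
    if (ioctl(vhost_fd, VHOST_GET_FEATURES, &feats) < 0 ||
        ioctl(vhost_fd, VHOST_SET_FEATURES, &feats) < 0)
        goto err;

    /* Step 2: the identity map.  We report the region's HVA as its
     * "guest physical" address, so the kernel's GPA->HVA lookup
     * returns the address unchanged and vhost_net needs no change. */
    struct vhost_memory *mem =
        calloc(1, sizeof(*mem) + sizeof(struct vhost_memory_region));
    mem->nregions = 1;
    mem->regions[0].guest_phys_addr = (uintptr_t)mem_base;
    mem->regions[0].userspace_addr  = (uintptr_t)mem_base;
    mem->regions[0].memory_size     = mem_len;
    int rc = ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);
    free(mem);
    if (rc < 0)
        goto err;

    /* Step 1: publish the vring, exactly as QEMU does for a guest --
     * except that desc/avail/used here are plain HVAs in this process. */
    struct vhost_vring_state num  = { .index = idx, .num = vr->num };
    struct vhost_vring_state base = { .index = idx, .num = 0 };
    struct vhost_vring_addr  addr = {
        .index           = idx,
        .desc_user_addr  = (uintptr_t)vr->desc,
        .avail_user_addr = (uintptr_t)vr->avail,
        .used_user_addr  = (uintptr_t)vr->used,
    };
    struct vhost_vring_file kick = { .index = idx, .fd = eventfd(0, 0) };
    struct vhost_vring_file call = { .index = idx, .fd = eventfd(0, 0) };
    struct vhost_vring_file back = { .index = idx, .fd = tap_fd };

    if (ioctl(vhost_fd, VHOST_SET_VRING_NUM,  &num)  < 0 ||
        ioctl(vhost_fd, VHOST_SET_VRING_BASE, &base) < 0 ||
        ioctl(vhost_fd, VHOST_SET_VRING_ADDR, &addr) < 0 ||
        ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick) < 0 ||
        ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call) < 0 ||
        ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &back) < 0)
        goto err;
    return kick.fd;
err:
    close(vhost_fd);
    return -1;
}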

   A demo is already working: in it, DPDK accesses vhost_net directly and performs L2 forwarding.
     clients  |                      host                   |    container
      ping    |                                             |
vm0   ------> |ixgbe:enp131s0f0 <-> ovs:br0  <-> vhost:tap0 |<-> vhost-net pmd
              |                                             |         |
              |                                             |      testpmd
              |                                             |         |
vm1  <------  |ixgbe:enp131s0f1 <-> ovs:br1  <-> vhost:tap1 |<-> vhost-net pmd
              |                                             |
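   On the datapath (step 3), the PMD would drive the ring the same way the virtio PMD drives a virtio device. A hypothetical tx-side notify, assuming the vring and the kick eventfd from the setup sketch above:

#include <stdint.h>
#include <unistd.h>
#include <linux/virtio_ring.h>

/* Hypothetical tx notify: the descriptor chain starting at `head` already
 * points at packet buffers by HVA (valid "GPAs" thanks to the identity
 * map), so we only publish it in the avail ring and kick vhost_net. */
static void vring_tx_kick(struct vring *vr, uint16_t head, int kick_fd)
{
    vr->avail->ring[vr->avail->idx % vr->num] = head;
    __sync_synchronize();            /* descriptor stores before index bump */
    vr->avail->idx++;
    uint64_t one = 1;
    write(kick_fd, &one, sizeof(one));  /* wake the vhost_net worker thread */
}

   The rx side is symmetric: vhost_net copies packets arriving on the tap device into buffers taken from the rx avail ring and signals completion through the call eventfd.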

     I don't know whether this solution is acceptable here. Are there any blueprints for combining containers with DPDK? Any suggestions or advice? Thanks in advance.


---
Ann

Thread overview: 2+ messages
2015-08-20  7:49 Zhuangyanying [this message]
2015-08-20  8:36 ` Xie, Huawei
