DPDK patches and discussions
* [dpdk-dev] [RFC] combining dpdk with ovs via vhost_net
@ 2015-08-20  7:49 Zhuangyanying
  2015-08-20  8:36 ` Xie, Huawei
  0 siblings, 1 reply; 2+ messages in thread
From: Zhuangyanying @ 2015-08-20  7:49 UTC (permalink / raw)
  To: dev; +Cc: gaoxiaoqiu, Zhangbo (Oscar), Zhbzg, Guohongzhen, Zhoujingbin

Hi all,
   AFAIK, there is currently only one solution for using DPDK inside Docker containers: passing a physical NIC through to the application.
   I am now working on another solution that combines DPDK and OVS via vhost-net; I call it the "vhost_net PMD driver".
   The detailed solution is as follows:
   1. Similar to the qemu<->vhost_net setup, a series of ioctl commands makes the virtqueues visible to both vhost_net and the vhost_net PMD driver.
   2. In KVM guests, the tx/rx queues are filled with GPAs, which vhost_net translates into HVAs before the tap device copies the datagrams. A container has no GPAs, so we fill the tx/rx queues with HVAs directly and pass an identity (HVA, HVA) mapping table to vhost_net via the VHOST_SET_MEM_TABLE ioctl during initialization. This way *the vhost_net code stays untouched* (a sketch of this initialization follows the list).
   3. The packet transmit/receive path is exactly the same as in the virtio PMD driver (a sketch of the notify side follows the diagram below).
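
   For concreteness, below is a minimal sketch (in C, against the standard linux/vhost.h interface) of how a container process might perform steps 1 and 2. It is not the actual PMD code: the function and parameter names are placeholders, error handling is omitted, and only one ring is shown.

/*
 * Minimal sketch of steps 1-2, assuming the vring memory has already been
 * allocated in the container's address space.  Function and parameter
 * names are placeholders; error handling and the second ring are omitted.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

int vhost_net_setup(void *vring_mem, size_t vring_mem_size,
                    struct vhost_vring_addr *ring0_addr, int tap_fd)
{
    int vhost_fd = open("/dev/vhost-net", O_RDWR);
    if (vhost_fd < 0)
        return -1;

    /* Step 1: take ownership of this vhost-net instance. */
    ioctl(vhost_fd, VHOST_SET_OWNER, NULL);

    /*
     * Step 2: register a single identity (HVA, HVA) region, so the
     * addresses placed into the rings are exactly the process virtual
     * addresses that vhost_net will dereference.
     */
    struct vhost_memory *mem =
        calloc(1, sizeof(*mem) + sizeof(struct vhost_memory_region));
    mem->nregions = 1;
    mem->regions[0].guest_phys_addr = (uintptr_t)vring_mem; /* "GPA" == HVA */
    mem->regions[0].userspace_addr  = (uintptr_t)vring_mem;
    mem->regions[0].memory_size     = vring_mem_size;
    ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);
    free(mem);

    /* Describe ring 0 (the second ring is configured the same way). */
    struct vhost_vring_state state = { .index = 0, .num = 256 };
    ioctl(vhost_fd, VHOST_SET_VRING_NUM, &state);
    state.num = 0;
    ioctl(vhost_fd, VHOST_SET_VRING_BASE, &state);
    ioctl(vhost_fd, VHOST_SET_VRING_ADDR, ring0_addr);

    /*
     * In a full implementation the kick/call eventfds (see the notify
     * sketch below) are also registered before this point.  Finally,
     * attach the tap device as vhost_net's backend for this ring.
     */
    struct vhost_vring_file backend = { .index = 0, .fd = tap_fd };
    ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend);

    return vhost_fd;
}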

   A demo is already working: in it, DPDK accesses vhost_net directly and performs L2 forwarding.
     clients  |                      host                   |    container
      ping    |                                             |
vm0   ----- > |ixgbe:enp131s0f0 <-> ovs:br0  <-> vhost:tap0 |<-> vhost-net pmd
              |                                             |         |
              |                                             |      testpmd
              |                                             |         |
vm1  <------  |ixgbe:enp131s0f1 <-> ovs:br1  <-> vhost:tap1 |<-> vhost-net pmd
              |                                             |
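
For step 3, the datapath is driven exactly like the virtio PMD: fill the avail ring, then kick the backend. Below is a minimal sketch of that notify side, assuming the vhost_fd from the setup sketch above (names are again placeholders, error handling omitted).

/*
 * Sketch of the notify path (step 3), assuming vhost_fd was configured
 * as in the previous sketch.
 */
#include <stdint.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/vhost.h>

/* Register a kick eventfd for one ring; vhost_net's worker waits on it. */
static int vhost_net_register_kick(int vhost_fd, unsigned int ring_index)
{
    int kick_fd = eventfd(0, 0);
    struct vhost_vring_file file = { .index = ring_index, .fd = kick_fd };

    ioctl(vhost_fd, VHOST_SET_VRING_KICK, &file);
    return kick_fd;
}

/* Called after descriptors have been placed in the avail ring, exactly
 * where the virtio PMD would issue its queue notify. */
static void vhost_net_notify(int kick_fd)
{
    uint64_t one = 1;

    (void)write(kick_fd, &one, sizeof(one));
}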

     I don't know whether this solution is acceptable here. Are there any blueprints for combining containers with DPDK? Any suggestions or advice? Thanks in advance.


---
Ann


* Re: [dpdk-dev] [RFC] combining dpdk with ovs via vhost_net
  2015-08-20  7:49 [dpdk-dev] [RFC] combining dpdk with ovs via vhost_net Zhuangyanying
@ 2015-08-20  8:36 ` Xie, Huawei
  0 siblings, 0 replies; 2+ messages in thread
From: Xie, Huawei @ 2015-08-20  8:36 UTC (permalink / raw)
  To: Zhuangyanying, dev
  Cc: gaoxiaoqiu, Zhangbo (Oscar), Zhbzg, Guohongzhen, Zhoujingbin

Hi Yanping,
I don't quite get your idea. Last year I had a design and POC that enables a user-space virtio interface in a container.
I don't know whether it is similar to your proposal; I will post the idea in a follow-up mail.
