From: Zhuangyanying
To: dev@dpdk.org
Cc: gaoxiaoqiu, Zhangbo (Oscar), Zhbzg, Guohongzhen, Zhoujingbin
Date: Thu, 20 Aug 2015 07:49:54 +0000
Subject: [dpdk-dev] [RFC] combining dpdk with ovs via vhost_net

Hi all:

AFAIK, there is currently only one way to use DPDK inside Docker
containers, which is passing a physical NIC through to the application.

I am now working on another solution that combines DPDK and OVS via
vhost-net; I call it the "vhost_net PMD driver". The detailed design is
as follows:

1. Similar to the qemu <-> vhost_net setup, we use a series of ioctl
commands to make the virtqueues visible to both vhost_net and the
vhost_net PMD driver.

2. In KVM guests the tx/rx queues contain GPA addresses, which vhost_net
translates into HVA addresses so that the tap device can copy the
datagrams. Containers, however, have no GPAs to fill the tx/rx queues
with. We therefore fill the tx/rx queues with HVA addresses directly and
pass an (HVA, HVA) identity map table to vhost_net via the
VHOST_SET_MEM_TABLE ioctl during initialization. This way *the vhost_net
code stays untouched*. A rough sketch of this initialization sequence is
appended at the end of this mail.

3. The packet transmit/receive path is exactly the same as in the virtio
PMD driver.

A demo is already working. In the demo, DPDK accesses vhost_net directly
to do L2 forwarding:

clients |                    host                     | container
ping    |                                             |
vm0 ----->| ixgbe:enp131s0f0 <-> ovs:br0 <-> vhost:tap0 |<-> vhost-net pmd
        |                                             |        |
        |                                             |     testpmd
        |                                             |        |
vm1 <-----| ixgbe:enp131s0f1 <-> ovs:br1 <-> vhost:tap1 |<-> vhost-net pmd
        |                                             |

I don't know whether this solution is acceptable here. Are there any
blueprints for combining containers with DPDK? Any suggestions or
advice? Thanks in advance.

---
Ann
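
P.S. For reference, here is a minimal sketch of the initialization
sequence from steps 1 and 2. It is only an illustration under my
assumptions, not the demo code itself: the helper vhost_net_setup() and
its parameters (region_base/region_size for the hugepage area holding
the rings and mbufs, tap_fd for the tap device already attached to the
OVS bridge, vra for the prefilled vring addresses) are invented for the
example, while the ioctls and structs are the standard ones from
linux/vhost.h. Error handling is omitted for brevity.

#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int vhost_net_setup(void *region_base, uint64_t region_size,
                           int tap_fd, struct vhost_vring_addr vra[2])
{
    int vhost_fd = open("/dev/vhost-net", O_RDWR);
    uint64_t features;
    unsigned int i;

    ioctl(vhost_fd, VHOST_SET_OWNER, NULL);
    ioctl(vhost_fd, VHOST_GET_FEATURES, &features);
    ioctl(vhost_fd, VHOST_SET_FEATURES, &features);

    /*
     * One-region (HVA, HVA) identity map: with guest_phys_addr equal
     * to userspace_addr, vhost_net's GPA->HVA translation becomes a
     * no-op, which is why the vhost_net code can stay untouched.
     */
    struct vhost_memory *mem =
        calloc(1, sizeof(*mem) + sizeof(struct vhost_memory_region));
    mem->nregions = 1;
    mem->regions[0].guest_phys_addr = (uint64_t)(uintptr_t)region_base;
    mem->regions[0].userspace_addr  = (uint64_t)(uintptr_t)region_base;
    mem->regions[0].memory_size     = region_size;
    ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);
    free(mem);

    /* Wire up the rx/tx queue pair the same way qemu would. */
    for (i = 0; i < 2; i++) {
        struct vhost_vring_state num  = { .index = i, .num = 256 };
        struct vhost_vring_state base = { .index = i, .num = 0 };
        struct vhost_vring_file kick    = { .index = i, .fd = eventfd(0, 0) };
        struct vhost_vring_file call    = { .index = i, .fd = eventfd(0, 0) };
        struct vhost_vring_file backend = { .index = i, .fd = tap_fd };

        vra[i].index = i;            /* desc/avail/used are HVAs, too */
        ioctl(vhost_fd, VHOST_SET_VRING_NUM,  &num);
        ioctl(vhost_fd, VHOST_SET_VRING_BASE, &base);
        ioctl(vhost_fd, VHOST_SET_VRING_ADDR, &vra[i]);
        ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick);
        ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call);
        ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend);
    }
    return vhost_fd;
}

After this setup the PMD kicks the rings through the kick eventfds and
processes the used rings exactly as the virtio PMD does (step 3).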