DPDK patches and discussions
* Re: [dpdk-dev] [RFC 0/5] virtio support for container
@ 2015-12-30  9:46 Pavel Fedin
  2015-12-31  9:19 ` Tan, Jianfeng
  0 siblings, 1 reply; 16+ messages in thread
From: Pavel Fedin @ 2015-12-30  9:46 UTC (permalink / raw)
  To: dev

 Hello everybody!

 I am currently working on an improved version of this patchset, and I am testing it with Open vSwitch. I run two Open vSwitch
instances: one on the host and one in a container. Both OVS instances forward packets between their LOCAL port and a vhost/virtio
port. This way I can comfortably run ping between my host and the container.
 The problem is that the patchset seems to be broken somehow. ovs-vswitchd fails to open the dpdk0 device, and if I set
--log-level=9 for DPDK, I see this in the console:
--- cut ---
Broadcast message from systemd-journald@localhost.localdomain (Wed 2015-12-30 11:13:00 MSK):

ovs-vswitchd[557]: EAL: TSC frequency is ~3400032 KHz


Broadcast message from systemd-journald@localhost.localdomain (Wed 2015-12-30 11:13:00 MSK):

ovs-vswitchd[560]: EAL: memzone_reserve_aligned_thread_unsafe(): memzone <RG_MP_ovs_mp_1500_0_262144> already exists


Broadcast message from systemd-journald@localhost.localdomain (Wed 2015-12-30 11:13:00 MSK):

ovs-vswitchd[560]: RING: Cannot reserve memory
--- cut ---

 How can I debug this?

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia

* Re: [dpdk-dev] [RFC 0/5] virtio support for container
@ 2017-06-15  8:21 Avi Cohen (A)
  0 siblings, 0 replies; 16+ messages in thread
From: Avi Cohen (A) @ 2017-06-15  8:21 UTC (permalink / raw)
  To: dev

Hello,
I just want to check the status of this project.
Is it alive and working?
Can I run a container connected to OVS-DPDK via a virtio device ?
Where can I download the code/patches ?
Best Regards
avi

* [dpdk-dev] [RFC 0/5] virtio support for container
@ 2015-11-05 18:31 Jianfeng Tan
  2015-11-24  3:53 ` Zhuangyanying
  0 siblings, 1 reply; 16+ messages in thread
From: Jianfeng Tan @ 2015-11-05 18:31 UTC (permalink / raw)
  To: dev
  Cc: nakajima.yoshihiro, zhbzg, mst, gaoxiaoqiu, oscar.zhangbo,
	ann.zhuangyanying, zhoujingbin, guohongzhen

This patchset is only a PoC, intended to solicit comments from the community.
 
This patchset provides a high-performance networking interface (virtio)
for container-based DPDK applications. Starting DPDK applications in
containers with exclusive ownership of NIC devices is beyond its scope.
The basic idea here is to present a new virtual device (named eth_cvio),
which can be discovered and initialized in container-based DPDK
applications during rte_eal_init(). To minimize the changes, we reuse the
already-existing virtio frontend driver code (drivers/net/virtio/).
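 
From the application side nothing changes except the EAL arguments. A
rough sketch only (not part of the patchset; the vdev option string is
the same one used in the docker example further below):

/* sketch: bring up the proposed eth_cvio vdev from EAL arguments only */
#include <rte_eal.h>

int main(void)
{
	char *argv[] = {
		"l2fwd", "-c", "0xc", "-n", "4", "--no-huge", "--no-pci",
		"--vdev=eth_cvio0,queue_num=256,rx=1,tx=1,cq=0,path=/var/run/usvhost",
	};
	int argc = sizeof(argv) / sizeof(argv[0]);

	if (rte_eal_init(argc, argv) < 0)	/* eth_cvio0 is probed here */
		return -1;
	/* after this point the cvio port is usable like any other ethdev */
	return 0;
}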
 
Compared to the QEMU/VM case, the virtio device framework (which
translates I/O port read/write operations into the unix socket/cuse
protocol, and which is originally provided by QEMU) is integrated into
the virtio frontend driver. In other words, this new converged driver
plays both the role of the original frontend driver and the role of
QEMU's device framework.
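 
Conceptually that looks roughly like this (a sketch, not the patch's
actual code; the queue kick is modeled as a plain eventfd write as in the
vhost-user protocol, and the register offset follows the legacy virtio
I/O BAR layout):

/* sketch: what used to be a trapped outw() to the virtio I/O port
 * becomes a direct notification to the vhost backend */
#include <stdint.h>
#include <unistd.h>

#define VIRTIO_PCI_QUEUE_NOTIFY 16	/* notify register, legacy layout */

static void
cvio_ioport_write(int kickfd, uint64_t offset, uint16_t queue_idx)
{
	if (offset == VIRTIO_PCI_QUEUE_NOTIFY) {
		uint64_t kick = 1;
		/* in a VM this write traps to QEMU; here we write the
		 * eventfd previously handed to the backend via
		 * VHOST_USER_SET_VRING_KICK */
		ssize_t n = write(kickfd, &kick, sizeof(kick));
		(void)n;
		(void)queue_idx;
	}
}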
 
The biggest difference here lies in how the addresses handed to the
backend are calculated. The principle of virtio is: based on one or more
shared memory segments, vhost maintains a reference table of the base
addresses and lengths of these segments, so that when an address arrives
from the VM (usually a GPA, Guest Physical Address), vhost can translate
it into an address it can dereference itself (a VVA, Vhost Virtual
Address). To reduce the overhead of address translation, we should keep
the number of segments as small as possible. In the virtual machine case,
GPAs are always locally contiguous, so they are a good choice. In the
container case, the CVA (Container Virtual Address) can be used instead.
This means that: a. when set_base_addr is called, the CVA is used; b.
when preparing RX descriptors, the CVA is used; c. when transmitting
packets, the CVA is filled into the TX descriptors; d. in the TX and CQ
headers, the CVA is used.
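 
For illustration, the unified desc->addr assignment boils down to
something like the following (a sketch only; the helper name and the
use_virtual_addr flag are made up here, not the patch's actual code):

#include <stdint.h>
#include <rte_mbuf.h>

/* sketch: pick the address that goes into the vring descriptor */
static inline uint64_t
buf_to_desc_addr(struct rte_mbuf *m, int use_virtual_addr)
{
	if (use_virtual_addr)
		/* container case: the process's own virtual address (CVA) */
		return (uint64_t)(uintptr_t)rte_pktmbuf_mtod(m, void *);
	/* VM case: guest-physical address, resolved by vhost through its
	 * GPA -> VVA reference table */
	return m->buf_physaddr + m->data_off;
}

Either way the backend resolves the value with the same per-segment
base/length lookup; only the address space it refers to differs.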
 
How is memory shared? In the VM case, QEMU always shares its entire
physical memory layout with the backend. It is not feasible, however, for
a container, as an ordinary process, to share all of its virtual memory
regions with the backend, so only specific virtual memory regions (of
shared type) are sent to the backend. This leads to the limitation that
only addresses within these regions can be used to transmit or receive
packets. For now, the shared memory is created in /dev/shm using
shm_open() during the memory initialization process.
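 
A minimal sketch of that allocation step (the function name is
hypothetical; the shm_open()/mmap() pattern is the standard POSIX one,
and the resulting fd is what is later passed to the backend in
VHOST_USER_SET_MEM_TABLE over the unix socket):

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* sketch: back a DPDK memory segment with a shareable fd */
static void *
cvio_alloc_shared_seg(const char *name, size_t len, int *out_fd)
{
	int fd = shm_open(name, O_RDWR | O_CREAT, 0600); /* in /dev/shm */
	if (fd < 0)
		return NULL;
	if (ftruncate(fd, len) < 0) {
		close(fd);
		return NULL;
	}
	void *va = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (va == MAP_FAILED) {
		close(fd);
		return NULL;
	}
	*out_fd = fd;	/* keep it: the backend mmap()s the same pages from this fd */
	return va;
}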
 
How to use?
 
a. Apply the virtio-for-container patches. We need two copies of the
patched code (referred to as dpdk-app/ and dpdk-vhost/).
 
b. To compile container apps:
$: cd dpdk-app
$: vim config/common_linuxapp (uncomment "CONFIG_RTE_VIRTIO_VDEV=y")
$: make config RTE_SDK=`pwd` T=x86_64-native-linuxapp-gcc
$: make install RTE_SDK=`pwd` T=x86_64-native-linuxapp-gcc
$: make -C examples/l2fwd RTE_SDK=`pwd` T=x86_64-native-linuxapp-gcc
 
c. To build a docker image, use the Dockerfile below:
$: cat ./Dockerfile
FROM ubuntu:latest
WORKDIR /usr/src/dpdk
COPY . /usr/src/dpdk
CMD ["/usr/src/dpdk/examples/l2fwd/build/l2fwd", "-c", "0xc", "-n", "4", "--no-huge", "--no-pci", "--vdev=eth_cvio0,queue_num=256,rx=1,tx=1,cq=0,path=/var/run/usvhost", "--", "-p", "0x1"]
$: docker build -t dpdk-app-l2fwd .
 
d. To compile vhost:
$: cd dpdk-vhost
$: make config RTE_SDK=`pwd` T=x86_64-native-linuxapp-gcc
$: make install RTE_SDK=`pwd` T=x86_64-native-linuxapp-gcc
$: make -C examples/vhost RTE_SDK=`pwd` T=x86_64-native-linuxapp-gcc
 
e. Start vhost-switch
$: ./examples/vhost/build/vhost-switch -c 3 -n 4 --socket-mem 1024,1024 -- -p 0x1 --stats 1
 
f. Start docker
$: docker run -i -t -v <path to vhost unix socket>:/var/run/usvhost dpdk-app-l2fwd

Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Signed-off-by: Jianfeng Tan <jianfeng.tan@intel.com>

Jianfeng Tan (5):
  virtio/container: add handler for ioport rd/wr
  virtio/container: add a new virtual device named eth_cvio
  virtio/container: unify desc->addr assignment
  virtio/container: adjust memory initialization process
  vhost/container: change mode of vhost listening socket

 config/common_linuxapp                       |   5 +
 drivers/net/virtio/Makefile                  |   4 +
 drivers/net/virtio/vhost-user.c              | 433 +++++++++++++++++++++++++++
 drivers/net/virtio/vhost-user.h              | 137 +++++++++
 drivers/net/virtio/virtio_ethdev.c           | 319 +++++++++++++++-----
 drivers/net/virtio/virtio_ethdev.h           |  16 +
 drivers/net/virtio/virtio_pci.h              |  32 +-
 drivers/net/virtio/virtio_rxtx.c             |   9 +-
 drivers/net/virtio/virtio_rxtx_simple.c      |   9 +-
 drivers/net/virtio/virtqueue.h               |   9 +-
 lib/librte_eal/common/include/rte_memory.h   |   5 +
 lib/librte_eal/linuxapp/eal/eal_memory.c     |  58 +++-
 lib/librte_mempool/rte_mempool.c             |  16 +-
 lib/librte_vhost/vhost_user/vhost-net-user.c |   5 +
 14 files changed, 967 insertions(+), 90 deletions(-)
 create mode 100644 drivers/net/virtio/vhost-user.c
 create mode 100644 drivers/net/virtio/vhost-user.h

-- 
2.1.4



Thread overview: 16+ messages
2015-12-30  9:46 [dpdk-dev] [RFC 0/5] virtio support for container Pavel Fedin
2015-12-31  9:19 ` Tan, Jianfeng
2015-12-31  9:40   ` Pavel Fedin
2015-12-31 10:02     ` Tan, Jianfeng
2015-12-31 10:38       ` Pavel Fedin
2015-12-31 11:58         ` Tan, Jianfeng
2015-12-31 12:44           ` Pavel Fedin
2015-12-31 12:54             ` Tan, Jianfeng
2015-12-31 13:07               ` Pavel Fedin
2015-12-31 13:47           ` Pavel Fedin
2015-12-31 15:39           ` Pavel Fedin
2016-01-06  5:47             ` Tan, Jianfeng
  -- strict thread matches above, loose matches on Subject: below --
2017-06-15  8:21 Avi Cohen (A)
2015-11-05 18:31 Jianfeng Tan
2015-11-24  3:53 ` Zhuangyanying
2015-11-24  6:19   ` Tan, Jianfeng
