* [dpdk-dev] [PATCH v3] lib/librte_vhost: user space vhost driver library
@ 2014-08-05 15:53 Huawei Xie
From: Huawei Xie @ 2014-08-05 15:53 UTC
To: dev
This user space vhost library is provided to facilitate integration with a DPDK accelerated vswitch.
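
For context, an application drives the library roughly as follows. This is a minimal sketch against the API declared in rte_virtio_net.h below; EAL, mempool and port setup and the real callback bodies are omitted:

#include <rte_virtio_net.h>

/* Called from the CUSE session when QEMU creates or destroys a device. */
static int
new_device(struct virtio_net *dev)
{
	dev->flags |= VIRTIO_DEV_RUNNING;  /* hand dev to a data core here */
	return 0;
}

static void
destroy_device(struct virtio_net *dev)
{
	dev->flags &= ~VIRTIO_DEV_RUNNING; /* pull dev off its data core here */
}

static const struct virtio_net_device_ops virtio_net_ops = {
	.new_device = new_device,
	.destroy_device = destroy_device,
};

int
main(void)
{
	/* ... rte_eal_init(), mempool and port setup ... */
	rte_vhost_driver_register("vhost-net");   /* creates /dev/vhost-net */
	rte_vhost_driver_callback_register(&virtio_net_ops);
	rte_vhost_driver_session_start();         /* blocks on the CUSE session */
	return 0;
}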
Huawei Xie (1):
vhost library support to facilitate integration with DPDK accelerated vswitch.
config/common_linuxapp | 7 +
lib/Makefile | 1 +
lib/librte_vhost/Makefile | 48 ++
lib/librte_vhost/eventfd_link/Makefile | 39 +
lib/librte_vhost/eventfd_link/eventfd_link.c | 194 +++++
lib/librte_vhost/eventfd_link/eventfd_link.h | 40 +
lib/librte_vhost/rte_virtio_net.h | 192 +++++
lib/librte_vhost/vhost-net-cdev.c | 363 ++++++++++
lib/librte_vhost/vhost-net-cdev.h | 109 +++
lib/librte_vhost/vhost_rxtx.c | 292 ++++++++
lib/librte_vhost/virtio-net.c | 1002 ++++++++++++++++++++++++++
11 files changed, 2287 insertions(+)
create mode 100644 lib/librte_vhost/Makefile
create mode 100644 lib/librte_vhost/eventfd_link/Makefile
create mode 100644 lib/librte_vhost/eventfd_link/eventfd_link.c
create mode 100644 lib/librte_vhost/eventfd_link/eventfd_link.h
create mode 100644 lib/librte_vhost/rte_virtio_net.h
create mode 100644 lib/librte_vhost/vhost-net-cdev.c
create mode 100644 lib/librte_vhost/vhost-net-cdev.h
create mode 100644 lib/librte_vhost/vhost_rxtx.c
create mode 100644 lib/librte_vhost/virtio-net.c
--
1.8.1.4
* [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch.
From: Huawei Xie @ 2014-08-05 15:53 UTC
To: dev
Signed-off-by: Huawei Xie <huawei.xie@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Tommy Long <thomas.long@intel.com>
---
config/common_linuxapp | 7 +
lib/Makefile | 1 +
lib/librte_vhost/Makefile | 48 ++
lib/librte_vhost/eventfd_link/Makefile | 39 +
lib/librte_vhost/eventfd_link/eventfd_link.c | 194 +++++
lib/librte_vhost/eventfd_link/eventfd_link.h | 40 +
lib/librte_vhost/rte_virtio_net.h | 192 +++++
lib/librte_vhost/vhost-net-cdev.c | 363 ++++++++++
lib/librte_vhost/vhost-net-cdev.h | 109 +++
lib/librte_vhost/vhost_rxtx.c | 292 ++++++++
lib/librte_vhost/virtio-net.c | 1002 ++++++++++++++++++++++++++
11 files changed, 2287 insertions(+)
create mode 100644 lib/librte_vhost/Makefile
create mode 100644 lib/librte_vhost/eventfd_link/Makefile
create mode 100644 lib/librte_vhost/eventfd_link/eventfd_link.c
create mode 100644 lib/librte_vhost/eventfd_link/eventfd_link.h
create mode 100644 lib/librte_vhost/rte_virtio_net.h
create mode 100644 lib/librte_vhost/vhost-net-cdev.c
create mode 100644 lib/librte_vhost/vhost-net-cdev.h
create mode 100644 lib/librte_vhost/vhost_rxtx.c
create mode 100644 lib/librte_vhost/virtio-net.c
diff --git a/config/common_linuxapp b/config/common_linuxapp
index 9047975..c7c1c83 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -390,6 +390,13 @@ CONFIG_RTE_KNI_VHOST_DEBUG_RX=n
CONFIG_RTE_KNI_VHOST_DEBUG_TX=n
#
+# Compile vhost library
+# fuse, fuse-devel, kernel-modules-extra packages are needed
+#
+CONFIG_RTE_LIBRTE_VHOST=n
+CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
+
+#
#Compile Xen domain0 support
#
CONFIG_RTE_LIBRTE_XEN_DOM0=n
diff --git a/lib/Makefile b/lib/Makefile
index 10c5bb3..007c174 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -60,6 +60,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter
DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched
DIRS-$(CONFIG_RTE_LIBRTE_KVARGS) += librte_kvargs
DIRS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += librte_distributor
+DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
DIRS-$(CONFIG_RTE_LIBRTE_PORT) += librte_port
DIRS-$(CONFIG_RTE_LIBRTE_TABLE) += librte_table
DIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += librte_pipeline
diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
new file mode 100644
index 0000000..6ad706d
--- /dev/null
+++ b/lib/librte_vhost/Makefile
@@ -0,0 +1,48 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_vhost.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -D_FILE_OFFSET_BITS=64
+LDFLAGS += -lfuse
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_VHOST) := vhost-net-cdev.c virtio-net.c vhost_rxtx.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_virtio_net.h
+
+# this lib needs eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_VHOST) += lib/librte_eal lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_vhost/eventfd_link/Makefile b/lib/librte_vhost/eventfd_link/Makefile
new file mode 100644
index 0000000..fc3927b
--- /dev/null
+++ b/lib/librte_vhost/eventfd_link/Makefile
@@ -0,0 +1,39 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in
+# the documentation and/or other materials provided with the
+# distribution.
+# * Neither the name of Intel Corporation nor the names of its
+# contributors may be used to endorse or promote products derived
+# from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+obj-m += eventfd_link.o
+
+
+all:
+ make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
+
+clean:
+ make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
diff --git a/lib/librte_vhost/eventfd_link/eventfd_link.c b/lib/librte_vhost/eventfd_link/eventfd_link.c
new file mode 100644
index 0000000..4c9b628
--- /dev/null
+++ b/lib/librte_vhost/eventfd_link/eventfd_link.c
@@ -0,0 +1,194 @@
+/*-
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ * The full GNU General Public License is included in this distribution
+ * in the file called LICENSE.GPL.
+ *
+ * Contact Information:
+ * Intel Corporation
+ */
+
+#include <linux/eventfd.h>
+#include <linux/miscdevice.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/rcupdate.h>
+#include <linux/file.h>
+#include <linux/slab.h>
+#include <linux/fs.h>
+#include <linux/mmu_context.h>
+#include <linux/sched.h>
+#include <asm/mmu_context.h>
+#include <linux/fdtable.h>
+
+#include "eventfd_link.h"
+
+
+/*
+ * get_files_struct is copied from fs/file.c
+ */
+struct files_struct *
+get_files_struct(struct task_struct *task)
+{
+ struct files_struct *files;
+
+ task_lock(task);
+ files = task->files;
+ if (files)
+ atomic_inc(&files->count);
+ task_unlock(task);
+
+ return files;
+}
+
+/*
+ * put_files_struct is extracted from fs/file.c
+ */
+void
+put_files_struct(struct files_struct *files)
+{
+ if (atomic_dec_and_test(&files->count))
+ BUG();
+}
+
+
+static long
+eventfd_link_ioctl(struct file *f, unsigned int ioctl, unsigned long arg)
+{
+ void __user *argp = (void __user *) arg;
+ struct task_struct *task_target = NULL;
+ struct file *file;
+ struct files_struct *files;
+ struct fdtable *fdt;
+ struct eventfd_copy eventfd_copy;
+
+ switch (ioctl) {
+ case EVENTFD_COPY:
+ if (copy_from_user(&eventfd_copy, argp, sizeof(struct eventfd_copy)))
+ return -EFAULT;
+
+ /*
+ * Find the task struct for the target pid
+ */
+ task_target =
+ pid_task(find_vpid(eventfd_copy.target_pid), PIDTYPE_PID);
+ if (task_target == NULL) {
+ printk(KERN_DEBUG "Failed to get mem ctx for target pid\n");
+ return -EFAULT;
+ }
+
+ files = get_files_struct(current);
+ if (files == NULL) {
+ printk(KERN_DEBUG "Failed to get files struct\n");
+ return -EFAULT;
+ }
+
+ rcu_read_lock();
+ file = fcheck_files(files, eventfd_copy.source_fd);
+ if (file) {
+ if (file->f_mode & FMODE_PATH
+ || !atomic_long_inc_not_zero(&file->f_count))
+ file = NULL;
+ }
+ rcu_read_unlock();
+ put_files_struct(files);
+
+ if (file == NULL) {
+ printk(KERN_DEBUG "Failed to get file from source pid\n");
+ return 0;
+ }
+
+ /*
+ * Release the existing eventfd in the source process
+ */
+ spin_lock(&files->file_lock);
+ filp_close(file, files);
+ fdt = files_fdtable(files);
+ fdt->fd[eventfd_copy.source_fd] = NULL;
+ spin_unlock(&files->file_lock);
+
+ /*
+ * Find the file struct associated with the target fd.
+ */
+
+ files = get_files_struct(task_target);
+ if (files == NULL) {
+ printk(KERN_DEBUG "Failed to get files struct\n");
+ return -EFAULT;
+ }
+
+ rcu_read_lock();
+ file = fcheck_files(files, eventfd_copy.target_fd);
+ if (file) {
+ if (file->f_mode & FMODE_PATH
+ || !atomic_long_inc_not_zero(&file->f_count))
+ file = NULL;
+ }
+ rcu_read_unlock();
+ put_files_struct(files);
+
+ if (file == NULL) {
+ printk(KERN_DEBUG "Failed to get file from target pid\n");
+ return 0;
+ }
+
+
+ /*
+ * Install the file struct from the target process into the
+ * file descriptor of the source process.
+ */
+
+ fd_install(eventfd_copy.source_fd, file);
+
+ return 0;
+
+ default:
+ return -ENOIOCTLCMD;
+ }
+}
+
+static const struct file_operations eventfd_link_fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = eventfd_link_ioctl,
+};
+
+
+static struct miscdevice eventfd_link_misc = {
+ .name = "eventfd-link",
+ .fops = &eventfd_link_fops,
+};
+
+static int __init
+eventfd_link_init(void)
+{
+ return misc_register(&eventfd_link_misc);
+}
+
+module_init(eventfd_link_init);
+
+static void __exit
+eventfd_link_exit(void)
+{
+ misc_deregister(&eventfd_link_misc);
+}
+
+module_exit(eventfd_link_exit);
+
+MODULE_VERSION("0.0.1");
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Anthony Fee");
+MODULE_DESCRIPTION("Link eventfd");
+MODULE_ALIAS("devname:eventfd-link");
diff --git a/lib/librte_vhost/eventfd_link/eventfd_link.h b/lib/librte_vhost/eventfd_link/eventfd_link.h
new file mode 100644
index 0000000..38052e2
--- /dev/null
+++ b/lib/librte_vhost/eventfd_link/eventfd_link.h
@@ -0,0 +1,40 @@
+/*-
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; If not, see <http://www.gnu.org/licenses/>.
+ * The full GNU General Public License is included in this distribution
+ * in the file called LICENSE.GPL.
+ *
+ * Contact Information:
+ * Intel Corporation
+ */
+
+#ifndef _EVENTFD_LINK_H_
+#define _EVENTFD_LINK_H_
+
+/*
+ * ioctl to copy an fd entry in calling process to an fd in a target process
+ */
+#define EVENTFD_COPY 1
+
+/*
+ * arguments for the EVENTFD_COPY ioctl
+ */
+struct eventfd_copy {
+ unsigned target_fd; /**< fd in the target process */
+ unsigned source_fd; /**< fd in the calling process */
+ pid_t target_pid; /**< pid of the target process */
+};
+#endif /* _EVENTFD_LINK_H_ */
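
For reference, a user space process drives this ioctl roughly as follows; this is a sketch with hypothetical fd/pid values and trimmed error handling:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include "eventfd_link.h"

/* Replace source_fd in the calling process with the eventfd that
 * target_pid holds at target_fd. */
static int
link_eventfd(pid_t target_pid, unsigned int target_fd, unsigned int source_fd)
{
	struct eventfd_copy copy = {
		.target_fd = target_fd,
		.source_fd = source_fd,
		.target_pid = target_pid,
	};
	int fd = open("/dev/eventfd-link", O_RDWR);
	int ret;

	if (fd < 0)
		return -1;
	ret = ioctl(fd, EVENTFD_COPY, &copy);
	close(fd);
	return ret;
}

virtio-net.c below opens the same /dev/eventfd-link node to pull QEMU's kick and call eventfds into the vhost process.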
diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
new file mode 100644
index 0000000..7a05dab
--- /dev/null
+++ b/lib/librte_vhost/rte_virtio_net.h
@@ -0,0 +1,192 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _VIRTIO_NET_H_
+#define _VIRTIO_NET_H_
+
+#include <stdint.h>
+#include <linux/virtio_ring.h>
+#include <linux/virtio_net.h>
+#include <sys/eventfd.h>
+
+#include <rte_memory.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+
+#define VIRTIO_DEV_RUNNING 1 /**< Used to indicate that the device is running on a data core. */
+#define VIRTIO_DEV_STOPPED -1 /**< Backend value set by guest. */
+
+/* Enum for virtqueue management. */
+enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
+
+/**
+ * Structure contains variables relevant to RX/TX virtqueues.
+ */
+struct vhost_virtqueue {
+ struct vring_desc *desc; /**< descriptor ring. */
+ struct vring_avail *avail; /**< available ring. */
+ struct vring_used *used; /**< used ring. */
+ uint32_t size; /**< Size of descriptor ring. */
+ uint32_t backend; /**< Backend value to determine if device should be started/stopped. */
+ uint16_t vhost_hlen; /**< Vhost header length (varies depending on RX merge buffers). */
+ volatile uint16_t last_used_idx; /**< Last index used on the available ring. */
+ volatile uint16_t last_used_idx_res; /**< Used for multiple devices reserving buffers. */
+ eventfd_t callfd; /**< Currently unused as polling mode is enabled. */
+ eventfd_t kickfd; /**< Used to notify the guest (trigger interrupt). */
+} __rte_cache_aligned;
+
+/**
+ * Information relating to memory regions including offsets to
+ * addresses in QEMU's memory file.
+ */
+struct virtio_memory_regions {
+ uint64_t guest_phys_address; /**< Base guest physical address of region. */
+ uint64_t guest_phys_address_end; /**< End guest physical address of region. */
+ uint64_t memory_size; /**< Size of region. */
+ uint64_t userspace_address; /**< Base userspace address of region. */
+ uint64_t address_offset; /**< Offset of region for address translation. */
+};
+
+
+/**
+ * Memory structure includes region and mapping information.
+ */
+struct virtio_memory {
+ uint64_t base_address; /**< Base QEMU userspace address of the memory file. */
+ uint64_t mapped_address; /**< Mapped address of memory file base in our application's memory space. */
+ uint64_t mapped_size; /**< Total size of memory file. */
+ uint32_t nregions; /**< Number of memory regions. */
+ struct virtio_memory_regions regions[0]; /**< Memory region information. */
+};
+
+/**
+ * Device structure contains all configuration information relating to the device.
+ */
+struct virtio_net {
+ struct vhost_virtqueue *virtqueue[VIRTIO_QNUM]; /**< Contains all virtqueue information. */
+ struct virtio_memory *mem; /**< QEMU memory and memory region information. */
+ uint64_t features; /**< Negotiated feature set. */
+ uint64_t device_fh; /**< Device identifier. */
+ uint32_t flags; /**< Device flags. Only used to check if device is running on data core. */
+ void *priv;
+} __rte_cache_aligned;
+
+/**
+ * Device operations to add/remove device.
+ */
+struct virtio_net_device_ops {
+ int (*new_device)(struct virtio_net *); /**< Add device. */
+ void (*destroy_device)(struct virtio_net *); /**< Remove device. */
+};
+
+
+static inline uint16_t __attribute__((always_inline))
+rte_vring_available_entries(struct virtio_net *dev, uint16_t queue_id)
+{
+ struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
+ return *(volatile uint16_t *)&vq->avail->idx - vq->last_used_idx_res;
+}
+
+/**
+ * Function to convert guest physical addresses to vhost virtual addresses.
+ * This is used to convert guest virtio buffer addresses.
+ */
+static inline uint64_t __attribute__((always_inline))
+gpa_to_vva(struct virtio_net *dev, uint64_t guest_pa)
+{
+ struct virtio_memory_regions *region;
+ uint32_t regionidx;
+ uint64_t vhost_va = 0;
+
+ for (regionidx = 0; regionidx < dev->mem->nregions; regionidx++) {
+ region = &dev->mem->regions[regionidx];
+ if ((guest_pa >= region->guest_phys_address) &&
+ (guest_pa <= region->guest_phys_address_end)) {
+ vhost_va = region->address_offset + guest_pa;
+ break;
+ }
+ }
+ return vhost_va;
+}
+
+/**
+ * Disable features in feature_mask. Returns 0 on success.
+ */
+int rte_vhost_feature_disable(uint64_t feature_mask);
+
+/**
+ * Enable features in feature_mask. Returns 0 on success.
+ */
+int rte_vhost_feature_enable(uint64_t feature_mask);
+
+/* Returns currently supported vhost features */
+uint64_t rte_vhost_feature_get(void);
+
+int rte_vhost_enable_guest_notification(struct virtio_net *dev, uint16_t queue_id, int enable);
+
+/* Register vhost driver. dev_name may differ to support multiple instances. */
+int rte_vhost_driver_register(const char *dev_name);
+
+/* Register callbacks. */
+int rte_vhost_driver_callback_register(struct virtio_net_device_ops const * const);
+
+int rte_vhost_driver_session_start(void);
+
+/**
+ * This function adds buffers to the virtio device's RX virtqueue. Buffers can
+ * be received from the physical port or from another virtual device. A packet
+ * count is returned to indicate the number of packets that were successfully
+ * added to the RX queue.
+ * @param queue_id
+ * virtio queue index in mq case
+ * @return
+ * num of packets enqueued
+ */
+uint32_t rte_vhost_enqueue_burst(struct virtio_net *dev, uint16_t queue_id,
+ struct rte_mbuf **pkts, uint32_t count);
+
+/**
+ * This function gets guest buffers from the virtio device TX virtqueue,
+ * constructs host mbufs, copies guest buffer content into them and
+ * stores them in pkts to be processed.
+ * @param mbuf_pool
+ * mbuf_pool where host mbuf is allocated.
+ * @param queue_id
+ * virtio queue index in mq case.
+ * @return
+ * num of packets dequeued
+ */
+uint32_t rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
+ struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint32_t count);
+
+#endif /* _VIRTIO_NET_H_ */
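
As an example of the feature API above: an application that cannot handle mergeable RX buffers can mask the feature before starting the CUSE session, so no guest ever negotiates it (a sketch; vswitch_init_features is a hypothetical helper and the bit definition comes from linux/virtio_net.h):

#include <linux/virtio_net.h>
#include <rte_virtio_net.h>

static int
vswitch_init_features(void)
{
	/* Remove mergeable RX buffers from the advertised feature set;
	 * returns 0 on success. */
	return rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_MRG_RXBUF);
}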
diff --git a/lib/librte_vhost/vhost-net-cdev.c b/lib/librte_vhost/vhost-net-cdev.c
new file mode 100644
index 0000000..b65c67b
--- /dev/null
+++ b/lib/librte_vhost/vhost-net-cdev.c
@@ -0,0 +1,363 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <errno.h>
+#include <fuse/cuse_lowlevel.h>
+#include <linux/limits.h>
+#include <linux/vhost.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+
+#include <rte_ethdev.h>
+#include <rte_log.h>
+#include <rte_string_fns.h>
+#include <rte_virtio_net.h>
+
+#include "vhost-net-cdev.h"
+
+#define FUSE_OPT_DUMMY "\0\0"
+#define FUSE_OPT_FORE "-f\0\0"
+#define FUSE_OPT_NOMULTI "-s\0\0"
+
+static const uint32_t default_major = 231;
+static const uint32_t default_minor = 1;
+static const char cuse_device_name[] = "/dev/cuse";
+static const char default_cdev[] = "vhost-net";
+
+static struct fuse_session *session;
+static struct vhost_net_device_ops const *ops;
+
+/**
+ * Returns vhost_device_ctx from given fuse_req_t. The index is populated later when
+ * the device is added to the device linked list.
+ */
+static struct vhost_device_ctx
+fuse_req_to_vhost_ctx(fuse_req_t req, struct fuse_file_info *fi)
+{
+ struct vhost_device_ctx ctx;
+ struct fuse_ctx const *const req_ctx = fuse_req_ctx(req);
+
+ ctx.pid = req_ctx->pid;
+ ctx.fh = fi->fh;
+
+ return ctx;
+}
+
+/**
+ * When the device is created in QEMU it gets initialised here and added to the device linked list.
+ */
+static void
+vhost_net_open(fuse_req_t req, struct fuse_file_info *fi)
+{
+ struct vhost_device_ctx ctx = fuse_req_to_vhost_ctx(req, fi);
+ int err = 0;
+
+ err = ops->new_device(ctx);
+ if (err == -1) {
+ fuse_reply_err(req, EPERM);
+ return;
+ }
+
+ fi->fh = err;
+
+ RTE_LOG(INFO, VHOST_CONFIG, "(%"PRIu64") Device configuration started\n", fi->fh);
+ fuse_reply_open(req, fi);
+}
+
+/*
+ * When QEMU is shut down or killed the device gets released.
+ */
+static void
+vhost_net_release(fuse_req_t req, struct fuse_file_info *fi)
+{
+ int err = 0;
+ struct vhost_device_ctx ctx = fuse_req_to_vhost_ctx(req, fi);
+
+ ops->destroy_device(ctx);
+ RTE_LOG(INFO, VHOST_CONFIG, "(%"PRIu64") Device released\n", ctx.fh);
+ fuse_reply_err(req, err);
+}
+
+/*
+ * Boilerplate code for CUSE IOCTL
+ * Implicit arguments: ctx, req, result.
+ */
+#define VHOST_IOCTL(func) do { \
+ result = (func)(ctx); \
+ fuse_reply_ioctl(req, result, NULL, 0); \
+} while (0)
+
+/*
+ * Boilerplate IOCTL RETRY
+ * Implicit arguments: req.
+ */
+#define VHOST_IOCTL_RETRY(size_r, size_w) do { \
+ struct iovec iov_r = { arg, (size_r) }; \
+ struct iovec iov_w = { arg, (size_w) }; \
+ fuse_reply_ioctl_retry(req, &iov_r, (size_r) ? 1 : 0, &iov_w, (size_w) ? 1 : 0); \
+} while (0) \
+
+/*
+ * Boilerplate code for CUSE Read IOCTL
+ * Implicit arguments: ctx, req, result, in_bufsz, in_buf.
+ */
+#define VHOST_IOCTL_R(type, var, func) do { \
+ if (!in_bufsz) { \
+ VHOST_IOCTL_RETRY(sizeof(type), 0); \
+ } else { \
+ (var) = *(const type*)in_buf; \
+ result = func(ctx, &(var)); \
+ fuse_reply_ioctl(req, result, NULL, 0); \
+ } \
+} while (0)
+
+/*
+ * Boilerplate code for CUSE Write IOCTL
+ * Implicit arguments: ctx, req, result, out_bufsz.
+ */
+#define VHOST_IOCTL_W(type, var, func) do { \
+ if (!out_bufsz) { \
+ VHOST_IOCTL_RETRY(0, sizeof(type)); \
+ } else { \
+ result = (func)(ctx, &(var)); \
+ fuse_reply_ioctl(req, result, &(var), sizeof(type)); \
+ } \
+} while (0)
+
+/*
+ * Boilerplate code for CUSE Read/Write IOCTL
+ * Implicit arguments: ctx, req, result, in_bufsz, in_buf.
+ */
+#define VHOST_IOCTL_RW(type1, var1, type2, var2, func) do { \
+ if (!in_bufsz) { \
+ VHOST_IOCTL_RETRY(sizeof(type1), sizeof(type2)); \
+ } else { \
+ (var1) = *(const type1*) (in_buf); \
+ result = (func)(ctx, (var1), &(var2)); \
+ fuse_reply_ioctl(req, result, &(var2), sizeof(type2)); \
+ } \
+} while (0)
+
+/**
+ * The IOCTLs are handled using CUSE/FUSE in userspace. Depending on
+ * the type of IOCTL a buffer is requested to read or to write. This
+ * request is handled by FUSE and the buffer is then given to CUSE.
+ */
+static void
+vhost_net_ioctl(fuse_req_t req, int cmd, void *arg,
+ struct fuse_file_info *fi, __rte_unused unsigned flags,
+ const void *in_buf, size_t in_bufsz, size_t out_bufsz)
+{
+ struct vhost_device_ctx ctx = fuse_req_to_vhost_ctx(req, fi);
+ struct vhost_vring_file file;
+ struct vhost_vring_state state;
+ struct vhost_vring_addr addr;
+ uint64_t features;
+ uint32_t index;
+ int result = 0;
+
+ switch (cmd) {
+
+ case VHOST_NET_SET_BACKEND:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_NET_SET_BACKEND\n", ctx.fh);
+ VHOST_IOCTL_R(struct vhost_vring_file, file, ops->set_backend);
+ break;
+
+ case VHOST_GET_FEATURES:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_GET_FEATURES\n", ctx.fh);
+ VHOST_IOCTL_W(uint64_t, features, ops->get_features);
+ break;
+
+ case VHOST_SET_FEATURES:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_FEATURES\n", ctx.fh);
+ VHOST_IOCTL_R(uint64_t, features, ops->set_features);
+ break;
+
+ case VHOST_RESET_OWNER:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_RESET_OWNER\n", ctx.fh);
+ VHOST_IOCTL(ops->reset_owner);
+ break;
+
+ case VHOST_SET_OWNER:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_OWNER\n", ctx.fh);
+ VHOST_IOCTL(ops->set_owner);
+ break;
+
+ case VHOST_SET_MEM_TABLE:
+ /*TODO fix race condition.*/
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_MEM_TABLE\n", ctx.fh);
+ static struct vhost_memory mem_temp;
+
+ switch (in_bufsz) {
+ case 0:
+ VHOST_IOCTL_RETRY(sizeof(struct vhost_memory), 0);
+ break;
+
+ case sizeof(struct vhost_memory):
+ mem_temp = *(const struct vhost_memory *) in_buf;
+
+ if (mem_temp.nregions > 0) {
+ VHOST_IOCTL_RETRY(sizeof(struct vhost_memory) + (sizeof(struct vhost_memory_region) * mem_temp.nregions), 0);
+ } else {
+ result = -1;
+ fuse_reply_ioctl(req, result, NULL, 0);
+ }
+ break;
+
+ default:
+ result = ops->set_mem_table(ctx, in_buf, mem_temp.nregions);
+ if (result)
+ fuse_reply_err(req, EINVAL);
+ else
+ fuse_reply_ioctl(req, result, NULL, 0);
+
+ }
+
+ break;
+
+ case VHOST_SET_VRING_NUM:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_VRING_NUM\n", ctx.fh);
+ VHOST_IOCTL_R(struct vhost_vring_state, state, ops->set_vring_num);
+ break;
+
+ case VHOST_SET_VRING_BASE:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_VRING_BASE\n", ctx.fh);
+ VHOST_IOCTL_R(struct vhost_vring_state, state, ops->set_vring_base);
+ break;
+
+ case VHOST_GET_VRING_BASE:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_GET_VRING_BASE\n", ctx.fh);
+ VHOST_IOCTL_RW(uint32_t, index, struct vhost_vring_state, state, ops->get_vring_base);
+ break;
+
+ case VHOST_SET_VRING_ADDR:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_VRING_ADDR\n", ctx.fh);
+ VHOST_IOCTL_R(struct vhost_vring_addr, addr, ops->set_vring_addr);
+ break;
+
+ case VHOST_SET_VRING_KICK:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_VRING_KICK\n", ctx.fh);
+ VHOST_IOCTL_R(struct vhost_vring_file, file, ops->set_vring_kick);
+ break;
+
+ case VHOST_SET_VRING_CALL:
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_VRING_CALL\n", ctx.fh);
+ VHOST_IOCTL_R(struct vhost_vring_file, file, ops->set_vring_call);
+ break;
+
+ default:
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") IOCTL: DOES NOT EXIST\n", ctx.fh);
+ result = -1;
+ fuse_reply_ioctl(req, result, NULL, 0);
+ }
+
+ if (result < 0)
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: FAIL\n", ctx.fh);
+ else
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: SUCCESS\n", ctx.fh);
+}
+
+/**
+ * Structure populated with the open, release and ioctl function pointers for CUSE.
+ */
+static const struct cuse_lowlevel_ops vhost_net_ops = {
+ .open = vhost_net_open,
+ .release = vhost_net_release,
+ .ioctl = vhost_net_ioctl,
+};
+
+/**
+ * cuse_info is populated and used to register the CUSE device. The
+ * vhost_net_device_ops are retrieved via get_virtio_net_callbacks() at registration.
+ */
+int
+rte_vhost_driver_register(const char *dev_name)
+{
+ struct cuse_info cuse_info;
+ char device_name[PATH_MAX] = "";
+ char char_device_name[PATH_MAX] = "";
+ const char *device_argv[] = { device_name };
+
+ char fuse_opt_dummy[] = FUSE_OPT_DUMMY;
+ char fuse_opt_fore[] = FUSE_OPT_FORE;
+ char fuse_opt_nomulti[] = FUSE_OPT_NOMULTI;
+ char *fuse_argv[] = {fuse_opt_dummy, fuse_opt_fore, fuse_opt_nomulti};
+
+ if (access(cuse_device_name, R_OK | W_OK) < 0) {
+ RTE_LOG(ERR, VHOST_CONFIG, "Character device %s can't be accessed, maybe not exist\n", cuse_device_name);
+ return -1;
+ }
+
+ /*
+ * The device name is created. This is passed to QEMU so that it can register
+ * the device with our application. The dev_name allows us to have multiple instances
+ * of userspace vhost which we can then add devices to separately.
+ */
+ snprintf(device_name, PATH_MAX, "DEVNAME=%s", dev_name);
+ snprintf(char_device_name, PATH_MAX, "/dev/%s", dev_name);
+
+ /* Check if device already exists. */
+ if (access(char_device_name, F_OK) != -1) {
+ RTE_LOG(ERR, VHOST_CONFIG, "Character device %s already exists\n", char_device_name);
+ return -1;
+ }
+
+ memset(&cuse_info, 0, sizeof(cuse_info));
+ cuse_info.dev_major = default_major;
+ cuse_info.dev_minor = default_minor;
+ cuse_info.dev_info_argc = 1;
+ cuse_info.dev_info_argv = device_argv;
+ cuse_info.flags = CUSE_UNRESTRICTED_IOCTL;
+
+ ops = get_virtio_net_callbacks();
+
+ session = cuse_lowlevel_setup(3, fuse_argv,
+ &cuse_info, &vhost_net_ops, 0, NULL);
+ if (session == NULL)
+ return -1;
+
+ return 0;
+}
+
+
+/**
+ * The CUSE session is launched allowing the application to receive open, release and ioctl calls.
+ */
+int
+rte_vhost_driver_session_start(void)
+{
+ fuse_session_loop(session);
+
+ return 0;
+}
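
A note on the retry macros above: CUSE unrestricted ioctls arrive twice. The first call carries no data (in_bufsz/out_bufsz are 0), the handler describes the buffers it needs via fuse_reply_ioctl_retry(), and CUSE re-issues the call with those buffers filled. Expanded for a read-style command such as VHOST_SET_FEATURES, the flow looks roughly like this (an illustrative fragment of the handler body, not extra patch code):

/* First pass: no input yet, ask CUSE to fetch sizeof(uint64_t)
 * from the caller's arg pointer. */
if (in_bufsz == 0) {
	struct iovec iov_r = { arg, sizeof(uint64_t) };

	fuse_reply_ioctl_retry(req, &iov_r, 1, NULL, 0);
} else {
	/* Second pass: in_buf now holds the caller's value. */
	uint64_t features = *(const uint64_t *)in_buf;

	result = ops->set_features(ctx, &features);
	fuse_reply_ioctl(req, result, NULL, 0);
}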
diff --git a/lib/librte_vhost/vhost-net-cdev.h b/lib/librte_vhost/vhost-net-cdev.h
new file mode 100644
index 0000000..01a1b58
--- /dev/null
+++ b/lib/librte_vhost/vhost-net-cdev.h
@@ -0,0 +1,109 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <stdio.h>
+#include <sys/types.h>
+#include <unistd.h>
+#include <linux/vhost.h>
+
+#include <rte_log.h>
+
+/* Macros for printing using RTE_LOG */
+#define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
+#define RTE_LOGTYPE_VHOST_DATA RTE_LOGTYPE_USER1
+
+#ifdef RTE_LIBRTE_VHOST_DEBUG
+#define VHOST_MAX_PRINT_BUFF 6072
+#define LOG_LEVEL RTE_LOG_DEBUG
+#define LOG_DEBUG(log_type, fmt, args...) RTE_LOG(DEBUG, log_type, fmt, ##args)
+#define VHOST_PRINT_PACKET(device, addr, size, header) do { \
+ char *pkt_addr = (char *)(addr); \
+ unsigned int index; \
+ char packet[VHOST_MAX_PRINT_BUFF]; \
+ \
+ if ((header)) \
+ snprintf(packet, VHOST_MAX_PRINT_BUFF, "(%"PRIu64") Header size %d: ", (device->device_fh), (size)); \
+ else \
+ snprintf(packet, VHOST_MAX_PRINT_BUFF, "(%"PRIu64") Packet size %d: ", (device->device_fh), (size)); \
+ for (index = 0; index < (size); index++) { \
+ snprintf(packet + strnlen(packet, VHOST_MAX_PRINT_BUFF), VHOST_MAX_PRINT_BUFF - strnlen(packet, VHOST_MAX_PRINT_BUFF), \
+ "%02hhx ", pkt_addr[index]); \
+ } \
+ snprintf(packet + strnlen(packet, VHOST_MAX_PRINT_BUFF), VHOST_MAX_PRINT_BUFF - strnlen(packet, VHOST_MAX_PRINT_BUFF), "\n"); \
+ \
+ LOG_DEBUG(VHOST_DATA, "%s", packet); \
+} while (0)
+#else
+#define LOG_LEVEL RTE_LOG_INFO
+#define LOG_DEBUG(log_type, fmt, args...) do {} while (0)
+#define VHOST_PRINT_PACKET(device, addr, size, header) do {} while (0)
+#endif
+
+/**
+ * Structure used to identify device context.
+ */
+struct vhost_device_ctx {
+ pid_t pid; /**< PID of process calling the IOCTL. */
+ uint64_t fh; /**< Populated with fi->fh to track the device index. */
+};
+
+/**
+ * Structure contains function pointers to be defined in virtio-net.c. These
+ * functions are called in CUSE context and are used to configure devices.
+ */
+struct vhost_net_device_ops {
+ int (*new_device)(struct vhost_device_ctx);
+ void (*destroy_device)(struct vhost_device_ctx);
+
+ int (*get_features)(struct vhost_device_ctx, uint64_t *);
+ int (*set_features)(struct vhost_device_ctx, uint64_t *);
+
+ int (*set_mem_table)(struct vhost_device_ctx, const void *, uint32_t);
+
+ int (*set_vring_num)(struct vhost_device_ctx, struct vhost_vring_state *);
+ int (*set_vring_addr)(struct vhost_device_ctx, struct vhost_vring_addr *);
+ int (*set_vring_base)(struct vhost_device_ctx, struct vhost_vring_state *);
+ int (*get_vring_base)(struct vhost_device_ctx, uint32_t, struct vhost_vring_state *);
+
+ int (*set_vring_kick)(struct vhost_device_ctx, struct vhost_vring_file *);
+ int (*set_vring_call)(struct vhost_device_ctx, struct vhost_vring_file *);
+
+ int (*set_backend)(struct vhost_device_ctx, struct vhost_vring_file *);
+
+ int (*set_owner)(struct vhost_device_ctx);
+ int (*reset_owner)(struct vhost_device_ctx);
+};
+
+
+struct vhost_net_device_ops const *get_virtio_net_callbacks(void);
diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
new file mode 100644
index 0000000..d25457b
--- /dev/null
+++ b/lib/librte_vhost/vhost_rxtx.c
@@ -0,0 +1,292 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <linux/virtio_net.h>
+
+#include <rte_mbuf.h>
+#include <rte_memcpy.h>
+#include <rte_virtio_net.h>
+
+#include "vhost-net-cdev.h"
+
+#define VHOST_MAX_PKT_BURST 64
+#define VHOST_MAX_MRG_PKT_BURST 64
+
+
+uint32_t
+rte_vhost_enqueue_burst(struct virtio_net *dev, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count)
+{
+ struct vhost_virtqueue *vq;
+ struct vring_desc *desc;
+ struct rte_mbuf *buff;
+ /* The virtio_hdr is initialised to 0. */
+ struct virtio_net_hdr_mrg_rxbuf virtio_hdr = {{0, 0, 0, 0, 0, 0}, 0};
+ uint64_t buff_addr = 0;
+ uint64_t buff_hdr_addr = 0;
+ uint32_t head[VHOST_MAX_PKT_BURST], packet_len = 0;
+ uint32_t head_idx, packet_success = 0;
+ uint32_t mergeable, mrg_count = 0;
+ uint16_t avail_idx, res_cur_idx;
+ uint16_t res_base_idx, res_end_idx;
+ uint16_t free_entries;
+ uint8_t success = 0;
+
+ LOG_DEBUG(VHOST_DATA, "(%"PRIu64") %s()\n", dev->device_fh, __func__);
+ if (unlikely(queue_id != VIRTIO_RXQ)) {
+ LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n");
+ return 0;
+ }
+
+ vq = dev->virtqueue[VIRTIO_RXQ];
+ count = (count > VHOST_MAX_PKT_BURST) ? VHOST_MAX_PKT_BURST : count;
+ /* As many data cores may want access to available buffers, they need to be reserved. */
+ do {
+ res_base_idx = vq->last_used_idx_res;
+ avail_idx = *((volatile uint16_t *)&vq->avail->idx);
+
+ free_entries = (avail_idx - res_base_idx);
+ /* Check that we have enough buffers. */
+ if (unlikely(count > free_entries))
+ count = free_entries;
+
+ if (count == 0)
+ return 0;
+
+ res_end_idx = res_base_idx + count;
+ /* vq->last_used_idx_res is atomically updated. */
+ /* TODO: Allow to disable cmpset if no concurrency in application */
+ success = rte_atomic16_cmpset(&vq->last_used_idx_res,
+ res_base_idx, res_end_idx);
+ /* If there is contention here and failed, try again. */
+ } while (unlikely(success == 0));
+ res_cur_idx = res_base_idx;
+ LOG_DEBUG(VHOST_DATA, "(%"PRIu64") Current Index %d| End Index %d\n",
+ dev->device_fh,
+ res_cur_idx, res_end_idx);
+
+ /* Prefetch available ring to retrieve indexes. */
+ rte_prefetch0(&vq->avail->ring[res_cur_idx & (vq->size - 1)]);
+
+ /* Check if the VIRTIO_NET_F_MRG_RXBUF feature is enabled. */
+ mergeable = dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF);
+
+ /* Retrieve all of the head indexes first to avoid caching issues. */
+ for (head_idx = 0; head_idx < count; head_idx++)
+ head[head_idx] = vq->avail->ring[(res_cur_idx + head_idx) & (vq->size - 1)];
+
+ /* Prefetch descriptor index. */
+ rte_prefetch0(&vq->desc[head[packet_success]]);
+
+ while (res_cur_idx != res_end_idx) {
+ /* Get descriptor from available ring */
+ desc = &vq->desc[head[packet_success]];
+
+ buff = pkts[packet_success];
+
+ /* Convert from gpa to vva (guest physical addr -> vhost virtual addr) */
+ buff_addr = gpa_to_vva(dev, desc->addr);
+ /* Prefetch buffer address. */
+ rte_prefetch0((void *)(uintptr_t)buff_addr);
+
+ if (mergeable && (mrg_count != 0)) {
+ desc->len = packet_len = rte_pktmbuf_data_len(buff);
+ } else {
+ /* Copy virtio_hdr to packet and increment buffer address */
+ buff_hdr_addr = buff_addr;
+ packet_len = rte_pktmbuf_data_len(buff) + vq->vhost_hlen;
+
+ /*
+ * If the descriptors are chained the header and data are placed in
+ * separate buffers.
+ */
+ if (desc->flags & VRING_DESC_F_NEXT) {
+ desc->len = vq->vhost_hlen;
+ desc = &vq->desc[desc->next];
+ /* Buffer address translation. */
+ buff_addr = gpa_to_vva(dev, desc->addr);
+ desc->len = rte_pktmbuf_data_len(buff);
+ } else {
+ buff_addr += vq->vhost_hlen;
+ desc->len = packet_len;
+ }
+ }
+
+ VHOST_PRINT_PACKET(dev, (uintptr_t)buff_addr, rte_pktmbuf_data_len(buff), 0);
+
+ /* Update used ring with desc information */
+ vq->used->ring[res_cur_idx & (vq->size - 1)].id = head[packet_success];
+ vq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;
+
+ /* Copy mbuf data to buffer */
+ /* TODO fixme for sg mbuf and the case that desc couldn't hold the mbuf data */
+ rte_memcpy((void *)(uintptr_t)buff_addr, (const void *)buff->pkt.data, rte_pktmbuf_data_len(buff));
+
+ res_cur_idx++;
+ packet_success++;
+
+ /* If mergeable is disabled then a header is required per buffer. */
+ if (!mergeable) {
+ rte_memcpy((void *)(uintptr_t)buff_hdr_addr, (const void *)&virtio_hdr, vq->vhost_hlen);
+ VHOST_PRINT_PACKET(dev, (uintptr_t)buff_hdr_addr, vq->vhost_hlen, 1);
+ } else {
+ mrg_count++;
+ /* Merge buffer can only handle so many buffers at a time. Tell the guest if this limit is reached. */
+ if ((mrg_count == VHOST_MAX_MRG_PKT_BURST) || (res_cur_idx == res_end_idx)) {
+ virtio_hdr.num_buffers = mrg_count;
+ LOG_DEBUG(VHOST_DATA, "(%"PRIu64") RX: Num merge buffers %d\n", dev->device_fh, virtio_hdr.num_buffers);
+ rte_memcpy((void *)(uintptr_t)buff_hdr_addr, (const void *)&virtio_hdr, vq->vhost_hlen);
+ VHOST_PRINT_PACKET(dev, (uintptr_t)buff_hdr_addr, vq->vhost_hlen, 1);
+ mrg_count = 0;
+ }
+ }
+ if (res_cur_idx < res_end_idx) {
+ /* Prefetch descriptor index. */
+ rte_prefetch0(&vq->desc[head[packet_success]]);
+ }
+ }
+
+ rte_compiler_barrier();
+
+ /* Wait until it's our turn to add our buffer to the used ring. */
+ while (unlikely(vq->last_used_idx != res_base_idx))
+ rte_pause();
+
+ *(volatile uint16_t *)&vq->used->idx += count;
+ vq->last_used_idx = res_end_idx;
+
+ /* Kick the guest if necessary. */
+ if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
+ eventfd_write((int)vq->kickfd, 1);
+ return count;
+}
+
+
+uint32_t
+rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint32_t count)
+{
+ struct rte_mbuf *mbuf;
+ struct vhost_virtqueue *vq;
+ struct vring_desc *desc;
+ uint64_t buff_addr = 0;
+ uint32_t head[VHOST_MAX_PKT_BURST];
+ uint32_t used_idx;
+ uint32_t i;
+ uint16_t free_entries, packet_success = 0;
+ uint16_t avail_idx;
+
+ if (unlikely(queue_id != VIRTIO_TXQ)) {
+ LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n");
+ return 0;
+ }
+
+ vq = dev->virtqueue[VIRTIO_TXQ];
+ avail_idx = *((volatile uint16_t *)&vq->avail->idx);
+
+ /* If there are no available buffers then return. */
+ if (vq->last_used_idx == avail_idx)
+ return 0;
+
+ LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_tx()\n", dev->device_fh);
+
+ /* Prefetch available ring to retrieve head indexes. */
+ rte_prefetch0(&vq->avail->ring[vq->last_used_idx & (vq->size - 1)]);
+
+ /* Get the number of free entries in the ring. */
+ free_entries = (avail_idx - vq->last_used_idx);
+
+ if (free_entries > count)
+ free_entries = count;
+ /* Limit to MAX_PKT_BURST. */
+ if (free_entries > VHOST_MAX_PKT_BURST)
+ free_entries = VHOST_MAX_PKT_BURST;
+
+ LOG_DEBUG(VHOST_DATA, "(%"PRIu64") Buffers available %d\n", dev->device_fh, free_entries);
+ /* Retrieve all of the head indexes first to avoid caching issues. */
+ for (i = 0; i < free_entries; i++)
+ head[i] = vq->avail->ring[(vq->last_used_idx + i) & (vq->size - 1)];
+
+ /* Prefetch descriptor index. */
+ rte_prefetch0(&vq->desc[head[packet_success]]);
+ rte_prefetch0(&vq->used->ring[vq->last_used_idx & (vq->size - 1)]);
+
+ while (packet_success < free_entries) {
+ desc = &vq->desc[head[packet_success]];
+
+ /* Discard first buffer as it is the virtio header */
+ desc = &vq->desc[desc->next];
+
+ /* Buffer address translation. */
+ buff_addr = gpa_to_vva(dev, desc->addr);
+ /* Prefetch buffer address. */
+ rte_prefetch0((void *)(uintptr_t)buff_addr);
+
+ used_idx = vq->last_used_idx & (vq->size - 1);
+
+ if (packet_success < (free_entries - 1)) {
+ /* Prefetch descriptor index. */
+ rte_prefetch0(&vq->desc[head[packet_success+1]]);
+ rte_prefetch0(&vq->used->ring[(used_idx + 1) & (vq->size - 1)]);
+ }
+
+ /* Update used index buffer information. */
+ vq->used->ring[used_idx].id = head[packet_success];
+ vq->used->ring[used_idx].len = 0;
+
+ mbuf = rte_pktmbuf_alloc(mbuf_pool);
+ if (unlikely(mbuf == NULL)) {
+ RTE_LOG(ERR, VHOST_DATA, "Failed to allocate memory for mbuf.\n");
+ return packet_success;
+ }
+ mbuf->pkt.data_len = desc->len;
+ mbuf->pkt.pkt_len = mbuf->pkt.data_len;
+
+ rte_memcpy((void *) mbuf->pkt.data,
+ (const void *) buff_addr, mbuf->pkt.data_len);
+
+ pkts[packet_success] = mbuf;
+
+ VHOST_PRINT_PACKET(dev, (uintptr_t)buff_addr, desc->len, 0);
+
+ vq->last_used_idx++;
+ packet_success++;
+ }
+
+ rte_compiler_barrier();
+ vq->used->idx += packet_success;
+ /* Kick guest if required. */
+ if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
+ eventfd_write((int)vq->kickfd, 1);
+
+ return packet_success;
+}
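
Taken together, a data core typically drives both paths in a poll loop, e.g. bridging physical port 0 and one virtio device. This is a minimal sketch; the port/queue numbers, burst size and mbuf pool are assumptions, and error paths are trimmed:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_virtio_net.h>

#define BURST_SIZE 32

static void
switch_core_loop(struct virtio_net *dev, struct rte_mempool *mbuf_pool)
{
	struct rte_mbuf *pkts[BURST_SIZE];
	uint32_t n, sent;

	while (dev->flags & VIRTIO_DEV_RUNNING) {
		/* NIC -> guest: enqueue copies data into guest buffers, so
		 * the caller still owns (and must free) the mbufs. */
		n = rte_eth_rx_burst(0, 0, pkts, BURST_SIZE);
		if (n > 0) {
			rte_vhost_enqueue_burst(dev, VIRTIO_RXQ, pkts, n);
			while (n > 0)
				rte_pktmbuf_free(pkts[--n]);
		}

		/* Guest -> NIC: dequeue allocates mbufs from mbuf_pool. */
		n = rte_vhost_dequeue_burst(dev, VIRTIO_TXQ, mbuf_pool,
				pkts, BURST_SIZE);
		sent = rte_eth_tx_burst(0, 0, pkts, (uint16_t)n);
		while (sent < n)
			rte_pktmbuf_free(pkts[sent++]);
	}
}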
diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c
new file mode 100644
index 0000000..80e3b8c
--- /dev/null
+++ b/lib/librte_vhost/virtio-net.c
@@ -0,0 +1,1002 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <dirent.h>
+#include <fuse/cuse_lowlevel.h>
+#include <linux/vhost.h>
+#include <linux/virtio_net.h>
+#include <stddef.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <sys/eventfd.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <unistd.h>
+
+#include <rte_ethdev.h>
+#include <rte_log.h>
+#include <rte_string_fns.h>
+#include <rte_memory.h>
+#include <rte_virtio_net.h>
+
+#include "vhost-net-cdev.h"
+#include "eventfd_link/eventfd_link.h"
+
+/**
+ * Device linked list structure for configuration.
+ */
+struct virtio_net_config_ll {
+ struct virtio_net dev; /* Virtio device. */
+ struct virtio_net_config_ll *next; /* Next entry on linked list. */
+};
+
+static const char eventfd_cdev[] = "/dev/eventfd-link";
+
+/* device ops to add/remove device to data core. */
+static struct virtio_net_device_ops const *notify_ops;
+/* Root address of the linked list in the configuration core. */
+static struct virtio_net_config_ll *ll_root;
+
+/* Features supported by this library. */
+#define VHOST_SUPPORTED_FEATURES (1ULL << VIRTIO_NET_F_MRG_RXBUF)
+static uint64_t VHOST_FEATURES = VHOST_SUPPORTED_FEATURES;
+
+/* Line size for reading maps file. */
+static const uint32_t BUFSIZE = PATH_MAX;
+
+/* Size of prot char array in procmap. */
+#define PROT_SZ 5
+
+/* Number of elements in procmap struct. */
+#define PROCMAP_SZ 8
+
+/* Structure containing information gathered from maps file. */
+struct procmap {
+ uint64_t va_start; /* Start virtual address in file. */
+ uint64_t len; /* Size of file. */
+ uint64_t pgoff; /* Not used. */
+ uint32_t maj; /* Not used. */
+ uint32_t min; /* Not used. */
+ uint32_t ino; /* Not used. */
+ char prot[PROT_SZ]; /* Not used. */
+ char fname[PATH_MAX]; /* File name. */
+};
+
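+/*
+ * Each maps line provides the eight fields above in this shape
+ * (illustrative values only):
+ *
+ *   2aaaaac00000-2aaabac00000 rw-s 00000000 00:18 10289 /dev/hugepages/...
+ *   va_start     end          prot pgoff    maj min ino  fname
+ *
+ * host_memory_map() below tokenises such lines into struct procmap.
+ */
+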
+/**
+ * Converts QEMU virtual address to Vhost virtual address. This function is used
+ * to convert the ring addresses to our address space.
+ */
+static uint64_t
+qva_to_vva(struct virtio_net *dev, uint64_t qemu_va)
+{
+ struct virtio_memory_regions *region;
+ uint64_t vhost_va = 0;
+ uint32_t regionidx = 0;
+
+ /* Find the region where the address lives. */
+ for (regionidx = 0; regionidx < dev->mem->nregions; regionidx++) {
+ region = &dev->mem->regions[regionidx];
+ if ((qemu_va >= region->userspace_address) &&
+ (qemu_va <= region->userspace_address +
+ region->memory_size)) {
+ vhost_va = dev->mem->mapped_address + qemu_va - dev->mem->base_address;
+ break;
+ }
+ }
+ return vhost_va;
+}
+
+/**
+ * Locate the file containing QEMU's memory space and map it to our address space.
+ */
+static int
+host_memory_map(struct virtio_net *dev, struct virtio_memory *mem, pid_t pid, uint64_t addr)
+{
+ struct dirent *dptr = NULL;
+ struct procmap procmap;
+ DIR *dp = NULL;
+ int fd;
+ int i;
+ char memfile[PATH_MAX];
+ char mapfile[PATH_MAX];
+ char procdir[PATH_MAX];
+ char resolved_path[PATH_MAX];
+ FILE *fmap;
+ void *map;
+ uint8_t found = 0;
+ char line[BUFSIZE];
+ char dlm[] = "- : ";
+ char *str, *sp, *in[PROCMAP_SZ];
+ char *end = NULL;
+
+ /* Path where mem files are located. */
+ snprintf(procdir, PATH_MAX, "/proc/%u/fd/", pid);
+ /* Maps file used to locate mem file. */
+ snprintf(mapfile, PATH_MAX, "/proc/%u/maps", pid);
+
+ fmap = fopen(mapfile, "r");
+ if (fmap == NULL) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to open maps file for pid %d\n", dev->device_fh, pid);
+ return -1;
+ }
+
+ /* Read through maps file until we find our base_address. */
+ while (fgets(line, BUFSIZE, fmap) != 0) {
+ str = line;
+ errno = 0;
+ /* Split line in to fields. */
+ for (i = 0; i < PROCMAP_SZ; i++) {
+ in[i] = strtok_r(str, &dlm[i], &sp);
+ if ((in[i] == NULL) || (errno != 0)) {
+ fclose(fmap);
+ return -1;
+ }
+ str = NULL;
+ }
+
+ /* Convert/Copy each field as needed. */
+ procmap.va_start = strtoull(in[0], &end, 16);
+ if ((*in[0] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+ fclose(fmap);
+ return -1;
+ }
+
+ procmap.len = strtoull(in[1], &end, 16);
+ if ((*in[1] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+ fclose(fmap);
+ return -1;
+ }
+
+ procmap.pgoff = strtoull(in[3], &end, 16);
+ if ((*in[3] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+ fclose(fmap);
+ return -1;
+ }
+
+ procmap.maj = strtoul(in[4], &end, 16);
+ if ((*in[4] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+ fclose(fmap);
+ return -1;
+ }
+
+ procmap.min = strtoul(in[5], &end, 16);
+ if ((*in[5] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+ fclose(fmap);
+ return -1;
+ }
+
+ procmap.ino = strtoul(in[6], &end, 16);
+ if ((*in[6] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+ fclose(fmap);
+ return -1;
+ }
+
+ memcpy(&procmap.prot, in[2], PROT_SZ);
+ memcpy(&procmap.fname, in[7], PATH_MAX);
+
+ if (procmap.va_start == addr) {
+ procmap.len = procmap.len - procmap.va_start;
+ found = 1;
+ break;
+ }
+ }
+ fclose(fmap);
+
+ if (!found) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find memory file in pid %d maps file\n", dev->device_fh, pid);
+ return -1;
+ }
+
+ /* Find the guest memory file among the process fds. */
+ dp = opendir(procdir);
+ if (dp == NULL) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Cannot open pid %d process directory\n", dev->device_fh, pid);
+ return -1;
+
+ }
+
+ found = 0;
+
+ /* Read the fd directory contents. */
+ while (NULL != (dptr = readdir(dp))) {
+ snprintf(memfile, PATH_MAX, "/proc/%u/fd/%s", pid, dptr->d_name);
+ if (realpath(memfile, resolved_path) == NULL) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to resolve fd directory\n", dev->device_fh);
+ closedir(dp);
+ return -1;
+ }
+ if (strncmp(resolved_path, procmap.fname,
+ strnlen(procmap.fname, PATH_MAX)) == 0) {
+ found = 1;
+ break;
+ }
+ }
+
+ closedir(dp);
+
+ if (found == 0) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find memory file for pid %d\n", dev->device_fh, pid);
+ return -1;
+ }
+ /* Open the shared memory file and map the memory into this process. */
+ fd = open(memfile, O_RDWR);
+
+ if (fd == -1) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to open %s for pid %d\n", dev->device_fh, memfile, pid);
+ return -1;
+ }
+
+ map = mmap(0, (size_t)procmap.len, PROT_READ|PROT_WRITE , MAP_POPULATE|MAP_SHARED, fd, 0);
+ close(fd);
+
+ if (map == MAP_FAILED) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Error mapping the file %s for pid %d\n", dev->device_fh, memfile, pid);
+ return -1;
+ }
+
+ /* Store the memory address and size in the device data structure */
+ mem->mapped_address = (uint64_t)(uintptr_t)map;
+ mem->mapped_size = procmap.len;
+
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Mem File: %s->%s - Size: %llu - VA: %p\n", dev->device_fh,
+ memfile, resolved_path, (long long unsigned)mem->mapped_size, map);
+
+ return 0;
+}
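+
+/*
+ * Illustrative note: a typical /proc/<pid>/maps entry parsed above looks like
+ *
+ *   2aaaaac00000-2aaabac00000 rw-s 00000000 00:0d 22245 /dev/hugepages/qemu
+ *
+ * where the hugepage path is an assumed example. The tokens land in
+ * in[0]..in[7] as start address, end address, protection, page offset,
+ * major/minor device numbers, inode and file name; the fd scan above
+ * matches candidate fds against that file name.
+ */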
+
+/**
+ * Initialise all variables in device structure.
+ */
+static void
+init_device(struct virtio_net *dev)
+{
+ uint64_t vq_offset;
+
+ /* The virtqueues have already been allocated, so zero only from the mem member onwards to keep their pointers intact. */
+ vq_offset = offsetof(struct virtio_net, mem);
+
+ /* Set everything to 0. */
+ memset((void *)(uintptr_t)((uint64_t)(uintptr_t)dev + vq_offset), 0,
+ (sizeof(struct virtio_net) - (size_t)vq_offset));
+ memset(dev->virtqueue[VIRTIO_RXQ], 0, sizeof(struct vhost_virtqueue));
+ memset(dev->virtqueue[VIRTIO_TXQ], 0, sizeof(struct vhost_virtqueue));
+
+ /* Backends are set to -1 indicating an inactive device. */
+ dev->virtqueue[VIRTIO_RXQ]->backend = VIRTIO_DEV_STOPPED;
+ dev->virtqueue[VIRTIO_TXQ]->backend = VIRTIO_DEV_STOPPED;
+}
+
+/**
+ * Unmap any memory, close any file descriptors and free any memory owned by a device.
+ */
+static void
+cleanup_device(struct virtio_net *dev)
+{
+ /* Unmap QEMU memory file if mapped. */
+ if (dev->mem) {
+ munmap((void *)(uintptr_t)dev->mem->mapped_address, (size_t)dev->mem->mapped_size);
+ free(dev->mem);
+ }
+
+ /* Close any event notifiers opened by device. */
+ if (dev->virtqueue[VIRTIO_RXQ]->callfd)
+ close((int)dev->virtqueue[VIRTIO_RXQ]->callfd);
+ if (dev->virtqueue[VIRTIO_RXQ]->kickfd)
+ close((int)dev->virtqueue[VIRTIO_RXQ]->kickfd);
+ if (dev->virtqueue[VIRTIO_TXQ]->callfd)
+ close((int)dev->virtqueue[VIRTIO_TXQ]->callfd);
+ if (dev->virtqueue[VIRTIO_TXQ]->kickfd)
+ close((int)dev->virtqueue[VIRTIO_TXQ]->kickfd);
+}
+
+/**
+ * Release virtqueues and device memory.
+ */
+static void
+free_device(struct virtio_net_config_ll *ll_dev)
+{
+ /* Free any malloc'd memory */
+ free(ll_dev->dev.virtqueue[VIRTIO_RXQ]);
+ free(ll_dev->dev.virtqueue[VIRTIO_TXQ]);
+ free(ll_dev);
+}
+
+/**
+ * Retrieves an entry from the devices configuration linked list.
+ */
+static struct virtio_net_config_ll *
+get_config_ll_entry(struct vhost_device_ctx ctx)
+{
+ struct virtio_net_config_ll *ll_dev = ll_root;
+
+ /* Loop through linked list until the device_fh is found. */
+ while (ll_dev != NULL) {
+ if (ll_dev->dev.device_fh == ctx.fh)
+ return ll_dev;
+ ll_dev = ll_dev->next;
+ }
+
+ return NULL;
+}
+
+/**
+ * Searches the device configuration linked list and retrieves the device if it exists.
+ */
+static struct virtio_net *
+get_device(struct vhost_device_ctx ctx)
+{
+ struct virtio_net_config_ll *ll_dev;
+
+ ll_dev = get_config_ll_entry(ctx);
+
+ /* If a matching entry is found in the linked list, return the device in that entry. */
+ if (ll_dev)
+ return &ll_dev->dev;
+
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Device not found in linked list.\n", ctx.fh);
+ return NULL;
+}
+
+/**
+ * Add entry containing a device to the device configuration linked list.
+ */
+static void
+add_config_ll_entry(struct virtio_net_config_ll *new_ll_dev)
+{
+ struct virtio_net_config_ll *ll_dev = ll_root;
+
+ /* If ll_dev is NULL this is the first device; that case is handled in the else branch. */
+ if (ll_dev) {
+ /* If the 1st device_fh != 0 then we insert our device here. */
+ if (ll_dev->dev.device_fh != 0) {
+ new_ll_dev->dev.device_fh = 0;
+ new_ll_dev->next = ll_dev;
+ ll_root = new_ll_dev;
+ } else {
+ /* Walk the list until we find an unused device_fh and insert the device at that entry. */
+ while ((ll_dev->next != NULL) && (ll_dev->dev.device_fh == (ll_dev->next->dev.device_fh - 1)))
+ ll_dev = ll_dev->next;
+
+ new_ll_dev->dev.device_fh = ll_dev->dev.device_fh + 1;
+ new_ll_dev->next = ll_dev->next;
+ ll_dev->next = new_ll_dev;
+ }
+ } else {
+ ll_root = new_ll_dev;
+ ll_root->dev.device_fh = 0;
+ }
+
+}
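+
+/*
+ * Worked example of the policy above: with existing handles {0, 1, 3} the
+ * walk stops at the entry with device_fh 1 (since 1 != 3 - 1), so the new
+ * device is assigned device_fh 2 and linked between 1 and 3; the smallest
+ * unused handle is always reused first.
+ */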
+
+/**
+ * Remove an entry from the device configuration linked list.
+ */
+static struct virtio_net_config_ll *
+rm_config_ll_entry(struct virtio_net_config_ll *ll_dev, struct virtio_net_config_ll *ll_dev_last)
+{
+ /* First remove the device and then clean it up. */
+ if (ll_dev == ll_root) {
+ ll_root = ll_dev->next;
+ cleanup_device(&ll_dev->dev);
+ free_device(ll_dev);
+ return ll_root;
+ } else {
+ if (likely(ll_dev_last != NULL)) {
+ ll_dev_last->next = ll_dev->next;
+ cleanup_device(&ll_dev->dev);
+ free_device(ll_dev);
+ return ll_dev_last->next;
+ } else {
+ cleanup_device(&ll_dev->dev);
+ free_device(ll_dev);
+ RTE_LOG(ERR, VHOST_CONFIG, "Remove entry from config_ll failed\n");
+ return NULL;
+ }
+ }
+}
+
+/**
+ * Function is called from the CUSE open function. The device structure is
+ * initialised and a new entry is added to the device configuration linked
+ * list.
+ */
+static int
+new_device(struct vhost_device_ctx ctx)
+{
+ struct virtio_net_config_ll *new_ll_dev;
+ struct vhost_virtqueue *virtqueue_rx, *virtqueue_tx;
+
+ /* Setup device and virtqueues. */
+ new_ll_dev = malloc(sizeof(struct virtio_net_config_ll));
+ if (new_ll_dev == NULL) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for dev.\n", ctx.fh);
+ return -1;
+ }
+
+ virtqueue_rx = malloc(sizeof(struct vhost_virtqueue));
+ if (virtqueue_rx == NULL) {
+ free(new_ll_dev);
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for virtqueue_rx.\n", ctx.fh);
+ return -1;
+ }
+
+ virtqueue_tx = malloc(sizeof(struct vhost_virtqueue));
+ if (virtqueue_tx == NULL) {
+ free(virtqueue_rx);
+ free(new_ll_dev);
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for virtqueue_tx.\n", ctx.fh);
+ return -1;
+ }
+
+ new_ll_dev->dev.virtqueue[VIRTIO_RXQ] = virtqueue_rx;
+ new_ll_dev->dev.virtqueue[VIRTIO_TXQ] = virtqueue_tx;
+
+ /* Initialise device and virtqueues. */
+ init_device(&new_ll_dev->dev);
+
+ new_ll_dev->next = NULL;
+
+ /* Add entry to device configuration linked list. */
+ add_config_ll_entry(new_ll_dev);
+
+ return new_ll_dev->dev.device_fh;
+}
+
+/**
+ * Function is called from the CUSE release function. This function will clean up
+ * the device and remove it from the device configuration linked list.
+ */
+static void
+destroy_device(struct vhost_device_ctx ctx)
+{
+ struct virtio_net_config_ll *ll_dev_cur_ctx, *ll_dev_last = NULL;
+ struct virtio_net_config_ll *ll_dev_cur = ll_root;
+
+ /* Find the linked list entry for the device to be removed. */
+ ll_dev_cur_ctx = get_config_ll_entry(ctx);
+ while (ll_dev_cur != NULL) {
+ /* When the matching entry is found it is removed from the list. */
+ if (ll_dev_cur == ll_dev_cur_ctx) {
+ /*
+ * If the device is running on a data core then call the function to remove it from
+ * the data core.
+ */
+ if ((ll_dev_cur->dev.flags & VIRTIO_DEV_RUNNING))
+ notify_ops->destroy_device(&(ll_dev_cur->dev));
+ ll_dev_cur = rm_config_ll_entry(ll_dev_cur, ll_dev_last);
+ /*TODO return here? */
+ } else {
+ ll_dev_last = ll_dev_cur;
+ ll_dev_cur = ll_dev_cur->next;
+ }
+ }
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_OWNER
+ * Currently this only verifies that the device has been initialised; success is returned if so.
+ */
+static int
+set_owner(struct vhost_device_ctx ctx)
+{
+ struct virtio_net *dev;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+
+ return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_RESET_OWNER
+ */
+static int
+reset_owner(struct vhost_device_ctx ctx)
+{
+ struct virtio_net_config_ll *ll_dev;
+
+ ll_dev = get_config_ll_entry(ctx);
+
+ cleanup_device(&ll_dev->dev);
+ init_device(&ll_dev->dev);
+
+ return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_GET_FEATURES
+ * The features that we support are requested.
+ */
+static int
+get_features(struct vhost_device_ctx ctx, uint64_t *pu)
+{
+ struct virtio_net *dev;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+
+ /* Send our supported features. */
+ *pu = VHOST_FEATURES;
+ return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_FEATURES
+ * We receive the negotiated set of features supported by us and the virtio device.
+ */
+static int
+set_features(struct vhost_device_ctx ctx, uint64_t *pu)
+{
+ struct virtio_net *dev;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+ if (*pu & ~VHOST_FEATURES)
+ return -1;
+
+ /* Store the negotiated feature list for the device. */
+ dev->features = *pu;
+
+ /* Set the vhost_hlen depending on whether VIRTIO_NET_F_MRG_RXBUF is set. */
+ if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) {
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Mergeable RX buffers enabled\n", dev->device_fh);
+ dev->virtqueue[VIRTIO_RXQ]->vhost_hlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+ dev->virtqueue[VIRTIO_TXQ]->vhost_hlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+ } else {
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Mergeable RX buffers disabled\n", dev->device_fh);
+ dev->virtqueue[VIRTIO_RXQ]->vhost_hlen = sizeof(struct virtio_net_hdr);
+ dev->virtqueue[VIRTIO_TXQ]->vhost_hlen = sizeof(struct virtio_net_hdr);
+ }
+ return 0;
+}
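+
+/*
+ * Illustrative sizes, assuming the standard Linux virtio_net header layout:
+ * struct virtio_net_hdr is 10 bytes, and virtio_net_hdr_mrg_rxbuf appends a
+ * 16-bit num_buffers field for 12 bytes. vhost_hlen thus selects how many
+ * bytes of each guest buffer the RX/TX paths treat as header.
+ */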
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_MEM_TABLE
+ * This function creates and populates the memory structure for the device. This includes
+ * storing offsets used to translate buffer addresses.
+ */
+static int
+set_mem_table(struct vhost_device_ctx ctx, const void *mem_regions_addr, uint32_t nregions)
+{
+ struct virtio_net *dev;
+ struct vhost_memory_region *mem_regions;
+ struct virtio_memory *mem;
+ uint64_t size = offsetof(struct vhost_memory, regions);
+ uint32_t regionidx, valid_regions;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+
+ if (dev->mem) {
+ munmap((void *)(uintptr_t)dev->mem->mapped_address, (size_t)dev->mem->mapped_size);
+ free(dev->mem);
+ }
+
+ /* Malloc the memory structure depending on the number of regions. */
+ mem = calloc(1, sizeof(struct virtio_memory) + (sizeof(struct virtio_memory_regions) * nregions));
+ if (mem == NULL) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for dev->mem.\n", dev->device_fh);
+ return -1;
+ }
+
+ mem->nregions = nregions;
+
+ mem_regions = (void *)(uintptr_t)((uint64_t)(uintptr_t)mem_regions_addr + size);
+
+ for (regionidx = 0; regionidx < mem->nregions; regionidx++) {
+ /* Populate the region structure for each region. */
+ mem->regions[regionidx].guest_phys_address = mem_regions[regionidx].guest_phys_addr;
+ mem->regions[regionidx].guest_phys_address_end = mem->regions[regionidx].guest_phys_address +
+ mem_regions[regionidx].memory_size;
+ mem->regions[regionidx].memory_size = mem_regions[regionidx].memory_size;
+ mem->regions[regionidx].userspace_address = mem_regions[regionidx].userspace_addr;
+
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") REGION: %u - GPA: %p - QEMU VA: %p - SIZE (%"PRIu64")\n", dev->device_fh,
+ regionidx, (void *)(uintptr_t)mem->regions[regionidx].guest_phys_address,
+ (void *)(uintptr_t)mem->regions[regionidx].userspace_address,
+ mem->regions[regionidx].memory_size);
+
+ /*set the base address mapping*/
+ if (mem->regions[regionidx].guest_phys_address == 0x0) {
+ mem->base_address = mem->regions[regionidx].userspace_address;
+ /* Map VM memory file */
+ if (host_memory_map(dev, mem, ctx.pid, mem->base_address) != 0) {
+ free(mem);
+ return -1;
+ }
+ }
+ }
+
+ /* Check that we have a valid base address. */
+ if (mem->base_address == 0) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find base address of qemu memory file.\n", dev->device_fh);
+ free(mem);
+ return -1;
+ }
+
+ /* Check that all of our regions have valid mappings. Typically one region is not backed by the QEMU memory file. */
+ valid_regions = mem->nregions;
+ for (regionidx = 0; regionidx < mem->nregions; regionidx++) {
+ if ((mem->regions[regionidx].userspace_address < mem->base_address) ||
+ (mem->regions[regionidx].userspace_address > (mem->base_address + mem->mapped_size)))
+ valid_regions--;
+ }
+
+ /* If a region does not have a valid mapping we rebuild our memory struct to contain only valid entries. */
+ if (valid_regions != mem->nregions) {
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Not all memory regions exist in the QEMU mem file. Re-populating mem structure\n",
+ dev->device_fh);
+
+ /* Re-populate the memory structure with only valid regions. Invalid regions are over-written with memmove. */
+ valid_regions = 0;
+
+ for (regionidx = mem->nregions; 0 != regionidx--;) {
+ if ((mem->regions[regionidx].userspace_address < mem->base_address) ||
+ (mem->regions[regionidx].userspace_address > (mem->base_address + mem->mapped_size))) {
+ memmove(&mem->regions[regionidx], &mem->regions[regionidx + 1],
+ sizeof(struct virtio_memory_regions) * valid_regions);
+ } else {
+ valid_regions++;
+ }
+ }
+ }
+ mem->nregions = valid_regions;
+ dev->mem = mem;
+
+ /*
+ * Calculate the address offset for each region. This offset is used to identify the vhost virtual address
+ * corresponding to a QEMU guest physical address.
+ */
+ for (regionidx = 0; regionidx < dev->mem->nregions; regionidx++)
+ dev->mem->regions[regionidx].address_offset = dev->mem->regions[regionidx].userspace_address - dev->mem->base_address
+ + dev->mem->mapped_address - dev->mem->regions[regionidx].guest_phys_address;
+
+ return 0;
+}
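+
+/*
+ * Worked example of the offset arithmetic above (made-up values): with
+ * guest_phys_address 0x100000, userspace_address 0x2aaaab100000,
+ * base_address 0x2aaaab000000 and mapped_address 0x7f0000000000, the
+ * region's address_offset is 0x2aaaab100000 - 0x2aaaab000000 +
+ * 0x7f0000000000 - 0x100000 = 0x7f0000000000, so translating a guest
+ * physical address in gpa_to_vva() reduces to a single addition.
+ */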
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_VRING_NUM
+ * The virtio device sends us the size of the descriptor ring.
+ */
+static int
+set_vring_num(struct vhost_device_ctx ctx, struct vhost_vring_state *state)
+{
+ struct virtio_net *dev;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+
+ /* State->index refers to the queue index. The TX queue is 1, RX queue is 0. */
+ dev->virtqueue[state->index]->size = state->num;
+
+ return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_VRING_ADDR
+ * The virtio device sends us the desc, used and avail ring addresses. This function
+ * then converts these to our address space.
+ */
+static int
+set_vring_addr(struct vhost_device_ctx ctx, struct vhost_vring_addr *addr)
+{
+ struct virtio_net *dev;
+ struct vhost_virtqueue *vq;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+
+ /* addr->index refers to the queue index. The TX queue is 1, RX queue is 0. */
+ vq = dev->virtqueue[addr->index];
+
+ /* The addresses are converted from QEMU virtual to Vhost virtual. */
+ vq->desc = (struct vring_desc *)(uintptr_t)qva_to_vva(dev, addr->desc_user_addr);
+ if (vq->desc == 0) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find descriptor ring address.\n", dev->device_fh);
+ return -1;
+ }
+
+ vq->avail = (struct vring_avail *)(uintptr_t)qva_to_vva(dev, addr->avail_user_addr);
+ if (vq->avail == 0) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find available ring address.\n", dev->device_fh);
+ return -1;
+ }
+
+ vq->used = (struct vring_used *)(uintptr_t)qva_to_vva(dev, addr->used_user_addr);
+ if (vq->used == 0) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find used ring address.\n", dev->device_fh);
+ return -1;
+ }
+
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") mapped address desc: %p\n", dev->device_fh, vq->desc);
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") mapped address avail: %p\n", dev->device_fh, vq->avail);
+ LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") mapped address used: %p\n", dev->device_fh, vq->used);
+
+ return 0;
+}
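+
+/*
+ * Illustrative note: the error checks above rely on qva_to_vva() (defined
+ * earlier in this file) returning 0 for a QEMU virtual address that falls
+ * outside every mapped guest memory region, since no ring located there
+ * could be translated into our address space.
+ */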
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_VRING_BASE
+ * The virtio device sends us the available ring last used index.
+ */
+static int
+set_vring_base(struct vhost_device_ctx ctx, struct vhost_vring_state *state)
+{
+ struct virtio_net *dev;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+
+ /* State->index refers to the queue index. The TX queue is 1, RX queue is 0. */
+ dev->virtqueue[state->index]->last_used_idx = state->num;
+ dev->virtqueue[state->index]->last_used_idx_res = state->num;
+
+ return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_GET_VRING_BASE
+ * We send the virtio device our available ring last used index.
+ */
+static int
+get_vring_base(struct vhost_device_ctx ctx, uint32_t index, struct vhost_vring_state *state)
+{
+ struct virtio_net *dev;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+
+ state->index = index;
+ /* State->index refers to the queue index. The TX queue is 1, RX queue is 0. */
+ state->num = dev->virtqueue[state->index]->last_used_idx;
+
+ return 0;
+}
+
+/**
+ * This function uses the eventfd_link kernel module to copy an eventfd file descriptor
+ * provided by QEMU into our process space.
+ */
+static int
+eventfd_copy(struct virtio_net *dev, struct eventfd_copy *eventfd_copy)
+{
+ int eventfd_link, ret;
+
+ /* Open the character device to the kernel module. */
+ eventfd_link = open(eventfd_cdev, O_RDWR);
+ if (eventfd_link < 0) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") eventfd_link module is not loaded\n", dev->device_fh);
+ return -1;
+ }
+
+ /* Call the IOCTL to copy the eventfd. */
+ ret = ioctl(eventfd_link, EVENTFD_COPY, eventfd_copy);
+ close(eventfd_link);
+
+ if (ret < 0) {
+ RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") EVENTFD_COPY ioctl failed\n", dev->device_fh);
+ return -1;
+ }
+
+ return 0;
+}
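+
+/*
+ * Illustrative flow: the EVENTFD_COPY ioctl (see eventfd_link.c) closes
+ * source_fd in the calling process and installs the file struct behind
+ * target_fd of target_pid in its place, so the kickfd/callfd created by
+ * the callers below end up referring to the very eventfds QEMU allocated.
+ */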
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_VRING_CALL
+ * The virtio device sends an eventfd used to interrupt the guest. This fd gets
+ * copied into our process space.
+ */
+static int
+set_vring_call(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
+{
+ struct virtio_net *dev;
+ struct eventfd_copy eventfd_kick;
+ struct vhost_virtqueue *vq;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+
+ /* file->index refers to the queue index. The TX queue is 1, RX queue is 0. */
+ vq = dev->virtqueue[file->index];
+
+ if (vq->kickfd)
+ close((int)vq->kickfd);
+
+ /* Populate the eventfd_copy structure and call eventfd_copy. */
+ vq->kickfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ eventfd_kick.source_fd = vq->kickfd;
+ eventfd_kick.target_fd = file->fd;
+ eventfd_kick.target_pid = ctx.pid;
+
+ if (eventfd_copy(dev, &eventfd_kick))
+ return -1;
+
+ return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_VRING_KICK
+ * The virtio device sends an eventfd that it can use to notify us. This fd gets
+ * copied into our process space.
+ */
+static int
+set_vring_kick(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
+{
+ struct virtio_net *dev;
+ struct eventfd_copy eventfd_call;
+ struct vhost_virtqueue *vq;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+
+ /* file->index refers to the queue index. The TX queue is 1, RX queue is 0. */
+ vq = dev->virtqueue[file->index];
+
+ if (vq->callfd)
+ close((int)vq->callfd);
+
+ /* Populate the eventfd_copy structure and call eventfd_copy. */
+ vq->callfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+ eventfd_call.source_fd = vq->callfd;
+ eventfd_call.target_fd = file->fd;
+ eventfd_call.target_pid = ctx.pid;
+
+ if (eventfd_copy(dev, &eventfd_call))
+ return -1;
+
+ return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_NET_SET_BACKEND
+ * To complete device initialisation when the virtio driver is loaded we are provided with a
+ * valid fd for a tap device (not used by us). If this happens then we can add the device to a
+ * data core. When the virtio driver is removed we get fd=-1. At that point we remove the device
+ * from the data core. The device will still exist in the device configuration linked list.
+ */
+static int
+set_backend(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
+{
+ struct virtio_net *dev;
+
+ dev = get_device(ctx);
+ if (dev == NULL)
+ return -1;
+
+ /* file->index refers to the queue index. The TX queue is 1, RX queue is 0. */
+ dev->virtqueue[file->index]->backend = file->fd;
+
+ /* If the device isn't already running and both backend fds are set we add the device. */
+ if (!(dev->flags & VIRTIO_DEV_RUNNING)) {
+ if (((int)dev->virtqueue[VIRTIO_TXQ]->backend != VIRTIO_DEV_STOPPED) &&
+ ((int)dev->virtqueue[VIRTIO_RXQ]->backend != VIRTIO_DEV_STOPPED))
+ return notify_ops->new_device(dev);
+ /* Otherwise we remove it. */
+ } else
+ if (file->fd == VIRTIO_DEV_STOPPED)
+ notify_ops->destroy_device(dev);
+ return 0;
+}
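+
+/*
+ * Illustrative sequence: QEMU issues VHOST_NET_SET_BACKEND once per queue.
+ * After the second call neither backend equals VIRTIO_DEV_STOPPED, so
+ * notify_ops->new_device() places the device on a data core; a later call
+ * with fd == -1 (virtio driver unload) triggers notify_ops->destroy_device().
+ */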
+
+/**
+ * Function pointers are set for the device operations to allow CUSE to call functions
+ * when an IOCTL, device_add or device_release is received.
+ */
+static const struct vhost_net_device_ops vhost_device_ops = {
+ .new_device = new_device,
+ .destroy_device = destroy_device,
+
+ .get_features = get_features,
+ .set_features = set_features,
+
+ .set_mem_table = set_mem_table,
+
+ .set_vring_num = set_vring_num,
+ .set_vring_addr = set_vring_addr,
+ .set_vring_base = set_vring_base,
+ .get_vring_base = get_vring_base,
+
+ .set_vring_kick = set_vring_kick,
+ .set_vring_call = set_vring_call,
+
+ .set_backend = set_backend,
+
+ .set_owner = set_owner,
+ .reset_owner = reset_owner,
+};
+
+/**
+ * Called by main to setup callbacks when registering CUSE device.
+ */
+struct vhost_net_device_ops const *
+get_virtio_net_callbacks(void)
+{
+ return &vhost_device_ops;
+}
+
+int rte_vhost_enable_guest_notification(struct virtio_net *dev, uint16_t queue_id, int enable)
+{
+ if (enable) {
+ RTE_LOG(ERR, VHOST_CONFIG, "guest notification isn't supported.\n");
+ return -1;
+ }
+
+ dev->virtqueue[queue_id]->used->flags = enable ? 0 : VRING_USED_F_NO_NOTIFY;
+ return 0;
+}
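+
+/*
+ * Example call (an illustrative sketch): a polling-mode application
+ * suppresses guest kicks on the RX queue once the device starts; only
+ * enable == 0 is accepted by this implementation.
+ *
+ *   rte_vhost_enable_guest_notification(dev, VIRTIO_RXQ, 0);
+ */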
+
+uint64_t rte_vhost_feature_get(void)
+{
+ return VHOST_FEATURES;
+}
+
+int rte_vhost_feature_disable(uint64_t feature_mask)
+{
+ VHOST_FEATURES = VHOST_FEATURES & ~feature_mask;
+ return 0;
+}
+
+int rte_vhost_feature_enable(uint64_t feature_mask)
+{
+ if ((feature_mask & VHOST_SUPPORTED_FEATURES) == feature_mask) {
+ VHOST_FEATURES = VHOST_FEATURES | feature_mask;
+ return 0;
+ }
+ return -1;
+}
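+
+/*
+ * Example calls (illustrative sketch): an application that cannot handle
+ * mergeable RX buffers clears the bit before registering the driver, and
+ * may re-enable it later, assuming the bit is in VHOST_SUPPORTED_FEATURES.
+ *
+ *   rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_MRG_RXBUF);
+ *   rte_vhost_feature_enable(1ULL << VIRTIO_NET_F_MRG_RXBUF);
+ */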
+
+/*
+ * Register ops so that we can add/remove a device to/from a data core.
+ */
+int
+rte_vhost_driver_callback_register(struct virtio_net_device_ops const * const ops)
+{
+ notify_ops = ops;
+
+ return 0;
+}
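+
+/*
+ * Example registration (illustrative sketch; add_dev and del_dev are
+ * hypothetical application callbacks that start/stop polling the device):
+ *
+ *   static const struct virtio_net_device_ops ops = {
+ *       .new_device = add_dev,
+ *       .destroy_device = del_dev,
+ *   };
+ *   rte_vhost_driver_callback_register(&ops);
+ */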
--
1.8.1.4
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch.
2014-08-05 15:53 ` [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch Huawei Xie
@ 2014-08-05 15:56 ` Xie, Huawei
2014-08-28 20:15 ` Thomas Monjalon
1 sibling, 0 replies; 9+ messages in thread
From: Xie, Huawei @ 2014-08-05 15:56 UTC (permalink / raw)
To: dev
This v3 patch fixes plenty of checkpatch issues.
> -----Original Message-----
> From: Xie, Huawei
> Sent: Tuesday, August 05, 2014 11:54 PM
> To: dev@dpdk.org
> Cc: Xie, Huawei
> Subject: [PATCH v3] lib/librte_vhost: vhost library support to facilitate
> integration with DPDK accelerated vswitch.
>
> Signed-off-by: Huawei Xie <huawei.xie@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Acked-by: Tommy Long <thomas.long@intel.com>
> ---
> config/common_linuxapp | 7 +
> lib/Makefile | 1 +
> lib/librte_vhost/Makefile | 48 ++
> lib/librte_vhost/eventfd_link/Makefile | 39 +
> lib/librte_vhost/eventfd_link/eventfd_link.c | 194 +++++
> lib/librte_vhost/eventfd_link/eventfd_link.h | 40 +
> lib/librte_vhost/rte_virtio_net.h | 192 +++++
> lib/librte_vhost/vhost-net-cdev.c | 363 ++++++++++
> lib/librte_vhost/vhost-net-cdev.h | 109 +++
> lib/librte_vhost/vhost_rxtx.c | 292 ++++++++
> lib/librte_vhost/virtio-net.c | 1002 ++++++++++++++++++++++++++
> 11 files changed, 2287 insertions(+)
> create mode 100644 lib/librte_vhost/Makefile
> create mode 100644 lib/librte_vhost/eventfd_link/Makefile
> create mode 100644 lib/librte_vhost/eventfd_link/eventfd_link.c
> create mode 100644 lib/librte_vhost/eventfd_link/eventfd_link.h
> create mode 100644 lib/librte_vhost/rte_virtio_net.h
> create mode 100644 lib/librte_vhost/vhost-net-cdev.c
> create mode 100644 lib/librte_vhost/vhost-net-cdev.h
> create mode 100644 lib/librte_vhost/vhost_rxtx.c
> create mode 100644 lib/librte_vhost/virtio-net.c
>
> diff --git a/config/common_linuxapp b/config/common_linuxapp
> index 9047975..c7c1c83 100644
> --- a/config/common_linuxapp
> +++ b/config/common_linuxapp
> @@ -390,6 +390,13 @@ CONFIG_RTE_KNI_VHOST_DEBUG_RX=n
> CONFIG_RTE_KNI_VHOST_DEBUG_TX=n
>
> #
> +# Compile vhost library
> +# fuse, fuse-devel, kernel-modules-extra packages are needed
> +#
> +CONFIG_RTE_LIBRTE_VHOST=n
> +CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
> +
> +#
> #Compile Xen domain0 support
> #
> CONFIG_RTE_LIBRTE_XEN_DOM0=n
> diff --git a/lib/Makefile b/lib/Makefile
> index 10c5bb3..007c174 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -60,6 +60,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter
> DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched
> DIRS-$(CONFIG_RTE_LIBRTE_KVARGS) += librte_kvargs
> DIRS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += librte_distributor
> +DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
> DIRS-$(CONFIG_RTE_LIBRTE_PORT) += librte_port
> DIRS-$(CONFIG_RTE_LIBRTE_TABLE) += librte_table
> DIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += librte_pipeline
> diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
> new file mode 100644
> index 0000000..6ad706d
> --- /dev/null
> +++ b/lib/librte_vhost/Makefile
> @@ -0,0 +1,48 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> +# All rights reserved.
> +#
> +# Redistribution and use in source and binary forms, with or without
> +# modification, are permitted provided that the following conditions
> +# are met:
> +#
> +# * Redistributions of source code must retain the above copyright
> +# notice, this list of conditions and the following disclaimer.
> +# * Redistributions in binary form must reproduce the above copyright
> +# notice, this list of conditions and the following disclaimer in
> +# the documentation and/or other materials provided with the
> +# distribution.
> +# * Neither the name of Intel Corporation nor the names of its
> +# contributors may be used to endorse or promote products derived
> +# from this software without specific prior written permission.
> +#
> +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS FOR
> +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT
> +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL,
> +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
> NOT
> +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
> USE,
> +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> ON ANY
> +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
> +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_vhost.a
> +
> +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -D_FILE_OFFSET_BITS=64 -lfuse
> +LDFLAGS += -lfuse
> +# all source are stored in SRCS-y
> +SRCS-$(CONFIG_RTE_LIBRTE_VHOST) := vhost-net-cdev.c virtio-net.c
> vhost_rxtx.c
> +
> +# install includes
> +SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_virtio_net.h
> +
> +# this lib needs eal
> +DEPDIRS-$(CONFIG_RTE_LIBRTE_VHOST) += lib/librte_eal lib/librte_mbuf
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_vhost/eventfd_link/Makefile
> b/lib/librte_vhost/eventfd_link/Makefile
> new file mode 100644
> index 0000000..fc3927b
> --- /dev/null
> +++ b/lib/librte_vhost/eventfd_link/Makefile
> @@ -0,0 +1,39 @@
> +# BSD LICENSE
> +#
> +# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> +# All rights reserved.
> +#
> +# Redistribution and use in source and binary forms, with or without
> +# modification, are permitted provided that the following conditions
> +# are met:
> +#
> +# * Redistributions of source code must retain the above copyright
> +# notice, this list of conditions and the following disclaimer.
> +# * Redistributions in binary form must reproduce the above copyright
> +# notice, this list of conditions and the following disclaimer in
> +# the documentation and/or other materials provided with the
> +# distribution.
> +# * Neither the name of Intel Corporation nor the names of its
> +# contributors may be used to endorse or promote products derived
> +# from this software without specific prior written permission.
> +#
> +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> +# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> +# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS FOR
> +# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT
> +# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL,
> +# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
> NOT
> +# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
> USE,
> +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> ON ANY
> +# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> +# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
> +# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> +
> +obj-m += eventfd_link.o
> +
> +
> +all:
> + make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
> +
> +clean:
> + make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
> diff --git a/lib/librte_vhost/eventfd_link/eventfd_link.c
> b/lib/librte_vhost/eventfd_link/eventfd_link.c
> new file mode 100644
> index 0000000..4c9b628
> --- /dev/null
> +++ b/lib/librte_vhost/eventfd_link/eventfd_link.c
> @@ -0,0 +1,194 @@
> +/*-
> + * GPL LICENSE SUMMARY
> + *
> + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of version 2 of the GNU General Public License as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but
> + * WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + * The full GNU General Public License is included in this distribution
> + * in the file called LICENSE.GPL.
> + *
> + * Contact Information:
> + * Intel Corporation
> + */
> +
> +#include <linux/eventfd.h>
> +#include <linux/miscdevice.h>
> +#include <linux/module.h>
> +#include <linux/moduleparam.h>
> +#include <linux/rcupdate.h>
> +#include <linux/file.h>
> +#include <linux/slab.h>
> +#include <linux/fs.h>
> +#include <linux/mmu_context.h>
> +#include <linux/sched.h>
> +#include <asm/mmu_context.h>
> +#include <linux/fdtable.h>
> +
> +#include "eventfd_link.h"
> +
> +
> +/*
> + * get_files_struct is copied from fs/file.c
> + */
> +struct files_struct *
> +get_files_struct(struct task_struct *task)
> +{
> + struct files_struct *files;
> +
> + task_lock(task);
> + files = task->files;
> + if (files)
> + atomic_inc(&files->count);
> + task_unlock(task);
> +
> + return files;
> +}
> +
> +/*
> + * put_files_struct is extracted from fs/file.c
> + */
> +void
> +put_files_struct(struct files_struct *files)
> +{
> + if (atomic_dec_and_test(&files->count))
> + BUG();
> +}
> +
> +
> +static long
> +eventfd_link_ioctl(struct file *f, unsigned int ioctl, unsigned long arg)
> +{
> + void __user *argp = (void __user *) arg;
> + struct task_struct *task_target = NULL;
> + struct file *file;
> + struct files_struct *files;
> + struct fdtable *fdt;
> + struct eventfd_copy eventfd_copy;
> +
> + switch (ioctl) {
> + case EVENTFD_COPY:
> + if (copy_from_user(&eventfd_copy, argp, sizeof(struct
> eventfd_copy)))
> + return -EFAULT;
> +
> + /*
> + * Find the task struct for the target pid
> + */
> + task_target =
> + pid_task(find_vpid(eventfd_copy.target_pid),
> PIDTYPE_PID);
> + if (task_target == NULL) {
> + printk(KERN_DEBUG "Failed to get mem ctx for target
> pid\n");
> + return -EFAULT;
> + }
> +
> + files = get_files_struct(current);
> + if (files == NULL) {
> + printk(KERN_DEBUG "Failed to get files struct\n");
> + return -EFAULT;
> + }
> +
> + rcu_read_lock();
> + file = fcheck_files(files, eventfd_copy.source_fd);
> + if (file) {
> + if (file->f_mode & FMODE_PATH
> + || !atomic_long_inc_not_zero(&file->f_count))
> + file = NULL;
> + }
> + rcu_read_unlock();
> + put_files_struct(files);
> +
> + if (file == NULL) {
> + printk(KERN_DEBUG "Failed to get file from source
> pid\n");
> + return 0;
> + }
> +
> + /*
> + * Release the existing eventfd in the source process
> + */
> + spin_lock(&files->file_lock);
> + filp_close(file, files);
> + fdt = files_fdtable(files);
> + fdt->fd[eventfd_copy.source_fd] = NULL;
> + spin_unlock(&files->file_lock);
> +
> + /*
> + * Find the file struct associated with the target fd.
> + */
> +
> + files = get_files_struct(task_target);
> + if (files == NULL) {
> + printk(KERN_DEBUG "Failed to get files struct\n");
> + return -EFAULT;
> + }
> +
> + rcu_read_lock();
> + file = fcheck_files(files, eventfd_copy.target_fd);
> + if (file) {
> + if (file->f_mode & FMODE_PATH
> + || !atomic_long_inc_not_zero(&file->f_count))
> + file = NULL;
> + }
> + rcu_read_unlock();
> + put_files_struct(files);
> +
> + if (file == NULL) {
> + printk(KERN_DEBUG "Failed to get file from target
> pid\n");
> + return 0;
> + }
> +
> +
> + /*
> + * Install the file struct from the target process into the
> + * file descriptor of the source process.
> + */
> +
> + fd_install(eventfd_copy.source_fd, file);
> +
> + return 0;
> +
> + default:
> + return -ENOIOCTLCMD;
> + }
> +}
> +
> +static const struct file_operations eventfd_link_fops = {
> + .owner = THIS_MODULE,
> + .unlocked_ioctl = eventfd_link_ioctl,
> +};
> +
> +
> +static struct miscdevice eventfd_link_misc = {
> + .name = "eventfd-link",
> + .fops = &eventfd_link_fops,
> +};
> +
> +static int __init
> +eventfd_link_init(void)
> +{
> + return misc_register(&eventfd_link_misc);
> +}
> +
> +module_init(eventfd_link_init);
> +
> +static void __exit
> +eventfd_link_exit(void)
> +{
> + misc_deregister(&eventfd_link_misc);
> +}
> +
> +module_exit(eventfd_link_exit);
> +
> +MODULE_VERSION("0.0.1");
> +MODULE_LICENSE("GPL v2");
> +MODULE_AUTHOR("Anthony Fee");
> +MODULE_DESCRIPTION("Link eventfd");
> +MODULE_ALIAS("devname:eventfd-link");
> diff --git a/lib/librte_vhost/eventfd_link/eventfd_link.h
> b/lib/librte_vhost/eventfd_link/eventfd_link.h
> new file mode 100644
> index 0000000..38052e2
> --- /dev/null
> +++ b/lib/librte_vhost/eventfd_link/eventfd_link.h
> @@ -0,0 +1,40 @@
> +/*-
> + * GPL LICENSE SUMMARY
> + *
> + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of version 2 of the GNU General Public License as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but
> + * WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; If not, see <http://www.gnu.org/licenses/>.
> + * The full GNU General Public License is included in this distribution
> + * in the file called LICENSE.GPL.
> + *
> + * Contact Information:
> + * Intel Corporation
> + */
> +
> +#ifndef _EVENTFD_LINK_H_
> +#define _EVENTFD_LINK_H_
> +
> +/*
> + * ioctl to copy an fd entry in calling process to an fd in a target process
> + */
> +#define EVENTFD_COPY 1
> +
> +/*
> + * arguments for the EVENTFD_COPY ioctl
> + */
> +struct eventfd_copy {
> + unsigned target_fd; /**< fd in the target pid */
> + unsigned source_fd; /**< fd in the calling pid */
> + pid_t target_pid; /**< pid of the target process */
> +};
> +#endif /* _EVENTFD_LINK_H_ */
> diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
> new file mode 100644
> index 0000000..7a05dab
> --- /dev/null
> +++ b/lib/librte_vhost/rte_virtio_net.h
> @@ -0,0 +1,192 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
> NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
> OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> + */
> +
> +#ifndef _VIRTIO_NET_H_
> +#define _VIRTIO_NET_H_
> +
> +#include <stdint.h>
> +#include <linux/virtio_ring.h>
> +#include <linux/virtio_net.h>
> +#include <sys/eventfd.h>
> +
> +#include <rte_memory.h>
> +#include <rte_mempool.h>
> +#include <rte_mbuf.h>
> +
> +#define VIRTIO_DEV_RUNNING 1 /**< Used to indicate that the device is
> running on a data core. */
> +#define VIRTIO_DEV_STOPPED -1 /**< Backend value set by guest. */
> +
> +/* Enum for virtqueue management. */
> +enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
> +
> +/**
> + * Structure contains variables relevant to RX/TX virtqueues.
> + */
> +struct vhost_virtqueue {
> + struct vring_desc *desc; /**< descriptor ring. */
> + struct vring_avail *avail; /**< available ring. */
> + struct vring_used *used; /**< used ring. */
> + uint32_t size; /**< Size of descriptor ring. */
> + uint32_t backend; /**< Backend value to determine if
> device should be started/stopped. */
> + uint16_t vhost_hlen; /**< Vhost header length (varies
> depending on RX merge buffers. */
> + volatile uint16_t last_used_idx; /**< Last index used on the available
> ring. */
> + volatile uint16_t last_used_idx_res; /**< Used for multiple devices
> reserving buffers. */
> + eventfd_t callfd; /**< Currently unused as polling mode is
> enabled. */
> + eventfd_t kickfd; /**< Used to notify the guest (trigger
> interrupt). */
> +} __rte_cache_aligned;
> +
> +/**
> + * Information relating to memory regions including offsets to
> + * addresses in QEMUs memory file.
> + */
> +struct virtio_memory_regions {
> + uint64_t guest_phys_address; /**< Base guest physical address of
> region. */
> + uint64_t guest_phys_address_end; /**< End guest physical address of
> region. */
> + uint64_t memory_size; /**< Size of region. */
> + uint64_t userspace_address; /**< Base userspace address of region.
> */
> + uint64_t address_offset; /**< Offset of region for address
> translation. */
> +};
> +
> +
> +/**
> + * Memory structure includes region and mapping information.
> + */
> +struct virtio_memory {
> + uint64_t base_address; /**< Base QEMU userspace address of the
> memory file. */
> + uint64_t mapped_address; /**< Mapped address of memory file base
> in our applications memory space. */
> + uint64_t mapped_size; /**< Total size of memory file. */
> + uint32_t nregions; /**< Number of memory regions. */
> + struct virtio_memory_regions regions[0]; /**< Memory region
> information. */
> +};
> +
> +/**
> + * Device structure contains all configuration information relating to the device.
> + */
> +struct virtio_net {
> + struct vhost_virtqueue *virtqueue[VIRTIO_QNUM]; /**< Contains all
> virtqueue information. */
> + struct virtio_memory *mem; /**< QEMU memory and
> memory region information. */
> + uint64_t features; /**< Negotiated feature set. */
> + uint64_t device_fh; /**< Device identifier. */
> + uint32_t flags; /**< Device flags. Only used to check if device is
> running on data core. */
> + void *priv;
> +} __rte_cache_aligned;
> +
> +/**
> + * Device operations to add/remove device.
> + */
> +struct virtio_net_device_ops {
> + int (*new_device)(struct virtio_net *); /**< Add device. */
> + void (*destroy_device)(struct virtio_net *); /**< Remove device. */
> +};
> +
> +
> +static inline uint16_t __attribute__((always_inline))
> +rte_vring_available_entries(struct virtio_net *dev, uint16_t queue_id)
> +{
> + struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
> + return *(volatile uint16_t *)&vq->avail->idx - vq->last_used_idx_res;
> +}
> +
> +/**
> + * Function to convert guest physical addresses to vhost virtual addresses.
> + * This is used to convert guest virtio buffer addresses.
> + */
> +static inline uint64_t __attribute__((always_inline))
> +gpa_to_vva(struct virtio_net *dev, uint64_t guest_pa)
> +{
> + struct virtio_memory_regions *region;
> + uint32_t regionidx;
> + uint64_t vhost_va = 0;
> +
> + for (regionidx = 0; regionidx < dev->mem->nregions; regionidx++) {
> + region = &dev->mem->regions[regionidx];
> + if ((guest_pa >= region->guest_phys_address) &&
> + (guest_pa <= region->guest_phys_address_end)) {
> + vhost_va = region->address_offset + guest_pa;
> + break;
> + }
> + }
> + return vhost_va;
> +}
> +
> +/**
> + * Disable features in feature_mask. Returns 0 on success.
> + */
> +int rte_vhost_feature_disable(uint64_t feature_mask);
> +
> +/**
> + * Enable features in feature_mask. Returns 0 on success.
> + */
> +int rte_vhost_feature_enable(uint64_t feature_mask);
> +
> +/* Returns currently supported vhost features */
> +uint64_t rte_vhost_feature_get(void);
> +
> +int rte_vhost_enable_guest_notification(struct virtio_net *dev, uint16_t
> queue_id, int enable);
> +
> +/* Register vhost driver. dev_name could be different for multiple instance
> support. */
> +int rte_vhost_driver_register(const char *dev_name);
> +
> +/* Register callbacks. */
> +int rte_vhost_driver_callback_register(struct virtio_net_device_ops const *
> const);
> +
> +int rte_vhost_driver_session_start(void);
> +
> +/**
> + * This function adds buffers to the virtio devices RX virtqueue. Buffers can
> + * be received from the physical port or from another virtual device. A packet
> + * count is returned to indicate the number of packets that were successfully
> + * added to the RX queue.
> + * @param queue_id
> + * virtio queue index in mq case
> + * @return
> + * num of packets enqueued
> + */
> +uint32_t rte_vhost_enqueue_burst(struct virtio_net *dev, uint16_t queue_id,
> + struct rte_mbuf **pkts, uint32_t count);
> +
> +/**
> + * This function gets guest buffers from the virtio device TX virtqueue,
> + * construct host mbufs, copies guest buffer content to host mbufs and
> + * store them in pkts to be processed.
> + * @param mbuf_pool
> + * mbuf_pool where host mbuf is allocated.
> + * @param queue_id
> + * virtio queue index in mq case.
> + * @return
> + * num of packets dequeued
> + */
> +uint32_t rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
> + struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint32_t
> count);
> +
> +#endif /* _VIRTIO_NET_H_ */
> diff --git a/lib/librte_vhost/vhost-net-cdev.c b/lib/librte_vhost/vhost-net-cdev.c
> new file mode 100644
> index 0000000..b65c67b
> --- /dev/null
> +++ b/lib/librte_vhost/vhost-net-cdev.c
> @@ -0,0 +1,363 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
> NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
> OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
> ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
> THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> DAMAGE.
> + */
> +
> +#include <errno.h>
> +#include <fuse/cuse_lowlevel.h>
> +#include <linux/limits.h>
> +#include <linux/vhost.h>
> +#include <stdint.h>
> +#include <string.h>
> +#include <unistd.h>
> +
> +#include <rte_ethdev.h>
> +#include <rte_log.h>
> +#include <rte_string_fns.h>
> +#include <rte_virtio_net.h>
> +
> +#include "vhost-net-cdev.h"
> +
> +#define FUSE_OPT_DUMMY "\0\0"
> +#define FUSE_OPT_FORE "-f\0\0"
> +#define FUSE_OPT_NOMULTI "-s\0\0"
> +
> +static const uint32_t default_major = 231;
> +static const uint32_t default_minor = 1;
> +static const char cuse_device_name[] = "/dev/cuse";
> +static const char default_cdev[] = "vhost-net";
> +
> +static struct fuse_session *session;
> +static struct vhost_net_device_ops const *ops;
> +
> +/**
> + * Returns vhost_device_ctx from given fuse_req_t. The index is populated later
> when
> + * the device is added to the device linked list.
> + */
> +static struct vhost_device_ctx
> +fuse_req_to_vhost_ctx(fuse_req_t req, struct fuse_file_info *fi)
> +{
> + struct vhost_device_ctx ctx;
> + struct fuse_ctx const *const req_ctx = fuse_req_ctx(req);
> +
> + ctx.pid = req_ctx->pid;
> + ctx.fh = fi->fh;
> +
> + return ctx;
> +}
> +
> +/**
> + * When the device is created in QEMU it gets initialised here and added to the
> device linked list.
> + */
> +static void
> +vhost_net_open(fuse_req_t req, struct fuse_file_info *fi)
> +{
> + struct vhost_device_ctx ctx = fuse_req_to_vhost_ctx(req, fi);
> + int err = 0;
> +
> + err = ops->new_device(ctx);
> + if (err == -1) {
> + fuse_reply_err(req, EPERM);
> + return;
> + }
> +
> + fi->fh = err;
> +
> + RTE_LOG(INFO, VHOST_CONFIG, "(%"PRIu64") Device configuration
> started\n", fi->fh);
> + fuse_reply_open(req, fi);
> +}
> +
> +/*
> + * When QEMU is shutdown or killed the device gets released.
> + */
> +static void
> +vhost_net_release(fuse_req_t req, struct fuse_file_info *fi)
> +{
> + int err = 0;
> + struct vhost_device_ctx ctx = fuse_req_to_vhost_ctx(req, fi);
> +
> + ops->destroy_device(ctx);
> + RTE_LOG(INFO, VHOST_CONFIG, "(%"PRIu64") Device released\n",
> ctx.fh);
> + fuse_reply_err(req, err);
> +}
> +
> +/*
> + * Boilerplate code for CUSE IOCTL
> + * Implicit arguments: ctx, req, result.
> + */
> +#define VHOST_IOCTL(func) do { \
> + result = (func)(ctx); \
> + fuse_reply_ioctl(req, result, NULL, 0); \
> +} while (0)
> +
> +/*
> + * Boilerplate IOCTL RETRY
> + * Implicit arguments: req.
> + */
> +#define VHOST_IOCTL_RETRY(size_r, size_w) do { \
> + struct iovec iov_r = { arg, (size_r) }; \
> + struct iovec iov_w = { arg, (size_w) }; \
> + fuse_reply_ioctl_retry(req, &iov_r, (size_r) ? 1 : 0, &iov_w, (size_w) ? 1 :
> 0); \
> +} while (0)
>
> \
> +
> +/*
> + * Boilerplate code for CUSE Read IOCTL
> + * Implicit arguments: ctx, req, result, in_bufsz, in_buf.
> + */
> +#define VHOST_IOCTL_R(type, var, func) do { \
> + if (!in_bufsz) { \
> + VHOST_IOCTL_RETRY(sizeof(type), 0); \
> + } else { \
> + (var) = *(const type*)in_buf; \
> + result = func(ctx, &(var)); \
> + fuse_reply_ioctl(req, result, NULL, 0); \
> + } \
> +} while (0)
> +
> +/*
> + * Boilerplate code for CUSE Write IOCTL
> + * Implicit arguments: ctx, req, result, out_bufsz.
> + */
> +#define VHOST_IOCTL_W(type, var, func) do { \
> + if (!out_bufsz) { \
> + VHOST_IOCTL_RETRY(0, sizeof(type)); \
> + } else { \
> + result = (func)(ctx, &(var)); \
> + fuse_reply_ioctl(req, result, &(var), sizeof(type)); \
> + } \
> +} while (0)
> +
> +/*
> + * Boilerplate code for CUSE Read/Write IOCTL
> + * Implicit arguments: ctx, req, result, in_bufsz, in_buf.
> + */
> +#define VHOST_IOCTL_RW(type1, var1, type2, var2, func) do { \
> + if (!in_bufsz) { \
> + VHOST_IOCTL_RETRY(sizeof(type1), sizeof(type2)); \
> + } else { \
> + (var1) = *(const type1*) (in_buf); \
> + result = (func)(ctx, (var1), &(var2)); \
> + fuse_reply_ioctl(req, result, &(var2), sizeof(type2)); \
> + } \
> +} while (0)
> +
> +/**
> + * The IOCTLs are handled using CUSE/FUSE in userspace. Depending on
> + * the type of IOCTL a buffer is requested to read or to write. This
> + * request is handled by FUSE and the buffer is then given to CUSE.
> + */
> +static void
> +vhost_net_ioctl(fuse_req_t req, int cmd, void *arg,
> + struct fuse_file_info *fi, __rte_unused unsigned flags,
> + const void *in_buf, size_t in_bufsz, size_t out_bufsz)
> +{
> + struct vhost_device_ctx ctx = fuse_req_to_vhost_ctx(req, fi);
> + struct vhost_vring_file file;
> + struct vhost_vring_state state;
> + struct vhost_vring_addr addr;
> + uint64_t features;
> + uint32_t index;
> + int result = 0;
> +
> + switch (cmd) {
> +
> + case VHOST_NET_SET_BACKEND:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_NET_SET_BACKEND\n", ctx.fh);
> + VHOST_IOCTL_R(struct vhost_vring_file, file, ops->set_backend);
> + break;
> +
> + case VHOST_GET_FEATURES:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_GET_FEATURES\n", ctx.fh);
> + VHOST_IOCTL_W(uint64_t, features, ops->get_features);
> + break;
> +
> + case VHOST_SET_FEATURES:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_SET_FEATURES\n", ctx.fh);
> + VHOST_IOCTL_R(uint64_t, features, ops->set_features);
> + break;
> +
> + case VHOST_RESET_OWNER:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_RESET_OWNER\n", ctx.fh);
> + VHOST_IOCTL(ops->reset_owner);
> + break;
> +
> + case VHOST_SET_OWNER:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_SET_OWNER\n", ctx.fh);
> + VHOST_IOCTL(ops->set_owner);
> + break;
> +
> + case VHOST_SET_MEM_TABLE:
> + /*TODO fix race condition.*/
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_SET_MEM_TABLE\n", ctx.fh);
> + static struct vhost_memory mem_temp;
> +
> + switch (in_bufsz) {
> + case 0:
> + VHOST_IOCTL_RETRY(sizeof(struct vhost_memory), 0);
> + break;
> +
> + case sizeof(struct vhost_memory):
> + mem_temp = *(const struct vhost_memory *) in_buf;
> +
> + if (mem_temp.nregions > 0) {
> + VHOST_IOCTL_RETRY(sizeof(struct
> vhost_memory) + (sizeof(struct vhost_memory_region) * mem_temp.nregions),
> 0);
> + } else {
> + result = -1;
> + fuse_reply_ioctl(req, result, NULL, 0);
> + }
> + break;
> +
> + default:
> + result = ops->set_mem_table(ctx, in_buf,
> mem_temp.nregions);
> + if (result)
> + fuse_reply_err(req, EINVAL);
> + else
> + fuse_reply_ioctl(req, result, NULL, 0);
> +
> + }
> +
> + break;
> +
> + case VHOST_SET_VRING_NUM:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_SET_VRING_NUM\n", ctx.fh);
> + VHOST_IOCTL_R(struct vhost_vring_state, state, ops-
> >set_vring_num);
> + break;
> +
> + case VHOST_SET_VRING_BASE:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_SET_VRING_BASE\n", ctx.fh);
> + VHOST_IOCTL_R(struct vhost_vring_state, state, ops-
> >set_vring_base);
> + break;
> +
> + case VHOST_GET_VRING_BASE:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_GET_VRING_BASE\n", ctx.fh);
> + VHOST_IOCTL_RW(uint32_t, index, struct vhost_vring_state,
> state, ops->get_vring_base);
> + break;
> +
> + case VHOST_SET_VRING_ADDR:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_SET_VRING_ADDR\n", ctx.fh);
> + VHOST_IOCTL_R(struct vhost_vring_addr, addr, ops-
> >set_vring_addr);
> + break;
> +
> + case VHOST_SET_VRING_KICK:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_SET_VRING_KICK\n", ctx.fh);
> + VHOST_IOCTL_R(struct vhost_vring_file, file, ops-
> >set_vring_kick);
> + break;
> +
> + case VHOST_SET_VRING_CALL:
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL:
> VHOST_SET_VRING_CALL\n", ctx.fh);
> + VHOST_IOCTL_R(struct vhost_vring_file, file, ops-
> >set_vring_call);
> + break;
> +
> + default:
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") IOCTL: DOESN
> NOT EXIST\n", ctx.fh);
> + result = -1;
> + fuse_reply_ioctl(req, result, NULL, 0);
> + }
> +
> + if (result < 0)
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: FAIL\n",
> ctx.fh);
> + else
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: SUCCESS\n",
> ctx.fh);
> +}
> +
> +/**
> + * Structure handling open, release and ioctl function pointers is populated.
> + */
> +static const struct cuse_lowlevel_ops vhost_net_ops = {
> + .open = vhost_net_open,
> + .release = vhost_net_release,
> + .ioctl = vhost_net_ioctl,
> +};
> +
> +/**
> + * cuse_info is populated and used to register the cuse device.
> vhost_net_device_ops are
> + * also passed when the device is registered in main.c.
> + */
> +int
> +rte_vhost_driver_register(const char *dev_name)
> +{
> + struct cuse_info cuse_info;
> + char device_name[PATH_MAX] = "";
> + char char_device_name[PATH_MAX] = "";
> + const char *device_argv[] = { device_name };
> +
> + char fuse_opt_dummy[] = FUSE_OPT_DUMMY;
> + char fuse_opt_fore[] = FUSE_OPT_FORE;
> + char fuse_opt_nomulti[] = FUSE_OPT_NOMULTI;
> + char *fuse_argv[] = {fuse_opt_dummy, fuse_opt_fore,
> fuse_opt_nomulti};
> +
> + if (access(cuse_device_name, R_OK | W_OK) < 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "Character device %s can't be
> accessed, maybe not exist\n", cuse_device_name);
> + return -1;
> + }
> +
> + /*
> + * The device name is created. This is passed to QEMU so that it can
> register
> + * the device with our application. The dev_name allows us to have
> multiple instances
> + * of userspace vhost which we can then add devices to separately.
> + */
> + snprintf(device_name, PATH_MAX, "DEVNAME=%s", dev_name);
> + snprintf(char_device_name, PATH_MAX, "/dev/%s", dev_name);
> +
> + /* Check if device already exists. */
> + if (access(char_device_name, F_OK) != -1) {
> + RTE_LOG(ERR, VHOST_CONFIG, "Character device %s already
> exists\n", char_device_name);
> + return -1;
> + }
> +
> + memset(&cuse_info, 0, sizeof(cuse_info));
> + cuse_info.dev_major = default_major;
> + cuse_info.dev_minor = default_minor;
> + cuse_info.dev_info_argc = 1;
> + cuse_info.dev_info_argv = device_argv;
> + cuse_info.flags = CUSE_UNRESTRICTED_IOCTL;
> +
> + ops = get_virtio_net_callbacks();
> +
> + session = cuse_lowlevel_setup(3, fuse_argv,
> + &cuse_info, &vhost_net_ops, 0, NULL);
> + if (session == NULL)
> + return -1;
> +
> + return 0;
> +}
> +
> +
> +/**
> + * The CUSE session is launched allowing the application to receive open,
> release and ioctl calls.
> + */
> +int
> +rte_vhost_driver_session_start(void)
> +{
> + fuse_session_loop(session);
> +
> + return 0;
> +}
> diff --git a/lib/librte_vhost/vhost-net-cdev.h b/lib/librte_vhost/vhost-net-cdev.h
> new file mode 100644
> index 0000000..01a1b58
> --- /dev/null
> +++ b/lib/librte_vhost/vhost-net-cdev.h
> @@ -0,0 +1,109 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdint.h>
> +#include <stdio.h>
> +#include <sys/types.h>
> +#include <unistd.h>
> +#include <linux/vhost.h>
> +
> +#include <rte_log.h>
> +
> +/* Macros for printing using RTE_LOG */
> +#define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
> +#define RTE_LOGTYPE_VHOST_DATA RTE_LOGTYPE_USER1
> +
> +#ifdef RTE_LIBRTE_VHOST_DEBUG
> +#define VHOST_MAX_PRINT_BUFF 6072
> +#define LOG_LEVEL RTE_LOG_DEBUG
> +#define LOG_DEBUG(log_type, fmt, args...) RTE_LOG(DEBUG, log_type, fmt, ##args)
> +#define VHOST_PRINT_PACKET(device, addr, size, header) do { \
> + char *pkt_addr = (char *)(addr); \
> + unsigned int index; \
> + char packet[VHOST_MAX_PRINT_BUFF]; \
> + \
> + if ((header)) \
> + snprintf(packet, VHOST_MAX_PRINT_BUFF, "(%"PRIu64") Header size %d: ", (device->device_fh), (size)); \
> + else \
> + snprintf(packet, VHOST_MAX_PRINT_BUFF, "(%"PRIu64") Packet size %d: ", (device->device_fh), (size)); \
> + for (index = 0; index < (size); index++) { \
> + snprintf(packet + strnlen(packet, VHOST_MAX_PRINT_BUFF), VHOST_MAX_PRINT_BUFF - strnlen(packet, VHOST_MAX_PRINT_BUFF), \
> + "%02hhx ", pkt_addr[index]); \
> + } \
> + snprintf(packet + strnlen(packet, VHOST_MAX_PRINT_BUFF), VHOST_MAX_PRINT_BUFF - strnlen(packet, VHOST_MAX_PRINT_BUFF), "\n"); \
> + \
> + LOG_DEBUG(VHOST_DATA, "%s", packet); \
> +} while (0)
> +#else
> +#define LOG_LEVEL RTE_LOG_INFO
> +#define LOG_DEBUG(log_type, fmt, args...) do {} while (0)
> +#define VHOST_PRINT_PACKET(device, addr, size, header) do {} while (0)
> +#endif
> +
> +/**
> + * Structure used to identify device context.
> + */
> +struct vhost_device_ctx {
> + pid_t pid; /**< PID of process calling the IOCTL. */
> + uint64_t fh; /**< Populated with fi->fh to track the device index. */
> +};
> +
> +/**
> + * Structure contains function pointers to be defined in virtio-net.c. These
> + * functions are called in CUSE context and are used to configure devices.
> + */
> +struct vhost_net_device_ops {
> + int (*new_device)(struct vhost_device_ctx);
> + void (*destroy_device)(struct vhost_device_ctx);
> +
> + int (*get_features)(struct vhost_device_ctx, uint64_t *);
> + int (*set_features)(struct vhost_device_ctx, uint64_t *);
> +
> + int (*set_mem_table)(struct vhost_device_ctx, const void *, uint32_t);
> +
> + int (*set_vring_num)(struct vhost_device_ctx, struct vhost_vring_state *);
> + int (*set_vring_addr)(struct vhost_device_ctx, struct vhost_vring_addr *);
> + int (*set_vring_base)(struct vhost_device_ctx, struct vhost_vring_state *);
> + int (*get_vring_base)(struct vhost_device_ctx, uint32_t, struct vhost_vring_state *);
> +
> + int (*set_vring_kick)(struct vhost_device_ctx, struct vhost_vring_file *);
> + int (*set_vring_call)(struct vhost_device_ctx, struct vhost_vring_file *);
> +
> + int (*set_backend)(struct vhost_device_ctx, struct vhost_vring_file *);
> +
> + int (*set_owner)(struct vhost_device_ctx);
> + int (*reset_owner)(struct vhost_device_ctx);
> +};
> +
> +
> +struct vhost_net_device_ops const *get_virtio_net_callbacks(void);
> diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
> new file mode 100644
> index 0000000..d25457b
> --- /dev/null
> +++ b/lib/librte_vhost/vhost_rxtx.c
> @@ -0,0 +1,292 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <stdint.h>
> +#include <linux/virtio_net.h>
> +
> +#include <rte_mbuf.h>
> +#include <rte_memcpy.h>
> +#include <rte_virtio_net.h>
> +
> +#include "vhost-net-cdev.h"
> +
> +#define VHOST_MAX_PKT_BURST 64
> +#define VHOST_MAX_MRG_PKT_BURST 64
> +
> +
> +uint32_t
> +rte_vhost_enqueue_burst(struct virtio_net *dev, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count)
> +{
> + struct vhost_virtqueue *vq;
> + struct vring_desc *desc;
> + struct rte_mbuf *buff;
> + /* The virtio_hdr is initialised to 0. */
> + struct virtio_net_hdr_mrg_rxbuf virtio_hdr = {{0, 0, 0, 0, 0, 0}, 0};
> + uint64_t buff_addr = 0;
> + uint64_t buff_hdr_addr = 0;
> + uint32_t head[VHOST_MAX_PKT_BURST], packet_len = 0;
> + uint32_t head_idx, packet_success = 0;
> + uint32_t mergeable, mrg_count = 0;
> + uint16_t avail_idx, res_cur_idx;
> + uint16_t res_base_idx, res_end_idx;
> + uint16_t free_entries;
> + uint8_t success = 0;
> +
> + LOG_DEBUG(VHOST_DATA, "(%"PRIu64") %s()\n", dev->device_fh, __func__);
> + if (unlikely(queue_id != VIRTIO_RXQ)) {
> + LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n");
> + return 0;
> + }
> +
> + vq = dev->virtqueue[VIRTIO_RXQ];
> + count = (count > VHOST_MAX_PKT_BURST) ? VHOST_MAX_PKT_BURST : count;
> + /* As many data cores may want access to available buffers, they need to be reserved. */
> + do {
> + res_base_idx = vq->last_used_idx_res;
> + avail_idx = *((volatile uint16_t *)&vq->avail->idx);
> +
> + free_entries = (avail_idx - res_base_idx);
> + /* Check that we have enough buffers. */
> + if (unlikely(count > free_entries))
> + count = free_entries;
> +
> + if (count == 0)
> + return 0;
> +
> + res_end_idx = res_base_idx + count;
> + /* vq->last_used_idx_res is atomically updated. */
> + /* TODO: Allow to disable cmpset if no concurrency in application. */
> + success = rte_atomic16_cmpset(&vq->last_used_idx_res,
> + res_base_idx, res_end_idx);
> + /* If there is contention here and failed, try again. */
> + } while (unlikely(success == 0));
> + res_cur_idx = res_base_idx;
> + LOG_DEBUG(VHOST_DATA, "(%"PRIu64") Current Index %d| End Index %d\n",
> + dev->device_fh, res_cur_idx, res_end_idx);
> +
> + /* Prefetch available ring to retrieve indexes. */
> + rte_prefetch0(&vq->avail->ring[res_cur_idx & (vq->size - 1)]);
> +
> + /* Check if the VIRTIO_NET_F_MRG_RXBUF feature is enabled. */
> + mergeable = dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF);
> +
> + /* Retrieve all of the head indexes first to avoid caching issues. */
> + for (head_idx = 0; head_idx < count; head_idx++)
> + head[head_idx] = vq->avail->ring[(res_cur_idx + head_idx) & (vq->size - 1)];
> +
> + /* Prefetch descriptor index. */
> + rte_prefetch0(&vq->desc[head[packet_success]]);
> +
> + while (res_cur_idx != res_end_idx) {
> + /* Get descriptor from available ring. */
> + desc = &vq->desc[head[packet_success]];
> +
> + buff = pkts[packet_success];
> +
> + /* Convert from gpa to vva (guest physical addr -> vhost virtual addr). */
> + buff_addr = gpa_to_vva(dev, desc->addr);
> + /* Prefetch buffer address. */
> + rte_prefetch0((void *)(uintptr_t)buff_addr);
> +
> + if (mergeable && (mrg_count != 0)) {
> + desc->len = packet_len = rte_pktmbuf_data_len(buff);
> + } else {
> + /* Copy virtio_hdr to packet and increment buffer address. */
> + buff_hdr_addr = buff_addr;
> + packet_len = rte_pktmbuf_data_len(buff) + vq->vhost_hlen;
> +
> + /*
> + * If the descriptors are chained the header and data are placed in
> + * separate buffers.
> + */
> + if (desc->flags & VRING_DESC_F_NEXT) {
> + desc->len = vq->vhost_hlen;
> + desc = &vq->desc[desc->next];
> + /* Buffer address translation. */
> + buff_addr = gpa_to_vva(dev, desc->addr);
> + desc->len = rte_pktmbuf_data_len(buff);
> + } else {
> + buff_addr += vq->vhost_hlen;
> + desc->len = packet_len;
> + }
> + }
> +
> + VHOST_PRINT_PACKET(dev, (uintptr_t)buff_addr, rte_pktmbuf_data_len(buff), 0);
> +
> + /* Update used ring with desc information. */
> + vq->used->ring[res_cur_idx & (vq->size - 1)].id = head[packet_success];
> + vq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;
> +
> + /* Copy mbuf data to buffer. */
> + /* TODO fixme for sg mbuf and the case that desc couldn't hold the mbuf data */
> + rte_memcpy((void *)(uintptr_t)buff_addr, (const void *)buff->pkt.data, rte_pktmbuf_data_len(buff));
> +
> + res_cur_idx++;
> + packet_success++;
> +
> + /* If mergeable is disabled then a header is required per buffer. */
> + if (!mergeable) {
> + rte_memcpy((void *)(uintptr_t)buff_hdr_addr, (const void *)&virtio_hdr, vq->vhost_hlen);
> + VHOST_PRINT_PACKET(dev, (uintptr_t)buff_hdr_addr, vq->vhost_hlen, 1);
> + } else {
> + mrg_count++;
> + /* Merge buffer can only handle so many buffers at a time. Tell the guest if this limit is reached. */
> + if ((mrg_count == VHOST_MAX_MRG_PKT_BURST) || (res_cur_idx == res_end_idx)) {
> + virtio_hdr.num_buffers = mrg_count;
> + LOG_DEBUG(VHOST_DATA, "(%"PRIu64") RX: Num merge buffers %d\n", dev->device_fh, virtio_hdr.num_buffers);
> + rte_memcpy((void *)(uintptr_t)buff_hdr_addr, (const void *)&virtio_hdr, vq->vhost_hlen);
> + VHOST_PRINT_PACKET(dev, (uintptr_t)buff_hdr_addr, vq->vhost_hlen, 1);
> + mrg_count = 0;
> + }
> + }
> + if (res_cur_idx < res_end_idx) {
> + /* Prefetch descriptor index. */
> + rte_prefetch0(&vq->desc[head[packet_success]]);
> + }
> + }
> +
> + rte_compiler_barrier();
> +
> + /* Wait until it's our turn to add our buffer to the used ring. */
> + while (unlikely(vq->last_used_idx != res_base_idx))
> + rte_pause();
> +
> + *(volatile uint16_t *)&vq->used->idx += count;
> + vq->last_used_idx = res_end_idx;
> +
> + /* Kick the guest if necessary. */
> + if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
> + eventfd_write((int)vq->kickfd, 1);
> + return count;
> +}
> +
> +
> +uint32_t
> +rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint32_t count)
> +{
> + struct rte_mbuf *mbuf;
> + struct vhost_virtqueue *vq;
> + struct vring_desc *desc;
> + uint64_t buff_addr = 0;
> + uint32_t head[VHOST_MAX_PKT_BURST];
> + uint32_t used_idx;
> + uint32_t i;
> + uint16_t free_entries, packet_success = 0;
> + uint16_t avail_idx;
> +
> + if (unlikely(queue_id != VIRTIO_TXQ)) {
> + LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n");
> + return 0;
> + }
> +
> + vq = dev->virtqueue[VIRTIO_TXQ];
> + avail_idx = *((volatile uint16_t *)&vq->avail->idx);
> +
> + /* If there are no available buffers then return. */
> + if (vq->last_used_idx == avail_idx)
> + return 0;
> +
> + LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_tx()\n", dev->device_fh);
> +
> + /* Prefetch available ring to retrieve head indexes. */
> + rte_prefetch0(&vq->avail->ring[vq->last_used_idx & (vq->size - 1)]);
> +
> + /* Get the number of free entries in the ring. */
> + free_entries = (avail_idx - vq->last_used_idx);
> +
> + if (free_entries > count)
> + free_entries = count;
> + /* Limit to MAX_PKT_BURST. */
> + if (free_entries > VHOST_MAX_PKT_BURST)
> + free_entries = VHOST_MAX_PKT_BURST;
> +
> + LOG_DEBUG(VHOST_DATA, "(%"PRIu64") Buffers available %d\n", dev->device_fh, free_entries);
> + /* Retrieve all of the head indexes first to avoid caching issues. */
> + for (i = 0; i < free_entries; i++)
> + head[i] = vq->avail->ring[(vq->last_used_idx + i) & (vq->size - 1)];
> +
> + /* Prefetch descriptor index. */
> + rte_prefetch0(&vq->desc[head[packet_success]]);
> + rte_prefetch0(&vq->used->ring[vq->last_used_idx & (vq->size - 1)]);
> +
> + while (packet_success < free_entries) {
> + desc = &vq->desc[head[packet_success]];
> +
> + /* Discard first buffer as it is the virtio header. */
> + desc = &vq->desc[desc->next];
> +
> + /* Buffer address translation. */
> + buff_addr = gpa_to_vva(dev, desc->addr);
> + /* Prefetch buffer address. */
> + rte_prefetch0((void *)(uintptr_t)buff_addr);
> +
> + used_idx = vq->last_used_idx & (vq->size - 1);
> +
> + if (packet_success < (free_entries - 1)) {
> + /* Prefetch descriptor index. */
> + rte_prefetch0(&vq->desc[head[packet_success + 1]]);
> + rte_prefetch0(&vq->used->ring[(used_idx + 1) & (vq->size - 1)]);
> + }
> +
> + /* Update used index buffer information. */
> + vq->used->ring[used_idx].id = head[packet_success];
> + vq->used->ring[used_idx].len = 0;
> +
> + mbuf = rte_pktmbuf_alloc(mbuf_pool);
> + if (unlikely(mbuf == NULL)) {
> + RTE_LOG(ERR, VHOST_DATA, "Failed to allocate memory for mbuf.\n");
> + return packet_success;
> + }
> + mbuf->pkt.data_len = desc->len;
> + mbuf->pkt.pkt_len = mbuf->pkt.data_len;
> +
> + rte_memcpy((void *)mbuf->pkt.data,
> + (const void *)buff_addr, mbuf->pkt.data_len);
> +
> + pkts[packet_success] = mbuf;
> +
> + VHOST_PRINT_PACKET(dev, (uintptr_t)buff_addr, desc->len, 0);
> +
> + vq->last_used_idx++;
> + packet_success++;
> + }
> +
> + rte_compiler_barrier();
> + vq->used->idx += packet_success;
> + /* Kick guest if required. */
> + if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
> + eventfd_write((int)vq->kickfd, 1);
> +
> + return packet_success;
> +}
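
For reference, a data-core loop built on the two burst calls above might look roughly like this; the function name, port 0, queue 0, the burst size and the simple drop policy are illustrative assumptions, not part of this patch:

    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_virtio_net.h>

    #define BURST_SZ 32    /* illustrative burst size */

    /* Illustrative sketch: shuttle packets between physical port 0 and
     * one vhost device on a single data core. */
    static void
    vhost_switch_loop(struct virtio_net *dev, struct rte_mempool *mbuf_pool)
    {
        struct rte_mbuf *bufs[BURST_SZ];
        uint32_t rx, tx, i;

        while (dev->flags & VIRTIO_DEV_RUNNING) {
            /* NIC -> guest: enqueue copies the data, so every mbuf is
             * freed here; rx - tx packets were dropped (ring full). */
            rx = rte_eth_rx_burst(0, 0, bufs, BURST_SZ);
            tx = rte_vhost_enqueue_burst(dev, VIRTIO_RXQ, bufs, rx);
            for (i = 0; i < rx; i++)
                rte_pktmbuf_free(bufs[i]);

            /* Guest -> NIC: dequeue allocates fresh mbufs from
             * mbuf_pool; free whatever the NIC did not accept. */
            rx = rte_vhost_dequeue_burst(dev, VIRTIO_TXQ, mbuf_pool, bufs, BURST_SZ);
            tx = rte_eth_tx_burst(0, 0, bufs, (uint16_t)rx);
            while (tx < rx)
                rte_pktmbuf_free(bufs[tx++]);
        }
    }
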
> diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c
> new file mode 100644
> index 0000000..80e3b8c
> --- /dev/null
> +++ b/lib/librte_vhost/virtio-net.c
> @@ -0,0 +1,1002 @@
> +/*-
> + * BSD LICENSE
> + *
> + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + *
> + * * Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * * Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in
> + * the documentation and/or other materials provided with the
> + * distribution.
> + * * Neither the name of Intel Corporation nor the names of its
> + * contributors may be used to endorse or promote products derived
> + * from this software without specific prior written permission.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <dirent.h>
> +#include <fuse/cuse_lowlevel.h>
> +#include <linux/vhost.h>
> +#include <linux/virtio_net.h>
> +#include <stddef.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <sys/eventfd.h>
> +#include <sys/ioctl.h>
> +#include <sys/mman.h>
> +#include <unistd.h>
> +
> +#include <rte_ethdev.h>
> +#include <rte_log.h>
> +#include <rte_string_fns.h>
> +#include <rte_memory.h>
> +#include <rte_virtio_net.h>
> +
> +#include "vhost-net-cdev.h"
> +#include "eventfd_link/eventfd_link.h"
> +
> +/**
> + * Device linked list structure for configuration.
> + */
> +struct virtio_net_config_ll {
> + struct virtio_net dev; /* Virtio device. */
> + struct virtio_net_config_ll *next; /* Next entry on linked list. */
> +};
> +
> +static const char eventfd_cdev[] = "/dev/eventfd-link";
> +
> +/* device ops to add/remove device to data core. */
> +static struct virtio_net_device_ops const *notify_ops;
> +/* Root address of the linked list in the configuration core. */
> +static struct virtio_net_config_ll *ll_root;
> +
> +/* Features supported by this library. */
> +#define VHOST_SUPPORTED_FEATURES (1ULL << VIRTIO_NET_F_MRG_RXBUF)
> +static uint64_t VHOST_FEATURES = VHOST_SUPPORTED_FEATURES;
> +
> +/* Line size for reading maps file. */
> +static const uint32_t BUFSIZE = PATH_MAX;
> +
> +/* Size of prot char array in procmap. */
> +#define PROT_SZ 5
> +
> +/* Number of elements in procmap struct. */
> +#define PROCMAP_SZ 8
> +
> +/* Structure containing information gathered from maps file. */
> +struct procmap {
> + uint64_t va_start; /* Start virtual address in file. */
> + uint64_t len; /* Size of file. */
> + uint64_t pgoff; /* Not used. */
> + uint32_t maj; /* Not used. */
> + uint32_t min; /* Not used. */
> + uint32_t ino; /* Not used. */
> + char prot[PROT_SZ]; /* Not used. */
> + char fname[PATH_MAX]; /* File name. */
> +};
> +
> +/**
> + * Converts QEMU virtual address to Vhost virtual address. This function is used
> + * to convert the ring addresses to our address space.
> + */
> +static uint64_t
> +qva_to_vva(struct virtio_net *dev, uint64_t qemu_va)
> +{
> + struct virtio_memory_regions *region;
> + uint64_t vhost_va = 0;
> + uint32_t regionidx = 0;
> +
> + /* Find the region where the address lives. */
> + for (regionidx = 0; regionidx < dev->mem->nregions; regionidx++) {
> + region = &dev->mem->regions[regionidx];
> + if ((qemu_va >= region->userspace_address) &&
> + (qemu_va <= region->userspace_address +
> + region->memory_size)) {
> + vhost_va = dev->mem->mapped_address + qemu_va - dev->mem->base_address;
> + break;
> + }
> + }
> + return vhost_va;
> +}
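
For intuition, a worked translation with made-up addresses (illustrative only): if QEMU maps guest RAM at userspace address 0x2aaaaac00000 (base_address) and this library maps the same file at 0x7f10c0000000 (mapped_address), then a ring the guest driver placed at QEMU VA 0x2aaaaad34000 resolves as:

    /* Worked example (made-up addresses):
     * vhost_va = mapped_address + (qemu_va - base_address)
     *          = 0x7f10c0000000 + (0x2aaaaad34000 - 0x2aaaaac00000)
     *          = 0x7f10c0134000
     */
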
> +
> +/**
> + * Locate the file containing QEMU's memory space and map it to our address space.
> + */
> +static int
> +host_memory_map(struct virtio_net *dev, struct virtio_memory *mem, pid_t pid, uint64_t addr)
> +{
> + struct dirent *dptr = NULL;
> + struct procmap procmap;
> + DIR *dp = NULL;
> + int fd;
> + int i;
> + char memfile[PATH_MAX];
> + char mapfile[PATH_MAX];
> + char procdir[PATH_MAX];
> + char resolved_path[PATH_MAX];
> + FILE *fmap;
> + void *map;
> + uint8_t found = 0;
> + char line[BUFSIZE];
> + char dlm[] = "- : ";
> + char *str, *sp, *in[PROCMAP_SZ];
> + char *end = NULL;
> +
> + /* Path where mem files are located. */
> + snprintf(procdir, PATH_MAX, "/proc/%u/fd/", pid);
> + /* Maps file used to locate mem file. */
> + snprintf(mapfile, PATH_MAX, "/proc/%u/maps", pid);
> +
> + fmap = fopen(mapfile, "r");
> + if (fmap == NULL) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to open maps file for pid %d\n", dev->device_fh, pid);
> + return -1;
> + }
> +
> + /* Read through maps file until we find our base_address. */
> + while (fgets(line, BUFSIZE, fmap) != 0) {
> + str = line;
> + errno = 0;
> + /* Split line into fields. */
> + for (i = 0; i < PROCMAP_SZ; i++) {
> + in[i] = strtok_r(str, &dlm[i], &sp);
> + if ((in[i] == NULL) || (errno != 0)) {
> + fclose(fmap);
> + return -1;
> + }
> + str = NULL;
> + }
> +
> + /* Convert/Copy each field as needed. */
> + procmap.va_start = strtoull(in[0], &end, 16);
> + if ((in[0] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
> + fclose(fmap);
> + return -1;
> + }
> +
> + procmap.len = strtoull(in[1], &end, 16);
> + if ((in[1] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
> + fclose(fmap);
> + return -1;
> + }
> +
> + procmap.pgoff = strtoull(in[3], &end, 16);
> + if ((in[3] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
> + fclose(fmap);
> + return -1;
> + }
> +
> + procmap.maj = strtoul(in[4], &end, 16);
> + if ((in[4] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
> + fclose(fmap);
> + return -1;
> + }
> +
> + procmap.min = strtoul(in[5], &end, 16);
> + if ((in[5] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
> + fclose(fmap);
> + return -1;
> + }
> +
> + procmap.ino = strtoul(in[6], &end, 16);
> + if ((in[6] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
> + fclose(fmap);
> + return -1;
> + }
> +
> + memcpy(&procmap.prot, in[2], PROT_SZ);
> + memcpy(&procmap.fname, in[7], PATH_MAX);
> +
> + if (procmap.va_start == addr) {
> + procmap.len = procmap.len - procmap.va_start;
> + found = 1;
> + break;
> + }
> + }
> + fclose(fmap);
> +
> + if (!found) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find memory file in pid %d maps file\n", dev->device_fh, pid);
> + return -1;
> + }
> +
> + /* Find the guest memory file among the process fds. */
> + dp = opendir(procdir);
> + if (dp == NULL) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Cannot open pid %d process directory\n", dev->device_fh, pid);
> + return -1;
> +
> + }
> +
> + found = 0;
> +
> + /* Read the fd directory contents. */
> + while (NULL != (dptr = readdir(dp))) {
> + snprintf(memfile, PATH_MAX, "/proc/%u/fd/%s", pid, dptr->d_name);
> + realpath(memfile, resolved_path);
> + if (resolved_path == NULL) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to resolve fd directory\n", dev->device_fh);
> + closedir(dp);
> + return -1;
> + }
> + if (strncmp(resolved_path, procmap.fname,
> + strnlen(procmap.fname, PATH_MAX)) == 0) {
> + found = 1;
> + break;
> + }
> + }
> +
> + closedir(dp);
> +
> + if (found == 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find memory file for pid %d\n", dev->device_fh, pid);
> + return -1;
> + }
> + /* Open the shared memory file and map the memory into this process. */
> + fd = open(memfile, O_RDWR);
> +
> + if (fd == -1) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to open %s for pid %d\n", dev->device_fh, memfile, pid);
> + return -1;
> + }
> +
> + map = mmap(0, (size_t)procmap.len, PROT_READ|PROT_WRITE, MAP_POPULATE|MAP_SHARED, fd, 0);
> + close(fd);
> +
> + if (map == MAP_FAILED) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Error mapping the file %s for pid %d\n", dev->device_fh, memfile, pid);
> + return -1;
> + }
> +
> + /* Store the memory address and size in the device data structure. */
> + mem->mapped_address = (uint64_t)(uintptr_t)map;
> + mem->mapped_size = procmap.len;
> +
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Mem File: %s->%s - Size: %llu - VA: %p\n", dev->device_fh,
> + memfile, resolved_path, (long long unsigned)mem->mapped_size, map);
> +
> + return 0;
> +}
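
For orientation, the parser above splits standard /proc/<pid>/maps lines; an illustrative hugepage-backed entry (all values made up) looks like:

    2aaaaac00000-2aaaeac00000 rw-s 00000000 00:19 4 /dev/hugepages/qemu_back_mem.ram

in[0] and in[1] are the start and end virtual addresses (the end address is first parsed into procmap.len and later converted to a length by subtracting va_start), in[2] is the protection string, in[3] the page offset, in[4]/in[5] the major/minor device numbers, in[6] the inode, and in[7] the file name, which is then matched against the links in /proc/<pid>/fd to find the memory file to mmap().
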
> +
> +/**
> + * Initialise all variables in device structure.
> + */
> +static void
> +init_device(struct virtio_net *dev)
> +{
> + uint64_t vq_offset;
> +
> + /* Virtqueues have already been malloced so we don't want to set them to NULL. */
> + vq_offset = offsetof(struct virtio_net, mem);
> +
> + /* Set everything to 0. */
> + memset((void *)(uintptr_t)((uint64_t)(uintptr_t)dev + vq_offset), 0,
> + (sizeof(struct virtio_net) - (size_t)vq_offset));
> + memset(dev->virtqueue[VIRTIO_RXQ], 0, sizeof(struct vhost_virtqueue));
> + memset(dev->virtqueue[VIRTIO_TXQ], 0, sizeof(struct vhost_virtqueue));
> +
> + /* Backends are set to -1 indicating an inactive device. */
> + dev->virtqueue[VIRTIO_RXQ]->backend = VIRTIO_DEV_STOPPED;
> + dev->virtqueue[VIRTIO_TXQ]->backend = VIRTIO_DEV_STOPPED;
> +}
> +
> +/**
> + * Unmap any memory, close any file descriptors and free any memory owned by a device.
> + */
> +static void
> +cleanup_device(struct virtio_net *dev)
> +{
> + /* Unmap QEMU memory file if mapped. */
> + if (dev->mem) {
> + munmap((void *)(uintptr_t)dev->mem->mapped_address, (size_t)dev->mem->mapped_size);
> + free(dev->mem);
> + }
> +
> + /* Close any event notifiers opened by device. */
> + if (dev->virtqueue[VIRTIO_RXQ]->callfd)
> + close((int)dev->virtqueue[VIRTIO_RXQ]->callfd);
> + if (dev->virtqueue[VIRTIO_RXQ]->kickfd)
> + close((int)dev->virtqueue[VIRTIO_RXQ]->kickfd);
> + if (dev->virtqueue[VIRTIO_TXQ]->callfd)
> + close((int)dev->virtqueue[VIRTIO_TXQ]->callfd);
> + if (dev->virtqueue[VIRTIO_TXQ]->kickfd)
> + close((int)dev->virtqueue[VIRTIO_TXQ]->kickfd);
> +}
> +
> +/**
> + * Release virtqueues and device memory.
> + */
> +static void
> +free_device(struct virtio_net_config_ll *ll_dev)
> +{
> + /* Free any malloc'd memory */
> + free(ll_dev->dev.virtqueue[VIRTIO_RXQ]);
> + free(ll_dev->dev.virtqueue[VIRTIO_TXQ]);
> + free(ll_dev);
> +}
> +
> +/**
> + * Retrieves an entry from the devices configuration linked list.
> + */
> +static struct virtio_net_config_ll *
> +get_config_ll_entry(struct vhost_device_ctx ctx)
> +{
> + struct virtio_net_config_ll *ll_dev = ll_root;
> +
> + /* Loop through linked list until the device_fh is found. */
> + while (ll_dev != NULL) {
> + if (ll_dev->dev.device_fh == ctx.fh)
> + return ll_dev;
> + ll_dev = ll_dev->next;
> + }
> +
> + return NULL;
> +}
> +
> +/**
> + * Searches the configuration core linked list and retrieves the device if it exists.
> + */
> +static struct virtio_net *
> +get_device(struct vhost_device_ctx ctx)
> +{
> + struct virtio_net_config_ll *ll_dev;
> +
> + ll_dev = get_config_ll_entry(ctx);
> +
> + /* If a matching entry is found in the linked list, return the device in that entry. */
> + if (ll_dev)
> + return &ll_dev->dev;
> +
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Device not found in linked list.\n", ctx.fh);
> + return NULL;
> +}
> +
> +/**
> + * Add entry containing a device to the device configuration linked list.
> + */
> +static void
> +add_config_ll_entry(struct virtio_net_config_ll *new_ll_dev)
> +{
> + struct virtio_net_config_ll *ll_dev = ll_root;
> +
> + /* If ll_dev == NULL then this is the first device so go to else */
> + if (ll_dev) {
> + /* If the 1st device_fh != 0 then we insert our device here. */
> + if (ll_dev->dev.device_fh != 0) {
> + new_ll_dev->dev.device_fh = 0;
> + new_ll_dev->next = ll_dev;
> + ll_root = new_ll_dev;
> + } else {
> + /* Increment through the ll until we find an unused device_fh. Insert the device at that entry. */
> + while ((ll_dev->next != NULL) && (ll_dev->dev.device_fh == (ll_dev->next->dev.device_fh - 1)))
> + ll_dev = ll_dev->next;
> +
> + new_ll_dev->dev.device_fh = ll_dev->dev.device_fh + 1;
> + new_ll_dev->next = ll_dev->next;
> + ll_dev->next = new_ll_dev;
> + }
> + } else {
> + ll_root = new_ll_dev;
> + ll_root->dev.device_fh = 0;
> + }
> +
> +}
> +
> +/**
> + * Remove an entry from the device configuration linked list.
> + */
> +static struct virtio_net_config_ll *
> +rm_config_ll_entry(struct virtio_net_config_ll *ll_dev, struct virtio_net_config_ll *ll_dev_last)
> +{
> + /* First remove the device and then clean it up. */
> + if (ll_dev == ll_root) {
> + ll_root = ll_dev->next;
> + cleanup_device(&ll_dev->dev);
> + free_device(ll_dev);
> + return ll_root;
> + } else {
> + if (likely(ll_dev_last != NULL)) {
> + ll_dev_last->next = ll_dev->next;
> + cleanup_device(&ll_dev->dev);
> + free_device(ll_dev);
> + return ll_dev_last->next;
> + } else {
> + cleanup_device(&ll_dev->dev);
> + free_device(ll_dev);
> + RTE_LOG(ERR, VHOST_CONFIG, "Remove entry from config_ll failed\n");
> + return NULL;
> + }
> + }
> +}
> +
> +
> +/**
> + * Function is called from the CUSE open function. The device structure is
> + * initialised and a new entry is added to the device configuration linked
> + * list.
> + */
> +static int
> +new_device(struct vhost_device_ctx ctx)
> +{
> + struct virtio_net_config_ll *new_ll_dev;
> + struct vhost_virtqueue *virtqueue_rx, *virtqueue_tx;
> +
> + /* Setup device and virtqueues. */
> + new_ll_dev = malloc(sizeof(struct virtio_net_config_ll));
> + if (new_ll_dev == NULL) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for dev.\n", ctx.fh);
> + return -1;
> + }
> +
> + virtqueue_rx = malloc(sizeof(struct vhost_virtqueue));
> + if (virtqueue_rx == NULL) {
> + free(new_ll_dev);
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for virtqueue_rx.\n", ctx.fh);
> + return -1;
> + }
> +
> + virtqueue_tx = malloc(sizeof(struct vhost_virtqueue));
> + if (virtqueue_tx == NULL) {
> + free(virtqueue_rx);
> + free(new_ll_dev);
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for virtqueue_tx.\n", ctx.fh);
> + return -1;
> + }
> +
> + new_ll_dev->dev.virtqueue[VIRTIO_RXQ] = virtqueue_rx;
> + new_ll_dev->dev.virtqueue[VIRTIO_TXQ] = virtqueue_tx;
> +
> + /* Initialise device and virtqueues. */
> + init_device(&new_ll_dev->dev);
> +
> + new_ll_dev->next = NULL;
> +
> + /* Add entry to device configuration linked list. */
> + add_config_ll_entry(new_ll_dev);
> +
> + return new_ll_dev->dev.device_fh;
> +}
> +
> +/**
> + * Function is called from the CUSE release function. This function will cleanup
> + * the device and remove it from device configuration linked list.
> + */
> +static void
> +destroy_device(struct vhost_device_ctx ctx)
> +{
> + struct virtio_net_config_ll *ll_dev_cur_ctx, *ll_dev_last = NULL;
> + struct virtio_net_config_ll *ll_dev_cur = ll_root;
> +
> + /* Find the linked list entry for the device to be removed. */
> + ll_dev_cur_ctx = get_config_ll_entry(ctx);
> + while (ll_dev_cur != NULL) {
> + /* If the device is found or a device that doesn't exist is found then it is removed. */
> + if (ll_dev_cur == ll_dev_cur_ctx) {
> + /*
> + * If the device is running on a data core then call the
> + * function to remove it from the data core.
> + */
> + if ((ll_dev_cur->dev.flags & VIRTIO_DEV_RUNNING))
> + notify_ops->destroy_device(&(ll_dev_cur->dev));
> + ll_dev_cur = rm_config_ll_entry(ll_dev_cur, ll_dev_last);
> + /* TODO return here? */
> + } else {
> + ll_dev_last = ll_dev_cur;
> + ll_dev_cur = ll_dev_cur->next;
> + }
> + }
> +}
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_SET_OWNER
> + * This function just returns success at the moment unless the device hasn't been initialised.
> + */
> +static int
> +set_owner(struct vhost_device_ctx ctx)
> +{
> + struct virtio_net *dev;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> +
> + return 0;
> +}
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_RESET_OWNER
> + */
> +static int
> +reset_owner(struct vhost_device_ctx ctx)
> +{
> + struct virtio_net_config_ll *ll_dev;
> +
> + ll_dev = get_config_ll_entry(ctx);
> +
> + cleanup_device(&ll_dev->dev);
> + init_device(&ll_dev->dev);
> +
> + return 0;
> +}
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_GET_FEATURES
> + * The features that we support are requested.
> + */
> +static int
> +get_features(struct vhost_device_ctx ctx, uint64_t *pu)
> +{
> + struct virtio_net *dev;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> +
> + /* Send our supported features. */
> + *pu = VHOST_FEATURES;
> + return 0;
> +}
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_SET_FEATURES
> + * We receive the negotiated set of features supported by us and the virtio device.
> + */
> +static int
> +set_features(struct vhost_device_ctx ctx, uint64_t *pu)
> +{
> + struct virtio_net *dev;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> + if (*pu & ~VHOST_FEATURES)
> + return -1;
> +
> + /* Store the negotiated feature list for the device. */
> + dev->features = *pu;
> +
> + /* Set the vhost_hlen depending on if VIRTIO_NET_F_MRG_RXBUF is set. */
> + if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) {
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Mergeable RX buffers enabled\n", dev->device_fh);
> + dev->virtqueue[VIRTIO_RXQ]->vhost_hlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
> + dev->virtqueue[VIRTIO_TXQ]->vhost_hlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
> + } else {
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Mergeable RX buffers disabled\n", dev->device_fh);
> + dev->virtqueue[VIRTIO_RXQ]->vhost_hlen = sizeof(struct virtio_net_hdr);
> + dev->virtqueue[VIRTIO_TXQ]->vhost_hlen = sizeof(struct virtio_net_hdr);
> + }
> + return 0;
> +}
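
For reference, the two header layouts negotiated here differ only by a trailing num_buffers field, so vhost_hlen becomes 10 or 12 bytes; a sketch of the structures from linux/virtio_net.h, shown with plain integer types:

    struct virtio_net_hdr {             /* vhost_hlen = 10 bytes */
        uint8_t  flags;
        uint8_t  gso_type;
        uint16_t hdr_len;
        uint16_t gso_size;
        uint16_t csum_start;
        uint16_t csum_offset;
    };

    struct virtio_net_hdr_mrg_rxbuf {   /* vhost_hlen = 12 bytes */
        struct virtio_net_hdr hdr;
        uint16_t num_buffers;           /* set by rte_vhost_enqueue_burst */
    };
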
> +
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_SET_MEM_TABLE
> + * This function creates and populates the memory structure for the device.
> + * This includes storing offsets used to translate buffer addresses.
> + */
> +static int
> +set_mem_table(struct vhost_device_ctx ctx, const void *mem_regions_addr, uint32_t nregions)
> +{
> + struct virtio_net *dev;
> + struct vhost_memory_region *mem_regions;
> + struct virtio_memory *mem;
> + uint64_t size = offsetof(struct vhost_memory, regions);
> + uint32_t regionidx, valid_regions;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> +
> + if (dev->mem) {
> + munmap((void *)(uintptr_t)dev->mem->mapped_address, (size_t)dev->mem->mapped_size);
> + free(dev->mem);
> + }
> +
> + /* Malloc the memory structure depending on the number of regions. */
> + mem = calloc(1, sizeof(struct virtio_memory) + (sizeof(struct virtio_memory_regions) * nregions));
> + if (mem == NULL) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for dev->mem.\n", dev->device_fh);
> + return -1;
> + }
> +
> + mem->nregions = nregions;
> +
> + mem_regions = (void *)(uintptr_t)((uint64_t)(uintptr_t)mem_regions_addr + size);
> +
> + for (regionidx = 0; regionidx < mem->nregions; regionidx++) {
> + /* Populate the region structure for each region. */
> + mem->regions[regionidx].guest_phys_address = mem_regions[regionidx].guest_phys_addr;
> + mem->regions[regionidx].guest_phys_address_end = mem->regions[regionidx].guest_phys_address +
> + mem_regions[regionidx].memory_size;
> + mem->regions[regionidx].memory_size = mem_regions[regionidx].memory_size;
> + mem->regions[regionidx].userspace_address = mem_regions[regionidx].userspace_addr;
> +
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") REGION: %u - GPA: %p - QEMU VA: %p - SIZE (%"PRIu64")\n", dev->device_fh,
> + regionidx, (void *)(uintptr_t)mem->regions[regionidx].guest_phys_address,
> + (void *)(uintptr_t)mem->regions[regionidx].userspace_address,
> + mem->regions[regionidx].memory_size);
> +
> + /* Set the base address mapping. */
> + if (mem->regions[regionidx].guest_phys_address == 0x0) {
> + mem->base_address = mem->regions[regionidx].userspace_address;
> + /* Map VM memory file. */
> + if (host_memory_map(dev, mem, ctx.pid, mem->base_address) != 0) {
> + free(mem);
> + return -1;
> + }
> + }
> + }
> +
> + /* Check that we have a valid base address. */
> + if (mem->base_address == 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find base address of qemu memory file.\n", dev->device_fh);
> + free(mem);
> + return -1;
> + }
> +
> + /* Check if all of our regions have valid mappings. Usually one does not exist in the QEMU memory file. */
> + valid_regions = mem->nregions;
> + for (regionidx = 0; regionidx < mem->nregions; regionidx++) {
> + if ((mem->regions[regionidx].userspace_address < mem->base_address) ||
> + (mem->regions[regionidx].userspace_address > (mem->base_address + mem->mapped_size)))
> + valid_regions--;
> + }
> +
> + /* If a region does not have a valid mapping we rebuild our memory struct to contain only valid entries. */
> + if (valid_regions != mem->nregions) {
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Not all memory regions exist in the QEMU mem file. Re-populating mem structure\n",
> + dev->device_fh);
> +
> + /* Re-populate the memory structure with only valid regions. Invalid regions are over-written with memmove. */
> + valid_regions = 0;
> +
> + for (regionidx = mem->nregions; 0 != regionidx--;) {
> + if ((mem->regions[regionidx].userspace_address < mem->base_address) ||
> + (mem->regions[regionidx].userspace_address > (mem->base_address + mem->mapped_size))) {
> + memmove(&mem->regions[regionidx], &mem->regions[regionidx + 1],
> + sizeof(struct virtio_memory_regions) * valid_regions);
> + } else {
> + valid_regions++;
> + }
> + }
> + }
> + mem->nregions = valid_regions;
> + dev->mem = mem;
> +
> + /*
> + * Calculate the address offset for each region. This offset is used to identify
> + * the vhost virtual address corresponding to a QEMU guest physical address.
> + */
> + for (regionidx = 0; regionidx < dev->mem->nregions; regionidx++)
> + dev->mem->regions[regionidx].address_offset = dev->mem->regions[regionidx].userspace_address - dev->mem->base_address
> + + dev->mem->mapped_address - dev->mem->regions[regionidx].guest_phys_address;
> +
> + return 0;
> +}
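
Continuing the made-up numbers from the earlier qva_to_vva() illustration, the per-region offset computed here is what the gpa_to_vva() helper used by the rx/tx code adds to a guest physical address; for a region with GPA base 0x0:

    /* Worked example (made-up addresses, region with guest_phys_address 0x0):
     * address_offset = userspace_address - base_address
     *                + mapped_address - guest_phys_address
     *                = 0x2aaaaac00000 - 0x2aaaaac00000 + 0x7f10c0000000 - 0x0
     *                = 0x7f10c0000000
     * so gpa_to_vva(dev, gpa) reduces to gpa + 0x7f10c0000000 for this region.
     */
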
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_SET_VRING_NUM
> + * The virtio device sends us the size of the descriptor ring.
> + */
> +static int
> +set_vring_num(struct vhost_device_ctx ctx, struct vhost_vring_state *state)
> +{
> + struct virtio_net *dev;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> +
> + /* State->index refers to the queue index. The TX queue is 1, RX queue is 0. */
> + dev->virtqueue[state->index]->size = state->num;
> +
> + return 0;
> +}
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_SET_VRING_ADDR
> + * The virtio device sends us the desc, used and avail ring addresses.
> + * This function then converts these to our address space.
> + */
> +static int
> +set_vring_addr(struct vhost_device_ctx ctx, struct vhost_vring_addr *addr)
> +{
> + struct virtio_net *dev;
> + struct vhost_virtqueue *vq;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> +
> + /* addr->index refers to the queue index. The TX queue is 1, RX queue is 0. */
> + vq = dev->virtqueue[addr->index];
> +
> + /* The addresses are converted from QEMU virtual to Vhost virtual. */
> + vq->desc = (struct vring_desc *)(uintptr_t)qva_to_vva(dev, addr->desc_user_addr);
> + if (vq->desc == 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find descriptor ring address.\n", dev->device_fh);
> + return -1;
> + }
> +
> + vq->avail = (struct vring_avail *)(uintptr_t)qva_to_vva(dev, addr->avail_user_addr);
> + if (vq->avail == 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find available ring address.\n", dev->device_fh);
> + return -1;
> + }
> +
> + vq->used = (struct vring_used *)(uintptr_t)qva_to_vva(dev, addr->used_user_addr);
> + if (vq->used == 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find used ring address.\n", dev->device_fh);
> + return -1;
> + }
> +
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") mapped address desc: %p\n", dev->device_fh, vq->desc);
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") mapped address avail: %p\n", dev->device_fh, vq->avail);
> + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") mapped address used: %p\n", dev->device_fh, vq->used);
> +
> + return 0;
> +}
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_SET_VRING_BASE
> + * The virtio device sends us the available ring last used index.
> + */
> +static int
> +set_vring_base(struct vhost_device_ctx ctx, struct vhost_vring_state *state)
> +{
> + struct virtio_net *dev;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> +
> + /* State->index refers to the queue index. The TX queue is 1, RX queue is 0. */
> + dev->virtqueue[state->index]->last_used_idx = state->num;
> + dev->virtqueue[state->index]->last_used_idx_res = state->num;
> +
> + return 0;
> +}
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_GET_VRING_BASE
> + * We send the virtio device our available ring last used index.
> + */
> +static int
> +get_vring_base(struct vhost_device_ctx ctx, uint32_t index, struct vhost_vring_state *state)
> +{
> + struct virtio_net *dev;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> +
> + state->index = index;
> + /* State->index refers to the queue index. The TX queue is 1, RX queue is 0. */
> + state->num = dev->virtqueue[state->index]->last_used_idx;
> +
> + return 0;
> +}
> +
> +/**
> + * This function uses the eventfd_link kernel module to copy an eventfd file
> + * descriptor provided by QEMU into our process space.
> + */
> +static int
> +eventfd_copy(struct virtio_net *dev, struct eventfd_copy *eventfd_copy)
> +{
> + int eventfd_link, ret;
> +
> + /* Open the character device to the kernel module. */
> + eventfd_link = open(eventfd_cdev, O_RDWR);
> + if (eventfd_link < 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") eventfd_link module is not loaded\n", dev->device_fh);
> + return -1;
> + }
> +
> + /* Call the IOCTL to copy the eventfd. */
> + ret = ioctl(eventfd_link, EVENTFD_COPY, eventfd_copy);
> + close(eventfd_link);
> +
> + if (ret < 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") EVENTFD_COPY ioctl failed\n", dev->device_fh);
> + return -1;
> + }
> +
> +
> + return 0;
> +}
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_SET_VRING_CALL
> + * The virtio device sends an eventfd to interrupt the guest. This fd gets
> + * copied into our process space.
> + */
> +static int
> +set_vring_call(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
> +{
> + struct virtio_net *dev;
> + struct eventfd_copy eventfd_kick;
> + struct vhost_virtqueue *vq;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> +
> + /* file->index refers to the queue index. The TX queue is 1, RX queue is 0. */
> + vq = dev->virtqueue[file->index];
> +
> + if (vq->kickfd)
> + close((int)vq->kickfd);
> +
> + /* Populate the eventfd_copy structure and call eventfd_copy. */
> + vq->kickfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> + eventfd_kick.source_fd = vq->kickfd;
> + eventfd_kick.target_fd = file->fd;
> + eventfd_kick.target_pid = ctx.pid;
> +
> + if (eventfd_copy(dev, &eventfd_kick))
> + return -1;
> +
> + return 0;
> +}
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_SET_VRING_KICK
> + * The virtio device sends an eventfd that it can use to notify us. This fd
> + * gets copied into our process space.
> + */
> +static int
> +set_vring_kick(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
> +{
> + struct virtio_net *dev;
> + struct eventfd_copy eventfd_call;
> + struct vhost_virtqueue *vq;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> +
> + /* file->index refers to the queue index. The TX queue is 1, RX queue is 0. */
> + vq = dev->virtqueue[file->index];
> +
> + if (vq->callfd)
> + close((int)vq->callfd);
> +
> + /* Populate the eventfd_copy structure and call eventfd_copy. */
> + vq->callfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
> + eventfd_call.source_fd = vq->callfd;
> + eventfd_call.target_fd = file->fd;
> + eventfd_call.target_pid = ctx.pid;
> +
> + if (eventfd_copy(dev, &eventfd_call))
> + return -1;
> +
> + return 0;
> +}
> +
> +/**
> + * Called from CUSE IOCTL: VHOST_NET_SET_BACKEND
> + * To complete device initialisation when the virtio driver is loaded we are
> + * provided with a valid fd for a tap device (not used by us). If this happens
> + * then we can add the device to a data core. When the virtio driver is removed
> + * we get fd=-1. At that point we remove the device from the data core. The
> + * device will still exist in the device configuration linked list.
> + */
> +static int
> +set_backend(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
> +{
> + struct virtio_net *dev;
> +
> + dev = get_device(ctx);
> + if (dev == NULL)
> + return -1;
> +
> + /* file->index refers to the queue index. The TX queue is 1, RX queue is 0. */
> + dev->virtqueue[file->index]->backend = file->fd;
> +
> + /* If the device isn't already running and both backend fds are set we add the device. */
> + if (!(dev->flags & VIRTIO_DEV_RUNNING)) {
> + if (((int)dev->virtqueue[VIRTIO_TXQ]->backend != VIRTIO_DEV_STOPPED) &&
> + ((int)dev->virtqueue[VIRTIO_RXQ]->backend != VIRTIO_DEV_STOPPED))
> + return notify_ops->new_device(dev);
> + /* Otherwise we remove it. */
> + } else
> + if (file->fd == VIRTIO_DEV_STOPPED)
> + notify_ops->destroy_device(dev);
> + return 0;
> +}
> +
> +/**
> + * Function pointers are set for the device operations to allow CUSE to call
> + * functions when an IOCTL, device_add or device_release is received.
> + */
> +static const struct vhost_net_device_ops vhost_device_ops = {
> + .new_device = new_device,
> + .destroy_device = destroy_device,
> +
> + .get_features = get_features,
> + .set_features = set_features,
> +
> + .set_mem_table = set_mem_table,
> +
> + .set_vring_num = set_vring_num,
> + .set_vring_addr = set_vring_addr,
> + .set_vring_base = set_vring_base,
> + .get_vring_base = get_vring_base,
> +
> + .set_vring_kick = set_vring_kick,
> + .set_vring_call = set_vring_call,
> +
> + .set_backend = set_backend,
> +
> + .set_owner = set_owner,
> + .reset_owner = reset_owner,
> +};
> +
> +/**
> + * Called by main to setup callbacks when registering CUSE device.
> + */
> +struct vhost_net_device_ops const *
> +get_virtio_net_callbacks(void)
> +{
> + return &vhost_device_ops;
> +}
> +
> +int rte_vhost_enable_guest_notification(struct virtio_net *dev, uint16_t queue_id, int enable)
> +{
> + if (enable) {
> + RTE_LOG(ERR, VHOST_CONFIG, "guest notification isn't supported.\n");
> + return -1;
> + }
> +
> + dev->virtqueue[queue_id]->used->flags = enable ? 0 : VRING_USED_F_NO_NOTIFY;
> + return 0;
> +}
> +
> +uint64_t rte_vhost_feature_get(void)
> +{
> + return VHOST_FEATURES;
> +}
> +
> +int rte_vhost_feature_disable(uint64_t feature_mask)
> +{
> + VHOST_FEATURES = VHOST_FEATURES & ~feature_mask;
> + return 0;
> +}
> +
> +int rte_vhost_feature_enable(uint64_t feature_mask)
> +{
> + if ((feature_mask & VHOST_SUPPORTED_FEATURES) == feature_mask) {
> + VHOST_FEATURES = VHOST_FEATURES | feature_mask;
> + return 0;
> + }
> + return -1;
> +}
> +
> +
> +/*
> + * Register ops so that we can add/remove device to data core.
> + */
> +int
> +rte_vhost_driver_callback_register(struct virtio_net_device_ops const * const ops)
> +{
> + notify_ops = ops;
> +
> + return 0;
> +}
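
For reference, an application would typically supply its own ops and tune features before starting the CUSE session. A minimal sketch follows; the callback names and bodies are illustrative, and the exact virtio_net_device_ops prototypes are assumptions based on the call sites in set_backend() and destroy_device() above:

    #include <linux/virtio_net.h>
    #include <rte_virtio_net.h>

    /* Illustrative callbacks: track devices as the guest driver comes and goes. */
    static int
    my_new_device(struct virtio_net *dev)
    {
        dev->flags |= VIRTIO_DEV_RUNNING;    /* data cores may now poll it */
        /* ... add dev to this application's device list ... */
        return 0;
    }

    static void
    my_destroy_device(struct virtio_net *dev)
    {
        dev->flags &= ~VIRTIO_DEV_RUNNING;   /* stop data cores using it first */
        /* ... remove dev from the device list ... */
    }

    static const struct virtio_net_device_ops my_ops = {
        .new_device = my_new_device,
        .destroy_device = my_destroy_device,
    };

    /* During start-up, before rte_vhost_driver_session_start(): */
    void
    setup_vhost_callbacks(void)
    {
        /* Optionally narrow the negotiated feature set. */
        rte_vhost_feature_disable(1ULL << VIRTIO_NET_F_MRG_RXBUF);

        rte_vhost_driver_callback_register(&my_ops);
    }
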
> --
> 1.8.1.4
* Re: [dpdk-dev] [PATCH v3] lib/librte_vhost: user space vhost driver library
2014-08-05 15:53 [dpdk-dev] [PATCH v3] lib/librte_vhost: user space vhost driver library Huawei Xie
2014-08-05 15:53 ` [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch Huawei Xie
@ 2014-08-06 3:09 ` Fu, JingguoX
2014-08-07 14:22 ` Cao, Waterman
2 siblings, 0 replies; 9+ messages in thread
From: Fu, JingguoX @ 2014-08-06 3:09 UTC (permalink / raw)
To: dev
Tested-by: Jingguo Fu <jingguox.fu@intel.com>
This series contains one patch and has been tested by Intel.
Please see the test information below:
Host:
Fedora 19 x86_64, Linux Kernel 3.9.0, GCC 4.8.2, Intel Xeon CPU E5-2680 v2 @ 2.80GHz
NIC: Intel Niantic 82599, Intel i350, Intel 82580 and Intel 82576
Guest:
Fedora 16 x86_64, Linux Kernel 3.4.2, GCC 4.6.3, Qemu emulator 1.4.2
This patch was tested with the vhost example built on the user space vhost library patch.
We verified zero-copy and one-copy test cases for functionality and performance.
Total cases: 8, Passed: 8, Failed: 0
* Re: [dpdk-dev] [PATCH v3] lib/librte_vhost: user space vhost driver library
2014-08-05 15:53 [dpdk-dev] [PATCH v3] lib/librte_vhost: user space vhost driver library Huawei Xie
2014-08-05 15:53 ` [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch Huawei Xie
2014-08-06 3:09 ` [dpdk-dev] [PATCH v3] lib/librte_vhost: user space vhost driver library Fu, JingguoX
@ 2014-08-07 14:22 ` Cao, Waterman
2 siblings, 0 replies; 9+ messages in thread
From: Cao, Waterman @ 2014-08-07 14:22 UTC (permalink / raw)
To: Xie, Huawei, dev
Tested-by: Waterman Cao <waterman.cao@intel.com>
This patch facilitates integration with DPDK vSwitch, and is ready to be integrated into DPDK.org.
* Re: [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch.
2014-08-05 15:53 ` [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch Huawei Xie
2014-08-05 15:56 ` Xie, Huawei
@ 2014-08-28 20:15 ` Thomas Monjalon
2014-08-29 2:05 ` Xie, Huawei
1 sibling, 1 reply; 9+ messages in thread
From: Thomas Monjalon @ 2014-08-28 20:15 UTC (permalink / raw)
To: Huawei Xie; +Cc: dev
> config/common_linuxapp | 7 +
> lib/Makefile | 1 +
> lib/librte_vhost/Makefile | 48 ++
> lib/librte_vhost/eventfd_link/Makefile | 39 +
> lib/librte_vhost/eventfd_link/eventfd_link.c | 194 +++++
> lib/librte_vhost/eventfd_link/eventfd_link.h | 40 +
> lib/librte_vhost/rte_virtio_net.h | 192 +++++
> lib/librte_vhost/vhost-net-cdev.c | 363 ++++++++++
> lib/librte_vhost/vhost-net-cdev.h | 109 +++
> lib/librte_vhost/vhost_rxtx.c | 292 ++++++++
> lib/librte_vhost/virtio-net.c | 1002 ++++++++++++++++++++++++++
It would help if you made a first patch to move existing code,
another patch to convert it into a lib, and a last one for
the new example.
So it would show how you transform the old example code and
would be easier to review.
Thanks
--
Thomas
* Re: [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch.
2014-08-28 20:15 ` Thomas Monjalon
@ 2014-08-29 2:05 ` Xie, Huawei
0 siblings, 0 replies; 9+ messages in thread
From: Xie, Huawei @ 2014-08-29 2:05 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas.monjalon@6wind.com]
> Sent: Friday, August 29, 2014 4:16 AM
> To: Xie, Huawei
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to
> facilitate integration with DPDK accelerated vswitch.
>
>
> It would help if you made a first patch to move existing code,
> another patch to convert it into a lib, and a last one for
> the new example.
> So it would show how you transform the old example code and
> would be easier to review.
>
> Thanks
> --
> Thomas
My original order was: a first patch to add the new lib (but not enabled),
another patch to remove the old example, and a third patch to add the
new example.
I am preparing new patch sets, merging the mergeable feature, and will
follow the suggested order.
-huawei
* Re: [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch.
2014-08-20 2:18 [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch loy wolfe
@ 2014-08-20 3:08 ` Xie, Huawei
0 siblings, 0 replies; 9+ messages in thread
From: Xie, Huawei @ 2014-08-20 3:08 UTC (permalink / raw)
To: loy wolfe, dev
Hi:
Support for qemu user space vhost (vhost-user) has been planned.
Thanks
From: loy wolfe [mailto:loywolfe@gmail.com]
Sent: Wednesday, August 20, 2014 10:19 AM
To: dev@dpdk.org; Xie, Huawei
Subject: Re: [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch.
Is there any plan for the DPDK lib to support upstream vhost-user? It is merged in qemu 2.1 and libvirt 1.2.7, along with a vif-vhostuser blueprint in OpenStack. I think this will accelerate adoption of dpdkovs in OpenStack.
* Re: [dpdk-dev] [PATCH v3] lib/librte_vhost: vhost library support to facilitate integration with DPDK accelerated vswitch.
@ 2014-08-20 2:18 loy wolfe
2014-08-20 3:08 ` Xie, Huawei
0 siblings, 1 reply; 9+ messages in thread
From: loy wolfe @ 2014-08-20 2:18 UTC (permalink / raw)
To: dev, huawei.xie
Is there any plan for the DPDK lib to support upstream vhost-user? It is
merged in qemu 2.1 and libvirt 1.2.7, along with a vif-vhostuser blueprint
in OpenStack. I think this will accelerate adoption of dpdkovs in OpenStack.