From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Long, Thomas"
To: "Xie, Huawei" , "dev@dpdk.org"
Thread-Topic: [dpdk-dev] [PATCH v2] lib/librte_vhost: vhost library support to facilitate integration with vswitch.
Date: Fri, 25 Jul 2014 13:31:33 +0000
References: <1405677381-14959-1-git-send-email-huawei.xie@intel.com> <1405677381-14959-2-git-send-email-huawei.xie@intel.com>
In-Reply-To: <1405677381-14959-2-git-send-email-huawei.xie@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] lib/librte_vhost: vhost library support to facilitate integration with vswitch.
List-Id: patches and discussions about DPDK
X-List-Received-Date: Fri, 25 Jul 2014 13:30:58 -0000

Acked-by: Tommy Long

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Huawei Xie
Sent: Friday, July 18, 2014 10:56 AM
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v2] lib/librte_vhost: vhost library support to facilitate integration with vswitch.

Signed-off-by: Huawei Xie
---
 config/common_linuxapp                       |    7 +
 lib/Makefile                                 |    1 +
 lib/librte_vhost/Makefile                    |   48 ++
 lib/librte_vhost/eventfd_link/Makefile       |   39 +
 lib/librte_vhost/eventfd_link/eventfd_link.c |  205 ++++++
 lib/librte_vhost/eventfd_link/eventfd_link.h |   79 ++
 lib/librte_vhost/rte_virtio_net.h            |  192 +++++
 lib/librte_vhost/vhost-net-cdev.c            |  363 ++++++++++
 lib/librte_vhost/vhost-net-cdev.h            |  112 +++
 lib/librte_vhost/vhost_rxtx.c                |  292 ++++++++
 lib/librte_vhost/virtio-net.c                | 1002 ++++++++++++++++++++++++++
 11 files changed, 2340 insertions(+)
 create mode 100644 lib/librte_vhost/Makefile
 create mode 100644 lib/librte_vhost/eventfd_link/Makefile
 create mode 100644 lib/librte_vhost/eventfd_link/eventfd_link.c
 create mode 100644 lib/librte_vhost/eventfd_link/eventfd_link.h
 create mode 100644 lib/librte_vhost/rte_virtio_net.h
 create mode 100644 lib/librte_vhost/vhost-net-cdev.c
 create mode 100644 lib/librte_vhost/vhost-net-cdev.h
 create mode 100644 lib/librte_vhost/vhost_rxtx.c
 create mode 100644 lib/librte_vhost/virtio-net.c

diff --git a/config/common_linuxapp b/config/common_linuxapp
index 7bf5d80..5b58278 100644
--- a/config/common_linuxapp
+++ b/config/common_linuxapp
@@ -390,6 +390,13 @@ CONFIG_RTE_KNI_VHOST_DEBUG_RX=n
 CONFIG_RTE_KNI_VHOST_DEBUG_TX=n
 
 #
+# Compile vhost library
+# fuse, fuse-devel, kernel-modules-extra packages are needed
+#
+CONFIG_RTE_LIBRTE_VHOST=n
+CONFIG_RTE_LIBRTE_VHOST_DEBUG=n
+
+#
 #Compile Xen domain0 support
 #
 CONFIG_RTE_LIBRTE_XEN_DOM0=n
diff --git a/lib/Makefile b/lib/Makefile
index 10c5bb3..007c174 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -60,6 +60,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_METER) += librte_meter
 DIRS-$(CONFIG_RTE_LIBRTE_SCHED) += librte_sched
 DIRS-$(CONFIG_RTE_LIBRTE_KVARGS) += librte_kvargs
 DIRS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += librte_distributor
+DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_PORT) += librte_port
 DIRS-$(CONFIG_RTE_LIBRTE_TABLE) += librte_table
 DIRS-$(CONFIG_RTE_LIBRTE_PIPELINE) += librte_pipeline
diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
new file mode 100644
index 0000000..f79778b
--- /dev/null
+++ b/lib/librte_vhost/Makefile
@@ -0,0 +1,48 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_vhost.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3 -D_FILE_OFFSET_BITS=64 -lfuse
+LDFLAGS += -lfuse
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_VHOST) := vhost-net-cdev.c virtio-net.c vhost_rxtx.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_virtio_net.h
+
+# this lib needs eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_VHOST) += lib/librte_eal lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_vhost/eventfd_link/Makefile b/lib/librte_vhost/eventfd_link/Makefile
new file mode 100644
index 0000000..5fe7297
--- /dev/null
+++ b/lib/librte_vhost/eventfd_link/Makefile
@@ -0,0 +1,39 @@
+# BSD LICENSE
+#
+# Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+obj-m += eventfd_link.o
+
+
+all:
+	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
+
+clean:
+	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
diff --git a/lib/librte_vhost/eventfd_link/eventfd_link.c b/lib/librte_vhost/eventfd_link/eventfd_link.c
new file mode 100644
index 0000000..f7975fa
--- /dev/null
+++ b/lib/librte_vhost/eventfd_link/eventfd_link.c
@@ -0,0 +1,205 @@
+/*-
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ * The full GNU General Public License is included in this distribution
+ * in the file called LICENSE.GPL.
+ *
+ * Contact Information:
+ * Intel Corporation
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "eventfd_link.h"
+
+
+/*
+ * get_files_struct is copied from fs/file.c
+ */
+struct files_struct *
+get_files_struct (struct task_struct *task)
+{
+	struct files_struct *files;
+
+	task_lock (task);
+	files = task->files;
+	if (files)
+		atomic_inc (&files->count);
+	task_unlock (task);
+
+	return files;
+}
+
+/*
+ * put_files_struct is extracted from fs/file.c
+ */
+void
+put_files_struct (struct files_struct *files)
+{
+	if (atomic_dec_and_test (&files->count))
+	{
+		BUG ();
+	}
+}
+
+
+static long
+eventfd_link_ioctl (struct file *f, unsigned int ioctl, unsigned long arg)
+{
+	void __user *argp = (void __user *) arg;
+	struct task_struct *task_target = NULL;
+	struct file *file;
+	struct files_struct *files;
+	struct fdtable *fdt;
+	struct eventfd_copy eventfd_copy;
+
+	switch (ioctl)
+	{
+	case EVENTFD_COPY:
+		if (copy_from_user (&eventfd_copy, argp, sizeof (struct eventfd_copy)))
+			return -EFAULT;
+
+		/*
+		 * Find the task struct for the target pid
+		 */
+		task_target =
+			pid_task (find_vpid (eventfd_copy.target_pid), PIDTYPE_PID);
+		if (task_target == NULL)
+		{
+			printk (KERN_DEBUG "Failed to get mem ctx for target pid\n");
+			return -EFAULT;
+		}
+
+		files = get_files_struct (current);
+		if (files == NULL)
+		{
+			printk (KERN_DEBUG "Failed to get files struct\n");
+			return -EFAULT;
+		}
+
+		rcu_read_lock ();
+		file = fcheck_files (files, eventfd_copy.source_fd);
+		if (file)
+		{
+			if (file->f_mode & FMODE_PATH
+			    || !atomic_long_inc_not_zero (&file->f_count))
+				file = NULL;
+		}
+		rcu_read_unlock ();
+		put_files_struct (files);
+
+		if (file == NULL)
+		{
+			printk (KERN_DEBUG "Failed to get file from source pid\n");
+			return 0;
+		}
+
+		/*
+		 * Release the existing eventfd in the source process
+		 */
+		spin_lock (&files->file_lock);
+		filp_close (file, files);
+		fdt = files_fdtable (files);
+		fdt->fd[eventfd_copy.source_fd] = NULL;
+		spin_unlock (&files->file_lock);
+
+		/*
+		 * Find the file struct associated with the target fd.
+		 */
+
+		files = get_files_struct (task_target);
+		if (files == NULL)
+		{
+			printk (KERN_DEBUG "Failed to get files struct\n");
+			return -EFAULT;
+		}
+
+		rcu_read_lock ();
+		file = fcheck_files (files, eventfd_copy.target_fd);
+		if (file)
+		{
+			if (file->f_mode & FMODE_PATH
+			    || !atomic_long_inc_not_zero (&file->f_count))
+				file = NULL;
+		}
+		rcu_read_unlock ();
+		put_files_struct (files);
+
+		if (file == NULL)
+		{
+			printk (KERN_DEBUG "Failed to get file from target pid\n");
+			return 0;
+		}
+
+
+		/*
+		 * Install the file struct from the target process into the
+		 * file descriptor of the source process.
+		 */
+
+		fd_install (eventfd_copy.source_fd, file);
+
+		return 0;
+
+	default:
+		return -ENOIOCTLCMD;
+	}
+}
+
+static const struct file_operations eventfd_link_fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = eventfd_link_ioctl,
+};
+
+
+static struct miscdevice eventfd_link_misc = {
+	.name = "eventfd-link",
+	.fops = &eventfd_link_fops,
+};
+
+static int __init
+eventfd_link_init (void)
+{
+	return misc_register (&eventfd_link_misc);
+}
+
+module_init (eventfd_link_init);
+
+static void __exit
+eventfd_link_exit (void)
+{
+	misc_deregister (&eventfd_link_misc);
+}
+
+module_exit (eventfd_link_exit);
+
+MODULE_VERSION ("0.0.1");
+MODULE_LICENSE ("GPL v2");
+MODULE_AUTHOR ("Anthony Fee");
+MODULE_DESCRIPTION ("Link eventfd");
+MODULE_ALIAS ("devname:eventfd-link");
diff --git a/lib/librte_vhost/eventfd_link/eventfd_link.h b/lib/librte_vhost/eventfd_link/eventfd_link.h
new file mode 100644
index 0000000..f33c2f8
--- /dev/null
+++ b/lib/librte_vhost/eventfd_link/eventfd_link.h
@@ -0,0 +1,79 @@
+/*-
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ * The full GNU General Public License is included in this distribution
+ * in the file called LICENSE.GPL.
+ *
+ * Contact Information:
+ * Intel Corporation
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+
+#ifndef _EVENTFD_LINK_H_
+#define _EVENTFD_LINK_H_
+
+/*
+ * ioctl to copy an fd entry in the calling process to an fd in a target process
+ */
+#define EVENTFD_COPY 1
+
+/*
+ * arguments for the EVENTFD_COPY ioctl
+ */
+struct eventfd_copy {
+	unsigned target_fd;	// fd in the target pid
+	unsigned source_fd;	// fd in the calling pid
+	pid_t target_pid;	// pid of the target process
+};
+#endif /* _EVENTFD_LINK_H_ */
diff --git a/lib/librte_vhost/rte_virtio_net.h b/lib/librte_vhost/rte_virtio_net.h
new file mode 100644
index 0000000..7a05dab
--- /dev/null
+++ b/lib/librte_vhost/rte_virtio_net.h
@@ -0,0 +1,192 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _VIRTIO_NET_H_
+#define _VIRTIO_NET_H_
+
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+
+#define VIRTIO_DEV_RUNNING 1	/**< Used to indicate that the device is running on a data core. */
+#define VIRTIO_DEV_STOPPED -1	/**< Backend value set by guest. */
+
+/* Enum for virtqueue management. */
+enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
+
+/**
+ * Structure contains variables relevant to RX/TX virtqueues.
+ */
+struct vhost_virtqueue {
+	struct vring_desc *desc;	/**< Descriptor ring. */
+	struct vring_avail *avail;	/**< Available ring. */
+	struct vring_used *used;	/**< Used ring. */
+	uint32_t size;			/**< Size of descriptor ring. */
+	uint32_t backend;		/**< Backend value to determine if device should be started/stopped. */
+	uint16_t vhost_hlen;		/**< Vhost header length (varies depending on RX merge buffers). */
+	volatile uint16_t last_used_idx;	/**< Last index used on the available ring. */
+	volatile uint16_t last_used_idx_res;	/**< Used for multiple devices reserving buffers. */
+	eventfd_t callfd;		/**< Currently unused as polling mode is enabled. */
+	eventfd_t kickfd;		/**< Used to notify the guest (trigger interrupt). */
+} __rte_cache_aligned;
+
+/**
+ * Information relating to memory regions including offsets to
+ * addresses in QEMU's memory file.
+ */
+struct virtio_memory_regions {
+	uint64_t guest_phys_address;	/**< Base guest physical address of region. */
+	uint64_t guest_phys_address_end;	/**< End guest physical address of region. */
+	uint64_t memory_size;		/**< Size of region. */
+	uint64_t userspace_address;	/**< Base userspace address of region. */
+	uint64_t address_offset;	/**< Offset of region for address translation. */
+};
+
+
+/**
+ * Memory structure includes region and mapping information.
+ */
+struct virtio_memory {
+	uint64_t base_address;		/**< Base QEMU userspace address of the memory file. */
+	uint64_t mapped_address;	/**< Mapped address of memory file base in our application's memory space. */
+	uint64_t mapped_size;		/**< Total size of memory file. */
+	uint32_t nregions;		/**< Number of memory regions. */
+	struct virtio_memory_regions regions[0];	/**< Memory region information. */
+};
+
+/**
+ * Device structure contains all configuration information relating to the device.
+ */
+struct virtio_net {
+	struct vhost_virtqueue *virtqueue[VIRTIO_QNUM];	/**< Contains all virtqueue information. */
+	struct virtio_memory *mem;	/**< QEMU memory and memory region information. */
+	uint64_t features;	/**< Negotiated feature set. */
+	uint64_t device_fh;	/**< Device identifier. */
+	uint32_t flags;		/**< Device flags. Only used to check if device is running on data core. */
+	void *priv;
+} __rte_cache_aligned;
+
+/**
+ * Device operations to add/remove device.
+ */
+struct virtio_net_device_ops {
+	int (*new_device)(struct virtio_net *);	/**< Add device. */
+	void (*destroy_device)(struct virtio_net *);	/**< Remove device. */
+};
+
+
+static inline uint16_t __attribute__((always_inline))
+rte_vring_available_entries(struct virtio_net *dev, uint16_t queue_id)
+{
+	struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
+	return *(volatile uint16_t *)&vq->avail->idx - vq->last_used_idx_res;
+}
+
+/**
+ * Function to convert guest physical addresses to vhost virtual addresses.
+ * This is used to convert guest virtio buffer addresses.
+ */
+static inline uint64_t __attribute__((always_inline))
+gpa_to_vva(struct virtio_net *dev, uint64_t guest_pa)
+{
+	struct virtio_memory_regions *region;
+	uint32_t regionidx;
+	uint64_t vhost_va = 0;
+
+	for (regionidx = 0; regionidx < dev->mem->nregions; regionidx++) {
+		region = &dev->mem->regions[regionidx];
+		if ((guest_pa >= region->guest_phys_address) &&
+			(guest_pa <= region->guest_phys_address_end)) {
+			vhost_va = region->address_offset + guest_pa;
+			break;
+		}
+	}
+	return vhost_va;
+}
+
+/**
+ * Disable features in feature_mask. Returns 0 on success.
+ */
+int rte_vhost_feature_disable(uint64_t feature_mask);
+
+/**
+ * Enable features in feature_mask. Returns 0 on success.
+ */
+int rte_vhost_feature_enable(uint64_t feature_mask);
+
+/* Returns currently supported vhost features */
+uint64_t rte_vhost_feature_get(void);
+
+int rte_vhost_enable_guest_notification(struct virtio_net *dev, uint16_t queue_id, int enable);
+
+/* Register vhost driver. dev_name could be different for multiple instance support. */
+int rte_vhost_driver_register(const char *dev_name);
+
+/* Register callbacks. */
+int rte_vhost_driver_callback_register(struct virtio_net_device_ops const * const);
+
+int rte_vhost_driver_session_start(void);
+
+/**
+ * This function adds buffers to the virtio device's RX virtqueue. Buffers can
+ * be received from the physical port or from another virtual device. A packet
+ * count is returned to indicate the number of packets that were successfully
+ * added to the RX queue.
+ * @param queue_id
+ *  virtio queue index in mq case
+ * @return
+ *  num of packets enqueued
+ */
+uint32_t rte_vhost_enqueue_burst(struct virtio_net *dev, uint16_t queue_id,
+	struct rte_mbuf **pkts, uint32_t count);
+
+/**
+ * This function gets guest buffers from the virtio device TX virtqueue,
+ * constructs host mbufs, copies guest buffer content to host mbufs and
+ * stores them in pkts to be processed.
+ * @param mbuf_pool
+ *  mbuf_pool where host mbuf is allocated.
+ * @param queue_id
+ *  virtio queue index in mq case.
+ * @return
+ *  num of packets dequeued
+ */
+uint32_t rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
+	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint32_t count);
+
+#endif /* _VIRTIO_NET_H_ */
diff --git a/lib/librte_vhost/vhost-net-cdev.c b/lib/librte_vhost/vhost-net-cdev.c
new file mode 100644
index 0000000..1dfe918
--- /dev/null
+++ b/lib/librte_vhost/vhost-net-cdev.c
@@ -0,0 +1,363 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+
+#include "vhost-net-cdev.h"
+
+#define FUSE_OPT_DUMMY	"\0\0"
+#define FUSE_OPT_FORE	"-f\0\0"
+#define FUSE_OPT_NOMULTI	"-s\0\0"
+
+static const uint32_t default_major = 231;
+static const uint32_t default_minor = 1;
+static const char cuse_device_name[] = "/dev/cuse";
+static const char default_cdev[] = "vhost-net";
+
+static struct fuse_session *session;
+static struct vhost_net_device_ops const *ops;
+
+/**
+ * Returns vhost_device_ctx from given fuse_req_t. The index is populated later when
+ * the device is added to the device linked list.
+ */
+static struct vhost_device_ctx
+fuse_req_to_vhost_ctx(fuse_req_t req, struct fuse_file_info *fi)
+{
+	struct vhost_device_ctx ctx;
+	struct fuse_ctx const *const req_ctx = fuse_req_ctx(req);
+
+	ctx.pid = req_ctx->pid;
+	ctx.fh = fi->fh;
+
+	return ctx;
+}
+
+/**
+ * When the device is created in QEMU it gets initialised here and added to the device linked list.
+ */
+static void
+vhost_net_open(fuse_req_t req, struct fuse_file_info *fi)
+{
+	struct vhost_device_ctx ctx = fuse_req_to_vhost_ctx(req, fi);
+	int err = 0;
+
+	err = ops->new_device(ctx);
+	if (err == -1) {
+		fuse_reply_err(req, EPERM);
+		return;
+	}
+
+	fi->fh = err;
+
+	RTE_LOG(INFO, VHOST_CONFIG, "(%"PRIu64") Device configuration started\n", fi->fh);
+	fuse_reply_open(req, fi);
+}
+
+/*
+ * When QEMU is shutdown or killed the device gets released.
+ */
+static void
+vhost_net_release(fuse_req_t req, struct fuse_file_info *fi)
+{
+	int err = 0;
+	struct vhost_device_ctx ctx = fuse_req_to_vhost_ctx(req, fi);
+
+	ops->destroy_device(ctx);
+	RTE_LOG(INFO, VHOST_CONFIG, "(%"PRIu64") Device released\n", ctx.fh);
+	fuse_reply_err(req, err);
+}
+
+/*
+ * Boilerplate code for CUSE IOCTL
+ * Implicit arguments: ctx, req, result.
+ */
+#define VHOST_IOCTL(func) do {	\
+	result = (func)(ctx);	\
+	fuse_reply_ioctl(req, result, NULL, 0);	\
+} while (0)
+
+/*
+ * Boilerplate IOCTL RETRY
+ * Implicit arguments: req.
+ */
+#define VHOST_IOCTL_RETRY(size_r, size_w) do {	\
+	struct iovec iov_r = { arg, (size_r) };	\
+	struct iovec iov_w = { arg, (size_w) };	\
+	fuse_reply_ioctl_retry(req, &iov_r, (size_r) ? 1 : 0, &iov_w, (size_w) ? 1 : 0);	\
+} while (0)
+
+/*
+ * Boilerplate code for CUSE Read IOCTL
+ * Implicit arguments: ctx, req, result, in_bufsz, in_buf.
+ */
+#define VHOST_IOCTL_R(type, var, func) do {	\
+	if (!in_bufsz) {	\
+		VHOST_IOCTL_RETRY(sizeof(type), 0);	\
+	} else {	\
+		(var) = *(const type *)in_buf;	\
+		result = func(ctx, &(var));	\
+		fuse_reply_ioctl(req, result, NULL, 0);	\
+	}	\
+} while (0)
+
+/*
+ * Boilerplate code for CUSE Write IOCTL
+ * Implicit arguments: ctx, req, result, out_bufsz.
+ */
+#define VHOST_IOCTL_W(type, var, func) do {	\
+	if (!out_bufsz) {	\
+		VHOST_IOCTL_RETRY(0, sizeof(type));	\
+	} else {	\
+		result = (func)(ctx, &(var));	\
+		fuse_reply_ioctl(req, result, &(var), sizeof(type));	\
+	}	\
+} while (0)
+
+/*
+ * Boilerplate code for CUSE Read/Write IOCTL
+ * Implicit arguments: ctx, req, result, in_bufsz, in_buf.
+ */
+#define VHOST_IOCTL_RW(type1, var1, type2, var2, func) do {	\
+	if (!in_bufsz) {	\
+		VHOST_IOCTL_RETRY(sizeof(type1), sizeof(type2));	\
+	} else {	\
+		(var1) = *(const type1 *) (in_buf);	\
+		result = (func)(ctx, (var1), &(var2));	\
+		fuse_reply_ioctl(req, result, &(var2), sizeof(type2));	\
+	}	\
+} while (0)
+
+/**
+ * The IOCTLs are handled using CUSE/FUSE in userspace. Depending on
+ * the type of IOCTL a buffer is requested to read or to write. This
+ * request is handled by FUSE and the buffer is then given to CUSE.
+ */ +static void +vhost_net_ioctl(fuse_req_t req, int cmd, void *arg, + struct fuse_file_info *fi, __rte_unused unsigned flags, + const void *in_buf, size_t in_bufsz, size_t out_bufsz) +{ + struct vhost_device_ctx ctx =3D fuse_req_to_vhost_ctx(req, fi); + struct vhost_vring_file file; + struct vhost_vring_state state; + struct vhost_vring_addr addr; + uint64_t features; + uint32_t index; + int result =3D 0; + + switch (cmd) { + + case VHOST_NET_SET_BACKEND: + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_NET_SET_BACKEND\n", ct= x.fh); + VHOST_IOCTL_R(struct vhost_vring_file, file, ops->set_backend); + break; + + case VHOST_GET_FEATURES: + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_GET_FEATURES\n", ctx.f= h); + VHOST_IOCTL_W(uint64_t, features, ops->get_features); + break; + + case VHOST_SET_FEATURES: + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_FEATURES\n", ctx.f= h); + VHOST_IOCTL_R(uint64_t, features, ops->set_features); + break; + + case VHOST_RESET_OWNER: + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_RESET_OWNER\n", ctx.fh= ); + VHOST_IOCTL(ops->reset_owner); + break; + + case VHOST_SET_OWNER: + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_OWNER\n", ctx.fh); + VHOST_IOCTL(ops->set_owner); + break; + + case VHOST_SET_MEM_TABLE: + /*TODO fix race condition.*/ + LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_MEM_TABLE\n", ctx.= fh); + static struct vhost_memory mem_temp; + + switch (in_bufsz) { + case 0: + VHOST_IOCTL_RETRY(sizeof(struct vhost_memory), 0); + break; + + case sizeof(struct vhost_memory): + mem_temp =3D *(const struct vhost_memory *) in_buf; + + if (mem_temp.nregions > 0) { + VHOST_IOCTL_RETRY(sizeof(struct vhost_memory) + (sizeof(struct vhost_m= emory_region) * mem_temp.nregions), 0); + } else { + result =3D -1; + fuse_reply_ioctl(req, result, NULL, 0); + } + break; + + default: + result =3D ops->set_mem_table(ctx, in_buf, mem_temp.nregions); + if (result) + fuse_reply_err(req, EINVAL); + else + 
fuse_reply_ioctl(req, result, NULL, 0);
+		}
+		break;
+
+	case VHOST_SET_VRING_NUM:
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_VRING_NUM\n", ctx.fh);
+		VHOST_IOCTL_R(struct vhost_vring_state, state, ops->set_vring_num);
+		break;
+
+	case VHOST_SET_VRING_BASE:
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_VRING_BASE\n", ctx.fh);
+		VHOST_IOCTL_R(struct vhost_vring_state, state, ops->set_vring_base);
+		break;
+
+	case VHOST_GET_VRING_BASE:
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_GET_VRING_BASE\n", ctx.fh);
+		VHOST_IOCTL_RW(uint32_t, index, struct vhost_vring_state, state, ops->get_vring_base);
+		break;
+
+	case VHOST_SET_VRING_ADDR:
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_VRING_ADDR\n", ctx.fh);
+		VHOST_IOCTL_R(struct vhost_vring_addr, addr, ops->set_vring_addr);
+		break;
+
+	case VHOST_SET_VRING_KICK:
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_VRING_KICK\n", ctx.fh);
+		VHOST_IOCTL_R(struct vhost_vring_file, file, ops->set_vring_kick);
+		break;
+
+	case VHOST_SET_VRING_CALL:
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: VHOST_SET_VRING_CALL\n", ctx.fh);
+		VHOST_IOCTL_R(struct vhost_vring_file, file, ops->set_vring_call);
+		break;
+
+	default:
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") IOCTL: DOES NOT EXIST\n", ctx.fh);
+		result = -1;
+		fuse_reply_ioctl(req, result, NULL, 0);
+	}
+
+	if (result < 0)
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: FAIL\n", ctx.fh);
+	else
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") IOCTL: SUCCESS\n", ctx.fh);
+}
+
+/**
+ * Structure populated with the open, release and ioctl function pointers
+ * handled by CUSE.
+ */
+static const struct cuse_lowlevel_ops vhost_net_ops = {
+	.open		= vhost_net_open,
+	.release	= vhost_net_release,
+	.ioctl		= vhost_net_ioctl,
+};
+
+/**
+ * cuse_info is populated and used to register the CUSE device.
+ * vhost_net_device_ops are also passed when the device is registered in main.c.
+ */
+int
+rte_vhost_driver_register(const char *dev_name)
+{
+	struct cuse_info cuse_info;
+	char device_name[PATH_MAX] = "";
+	char char_device_name[PATH_MAX] = "";
+	const char *device_argv[] = { device_name };
+
+	char fuse_opt_dummy[] = FUSE_OPT_DUMMY;
+	char fuse_opt_fore[] = FUSE_OPT_FORE;
+	char fuse_opt_nomulti[] = FUSE_OPT_NOMULTI;
+	char *fuse_argv[] = {fuse_opt_dummy, fuse_opt_fore, fuse_opt_nomulti};
+
+	if (access(cuse_device_name, R_OK | W_OK) < 0) {
+		RTE_LOG(ERR, VHOST_CONFIG, "Character device %s cannot be accessed; it may not exist\n", cuse_device_name);
+		return -1;
+	}
+
+	/*
+	 * The device name is created. It is passed to QEMU so that it can
+	 * register the device with our application. dev_name allows multiple
+	 * instances of userspace vhost, to which devices can then be added
+	 * separately.
+	 */
+	snprintf(device_name, PATH_MAX, "DEVNAME=%s", dev_name);
+	snprintf(char_device_name, PATH_MAX, "/dev/%s", dev_name);
+
+	/* Check if the device already exists. */
+	if (access(char_device_name, F_OK) != -1) {
+		RTE_LOG(ERR, VHOST_CONFIG, "Character device %s already exists\n", char_device_name);
+		return -1;
+	}
+
+	memset(&cuse_info, 0, sizeof(cuse_info));
+	cuse_info.dev_major = default_major;
+	cuse_info.dev_minor = default_minor;
+	cuse_info.dev_info_argc = 1;
+	cuse_info.dev_info_argv = device_argv;
+	cuse_info.flags = CUSE_UNRESTRICTED_IOCTL;
+
+	ops = get_virtio_net_callbacks();
+
+	session = cuse_lowlevel_setup(3, fuse_argv,
+			&cuse_info, &vhost_net_ops, 0, NULL);
+	if (session == NULL)
+		return -1;
+
+	return 0;
+}
+
+/**
+ * The CUSE session is launched, allowing the application to receive open,
+ * release and ioctl calls.
+ */ +int +rte_vhost_driver_session_start(void) +{ + fuse_session_loop(session); + + return 0; +} diff --git a/lib/librte_vhost/vhost-net-cdev.h b/lib/librte_vhost/vhost-net= -cdev.h new file mode 100644 index 0000000..ecf35fd --- /dev/null +++ b/lib/librte_vhost/vhost-net-cdev.h @@ -0,0 +1,112 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+/* Macros for printing using RTE_LOG */
+#define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
+#define RTE_LOGTYPE_VHOST_DATA   RTE_LOGTYPE_USER1
+
+#ifdef RTE_LIBRTE_VHOST_DEBUG
+#define VHOST_MAX_PRINT_BUFF 6072
+#define LOG_LEVEL RTE_LOG_DEBUG
+#define LOG_DEBUG(log_type, fmt, args...) do {	\
+	RTE_LOG(DEBUG, log_type, fmt, ##args);	\
+} while (0)
+#define VHOST_PRINT_PACKET(device, addr, size, header) do {	\
+	char *pkt_addr = (char *)(addr);	\
+	unsigned int index;	\
+	char packet[VHOST_MAX_PRINT_BUFF];	\
+	\
+	if ((header))	\
+		snprintf(packet, VHOST_MAX_PRINT_BUFF, "(%"PRIu64") Header size %d: ", (device->device_fh), (size));	\
+	else	\
+		snprintf(packet, VHOST_MAX_PRINT_BUFF, "(%"PRIu64") Packet size %d: ", (device->device_fh), (size));	\
+	for (index = 0; index < (size); index++) {	\
+		snprintf(packet + strnlen(packet, VHOST_MAX_PRINT_BUFF), VHOST_MAX_PRINT_BUFF - strnlen(packet, VHOST_MAX_PRINT_BUFF),	\
+			"%02hhx ", pkt_addr[index]);	\
+	}	\
+	snprintf(packet + strnlen(packet, VHOST_MAX_PRINT_BUFF), VHOST_MAX_PRINT_BUFF - strnlen(packet, VHOST_MAX_PRINT_BUFF), "\n");	\
+	\
+	LOG_DEBUG(VHOST_DATA, "%s", packet);	\
+} while (0)
+#else
+#define LOG_LEVEL RTE_LOG_INFO
+#define LOG_DEBUG(log_type, fmt, args...) do {} while (0)
+#define VHOST_PRINT_PACKET(device, addr, size, header) do {} while (0)
+#endif
+
+/**
+ * Structure used to identify device context.
+ */
+struct vhost_device_ctx {
+	pid_t pid;	/* PID of the process calling the IOCTL. */
+	uint64_t fh;	/* Populated with fi->fh to track the device index. */
+};
+
+/**
+ * Function pointers, defined in virtio-net.c, that are called in CUSE
+ * context to configure devices.
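A side note on the hex-dump macro: it recomputes strnlen() for every byte, which is O(n²) in the packet size. A standalone sketch that tracks the running length instead looks like this (format_packet and its prefix text are invented for illustration, not the macro's exact output):

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical function version of the VHOST_PRINT_PACKET body: build one
 * log line with the packet bytes appended as two-digit hex. */
static void
format_packet(char *out, size_t outsz, const unsigned char *pkt, unsigned int size)
{
	int n = snprintf(out, outsz, "Packet size %u: ", size);
	size_t len = (n < 0) ? 0 : (size_t)n;

	/* Each byte appends exactly 3 chars ("xx "); stop before overflow. */
	for (unsigned int i = 0; i < size && len + 3 < outsz; i++) {
		snprintf(out + len, outsz - len, "%02hhx ", pkt[i]);
		len += 3;
	}
}
```

Tracking `len` avoids the repeated scans; for a debug-only path the macro's simpler form is defensible, but it is worth knowing the cost.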
+ */ +struct vhost_net_device_ops { + int (* new_device)(struct vhost_device_ctx); + void (* destroy_device)(struct vhost_device_ctx); + + int (* get_features)(struct vhost_device_ctx, uint64_t *); + int (* set_features)(struct vhost_device_ctx, uint64_t *); + + int (* set_mem_table)(struct vhost_device_ctx, const void *, uint32_t); + + int (* set_vring_num)(struct vhost_device_ctx, struct vhost_vring_state *= ); + int (* set_vring_addr)(struct vhost_device_ctx, struct vhost_vring_addr *= ); + int (* set_vring_base)(struct vhost_device_ctx, struct vhost_vring_state = *); + int (* get_vring_base)(struct vhost_device_ctx, uint32_t, struct vhost_vr= ing_state *); + + int (* set_vring_kick)(struct vhost_device_ctx, struct vhost_vring_file *= ); + int (* set_vring_call)(struct vhost_device_ctx, struct vhost_vring_file *= ); + + int (* set_backend)(struct vhost_device_ctx, struct vhost_vring_file *); + + int (* set_owner)(struct vhost_device_ctx); + int (* reset_owner)(struct vhost_device_ctx); +}; + + +struct vhost_net_device_ops const * get_virtio_net_callbacks(void); diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c new file mode 100644 index 0000000..d25457b --- /dev/null +++ b/lib/librte_vhost/vhost_rxtx.c @@ -0,0 +1,292 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. 
+ * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include +#include + +#include +#include +#include + +#include "vhost-net-cdev.h" + +#define VHOST_MAX_PKT_BURST 64 +#define VHOST_MAX_MRG_PKT_BURST 64 + + +uint32_t +rte_vhost_enqueue_burst(struct virtio_net *dev, uint16_t queue_id, struct = rte_mbuf **pkts, uint32_t count) +{ + struct vhost_virtqueue *vq; + struct vring_desc *desc; + struct rte_mbuf *buff; + /* The virtio_hdr is initialised to 0. 
*/ + struct virtio_net_hdr_mrg_rxbuf virtio_hdr =3D {{0, 0, 0, 0, 0, 0}, 0}; + uint64_t buff_addr =3D 0; + uint64_t buff_hdr_addr =3D 0; + uint32_t head[VHOST_MAX_PKT_BURST], packet_len =3D 0; + uint32_t head_idx, packet_success =3D 0; + uint32_t mergeable, mrg_count =3D 0; + uint16_t avail_idx, res_cur_idx; + uint16_t res_base_idx, res_end_idx; + uint16_t free_entries; + uint8_t success =3D 0; + + LOG_DEBUG(VHOST_DATA, "(%"PRIu64") %s()\n", dev->device_fh, __func__); + if (unlikely(queue_id !=3D VIRTIO_RXQ)) { + LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n"); + return 0; + } + + vq =3D dev->virtqueue[VIRTIO_RXQ]; + count =3D (count > VHOST_MAX_PKT_BURST) ? VHOST_MAX_PKT_BURST : count; + /* As many data cores may want access to available buffers, they need to = be reserved. */ + do { + res_base_idx =3D vq->last_used_idx_res; + avail_idx =3D *((volatile uint16_t *)&vq->avail->idx); + + free_entries =3D (avail_idx - res_base_idx); + /*check that we have enough buffers*/ + if (unlikely(count > free_entries)) + count =3D free_entries; + + if (count =3D=3D 0) + return 0; + + res_end_idx =3D res_base_idx + count; + /* vq->last_used_idx_res is atomically updated. */ + /* TODO: Allow to disable cmpset if no concurrency in application */ + success =3D rte_atomic16_cmpset(&vq->last_used_idx_res, + res_base_idx, res_end_idx); + /* If there is contention here and failed, try again. */ + } while (unlikely(success =3D=3D 0)); + res_cur_idx =3D res_base_idx; + LOG_DEBUG(VHOST_DATA, "(%"PRIu64") Current Index %d| End Index %d\n", + dev->device_fh, + res_cur_idx, res_end_idx); + + /* Prefetch available ring to retrieve indexes. */ + rte_prefetch0(&vq->avail->ring[res_cur_idx & (vq->size - 1)]); + + /* Check if the VIRTIO_NET_F_MRG_RXBUF feature is enabled. */ + mergeable =3D dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF); + + /* Retrieve all of the head indexes first to avoid caching issues. 
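The reservation loop above — claiming a window of the available ring with an atomic compare-and-set so that multiple enqueuing cores do not collide — can be sketched standalone. This is a simplified model using C11 atomics rather than rte_atomic16_cmpset (reserve_entries is an invented name):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Claim [base, base + count) of the available ring.  free_entries relies on
 * unsigned 16-bit wrap-around, exactly like avail_idx - res_base_idx in the
 * patch.  On CAS failure another core won the race, so recompute and retry. */
static uint32_t
reserve_entries(_Atomic uint16_t *last_used_idx_res, uint16_t avail_idx, uint32_t count)
{
	uint16_t base, end;

	do {
		base = atomic_load(last_used_idx_res);
		uint16_t free_entries = (uint16_t)(avail_idx - base);
		if (count > free_entries)
			count = free_entries;
		if (count == 0)
			return 0;
		end = (uint16_t)(base + count);
	} while (!atomic_compare_exchange_weak(last_used_idx_res, &base, end));

	return count;
}
```

The later "wait until it's our turn" spin on last_used_idx is what serialises the actual used-ring updates among the cores that each reserved a window.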
*/ + for (head_idx =3D 0; head_idx < count; head_idx++) + head[head_idx] =3D vq->avail->ring[(res_cur_idx + head_idx) & (vq->size = - 1)]; + + /*Prefetch descriptor index. */ + rte_prefetch0(&vq->desc[head[packet_success]]); + + while (res_cur_idx !=3D res_end_idx) { + /* Get descriptor from available ring */ + desc =3D &vq->desc[head[packet_success]]; + + buff =3D pkts[packet_success]; + + /* Convert from gpa to vva (guest physical addr -> vhost virtual addr) *= / + buff_addr =3D gpa_to_vva(dev, desc->addr); + /* Prefetch buffer address. */ + rte_prefetch0((void *)(uintptr_t)buff_addr); + + if (mergeable && (mrg_count !=3D 0)) { + desc->len =3D packet_len =3D rte_pktmbuf_data_len(buff); + } else { + /* Copy virtio_hdr to packet and increment buffer address */ + buff_hdr_addr =3D buff_addr; + packet_len =3D rte_pktmbuf_data_len(buff) + vq->vhost_hlen; + + /* + * If the descriptors are chained the header and data are placed in + * separate buffers. + */ + if (desc->flags & VRING_DESC_F_NEXT) { + desc->len =3D vq->vhost_hlen; + desc =3D &vq->desc[desc->next]; + /* Buffer address translation. */ + buff_addr =3D gpa_to_vva(dev, desc->addr); + desc->len =3D rte_pktmbuf_data_len(buff); + } else { + buff_addr +=3D vq->vhost_hlen; + desc->len =3D packet_len; + } + } + + VHOST_PRINT_PACKET(dev, (uintptr_t)buff_addr, rte_pktmbuf_data_len(buff)= , 0); + + /* Update used ring with desc information */ + vq->used->ring[res_cur_idx & (vq->size - 1)].id =3D head[packet_success]= ; + vq->used->ring[res_cur_idx & (vq->size - 1)].len =3D packet_len; + + /* Copy mbuf data to buffer */ + /* TODO fixme for sg mbuf and the case that desc couldn't hold the mbuf = data */ + rte_memcpy((void *)(uintptr_t)buff_addr, (const void *)buff->pkt.data, r= te_pktmbuf_data_len(buff)); + + res_cur_idx++; + packet_success++; + + /* If mergeable is disabled then a header is required per buffer. 
*/ + if (!mergeable) { + rte_memcpy((void *)(uintptr_t)buff_hdr_addr, (const void *)&virtio_hdr,= vq->vhost_hlen); + VHOST_PRINT_PACKET(dev, (uintptr_t)buff_hdr_addr, vq->vhost_hlen, 1); + } else { + mrg_count++; + /* Merge buffer can only handle so many buffers at a time. Tell the gue= st if this limit is reached. */ + if ((mrg_count =3D=3D VHOST_MAX_MRG_PKT_BURST) || (res_cur_idx =3D=3D r= es_end_idx)) { + virtio_hdr.num_buffers =3D mrg_count; + LOG_DEBUG(VHOST_DATA, "(%"PRIu64") RX: Num merge buffers %d\n", dev->d= evice_fh, virtio_hdr.num_buffers); + rte_memcpy((void *)(uintptr_t)buff_hdr_addr, (const void *)&virtio_hdr= , vq->vhost_hlen); + VHOST_PRINT_PACKET(dev, (uintptr_t)buff_hdr_addr, vq->vhost_hlen, 1); + mrg_count =3D 0; + } + } + if (res_cur_idx < res_end_idx) { + /* Prefetch descriptor index. */ + rte_prefetch0(&vq->desc[head[packet_success]]); + } + } + + rte_compiler_barrier(); + + /* Wait until it's our turn to add our buffer to the used ring. */ + while (unlikely(vq->last_used_idx !=3D res_base_idx)) + rte_pause(); + + *(volatile uint16_t *)&vq->used->idx +=3D count; + vq->last_used_idx =3D res_end_idx; + + /* Kick the guest if necessary. */ + if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT)) + eventfd_write((int)vq->kickfd, 1); + return count; +} + + +uint32_t +rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id, struct = rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint32_t count) +{ + struct rte_mbuf *mbuf; + struct vhost_virtqueue *vq; + struct vring_desc *desc; + uint64_t buff_addr =3D 0; + uint32_t head[VHOST_MAX_PKT_BURST]; + uint32_t used_idx; + uint32_t i; + uint16_t free_entries, packet_success =3D 0; + uint16_t avail_idx; + + if (unlikely(queue_id !=3D VIRTIO_TXQ)) { + LOG_DEBUG(VHOST_DATA, "mq isn't supported in this version.\n"); + return 0; + } + + vq =3D dev->virtqueue[VIRTIO_TXQ]; + avail_idx =3D *((volatile uint16_t *)&vq->avail->idx); + + /* If there are no available buffers then return. 
*/ + if (vq->last_used_idx =3D=3D avail_idx) + return 0; + + LOG_DEBUG(VHOST_DATA, "(%"PRIu64") virtio_dev_tx()\n", dev->device_fh); + + /* Prefetch available ring to retrieve head indexes. */ + rte_prefetch0(&vq->avail->ring[vq->last_used_idx & (vq->size - 1)]); + + /*get the number of free entries in the ring*/ + free_entries =3D (avail_idx - vq->last_used_idx); + + if (free_entries > count) + free_entries =3D count; + /* Limit to MAX_PKT_BURST. */ + if (free_entries > VHOST_MAX_PKT_BURST) + free_entries =3D VHOST_MAX_PKT_BURST; + + LOG_DEBUG(VHOST_DATA, "(%"PRIu64") Buffers available %d\n", dev->device_f= h, free_entries); + /* Retrieve all of the head indexes first to avoid caching issues. */ + for (i =3D 0; i < free_entries; i++) + head[i] =3D vq->avail->ring[(vq->last_used_idx + i) & (vq->size - 1)]; + + /* Prefetch descriptor index. */ + rte_prefetch0(&vq->desc[head[packet_success]]); + rte_prefetch0(&vq->used->ring[vq->last_used_idx & (vq->size - 1)]); + + while (packet_success < free_entries) { + desc =3D &vq->desc[head[packet_success]]; + + /* Discard first buffer as it is the virtio header */ + desc =3D &vq->desc[desc->next]; + + /* Buffer address translation. */ + buff_addr =3D gpa_to_vva(dev, desc->addr); + /* Prefetch buffer address. */ + rte_prefetch0((void *)(uintptr_t)buff_addr); + + used_idx =3D vq->last_used_idx & (vq->size - 1); + + if (packet_success < (free_entries - 1)) { + /* Prefetch descriptor index. */ + rte_prefetch0(&vq->desc[head[packet_success+1]]); + rte_prefetch0(&vq->used->ring[(used_idx + 1) & (vq->size - 1)]); + } + + /* Update used index buffer information. 
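The index arithmetic used throughout both burst functions deserves a note: ring indices are free-running uint16_t counters, slots are obtained by masking (so the ring size must be a power of two), and free-entry counts rely on unsigned wrap-around. A minimal sketch (ring_slot and tx_burst_size are invented names):

```c
#include <stdint.h>

/* Slot lookup: idx & (size - 1), valid only for power-of-two sizes. */
static uint16_t
ring_slot(uint16_t idx, uint16_t size)
{
	return idx & (size - 1);
}

/* Free entries by unsigned subtraction: correct even after avail_idx has
 * wrapped past last_used_idx (modulo 2^16), then clamped to the caller's
 * count and the burst limit. */
static uint16_t
tx_burst_size(uint16_t avail_idx, uint16_t last_used_idx,
	      uint16_t count, uint16_t max_burst)
{
	uint16_t free_entries = (uint16_t)(avail_idx - last_used_idx);

	if (free_entries > count)
		free_entries = count;
	if (free_entries > max_burst)
		free_entries = max_burst;
	return free_entries;
}
```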
*/ + vq->used->ring[used_idx].id =3D head[packet_success]; + vq->used->ring[used_idx].len =3D 0; + + mbuf =3D rte_pktmbuf_alloc(mbuf_pool); + if (unlikely(mbuf =3D=3D NULL)) { + RTE_LOG(ERR, VHOST_DATA, "Failed to allocate memory for mbuf.\n"); + return packet_success; + } + mbuf->pkt.data_len =3D desc->len; + mbuf->pkt.pkt_len =3D mbuf->pkt.data_len; + + rte_memcpy((void *) mbuf->pkt.data, + (const void *) buff_addr, mbuf->pkt.data_len); + + pkts[packet_success] =3D mbuf; + + VHOST_PRINT_PACKET(dev, (uintptr_t)buff_addr, desc->len, 0); + + vq->last_used_idx++; + packet_success++; + } + + rte_compiler_barrier(); + vq->used->idx +=3D packet_success; + /* Kick guest if required. */ + if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT)) + eventfd_write((int)vq->kickfd, 1); + + return packet_success; +} diff --git a/lib/librte_vhost/virtio-net.c b/lib/librte_vhost/virtio-net.c new file mode 100644 index 0000000..ccda8e9 --- /dev/null +++ b/lib/librte_vhost/virtio-net.c @@ -0,0 +1,1002 @@ +/*- + * BSD LICENSE + * + * Copyright(c) 2010-2014 Intel Corporation. All rights reserved. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * + * * Redistributions of source code must retain the above copyright + * notice, this list of conditions and the following disclaimer. + * * Redistributions in binary form must reproduce the above copyright + * notice, this list of conditions and the following disclaimer in + * the documentation and/or other materials provided with the + * distribution. + * * Neither the name of Intel Corporation nor the names of its + * contributors may be used to endorse or promote products derived + * from this software without specific prior written permission. 
+ * + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT + * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "vhost-net-cdev.h" +#include "eventfd_link/eventfd_link.h" + +/** + * Device linked list structure for configuration. + */ +struct virtio_net_config_ll { + struct virtio_net dev; /* Virtio device. */ + struct virtio_net_config_ll *next; /* Next entry on linked list. */ +}; + +static const char eventfd_cdev[] =3D "/dev/eventfd-link"; + +/* device ops to add/remove device to data core. */ +static struct virtio_net_device_ops const *notify_ops; +/* Root address of the linked list in the configuration core. */ +static struct virtio_net_config_ll *ll_root; + +/* Features supported by this library. */ +#define VHOST_SUPPORTED_FEATURES (1ULL << VIRTIO_NET_F_MRG_RXBUF) +static uint64_t VHOST_FEATURES =3D VHOST_SUPPORTED_FEATURES; + +/* Line size for reading maps file. */ +static const uint32_t BUFSIZE =3D PATH_MAX; + +/* Size of prot char array in procmap. */ +#define PROT_SZ 5 + +/* Number of elements in procmap struct. */ +#define PROCMAP_SZ 8 + +/* Structure containing information gathered from maps file. 
*/ +struct procmap { + uint64_t va_start; /* Start virtual address in file. */ + uint64_t len; /* Size of file. */ + uint64_t pgoff; /* Not used. */ + uint32_t maj; /* Not used. */ + uint32_t min; /* Not used. */ + uint32_t ino; /* Not used. */ + char prot[PROT_SZ]; /* Not used. */ + char fname[PATH_MAX]; /* File name. */ +}; + +/** + * Converts QEMU virtual address to Vhost virtual address. This function i= s used + * to convert the ring addresses to our address space. + */ +static uint64_t +qva_to_vva(struct virtio_net *dev, uint64_t qemu_va) +{ + struct virtio_memory_regions *region; + uint64_t vhost_va =3D 0; + uint32_t regionidx =3D 0; + + /* Find the region where the address lives. */ + for (regionidx =3D 0; regionidx < dev->mem->nregions; regionidx++) { + region =3D &dev->mem->regions[regionidx]; + if ((qemu_va >=3D region->userspace_address) && + (qemu_va <=3D region->userspace_address + + region->memory_size)) { + vhost_va =3D dev->mem->mapped_address + qemu_va - dev->mem->base_addres= s; + break; + } + } + return vhost_va; +} + +/** + * Locate the file containing QEMU's memory space and map it to our addres= s space. + */ +static int +host_memory_map(struct virtio_net *dev, struct virtio_memory *mem, pid_t p= id, uint64_t addr) +{ + struct dirent *dptr =3D NULL; + struct procmap procmap; + DIR *dp =3D NULL; + int fd; + int i; + char memfile[PATH_MAX]; + char mapfile[PATH_MAX]; + char procdir[PATH_MAX]; + char resolved_path[PATH_MAX]; + FILE *fmap; + void *map; + uint8_t found =3D 0; + char line[BUFSIZE]; + char dlm[] =3D "- : "; + char *str, *sp, *in[PROCMAP_SZ]; + char *end =3D NULL; + + /* Path where mem files are located. */ + snprintf(procdir, PATH_MAX, "/proc/%u/fd/", pid); + /* Maps file used to locate mem file. 
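The qva_to_vva translation above can be exercised standalone. A simplified, self-contained version follows (struct names and sizes are trimmed for illustration); note it mirrors the patch's assumption that the whole guest memory is one contiguous mapping, so the offset is always taken from base_address regardless of which region matched:

```c
#include <stdint.h>

/* Hypothetical miniature of the device memory map. */
struct region {
	uint64_t userspace_address;	/* QEMU virtual base of the region. */
	uint64_t memory_size;
};

struct memory {
	uint64_t base_address;		/* QEMU VA of the start of guest memory. */
	uint64_t mapped_address;	/* Where this process mmap'd it. */
	uint32_t nregions;
	struct region regions[8];
};

/* Find the region containing qemu_va, then rebase it into our mapping. */
static uint64_t
qva_to_vva(const struct memory *mem, uint64_t qemu_va)
{
	for (uint32_t i = 0; i < mem->nregions; i++) {
		const struct region *r = &mem->regions[i];
		if (qemu_va >= r->userspace_address &&
		    qemu_va <= r->userspace_address + r->memory_size)
			return mem->mapped_address + qemu_va - mem->base_address;
	}
	return 0; /* Not found. */
}
```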
*/
+	snprintf(mapfile, PATH_MAX, "/proc/%u/maps", pid);
+
+	fmap = fopen(mapfile, "r");
+	if (fmap == NULL) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to open maps file for pid %d\n", dev->device_fh, pid);
+		return -1;
+	}
+
+	/* Read through the maps file until we find our base_address. */
+	while (fgets(line, BUFSIZE, fmap) != 0) {
+		str = line;
+		errno = 0;
+		/* Split the line into fields. */
+		for (i = 0; i < PROCMAP_SZ; i++) {
+			in[i] = strtok_r(str, &dlm[i], &sp);
+			if ((in[i] == NULL) || (errno != 0)) {
+				fclose(fmap);
+				return -1;
+			}
+			str = NULL;
+		}
+
+		/* Convert/copy each field as needed. */
+		procmap.va_start = strtoull(in[0], &end, 16);
+		if ((in[0][0] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+			fclose(fmap);
+			return -1;
+		}
+
+		procmap.len = strtoull(in[1], &end, 16);
+		if ((in[1][0] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+			fclose(fmap);
+			return -1;
+		}
+
+		procmap.pgoff = strtoull(in[3], &end, 16);
+		if ((in[3][0] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+			fclose(fmap);
+			return -1;
+		}
+
+		procmap.maj = strtoul(in[4], &end, 16);
+		if ((in[4][0] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+			fclose(fmap);
+			return -1;
+		}
+
+		procmap.min = strtoul(in[5], &end, 16);
+		if ((in[5][0] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+			fclose(fmap);
+			return -1;
+		}
+
+		procmap.ino = strtoul(in[6], &end, 16);
+		if ((in[6][0] == '\0') || (end == NULL) || (*end != '\0') || (errno != 0)) {
+			fclose(fmap);
+			return -1;
+		}
+
+		memcpy(&procmap.prot, in[2], PROT_SZ);
+		memcpy(&procmap.fname, in[7], PATH_MAX);
+
+		if (procmap.va_start == addr) {
+			procmap.len = procmap.len - procmap.va_start;
+			found = 1;
+			break;
+		}
+	}
+	fclose(fmap);
+
+	if (!found) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find memory file in pid %d maps file\n",
dev->device_fh, pid);
+		return -1;
+	}
+
+	/* Find the guest memory file among the process fds. */
+	dp = opendir(procdir);
+	if (dp == NULL) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Cannot open pid %d process directory\n", dev->device_fh, pid);
+		return -1;
+	}
+
+	found = 0;
+
+	/* Read the fd directory contents. */
+	while (NULL != (dptr = readdir(dp))) {
+		snprintf(memfile, PATH_MAX, "/proc/%u/fd/%s", pid, dptr->d_name);
+		if (realpath(memfile, resolved_path) == NULL) {
+			RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to resolve fd directory\n", dev->device_fh);
+			closedir(dp);
+			return -1;
+		}
+		if (strncmp(resolved_path, procmap.fname,
+			strnlen(procmap.fname, PATH_MAX)) == 0) {
+			found = 1;
+			break;
+		}
+	}
+
+	closedir(dp);
+
+	if (found == 0) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find memory file for pid %d\n", dev->device_fh, pid);
+		return -1;
+	}
+
+	/* Open the shared memory file and map the memory into this process. */
+	fd = open(memfile, O_RDWR);
+	if (fd == -1) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to open %s for pid %d\n", dev->device_fh, memfile, pid);
+		return -1;
+	}
+
+	map = mmap(0, (size_t)procmap.len, PROT_READ|PROT_WRITE, MAP_POPULATE|MAP_SHARED, fd, 0);
+	close(fd);
+
+	if (map == MAP_FAILED) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Error mapping the file %s for pid %d\n", dev->device_fh, memfile, pid);
+		return -1;
+	}
+
+	/* Store the memory address and size in the device data structure. */
+	mem->mapped_address = (uint64_t)(uintptr_t)map;
+	mem->mapped_size = procmap.len;
+
+	LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Mem File: %s->%s - Size: %llu - VA: %p\n", dev->device_fh,
+		memfile, resolved_path, (long long unsigned)mem->mapped_size, map);
+
+	return 0;
+}
+
+/**
+ * Initialise all variables in the device structure.
+ */ +static void +init_device(struct virtio_net *dev) +{ + uint64_t vq_offset; + + /* Virtqueues have already been malloced so we don't want to set them to = NULL. */ + vq_offset =3D offsetof(struct virtio_net, mem); + + /* Set everything to 0. */ + memset((void *)(uintptr_t)((uint64_t)(uintptr_t)dev + vq_offset), 0, + (sizeof(struct virtio_net) - (size_t)vq_offset)); + memset(dev->virtqueue[VIRTIO_RXQ], 0, sizeof(struct vhost_virtqueue)); + memset(dev->virtqueue[VIRTIO_TXQ], 0, sizeof(struct vhost_virtqueue)); + + /* Backends are set to -1 indicating an inactive device. */ + dev->virtqueue[VIRTIO_RXQ]->backend =3D VIRTIO_DEV_STOPPED; + dev->virtqueue[VIRTIO_TXQ]->backend =3D VIRTIO_DEV_STOPPED; +} + +/** + * Unmap any memory, close any file descriptors and free any memory owned = by a device. + */ +static void +cleanup_device(struct virtio_net *dev) +{ + /* Unmap QEMU memory file if mapped. */ + if (dev->mem) { + munmap((void *)(uintptr_t)dev->mem->mapped_address, (size_t)dev->mem->ma= pped_size); + free(dev->mem); + } + + /* Close any event notifiers opened by device. */ + if (dev->virtqueue[VIRTIO_RXQ]->callfd) + close((int)dev->virtqueue[VIRTIO_RXQ]->callfd); + if (dev->virtqueue[VIRTIO_RXQ]->kickfd) + close((int)dev->virtqueue[VIRTIO_RXQ]->kickfd); + if (dev->virtqueue[VIRTIO_TXQ]->callfd) + close((int)dev->virtqueue[VIRTIO_TXQ]->callfd); + if (dev->virtqueue[VIRTIO_TXQ]->kickfd) + close((int)dev->virtqueue[VIRTIO_TXQ]->kickfd); +} + +/** + * Release virtqueues and device memory. + */ +static void +free_device(struct virtio_net_config_ll *ll_dev) +{ + /* Free any malloc'd memory */ + free(ll_dev->dev.virtqueue[VIRTIO_RXQ]); + free(ll_dev->dev.virtqueue[VIRTIO_TXQ]); + free(ll_dev); +} + +/** + * Retrieves an entry from the devices configuration linked list. 
+ */
+static struct virtio_net_config_ll *
+get_config_ll_entry(struct vhost_device_ctx ctx)
+{
+	struct virtio_net_config_ll *ll_dev = ll_root;
+
+	/* Loop through the linked list until the device_fh is found. */
+	while (ll_dev != NULL) {
+		if (ll_dev->dev.device_fh == ctx.fh)
+			return ll_dev;
+		ll_dev = ll_dev->next;
+	}
+
+	return NULL;
+}
+
+/**
+ * Searches the configuration core linked list and retrieves the device if it exists.
+ */
+static struct virtio_net *
+get_device(struct vhost_device_ctx ctx)
+{
+	struct virtio_net_config_ll *ll_dev;
+
+	ll_dev = get_config_ll_entry(ctx);
+
+	/* If a matching entry is found in the linked list, return the device in that entry. */
+	if (ll_dev)
+		return &ll_dev->dev;
+
+	RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Device not found in linked list.\n", ctx.fh);
+	return NULL;
+}
+
+/**
+ * Add an entry containing a device to the device configuration linked list.
+ */
+static void
+add_config_ll_entry(struct virtio_net_config_ll *new_ll_dev)
+{
+	struct virtio_net_config_ll *ll_dev = ll_root;
+
+	/* If ll_dev == NULL then this is the first device, so go to else. */
+	if (ll_dev) {
+		/* If the first device_fh != 0 then we insert our device here. */
+		if (ll_dev->dev.device_fh != 0) {
+			new_ll_dev->dev.device_fh = 0;
+			new_ll_dev->next = ll_dev;
+			ll_root = new_ll_dev;
+		} else {
+			/*
+			 * Increment through the list until we find an unused
+			 * device_fh, and insert the device at that entry.
+			 */
+			while ((ll_dev->next != NULL) && (ll_dev->dev.device_fh == (ll_dev->next->dev.device_fh - 1)))
+				ll_dev = ll_dev->next;
+
+			new_ll_dev->dev.device_fh = ll_dev->dev.device_fh + 1;
+			new_ll_dev->next = ll_dev->next;
+			ll_dev->next = new_ll_dev;
+		}
+	} else {
+		ll_root = new_ll_dev;
+		ll_root->dev.device_fh = 0;
+	}
+}
+
+/**
+ * Remove an entry from the device configuration linked list.
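The device_fh assignment policy in add_config_ll_entry — keep the list sorted and hand out the smallest unused handle — can be restated over a plain sorted array for clarity (first_free_fh is an invented name for this sketch):

```c
#include <stdint.h>

/* Given the device_fh values currently in use, sorted ascending, return
 * the smallest unused one: either 0, or one past the end of the initial
 * dense run starting at 0.  Same policy as the linked-list walk above. */
static uint64_t
first_free_fh(const uint64_t *used, int n)
{
	if (n == 0 || used[0] != 0)
		return 0;

	int i = 0;
	while (i + 1 < n && used[i + 1] == used[i] + 1)
		i++;
	return used[i] + 1;
}
```

Keeping handles dense means a released device_fh is reused by the next open, which keeps the (%"PRIu64") log prefixes small and stable.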
+ */ +static struct virtio_net_config_ll * +rm_config_ll_entry(struct virtio_net_config_ll *ll_dev, struct virtio_net_= config_ll *ll_dev_last) +{ + /* First remove the device and then clean it up. */ + if (ll_dev =3D=3D ll_root) { + ll_root =3D ll_dev->next; + cleanup_device(&ll_dev->dev); + free_device(ll_dev); + return ll_root; + } else { + if (likely(ll_dev_last !=3D NULL)) { + ll_dev_last->next =3D ll_dev->next; + cleanup_device(&ll_dev->dev); + free_device(ll_dev); + return ll_dev_last->next; + } else { + cleanup_device(&ll_dev->dev); + free_device(ll_dev); + RTE_LOG(ERR, VHOST_CONFIG, "Remove entry from config_ll failed\n"); + return NULL; + } + } +} + + +/** + * Function is called from the CUSE open function. The device structure is + * initialised and a new entry is added to the device configuration linked + * list. + */ +static int +new_device(struct vhost_device_ctx ctx) +{ + struct virtio_net_config_ll *new_ll_dev; + struct vhost_virtqueue *virtqueue_rx, *virtqueue_tx; + + /* Setup device and virtqueues. */ + new_ll_dev =3D malloc(sizeof(struct virtio_net_config_ll)); + if (new_ll_dev =3D=3D NULL) { + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for de= v.\n", ctx.fh); + return -1; + } + + virtqueue_rx =3D malloc(sizeof(struct vhost_virtqueue)); + if (virtqueue_rx =3D=3D NULL) { + free(new_ll_dev); + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for vi= rtqueue_rx.\n", ctx.fh); + return -1; + } + + virtqueue_tx =3D malloc(sizeof(struct vhost_virtqueue)); + if (virtqueue_tx =3D=3D NULL) { + free(virtqueue_rx); + free(new_ll_dev); + RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for vi= rtqueue_tx.\n", ctx.fh); + return -1; + } + + new_ll_dev->dev.virtqueue[VIRTIO_RXQ] =3D virtqueue_rx; + new_ll_dev->dev.virtqueue[VIRTIO_TXQ] =3D virtqueue_tx; + + /* Initialise device and virtqueues. 
 */
+	init_device(&new_ll_dev->dev);
+
+	new_ll_dev->next = NULL;
+
+	/* Add entry to device configuration linked list. */
+	add_config_ll_entry(new_ll_dev);
+
+	return new_ll_dev->dev.device_fh;
+}
+
+/**
+ * Function is called from the CUSE release function. This function will clean up
+ * the device and remove it from the device configuration linked list.
+ */
+static void
+destroy_device(struct vhost_device_ctx ctx)
+{
+	struct virtio_net_config_ll *ll_dev_cur_ctx, *ll_dev_last = NULL;
+	struct virtio_net_config_ll *ll_dev_cur = ll_root;
+
+	/* Find the linked list entry for the device to be removed. */
+	ll_dev_cur_ctx = get_config_ll_entry(ctx);
+	while (ll_dev_cur != NULL) {
+		/* If the device is found, it is removed. */
+		if (ll_dev_cur == ll_dev_cur_ctx) {
+			/*
+			 * If the device is running on a data core then call the function to remove it from
+			 * the data core.
+			 */
+			if ((ll_dev_cur->dev.flags & VIRTIO_DEV_RUNNING))
+				notify_ops->destroy_device(&(ll_dev_cur->dev));
+			ll_dev_cur = rm_config_ll_entry(ll_dev_cur, ll_dev_last);
+			/* TODO return here? */
+		} else {
+			ll_dev_last = ll_dev_cur;
+			ll_dev_cur = ll_dev_cur->next;
+		}
+	}
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_OWNER
+ * This function just returns success at the moment unless the device hasn't been initialised.
+ */
+static int
+set_owner(struct vhost_device_ctx ctx)
+{
+	struct virtio_net *dev;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+
+	return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_RESET_OWNER
+ */
+static int
+reset_owner(struct vhost_device_ctx ctx)
+{
+	struct virtio_net_config_ll *ll_dev;
+
+	ll_dev = get_config_ll_entry(ctx);
+
+	cleanup_device(&ll_dev->dev);
+	init_device(&ll_dev->dev);
+
+	return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_GET_FEATURES
+ * The features that we support are requested.
+ */
+static int
+get_features(struct vhost_device_ctx ctx, uint64_t *pu)
+{
+	struct virtio_net *dev;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+
+	/* Send our supported features. */
+	*pu = VHOST_FEATURES;
+	return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_FEATURES
+ * We receive the negotiated set of features supported by us and the virtio device.
+ */
+static int
+set_features(struct vhost_device_ctx ctx, uint64_t *pu)
+{
+	struct virtio_net *dev;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+	if (*pu & ~VHOST_FEATURES)
+		return -1;
+
+	/* Store the negotiated feature list for the device. */
+	dev->features = *pu;
+
+	/* Set the vhost_hlen depending on whether VIRTIO_NET_F_MRG_RXBUF is set. */
+	if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) {
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Mergeable RX buffers enabled\n", dev->device_fh);
+		dev->virtqueue[VIRTIO_RXQ]->vhost_hlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+		dev->virtqueue[VIRTIO_TXQ]->vhost_hlen = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+	} else {
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Mergeable RX buffers disabled\n", dev->device_fh);
+		dev->virtqueue[VIRTIO_RXQ]->vhost_hlen = sizeof(struct virtio_net_hdr);
+		dev->virtqueue[VIRTIO_TXQ]->vhost_hlen = sizeof(struct virtio_net_hdr);
+	}
+	return 0;
+}
+
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_MEM_TABLE
+ * This function creates and populates the memory structure for the device. This includes
+ * storing offsets used to translate buffer addresses.
+ */
+static int
+set_mem_table(struct vhost_device_ctx ctx, const void *mem_regions_addr, uint32_t nregions)
+{
+	struct virtio_net *dev;
+	struct vhost_memory_region *mem_regions;
+	struct virtio_memory *mem;
+	uint64_t size = offsetof(struct vhost_memory, regions);
+	uint32_t regionidx, valid_regions;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+
+	if (dev->mem) {
+		munmap((void *)(uintptr_t)dev->mem->mapped_address, (size_t)dev->mem->mapped_size);
+		free(dev->mem);
+	}
+
+	/* Malloc the memory structure depending on the number of regions. */
+	mem = calloc(1, sizeof(struct virtio_memory) + (sizeof(struct virtio_memory_regions) * nregions));
+	if (mem == NULL) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to allocate memory for dev->mem.\n", dev->device_fh);
+		return -1;
+	}
+
+	mem->nregions = nregions;
+
+	mem_regions = (void *)(uintptr_t)((uint64_t)(uintptr_t)mem_regions_addr + size);
+
+	for (regionidx = 0; regionidx < mem->nregions; regionidx++) {
+		/* Populate the region structure for each region.
 */
+		mem->regions[regionidx].guest_phys_address = mem_regions[regionidx].guest_phys_addr;
+		mem->regions[regionidx].guest_phys_address_end = mem->regions[regionidx].guest_phys_address +
+			mem_regions[regionidx].memory_size;
+		mem->regions[regionidx].memory_size = mem_regions[regionidx].memory_size;
+		mem->regions[regionidx].userspace_address = mem_regions[regionidx].userspace_addr;
+
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") REGION: %u - GPA: %p - QEMU VA: %p - SIZE (%"PRIu64")\n", dev->device_fh,
+			regionidx, (void *)(uintptr_t)mem->regions[regionidx].guest_phys_address,
+			(void *)(uintptr_t)mem->regions[regionidx].userspace_address,
+			mem->regions[regionidx].memory_size);
+
+		/* Set the base address mapping. */
+		if (mem->regions[regionidx].guest_phys_address == 0x0) {
+			mem->base_address = mem->regions[regionidx].userspace_address;
+			/* Map VM memory file */
+			if (host_memory_map(dev, mem, ctx.pid, mem->base_address) != 0) {
+				free(mem);
+				return -1;
+			}
+		}
+	}
+
+	/* Check that we have a valid base address. */
+	if (mem->base_address == 0) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find base address of qemu memory file.\n", dev->device_fh);
+		free(mem);
+		return -1;
+	}
+
+	/* Check if all of our regions have valid mappings. Usually one does not exist in the QEMU memory file. */
+	valid_regions = mem->nregions;
+	for (regionidx = 0; regionidx < mem->nregions; regionidx++) {
+		if ((mem->regions[regionidx].userspace_address < mem->base_address) ||
+			(mem->regions[regionidx].userspace_address > (mem->base_address + mem->mapped_size)))
+			valid_regions--;
+	}
+
+	/* If a region does not have a valid mapping we rebuild our memory struct to contain only valid entries. */
+	if (valid_regions != mem->nregions) {
+		LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") Not all memory regions exist in the QEMU mem file. Re-populating mem structure\n",
+			dev->device_fh);
+
+		/* Re-populate the memory structure with only valid regions.
Invalid regions are over-written with memmove. */
+		valid_regions = 0;
+
+		for (regionidx = mem->nregions; 0 != regionidx--;) {
+			if ((mem->regions[regionidx].userspace_address < mem->base_address) ||
+				(mem->regions[regionidx].userspace_address > (mem->base_address + mem->mapped_size))) {
+				memmove(&mem->regions[regionidx], &mem->regions[regionidx + 1],
+					sizeof(struct virtio_memory_regions) * valid_regions);
+			} else {
+				valid_regions++;
+			}
+		}
+	}
+	mem->nregions = valid_regions;
+	dev->mem = mem;
+
+	/*
+	 * Calculate the address offset for each region. This offset is used to identify the vhost virtual address
+	 * corresponding to a QEMU guest physical address.
+	 */
+	for (regionidx = 0; regionidx < dev->mem->nregions; regionidx++)
+		dev->mem->regions[regionidx].address_offset = dev->mem->regions[regionidx].userspace_address - dev->mem->base_address
+			+ dev->mem->mapped_address - dev->mem->regions[regionidx].guest_phys_address;
+
+	return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_VRING_NUM
+ * The virtio device sends us the size of the descriptor ring.
+ */
+static int
+set_vring_num(struct vhost_device_ctx ctx, struct vhost_vring_state *state)
+{
+	struct virtio_net *dev;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+
+	/* state->index refers to the queue index. The TX queue is 1, the RX queue is 0. */
+	dev->virtqueue[state->index]->size = state->num;
+
+	return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_VRING_ADDR
+ * The virtio device sends us the desc, used and avail ring addresses. This function
+ * then converts these to our address space.
+ */
+static int
+set_vring_addr(struct vhost_device_ctx ctx, struct vhost_vring_addr *addr)
+{
+	struct virtio_net *dev;
+	struct vhost_virtqueue *vq;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+
+	/* addr->index refers to the queue index. The TX queue is 1, the RX queue is 0.
 */
+	vq = dev->virtqueue[addr->index];
+
+	/* The addresses are converted from QEMU virtual to vhost virtual. */
+	vq->desc = (struct vring_desc *)(uintptr_t)qva_to_vva(dev, addr->desc_user_addr);
+	if (vq->desc == 0) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find descriptor ring address.\n", dev->device_fh);
+		return -1;
+	}
+
+	vq->avail = (struct vring_avail *)(uintptr_t)qva_to_vva(dev, addr->avail_user_addr);
+	if (vq->avail == 0) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find available ring address.\n", dev->device_fh);
+		return -1;
+	}
+
+	vq->used = (struct vring_used *)(uintptr_t)qva_to_vva(dev, addr->used_user_addr);
+	if (vq->used == 0) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") Failed to find used ring address.\n", dev->device_fh);
+		return -1;
+	}
+
+	LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") mapped address desc: %p\n", dev->device_fh, vq->desc);
+	LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") mapped address avail: %p\n", dev->device_fh, vq->avail);
+	LOG_DEBUG(VHOST_CONFIG, "(%"PRIu64") mapped address used: %p\n", dev->device_fh, vq->used);
+
+	return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_VRING_BASE
+ * The virtio device sends us the available ring last used index.
+ */
+static int
+set_vring_base(struct vhost_device_ctx ctx, struct vhost_vring_state *state)
+{
+	struct virtio_net *dev;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+
+	/* state->index refers to the queue index. The TX queue is 1, the RX queue is 0. */
+	dev->virtqueue[state->index]->last_used_idx = state->num;
+	dev->virtqueue[state->index]->last_used_idx_res = state->num;
+
+	return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_GET_VRING_BASE
+ * We send the virtio device our available ring last used index.
+ */
+static int
+get_vring_base(struct vhost_device_ctx ctx, uint32_t index, struct vhost_vring_state *state)
+{
+	struct virtio_net *dev;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+
+	state->index = index;
+	/* state->index refers to the queue index. The TX queue is 1, the RX queue is 0. */
+	state->num = dev->virtqueue[state->index]->last_used_idx;
+
+	return 0;
+}
+
+/**
+ * This function uses the eventfd_link kernel module to copy an eventfd file descriptor
+ * provided by QEMU into our process space.
+ */
+static int
+eventfd_copy(struct virtio_net *dev, struct eventfd_copy *eventfd_copy)
+{
+	int eventfd_link, ret;
+
+	/* Open the character device to the kernel module. */
+	eventfd_link = open(eventfd_cdev, O_RDWR);
+	if (eventfd_link < 0) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") eventfd_link module is not loaded\n", dev->device_fh);
+		return -1;
+	}
+
+	/* Call the IOCTL to copy the eventfd. */
+	ret = ioctl(eventfd_link, EVENTFD_COPY, eventfd_copy);
+	close(eventfd_link);
+
+	if (ret < 0) {
+		RTE_LOG(ERR, VHOST_CONFIG, "(%"PRIu64") EVENTFD_COPY ioctl failed\n", dev->device_fh);
+		return -1;
+	}
+
+	return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_VRING_CALL
+ * The virtio device sends an eventfd to interrupt the guest. This fd gets copied into
+ * our process space.
+ */
+static int
+set_vring_call(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
+{
+	struct virtio_net *dev;
+	struct eventfd_copy eventfd_kick;
+	struct vhost_virtqueue *vq;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+
+	/* file->index refers to the queue index. The TX queue is 1, the RX queue is 0. */
+	vq = dev->virtqueue[file->index];
+
+	if (vq->kickfd)
+		close((int)vq->kickfd);
+
+	/* Populate the eventfd_copy structure and call eventfd_copy.
 */
+	vq->kickfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+	eventfd_kick.source_fd = vq->kickfd;
+	eventfd_kick.target_fd = file->fd;
+	eventfd_kick.target_pid = ctx.pid;
+
+	if (eventfd_copy(dev, &eventfd_kick))
+		return -1;
+
+	return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_SET_VRING_KICK
+ * The virtio device sends an eventfd that it can use to notify us. This fd gets copied into
+ * our process space.
+ */
+static int
+set_vring_kick(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
+{
+	struct virtio_net *dev;
+	struct eventfd_copy eventfd_call;
+	struct vhost_virtqueue *vq;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+
+	/* file->index refers to the queue index. The TX queue is 1, the RX queue is 0. */
+	vq = dev->virtqueue[file->index];
+
+	if (vq->callfd)
+		close((int)vq->callfd);
+
+	/* Populate the eventfd_copy structure and call eventfd_copy. */
+	vq->callfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+	eventfd_call.source_fd = vq->callfd;
+	eventfd_call.target_fd = file->fd;
+	eventfd_call.target_pid = ctx.pid;
+
+	if (eventfd_copy(dev, &eventfd_call))
+		return -1;
+
+	return 0;
+}
+
+/**
+ * Called from CUSE IOCTL: VHOST_NET_SET_BACKEND
+ * To complete device initialisation when the virtio driver is loaded, we are provided with a
+ * valid fd for a tap device (not used by us). When this happens we can add the device to a
+ * data core. When the virtio driver is removed we get fd = -1. At that point we remove the
+ * device from the data core. The device will still exist in the device configuration linked list.
+ */
+static int
+set_backend(struct vhost_device_ctx ctx, struct vhost_vring_file *file)
+{
+	struct virtio_net *dev;
+
+	dev = get_device(ctx);
+	if (dev == NULL)
+		return -1;
+
+	/* file->index refers to the queue index. The TX queue is 1, the RX queue is 0.
 */
+	dev->virtqueue[file->index]->backend = file->fd;
+
+	/* If the device isn't already running and both backend fds are set, we add the device. */
+	if (!(dev->flags & VIRTIO_DEV_RUNNING)) {
+		if (((int)dev->virtqueue[VIRTIO_TXQ]->backend != VIRTIO_DEV_STOPPED) &&
+			((int)dev->virtqueue[VIRTIO_RXQ]->backend != VIRTIO_DEV_STOPPED))
+			return notify_ops->new_device(dev);
+	/* Otherwise we remove it. */
+	} else
+		if (file->fd == VIRTIO_DEV_STOPPED)
+			notify_ops->destroy_device(dev);
+	return 0;
+}
+
+/**
+ * Function pointers are set for the device operations to allow CUSE to call functions
+ * when an IOCTL, device_add or device_release is received.
+ */
+static const struct vhost_net_device_ops vhost_device_ops = {
+	.new_device = new_device,
+	.destroy_device = destroy_device,
+
+	.get_features = get_features,
+	.set_features = set_features,
+
+	.set_mem_table = set_mem_table,
+
+	.set_vring_num = set_vring_num,
+	.set_vring_addr = set_vring_addr,
+	.set_vring_base = set_vring_base,
+	.get_vring_base = get_vring_base,
+
+	.set_vring_kick = set_vring_kick,
+	.set_vring_call = set_vring_call,
+
+	.set_backend = set_backend,
+
+	.set_owner = set_owner,
+	.reset_owner = reset_owner,
+};
+
+/**
+ * Called by main to setup callbacks when registering the CUSE device.
+ */
+struct vhost_net_device_ops const *
+get_virtio_net_callbacks(void)
+{
+	return &vhost_device_ops;
+}
+
+int rte_vhost_enable_guest_notification(struct virtio_net *dev, uint16_t queue_id, int enable)
+{
+	if (enable) {
+		RTE_LOG(ERR, VHOST_CONFIG, "guest notification isn't supported.\n");
+		return -1;
+	}
+
+	dev->virtqueue[queue_id]->used->flags = enable ?
0 : VRING_USED_F_NO_NOTIFY;
+	return 0;
+}
+
+uint64_t rte_vhost_feature_get(void)
+{
+	return VHOST_FEATURES;
+}
+
+int rte_vhost_feature_disable(uint64_t feature_mask)
+{
+	VHOST_FEATURES = VHOST_FEATURES & ~feature_mask;
+	return 0;
+}
+
+int rte_vhost_feature_enable(uint64_t feature_mask)
+{
+	if ((feature_mask & VHOST_SUPPORTED_FEATURES) == feature_mask) {
+		VHOST_FEATURES = VHOST_FEATURES | feature_mask;
+		return 0;
+	}
+	return -1;
+}
+
+
+/*
+ * Register ops so that we can add/remove device to data core.
+ */
+int
+rte_vhost_driver_callback_register(struct virtio_net_device_ops const * const ops)
+{
+	notify_ops = ops;
+
+	return 0;
+}
-- 
1.8.1.4