From: Zhihong Wang <zhihong.wang@intel.com>
To: dev@dpdk.org
Cc: jianfeng.tan@intel.com, tiwei.bie@intel.com, maxime.coquelin@redhat.com,
	yliu@fridaylinux.org, cunming.liang@intel.com, xiao.w.wang@intel.com,
	dan.daly@intel.com, Zhihong Wang <zhihong.wang@intel.com>
Date: Fri, 22 Dec 2017 22:36:43 -0500
Message-Id: <1514000203-69699-3-git-send-email-zhihong.wang@intel.com>
X-Mailer: git-send-email 2.7.5
In-Reply-To: <1514000203-69699-1-git-send-email-zhihong.wang@intel.com>
References: <1514000203-69699-1-git-send-email-zhihong.wang@intel.com>
Subject: [dpdk-dev] [PATCH RFC 2/2] vhost: support selective datapath

This patch introduces support for a selective datapath in the DPDK
vhost-user lib to enable datapath acceleration. The default selection is
the existing software implementation; more options become available as
more engines are registered.

Signed-off-by: Zhihong Wang <zhihong.wang@intel.com>
---
Note: an illustrative usage sketch for the proposed registration API is
appended after the patch.

 lib/librte_vhost/Makefile     |   4 +-
 lib/librte_vhost/rte_vdpa.h   | 126 ++++++++++++++++++++++++++++++++++++++++++
 lib/librte_vhost/rte_vhost.h  |  48 ++++++++++++++++
 lib/librte_vhost/vdpa.c       | 122 ++++++++++++++++++++++++++++++++++++++++
 lib/librte_vhost/vhost.c      |  53 ++++++++++++++++++
 lib/librte_vhost/vhost.h      |   7 +++
 lib/librte_vhost/vhost_user.c |  48 ++++++++++++++--
 7 files changed, 402 insertions(+), 6 deletions(-)
 create mode 100644 lib/librte_vhost/rte_vdpa.h
 create mode 100644 lib/librte_vhost/vdpa.c

diff --git a/lib/librte_vhost/Makefile b/lib/librte_vhost/Makefile
index be18279..47930ba 100644
--- a/lib/librte_vhost/Makefile
+++ b/lib/librte_vhost/Makefile
@@ -49,9 +49,9 @@ LDLIBS += -lrte_eal -lrte_mempool -lrte_mbuf -lrte_ethdev
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_VHOST) := fd_man.c iotlb.c socket.c vhost.c \
-					vhost_user.c virtio_net.c
+					vhost_user.c virtio_net.c vdpa.c
 
 # install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_vhost.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_VHOST)-include += rte_vhost.h rte_vdpa.h
 
 include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
new file mode 100644
index 0000000..4f9eebd
--- /dev/null
+++ b/lib/librte_vhost/rte_vdpa.h
@@ -0,0 +1,126 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_VDPA_H_
+#define _RTE_VDPA_H_
+
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include "rte_vhost.h"
+
+/**
+ * @file
+ *
+ * Device specific vhost lib
+ */
+
+#define MAX_VDPA_ENGINE_NUM 128
+#define MAX_VDPA_NAME_LEN 128
+
+struct rte_vdpa_eng_id {
+	union {
+		uint8_t __dummy[64];
+
+		struct {
+			struct rte_pci_addr pci_addr;
+		};
+	};
+};
+
+struct rte_vdpa_eng_attr {
+	char name[MAX_VDPA_NAME_LEN];
+	struct rte_vdpa_eng_id *id;
+};
+
+/* register/remove an engine and return the engine id */
+typedef int (*vdpa_eng_init_t)(int eid, struct rte_vdpa_eng_id *id);
+typedef int (*vdpa_eng_uninit_t)(int eid);
+
+/* driver configures/closes the port based on the connection */
+typedef int (*vdpa_dev_conf_t)(int vid);
+typedef int (*vdpa_dev_close_t)(int vid);
+
+/* enable/disable this vring */
+typedef int (*vdpa_vring_state_set_t)(int vid, int vring, int state);
+
+/* set features when changed */
+typedef int (*vdpa_feature_set_t)(int vid);
+
+/* destination operations when migration is done, e.g. send rarp */
+typedef int (*vdpa_migration_done_t)(int vid);
+
+/* device ops */
+struct rte_vdpa_dev_ops {
+	vdpa_dev_conf_t        dev_conf;
+	vdpa_dev_close_t       dev_close;
+	vdpa_vring_state_set_t vring_state_set;
+	vdpa_feature_set_t     feature_set;
+	vdpa_migration_done_t  migration_done;
+};
+
+/* engine ops */
+struct rte_vdpa_eng_ops {
+	vdpa_eng_init_t eng_init;
+	vdpa_eng_uninit_t eng_uninit;
+};
+
+struct rte_vdpa_eng_driver {
+	const char *name;
+	struct rte_vdpa_eng_ops eng_ops;
+	struct rte_vdpa_dev_ops dev_ops;
+} __rte_cache_aligned;
+
+struct rte_vdpa_engine {
+	struct rte_vdpa_eng_attr eng_attr;
+	struct rte_vdpa_eng_driver *eng_drv;
+} __rte_cache_aligned;
+
+extern struct rte_vdpa_engine *vdpa_engines[];
+extern uint32_t vdpa_engine_num;
+
+/* engine management */
+int rte_vdpa_register_engine(const char *name, struct rte_vdpa_eng_id *id);
+int rte_vdpa_unregister_engine(int eid);
+
+/* driver register api */
+void rte_vdpa_register_driver(struct rte_vdpa_eng_driver *drv);
+
+#define RTE_VDPA_REGISTER_DRIVER(nm, drv) \
+RTE_INIT(vdpainitfn_ ##nm); \
+static void vdpainitfn_ ##nm(void) \
+{ \
+	rte_vdpa_register_driver(&drv); \
+}
+
+#endif /* _RTE_VDPA_H_ */
diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 17b4c6d..cbb2105 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -498,6 +498,54 @@ int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
  */
 uint32_t rte_vhost_rx_queue_count(int vid, uint16_t qid);
 
+/**
+ * Set the vdpa engine id for a vhost device.
+ *
+ * @param vid
+ *  vhost device ID
+ * @param eid
+ *  engine id
+ * @return
+ *  engine id on success, -1 on failure
+ */
+int rte_vhost_set_vdpa_eid(int vid, int eid);
+
+/**
+ * Set the vdpa device id for a vhost device.
+ *
+ * @param vid
+ *  vhost device ID
+ * @param did
+ *  device id
+ * @return
+ *  device id on success, -1 on failure
+ */
+int rte_vhost_set_vdpa_did(int vid, int did);
+
+/**
+ * Get the vdpa engine id of a vhost device.
+ *
+ * @param vid
+ *  vhost device ID
+ * @return
+ *  engine id, or -1 if none is set
+ */
+int rte_vhost_get_vdpa_eid(int vid);
+
+/**
+ * Get the vdpa device id of a vhost device.
+ *
+ * @param vid
+ *  vhost device ID
+ * @return
+ *  device id, or -1 if none is set
+ */
+int rte_vhost_get_vdpa_did(int vid);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_vhost/vdpa.c b/lib/librte_vhost/vdpa.c
new file mode 100644
index 0000000..f2aa308
--- /dev/null
+++ b/lib/librte_vhost/vdpa.c
@@ -0,0 +1,122 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/**
+ * @file
+ *
+ * Device specific vhost lib
+ */
+
+#include <rte_malloc.h>
+#include "rte_vdpa.h"
+#include "vhost.h"
+
+static struct rte_vdpa_eng_driver *vdpa_drivers[MAX_VDPA_ENGINE_NUM];
+static uint32_t vdpa_driver_num;
+
+struct rte_vdpa_engine *vdpa_engines[MAX_VDPA_ENGINE_NUM];
+uint32_t vdpa_engine_num;
+
+int vid2eid[MAX_VHOST_DEVICE];
+int vid2did[MAX_VHOST_DEVICE];
+
+void
+rte_vdpa_register_driver(struct rte_vdpa_eng_driver *driver)
+{
+	if (vdpa_driver_num >= MAX_VDPA_ENGINE_NUM)
+		return;
+
+	vdpa_drivers[vdpa_driver_num] = driver;
+	vdpa_driver_num++;
+}
+
+int
+rte_vdpa_register_engine(const char *name, struct rte_vdpa_eng_id *id)
+{
+	static int engine_idx;
+
+	struct rte_vdpa_engine *eng;
+	struct rte_vdpa_eng_driver *cur;
+	char engine_name[MAX_VDPA_NAME_LEN];
+	int i, j;
+
+	/* return the existing engine id if already registered */
+	for (i = 0; i < MAX_VDPA_ENGINE_NUM; ++i) {
+		eng = vdpa_engines[i];
+		if (eng && 0 == strncmp(eng->eng_attr.name, name,
+					MAX_VDPA_NAME_LEN)
+				&& eng->eng_attr.id == id) {
+			return i;
+		}
+	}
+
+	snprintf(engine_name, sizeof(engine_name), "vdpa-%s-%d",
+			name, engine_idx++);
+	eng = rte_zmalloc(engine_name, sizeof(struct rte_vdpa_engine),
+			RTE_CACHE_LINE_SIZE);
+	if (!eng)
+		return -1;
+
+	/* find a matching driver, then the first free engine slot */
+	for (i = 0; i < MAX_VDPA_ENGINE_NUM; ++i) {
+		cur = vdpa_drivers[i];
+		if (cur && 0 == strncmp(name, cur->name,
+					MAX_VDPA_NAME_LEN)) {
+			eng->eng_drv = cur;
+			strncpy(eng->eng_attr.name, name,
+					MAX_VDPA_NAME_LEN - 1);
+			eng->eng_attr.id = id;
+			for (j = 0; j < MAX_VDPA_ENGINE_NUM; ++j) {
+				if (vdpa_engines[j])
+					continue;
+				vdpa_engines[j] = eng;
+				if (eng->eng_drv->eng_ops.eng_init)
+					eng->eng_drv->eng_ops.eng_init(j, id);
+				vdpa_engine_num++;
+				return j;
+			}
+		}
+	}
+
+	rte_free(eng);
+
+	return -1;
+}
+
+int
+rte_vdpa_unregister_engine(int eid)
+{
+	if (eid < 0 || eid >= MAX_VDPA_ENGINE_NUM ||
+			vdpa_engines[eid] == NULL)
+		return -1;
+
+	if (vdpa_engines[eid]->eng_drv->eng_ops.eng_uninit)
+		vdpa_engines[eid]->eng_drv->eng_ops.eng_uninit(eid);
+
+	rte_free(vdpa_engines[eid]);
+	vdpa_engines[eid] = NULL;
+	vdpa_engine_num--;
+
+	return eid;
+}
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 4f8b73a..709b0f9 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -314,6 +314,8 @@ vhost_new_device(void)
 	vhost_devices[i] = dev;
 	dev->vid = i;
 	dev->slave_req_fd = -1;
+	dev->eid = -1;
+	dev->did = -1;
 
 	return i;
 }
@@ -326,11 +328,16 @@ void
 vhost_destroy_device(int vid)
 {
 	struct virtio_net *dev = get_device(vid);
+	int eid;
 
 	if (dev == NULL)
 		return;
 
+	eid = dev->eid;
 	if (dev->flags & VIRTIO_DEV_RUNNING) {
+		if (eid >= 0 && vdpa_engines[eid] &&
+				vdpa_engines[eid]->eng_drv &&
+				vdpa_engines[eid]->eng_drv->dev_ops.dev_close)
+			vdpa_engines[eid]->eng_drv->dev_ops.dev_close(dev->vid);
 		dev->flags &= ~VIRTIO_DEV_RUNNING;
 		dev->notify_ops->destroy_device(vid);
 	}
@@ -610,3 +617,49 @@ rte_vhost_rx_queue_count(int vid, uint16_t qid)
 
 	return *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
 }
+
+int
+rte_vhost_set_vdpa_eid(int vid, int eid)
+{
+	struct virtio_net *dev = get_device(vid);
+
+	if (dev == NULL)
+		return -1;
+
+	dev->eid = eid;
+
+	return eid;
+}
+
+int
+rte_vhost_set_vdpa_did(int vid, int did)
+{
+	struct virtio_net *dev = get_device(vid);
+
+	if (dev == NULL)
+		return -1;
+
+	dev->did = did;
+
+	return did;
+}
+
+int
+rte_vhost_get_vdpa_eid(int vid)
+{
+	struct virtio_net *dev = get_device(vid);
+
+	if (dev == NULL)
+		return -1;
+
+	return dev->eid;
+}
+
+int
+rte_vhost_get_vdpa_did(int vid)
+{
+	struct virtio_net *dev = get_device(vid);
+
+	if (dev == NULL)
+		return -1;
+
+	return dev->did;
+}
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 1cc81c1..f261c1a 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -48,6 +48,7 @@
 #include <rte_rwlock.h>
 
 #include "rte_vhost.h"
+#include "rte_vdpa.h"
 
 /* Used to indicate that the device is running on a data core */
 #define VIRTIO_DEV_RUNNING 1
@@ -253,6 +254,12 @@ struct virtio_net {
 	struct guest_page       *guest_pages;
 
 	int			slave_req_fd;
+
+	/* engine and device id to identify a certain port on a specific
+	 * backend; both stay -1 for the software datapath.
+	 */
+	int			eid;
+	int			did;
 } __rte_cache_aligned;
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 0a4d128..51e6443 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -145,7 +145,13 @@ vhost_user_set_owner(void)
 static int
 vhost_user_reset_owner(struct virtio_net *dev)
 {
+	int eid = dev->eid;
+
 	if (dev->flags & VIRTIO_DEV_RUNNING) {
+		if (eid >= 0 && vdpa_engines[eid] &&
+				vdpa_engines[eid]->eng_drv &&
+				vdpa_engines[eid]->eng_drv->dev_ops.dev_close)
+			vdpa_engines[eid]->eng_drv->dev_ops.dev_close(dev->vid);
 		dev->flags &= ~VIRTIO_DEV_RUNNING;
 		dev->notify_ops->destroy_device(dev->vid);
 	}
@@ -186,6 +192,7 @@ static int
 vhost_user_set_features(struct virtio_net *dev, uint64_t features)
 {
 	uint64_t vhost_features = 0;
+	int eid = dev->eid;
 
 	rte_vhost_driver_get_features(dev->ifname, &vhost_features);
 	if (features & ~vhost_features) {
@@ -200,6 +207,11 @@ vhost_user_set_features(struct virtio_net *dev, uint64_t features)
 			dev->notify_ops->features_changed(dev->vid, features);
 	}
 
+	if (eid >= 0 && vdpa_engines[eid] &&
+			vdpa_engines[eid]->eng_drv &&
+			vdpa_engines[eid]->eng_drv->dev_ops.feature_set)
+		vdpa_engines[eid]->eng_drv->dev_ops.feature_set(dev->vid);
+
 	dev->features = features;
 	if (dev->features &
 		((1 << VIRTIO_NET_F_MRG_RXBUF) | (1ULL << VIRTIO_F_VERSION_1))) {
@@ -813,9 +825,14 @@ vhost_user_get_vring_base(struct virtio_net *dev,
 			  VhostUserMsg *msg)
 {
 	struct vhost_virtqueue *vq = dev->virtqueue[msg->payload.state.index];
+	int eid = dev->eid;
 
 	/* We have to stop the queue (virtio) if it is running. */
 	if (dev->flags & VIRTIO_DEV_RUNNING) {
+		if (eid >= 0 && vdpa_engines[eid] &&
+				vdpa_engines[eid]->eng_drv &&
+				vdpa_engines[eid]->eng_drv->dev_ops.dev_close)
+			vdpa_engines[eid]->eng_drv->dev_ops.dev_close(dev->vid);
 		dev->flags &= ~VIRTIO_DEV_RUNNING;
 		dev->notify_ops->destroy_device(dev->vid);
 	}
@@ -858,16 +875,24 @@ vhost_user_set_vring_enable(struct virtio_net *dev,
 			    VhostUserMsg *msg)
 {
 	int enable = (int)msg->payload.state.num;
+	int index = (int)msg->payload.state.index;
+	int eid = dev->eid;
 
 	RTE_LOG(INFO, VHOST_CONFIG,
 		"set queue enable: %d to qp idx: %d\n",
-		enable, msg->payload.state.index);
+		enable, index);
+
+	if (eid >= 0 && vdpa_engines[eid] &&
+			vdpa_engines[eid]->eng_drv &&
+			vdpa_engines[eid]->eng_drv->dev_ops.vring_state_set)
+		vdpa_engines[eid]->eng_drv->dev_ops.vring_state_set(dev->vid,
+				index, enable);
 
 	if (dev->notify_ops->vring_state_changed)
 		dev->notify_ops->vring_state_changed(dev->vid,
-				msg->payload.state.index, enable);
+				index, enable);
 
-	dev->virtqueue[msg->payload.state.index]->enabled = enable;
+	dev->virtqueue[index]->enabled = enable;
 
 	return 0;
 }
@@ -978,6 +1003,7 @@ static int
 vhost_user_send_rarp(struct virtio_net *dev, struct VhostUserMsg *msg)
 {
 	uint8_t *mac = (uint8_t *)&msg->payload.u64;
+	int eid = dev->eid;
 
 	RTE_LOG(DEBUG, VHOST_CONFIG,
 		":: mac: %02x:%02x:%02x:%02x:%02x:%02x\n",
@@ -993,6 +1019,10 @@ vhost_user_send_rarp(struct virtio_net *dev, struct VhostUserMsg *msg)
 	 */
 	rte_smp_wmb();
 	rte_atomic16_set(&dev->broadcast_rarp, 1);
+	if (eid >= 0 && vdpa_engines[eid] &&
+			vdpa_engines[eid]->eng_drv &&
+			vdpa_engines[eid]->eng_drv->dev_ops.migration_done)
+		vdpa_engines[eid]->eng_drv->dev_ops.migration_done(dev->vid);
 
 	return 0;
 }
@@ -1383,8 +1413,18 @@ vhost_user_msg_handler(int vid, int fd)
 					"dequeue zero copy is enabled\n");
 		}
 
-		if (dev->notify_ops->new_device(dev->vid) == 0)
+		if (dev->notify_ops->new_device(vid) == 0)
 			dev->flags |= VIRTIO_DEV_RUNNING;
+
+		struct rte_vdpa_engine *eng;
+		int eid = dev->eid;
+
+		if (eid >= 0) {
+			eng = vdpa_engines[eid];
+			if (eng && eng->eng_drv &&
+					eng->eng_drv->dev_ops.dev_conf)
+				eng->eng_drv->dev_ops.dev_conf(vid);
+		}
 		}
 	}
 
-- 
2.7.5
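
Appended usage sketch (not part of the patch): the snippet below shows how a
hypothetical engine driver and an application might use the registration API
proposed in rte_vdpa.h above. The "dummy" driver name, the callback bodies
and the dummy_bind_port() helper are illustrative assumptions; only the
rte_vdpa_* and rte_vhost_* symbols come from this series.

#include <stdio.h>
#include <rte_common.h>
#include <rte_eal.h>
#include <rte_vhost.h>
#include <rte_vdpa.h>

/* engine-level init: called by rte_vdpa_register_engine() once an engine
 * id has been assigned to this driver/device pair
 */
static int
dummy_eng_init(int eid, struct rte_vdpa_eng_id *id __rte_unused)
{
	printf("dummy vdpa engine ready, eid %d\n", eid);
	return 0;
}

/* device-level configuration: called from vhost_user_msg_handler() once
 * the vhost port becomes ready; a real backend would program its hardware
 * datapath for this port here
 */
static int
dummy_dev_conf(int vid)
{
	printf("configure accelerated datapath for vhost port %d\n", vid);
	return 0;
}

static struct rte_vdpa_eng_driver dummy_vdpa_drv = {
	.name = "dummy",
	.eng_ops = { .eng_init = dummy_eng_init },
	.dev_ops = { .dev_conf = dummy_dev_conf },
};

/* constructor-time driver registration through the proposed macro */
RTE_VDPA_REGISTER_DRIVER(dummy, dummy_vdpa_drv);

/* application side: register an engine instance (e.g. keyed by a PCI
 * address filled into id->pci_addr) and bind a vhost port to it
 */
static int
dummy_bind_port(int vid, struct rte_vdpa_eng_id *id)
{
	int eid = rte_vdpa_register_engine("dummy", id);

	if (eid < 0)
		return -1;

	return rte_vhost_set_vdpa_eid(vid, eid);
}

If no engine is registered or binding fails, eid stays -1 and the port keeps
the existing software datapath, matching the default described in the commit
message.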