From: "Liu, Yong"
To: "Hu, Jiayu", "dev@dpdk.org"
CC: "maxime.coquelin@redhat.com", "Ye, Xiaolong", "Wang, Zhihong", "Hu, Jiayu"
Subject: Re: [dpdk-dev] [PATCH 2/4] net/vhost: setup vrings for DMA-accelerated datapath
Date: Tue, 17 Mar 2020 06:29:33 +0000
Message-ID: <86228AFD5BCD8E4EBFD2B90117B5E81E6350B013@SHSMSX103.ccr.corp.intel.com>
References: <1584436885-18651-1-git-send-email-jiayu.hu@intel.com>
 <1584436885-18651-3-git-send-email-jiayu.hu@intel.com>
In-Reply-To: <1584436885-18651-3-git-send-email-jiayu.hu@intel.com>

> -----Original Message-----
> From: dev On Behalf Of Jiayu Hu
> Sent: Tuesday, March 17, 2020 5:21 PM
> To: dev@dpdk.org
> Cc: maxime.coquelin@redhat.com; Ye, Xiaolong; Wang, Zhihong; Hu, Jiayu
> Subject: [dpdk-dev] [PATCH 2/4] net/vhost: setup vrings for DMA-accelerated
> datapath
>
> This patch gets vrings' addresses and sets up GPA and HPA mappings
> for offloading large data movement from the CPU to DMA engines in
> vhost-user PMD.
>
> Signed-off-by: Jiayu Hu
> ---
>  drivers/Makefile                  |   2 +-
>  drivers/net/vhost/Makefile        |   4 +-
>  drivers/net/vhost/internal.h      | 141 ++++++++++++++++++++++++++++++++
>  drivers/net/vhost/meson.build     |   3 +-
>  drivers/net/vhost/rte_eth_vhost.c |  56 +------------
>  drivers/net/vhost/virtio_net.c    | 119 +++++++++++++++++++++++++++
>  drivers/net/vhost/virtio_net.h    | 168 ++++++++++++++++++++++++++++++++++++++
>  7 files changed, 438 insertions(+), 55 deletions(-)
>  create mode 100644 drivers/net/vhost/internal.h
>  create mode 100644 drivers/net/vhost/virtio_net.c
>  create mode 100644 drivers/net/vhost/virtio_net.h
>
> diff --git a/drivers/Makefile b/drivers/Makefile
> index c70bdf9..8555ddd 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -9,7 +9,7 @@ DEPDIRS-bus := common
>  DIRS-y += mempool
>  DEPDIRS-mempool := common bus
>  DIRS-y += net
> -DEPDIRS-net := common bus mempool
> +DEPDIRS-net := common bus mempool raw
>  DIRS-$(CONFIG_RTE_LIBRTE_BBDEV) += baseband
>  DEPDIRS-baseband := common bus mempool
>  DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
> diff --git a/drivers/net/vhost/Makefile b/drivers/net/vhost/Makefile
> index 0461e29..19cae52 100644
> --- a/drivers/net/vhost/Makefile
> +++ b/drivers/net/vhost/Makefile
> @@ -15,13 +15,15 @@ LDLIBS += -lrte_bus_vdev
>
>  CFLAGS += -O3
>  CFLAGS += $(WERROR_FLAGS)
> +CFLAGS += -fno-strict-aliasing
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
>
>  EXPORT_MAP := rte_pmd_vhost_version.map
>
>  #
>  # all source are stored in SRCS-y
>  #
> -SRCS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += rte_eth_vhost.c
> +SRCS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += rte_eth_vhost.c virtio_net.c
>
>  #
>  # Export include files
> diff --git a/drivers/net/vhost/internal.h b/drivers/net/vhost/internal.h
> new file mode 100644
> index 0000000..7588fdf
> --- /dev/null
> +++ b/drivers/net/vhost/internal.h
> @@ -0,0 +1,141 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Intel Corporation
> + */
> +#ifndef _INTERNAL_H_
> +#define _INTERNAL_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include
> +#include
> +
> +#include
> +#include
> +#include
> +
> +extern int vhost_logtype;
> +
> +#define VHOST_LOG(level, ...) \
> +	rte_log(RTE_LOG_ ## level, vhost_logtype, __VA_ARGS__)
> +
> +enum vhost_xstats_pkts {
> +	VHOST_UNDERSIZE_PKT = 0,
> +	VHOST_64_PKT,
> +	VHOST_65_TO_127_PKT,
> +	VHOST_128_TO_255_PKT,
> +	VHOST_256_TO_511_PKT,
> +	VHOST_512_TO_1023_PKT,
> +	VHOST_1024_TO_1522_PKT,
> +	VHOST_1523_TO_MAX_PKT,
> +	VHOST_BROADCAST_PKT,
> +	VHOST_MULTICAST_PKT,
> +	VHOST_UNICAST_PKT,
> +	VHOST_ERRORS_PKT,
> +	VHOST_ERRORS_FRAGMENTED,
> +	VHOST_ERRORS_JABBER,
> +	VHOST_UNKNOWN_PROTOCOL,
> +	VHOST_XSTATS_MAX,
> +};
> +
> +struct vhost_stats {
> +	uint64_t pkts;
> +	uint64_t bytes;
> +	uint64_t missed_pkts;
> +	uint64_t xstats[VHOST_XSTATS_MAX];
> +};
> +
> +struct batch_copy_elem {
> +	void *dst;
> +	void *src;
> +	uint32_t len;
> +};
> +
> +struct guest_page {
> +	uint64_t guest_phys_addr;
> +	uint64_t host_phys_addr;
> +	uint64_t size;
> +};
> +
> +struct dma_vring {
> +	struct rte_vhost_vring vr;
> +
> +	uint16_t last_avail_idx;
> +	uint16_t last_used_idx;
> +
> +	/* the last used index that front end can consume */
> +	uint16_t copy_done_used;
> +
> +	uint16_t signalled_used;
> +	bool signalled_used_valid;
> +
> +	struct vring_used_elem *shadow_used_split;
> +	uint16_t shadow_used_idx;
> +
> +	struct batch_copy_elem *batch_copy_elems;
> +	uint16_t batch_copy_nb_elems;
> +
> +	bool dma_enabled;
> +	/**
> +	 * DMA ID. Currently, we only support I/OAT,
> +	 * so it's I/OAT rawdev ID.
> +	 */
> +	uint16_t dev_id;
> +	/* DMA address */
> +	struct rte_pci_addr dma_addr;
> +	/**
> +	 * the number of copy jobs that are submitted to the DMA
> +	 * but may not be completed.
> +	 */
> +	uint64_t nr_inflight;
> +	int nr_batching;

Looks like nr_batching can't be a negative value; please change it to
uint16_t or uint32_t.

> +
> +	/**
> +	 * host physical address of used ring index,
> +	 * used by the DMA.
> +	 */
> +	phys_addr_t used_idx_hpa;
> +};
> +
> +struct vhost_queue {
> +	int vid;
> +	rte_atomic32_t allow_queuing;
> +	rte_atomic32_t while_queuing;
> +	struct pmd_internal *internal;
> +	struct rte_mempool *mb_pool;
> +	uint16_t port;
> +	uint16_t virtqueue_id;
> +	struct vhost_stats stats;
> +	struct dma_vring *dma_vring;
> +};
> +
> +struct pmd_internal {
> +	rte_atomic32_t dev_attached;
> +	char *iface_name;
> +	uint64_t flags;
> +	uint64_t disable_flags;
> +	uint16_t max_queues;
> +	int vid;
> +	rte_atomic32_t started;
> +	uint8_t vlan_strip;
> +
> +	/* guest's memory regions */
> +	struct rte_vhost_memory *mem;
> +	/* guest and host physical address mapping table */
> +	struct guest_page *guest_pages;
> +	uint32_t nr_guest_pages;
> +	uint32_t max_guest_pages;
> +	/* guest's vrings */
> +	struct dma_vring dma_vrings[RTE_MAX_QUEUES_PER_PORT * 2];
> +	uint16_t nr_vrings;
> +	/* negotiated features */
> +	uint64_t features;
> +	size_t hdr_len;
> +};
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _INTERNAL_H_ */
> diff --git a/drivers/net/vhost/meson.build b/drivers/net/vhost/meson.build
> index d793086..b308dcb 100644
> --- a/drivers/net/vhost/meson.build
> +++ b/drivers/net/vhost/meson.build
> @@ -3,6 +3,7 @@
>
>  build = dpdk_conf.has('RTE_LIBRTE_VHOST')
>  reason = 'missing dependency, DPDK vhost library'
> -sources = files('rte_eth_vhost.c')
> +sources = files('rte_eth_vhost.c',
> +		 'virtio_net.c')
>  install_headers('rte_eth_vhost.h')
>  deps += 'vhost'
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index 458ed58..b5c927c 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -16,12 +16,10 @@
>  #include
>  #include
>
> +#include "internal.h"
>  #include "rte_eth_vhost.h"
>
> -static int vhost_logtype;
> -
> -#define VHOST_LOG(level, ...) \
> -	rte_log(RTE_LOG_ ## level, vhost_logtype, __VA_ARGS__)
> +int vhost_logtype;
>
>  enum {VIRTIO_RXQ, VIRTIO_TXQ, VIRTIO_QNUM};
>
> @@ -56,54 +54,6 @@ static struct rte_ether_addr base_eth_addr = {
>  	}
>  };
>
> -enum vhost_xstats_pkts {
> -	VHOST_UNDERSIZE_PKT = 0,
> -	VHOST_64_PKT,
> -	VHOST_65_TO_127_PKT,
> -	VHOST_128_TO_255_PKT,
> -	VHOST_256_TO_511_PKT,
> -	VHOST_512_TO_1023_PKT,
> -	VHOST_1024_TO_1522_PKT,
> -	VHOST_1523_TO_MAX_PKT,
> -	VHOST_BROADCAST_PKT,
> -	VHOST_MULTICAST_PKT,
> -	VHOST_UNICAST_PKT,
> -	VHOST_ERRORS_PKT,
> -	VHOST_ERRORS_FRAGMENTED,
> -	VHOST_ERRORS_JABBER,
> -	VHOST_UNKNOWN_PROTOCOL,
> -	VHOST_XSTATS_MAX,
> -};
> -
> -struct vhost_stats {
> -	uint64_t pkts;
> -	uint64_t bytes;
> -	uint64_t missed_pkts;
> -	uint64_t xstats[VHOST_XSTATS_MAX];
> -};
> -
> -struct vhost_queue {
> -	int vid;
> -	rte_atomic32_t allow_queuing;
> -	rte_atomic32_t while_queuing;
> -	struct pmd_internal *internal;
> -	struct rte_mempool *mb_pool;
> -	uint16_t port;
> -	uint16_t virtqueue_id;
> -	struct vhost_stats stats;
> -};
> -
> -struct pmd_internal {
> -	rte_atomic32_t dev_attached;
> -	char *iface_name;
> -	uint64_t flags;
> -	uint64_t disable_flags;
> -	uint16_t max_queues;
> -	int vid;
> -	rte_atomic32_t started;
> -	uint8_t vlan_strip;
> -};
> -
>  struct internal_list {
>  	TAILQ_ENTRY(internal_list) next;
>  	struct rte_eth_dev *eth_dev;
> @@ -698,6 +648,7 @@ queue_setup(struct rte_eth_dev *eth_dev, struct pmd_internal *internal)
>  		vq->vid = internal->vid;
>  		vq->internal = internal;
>  		vq->port = eth_dev->data->port_id;
> +		vq->dma_vring = &internal->dma_vrings[vq->virtqueue_id];
>  	}
>  	for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
>  		vq = eth_dev->data->tx_queues[i];
> @@ -706,6 +657,7 @@ queue_setup(struct rte_eth_dev *eth_dev, struct pmd_internal *internal)
>  		vq->vid = internal->vid;
>  		vq->internal = internal;
>  		vq->port = eth_dev->data->port_id;
> +		vq->dma_vring = &internal->dma_vrings[vq->virtqueue_id];
>  	}
>  }
>
> diff --git a/drivers/net/vhost/virtio_net.c b/drivers/net/vhost/virtio_net.c
> new file mode 100644
> index 0000000..11591c0
> --- /dev/null
> +++ b/drivers/net/vhost/virtio_net.c
> @@ -0,0 +1,119 @@
> +#include
> +#include
> +#include
> +
> +#include
> +#include
> +
> +#include "virtio_net.h"
> +
> +int
> +vhost_dma_setup(struct pmd_internal *dev)
> +{
> +	struct dma_vring *dma_vr;
> +	int vid = dev->vid;
> +	int ret;
> +	uint16_t i, j, size;
> +
> +	rte_vhost_get_negotiated_features(vid, &dev->features);
> +
> +	if (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF))
> +		dev->hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
> +	else
> +		dev->hdr_len = sizeof(struct virtio_net_hdr);
> +
> +	dev->nr_vrings = rte_vhost_get_vring_num(vid);
> +
> +	if (rte_vhost_get_mem_table(vid, &dev->mem) < 0) {
> +		VHOST_LOG(ERR, "Failed to get guest memory regions\n");
> +		return -1;
> +	}
> +
> +	/* set up gpa and hpa mappings */
> +	if (setup_guest_pages(dev, dev->mem) < 0) {
> +		VHOST_LOG(ERR, "Failed to set up hpa and gpa mappings\n");
> +		free(dev->mem);
> +		return -1;
> +	}
> +
> +	for (i = 0; i < dev->nr_vrings; i++) {
> +		dma_vr = &dev->dma_vrings[i];
> +
> +		ret = rte_vhost_get_vring_base(vid, i, &dma_vr->last_avail_idx,
> +					       &dma_vr->last_used_idx);
> +		if (ret < 0) {
> +			VHOST_LOG(ERR, "Failed to get vring index.\n");
> +			goto err;
> +		}
> +
> +		ret = rte_vhost_get_vhost_vring(vid, i, &dma_vr->vr);
> +		if (ret < 0) {
> +			VHOST_LOG(ERR, "Failed to get vring address.\n");
> +			goto err;
> +		}
> +
> +		size = dma_vr->vr.size;
> +		dma_vr->shadow_used_split =
> +			rte_malloc(NULL, size * sizeof(struct vring_used_elem),
> +				   RTE_CACHE_LINE_SIZE);
> +		if (dma_vr->shadow_used_split == NULL)
> +			goto err;
> +
> +		dma_vr->batch_copy_elems =
> +			rte_malloc(NULL, size * sizeof(struct batch_copy_elem),
> +				   RTE_CACHE_LINE_SIZE);
> +		if (dma_vr->batch_copy_elems == NULL)
> +			goto err;
> +
> +		/* get HPA of used ring's index */
> +		dma_vr->used_idx_hpa =
> +			rte_mem_virt2iova(&dma_vr->vr.used->idx);
> +
> +		dma_vr->copy_done_used = dma_vr->last_used_idx;
> +		dma_vr->signalled_used = dma_vr->last_used_idx;
> +		dma_vr->signalled_used_valid = false;
> +		dma_vr->shadow_used_idx = 0;
> +		dma_vr->batch_copy_nb_elems = 0;
> +	}
> +
> +	return 0;
> +
> +err:
> +	for (j = 0; j <= i; j++) {
> +		dma_vr = &dev->dma_vrings[j];
> +		rte_free(dma_vr->shadow_used_split);
> +		rte_free(dma_vr->batch_copy_elems);
> +		dma_vr->shadow_used_split = NULL;
> +		dma_vr->batch_copy_elems = NULL;
> +		dma_vr->used_idx_hpa = 0;
> +	}
> +
> +	free(dev->mem);
> +	dev->mem = NULL;
> +	free(dev->guest_pages);
> +	dev->guest_pages = NULL;
> +
> +	return -1;
> +}
> +
> +void
> +vhost_dma_remove(struct pmd_internal *dev)
> +{
> +	struct dma_vring *dma_vr;
> +	uint16_t i;
> +
> +	for (i = 0; i < dev->nr_vrings; i++) {
> +		dma_vr = &dev->dma_vrings[i];
> +		rte_free(dma_vr->shadow_used_split);
> +		rte_free(dma_vr->batch_copy_elems);
> +		dma_vr->shadow_used_split = NULL;
> +		dma_vr->batch_copy_elems = NULL;
> +		dma_vr->signalled_used_valid = false;
> +		dma_vr->used_idx_hpa = 0;
> +	}
> +
> +	free(dev->mem);
> +	dev->mem = NULL;
> +	free(dev->guest_pages);
> +	dev->guest_pages = NULL;
> +}
> diff --git a/drivers/net/vhost/virtio_net.h b/drivers/net/vhost/virtio_net.h
> new file mode 100644
> index 0000000..7f99f1d
> --- /dev/null
> +++ b/drivers/net/vhost/virtio_net.h
> @@ -0,0 +1,168 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2020 Intel Corporation
> + */
> +#ifndef _VIRTIO_NET_H_
> +#define _VIRTIO_NET_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include
> +#include
> +#include
> +
> +#include "internal.h"
> +
> +static uint64_t
> +get_blk_size(int fd)
> +{
> +	struct stat stat;
> +	int ret;
> +
> +	ret = fstat(fd,
> +		    &stat);
> +	return ret == -1 ? (uint64_t)-1 : (uint64_t)stat.st_blksize;
> +}
> +
> +static __rte_always_inline int
> +add_one_guest_page(struct pmd_internal *dev, uint64_t guest_phys_addr,
> +		   uint64_t host_phys_addr, uint64_t size)

Jiayu,
We have the same set of functions for gpa to hpa translation in the vhost
library. Can those functions be shared here?

Thanks,
Marvin

> +{
> +	struct guest_page *page, *last_page;
> +	struct guest_page *old_pages;
> +
> +	if (dev->nr_guest_pages == dev->max_guest_pages) {
> +		dev->max_guest_pages *= 2;
> +		old_pages = dev->guest_pages;
> +		dev->guest_pages = realloc(dev->guest_pages,
> +					   dev->max_guest_pages *
> +					   sizeof(*page));
> +		if (!dev->guest_pages) {
> +			VHOST_LOG(ERR, "Cannot realloc guest_pages\n");
> +			free(old_pages);
> +			return -1;
> +		}
> +	}
> +
> +	if (dev->nr_guest_pages > 0) {
> +		last_page = &dev->guest_pages[dev->nr_guest_pages - 1];
> +		/* merge if the two pages are continuous */
> +		if (host_phys_addr == last_page->host_phys_addr +
> +		    last_page->size) {
> +			last_page->size += size;
> +			return 0;
> +		}
> +	}
> +
> +	page = &dev->guest_pages[dev->nr_guest_pages++];
> +	page->guest_phys_addr = guest_phys_addr;
> +	page->host_phys_addr = host_phys_addr;
> +	page->size = size;
> +
> +	return 0;
> +}
> +
> +static __rte_always_inline int
> +add_guest_page(struct pmd_internal *dev, struct rte_vhost_mem_region *reg)
> +{
> +	uint64_t reg_size = reg->size;
> +	uint64_t host_user_addr = reg->host_user_addr;
> +	uint64_t guest_phys_addr = reg->guest_phys_addr;
> +	uint64_t host_phys_addr;
> +	uint64_t size, page_size;
> +
> +	page_size = get_blk_size(reg->fd);
> +	if (page_size == (uint64_t)-1) {
> +		VHOST_LOG(ERR, "Cannot get hugepage size through fstat\n");
> +		return -1;
> +	}
> +
> +	host_phys_addr = rte_mem_virt2iova((void *)(uintptr_t)host_user_addr);
> +	size = page_size - (guest_phys_addr & (page_size - 1));
> +	size = RTE_MIN(size, reg_size);
> +
> +	if
> +	    (add_one_guest_page(dev, guest_phys_addr, host_phys_addr,
> +				size) < 0)
> +		return -1;
> +
> +	host_user_addr += size;
> +	guest_phys_addr += size;
> +	reg_size -= size;
> +
> +	while (reg_size > 0) {
> +		size = RTE_MIN(reg_size, page_size);
> +		host_phys_addr = rte_mem_virt2iova((void *)(uintptr_t)
> +						   host_user_addr);
> +		if (add_one_guest_page(dev, guest_phys_addr, host_phys_addr,
> +				       size) < 0)
> +			return -1;
> +
> +		host_user_addr += size;
> +		guest_phys_addr += size;
> +		reg_size -= size;
> +	}
> +
> +	return 0;
> +}
> +
> +static __rte_always_inline int
> +setup_guest_pages(struct pmd_internal *dev, struct rte_vhost_memory *mem)
> +{
> +	uint32_t nr_regions = mem->nregions;
> +	uint32_t i;
> +
> +	dev->nr_guest_pages = 0;
> +	dev->max_guest_pages = 8;
> +
> +	dev->guest_pages = malloc(dev->max_guest_pages *
> +				  sizeof(struct guest_page));
> +	if (dev->guest_pages == NULL) {
> +		VHOST_LOG(ERR, "(%d) failed to allocate memory "
> +			  "for dev->guest_pages\n", dev->vid);
> +		return -1;
> +	}
> +
> +	for (i = 0; i < nr_regions; i++) {
> +		if (add_guest_page(dev, &mem->regions[i]) < 0)
> +			return -1;
> +	}
> +
> +	return 0;
> +}
> +
> +static __rte_always_inline rte_iova_t
> +gpa_to_hpa(struct pmd_internal *dev, uint64_t gpa, uint64_t size)
> +{
> +	uint32_t i;
> +	struct guest_page *page;
> +
> +	for (i = 0; i < dev->nr_guest_pages; i++) {
> +		page = &dev->guest_pages[i];
> +
> +		if (gpa >= page->guest_phys_addr &&
> +		    gpa + size < page->guest_phys_addr + page->size) {
> +			return gpa - page->guest_phys_addr +
> +			       page->host_phys_addr;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +/**
> + * This function gets front end's memory and vrings information.
> + * In addition, it sets up necessary data structures for enqueue
> + * and dequeue operations.
> + */
> +int vhost_dma_setup(struct pmd_internal *dev);
> +
> +/**
> + * This function destroys front end's information and frees data
> + * structures for enqueue and dequeue operations.
> + */
> +void vhost_dma_remove(struct pmd_internal *dev);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _VIRTIO_NET_H_ */
> --
> 2.7.4