From: Jin Yu <jin.yu@intel.com>
To: dev@dpdk.org
Cc: changpeng.liu@intel.com, maxime.coquelin@redhat.com,
tiwei.bie@intel.com, zhihong.wang@intel.com,
Jin Yu <jin.yu@intel.com>, Lin Li <lilin24@baidu.com>,
Xun Ni <nixun@baidu.com>, Yu Zhang <zhangyu31@baidu.com>
Subject: [dpdk-dev] [PATCH v11 6/9] vhost: add the APIs to operate inflight ring
Date: Thu, 10 Oct 2019 04:48:34 +0800
Message-ID: <20191009204837.65039-7-jin.yu@intel.com>
In-Reply-To: <20191009204837.65039-1-jin.yu@intel.com>
This patch introduces three APIs to operate the inflight ring:
set, set last and clear. Each API is provided for both the split
and the packed ring.
Signed-off-by: Lin Li <lilin24@baidu.com>
Signed-off-by: Xun Ni <nixun@baidu.com>
Signed-off-by: Yu Zhang <zhangyu31@baidu.com>
Signed-off-by: Jin Yu <jin.yu@intel.com>
---
lib/librte_vhost/rte_vhost.h | 116 +++++++++++
lib/librte_vhost/rte_vhost_version.map | 6 +
lib/librte_vhost/vhost.c | 273 +++++++++++++++++++++++++
3 files changed, 395 insertions(+)
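As an illustration only (not part of the patch), a minimal split-ring
usage sketch follows. It assumes a backend that has just taken the
descriptor chain headed by desc_idx from the avail ring;
complete_one_request() and written_len are hypothetical names, and the
actual I/O processing is elided.

/*
 * Illustrative only: a split-ring completion path using the new APIs.
 * Error handling is trimmed for brevity.
 */
#include <stdint.h>
#include <rte_vhost.h>

static int
complete_one_request(int vid, uint16_t vring_idx, uint16_t desc_idx,
		     uint32_t written_len)
{
	struct rte_vhost_vring vring;
	uint16_t used_slot;

	if (rte_vhost_get_vhost_vring(vid, vring_idx, &vring) != 0)
		return -1;

	/* 1. Mark the descriptor as inflight as soon as it is taken from
	 *    the avail ring, so a restarted backend can resubmit it.
	 */
	if (rte_vhost_set_inflight_desc_split(vid, vring_idx, desc_idx) != 0)
		return -1;

	/* ... the request pointed to by desc_idx is processed here ... */

	/* 2. Fill the used ring entry for this descriptor. */
	used_slot = vring.used->idx % vring.size;
	vring.used->ring[used_slot].id = desc_idx;
	vring.used->ring[used_slot].len = written_len;

	/* 3. Record the head of the last batch of used descriptors. */
	rte_vhost_set_last_inflight_io_split(vid, vring_idx, desc_idx);

	/* 4. Publish the new used index, then clear the inflight flag. */
	vring.used->idx++;
	rte_vhost_clr_inflight_desc_split(vid, vring_idx, vring.used->idx,
					  desc_idx);

	/* 5. Notify the guest. */
	return rte_vhost_vring_call(vid, vring_idx);
}

The clear call also records the new used index in the shared inflight
region, so after a crash the backend can tell which requests were still
in flight and resubmit them.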
diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 95e4d720e..15d7e67cd 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -693,6 +693,122 @@ int rte_vhost_get_mem_table(int vid, struct rte_vhost_memory **mem);
int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
struct rte_vhost_vring *vring);
+/**
+ * Set split inflight descriptor.
+ *
+ * This function saves descriptors that have been consumed from the
+ * available ring.
+ *
+ * @param vid
+ * vhost device ID
+ * @param vring_idx
+ * vring index
+ * @param idx
+ * inflight entry index
+ * @return
+ * 0 on success, -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_set_inflight_desc_split(int vid, uint16_t vring_idx,
+ uint16_t idx);
+
+/**
+ * Set packed inflight descriptor and get corresponding inflight entry
+ *
+ * This function saves descriptors that have been consumed.
+ *
+ * @param vid
+ * vhost device ID
+ * @param vring_idx
+ * vring index
+ * @param head
+ * head of descriptors
+ * @param last
+ * last of descriptors
+ * @param inflight_entry
+ * corresponding inflight entry
+ * @return
+ * 0 on success, -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_set_inflight_desc_packed(int vid, uint16_t vring_idx,
+ uint16_t head, uint16_t last, uint16_t *inflight_entry);
+
+/**
+ * Save the head of the last batch of used descriptors.
+ *
+ * @param vid
+ * vhost device ID
+ * @param vring_idx
+ * vring index
+ * @param idx
+ * descriptor entry index
+ * @return
+ * 0 on success, -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_set_last_inflight_io_split(int vid,
+ uint16_t vring_idx, uint16_t idx);
+
+/**
+ * Update the inflight free_head, used_idx and used_wrap_counter.
+ *
+ * This function updates the inflight status before the descriptors
+ * are marked as used.
+ *
+ * @param vid
+ * vhost device ID
+ * @param vring_idx
+ * vring index
+ * @param head
+ * head of descriptors
+ * @return
+ * 0 on success, -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_set_last_inflight_io_packed(int vid,
+ uint16_t vring_idx, uint16_t head);
+
+/**
+ * Clear the split inflight status.
+ *
+ * @param vid
+ * vhost device ID
+ * @param vring_idx
+ * vring index
+ * @param last_used_idx
+ * last used idx of used ring
+ * @param idx
+ * inflight entry index
+ * @return
+ * 0 on success, -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_clr_inflight_desc_split(int vid, uint16_t vring_idx,
+ uint16_t last_used_idx, uint16_t idx);
+
+/**
+ * Clear the packed inflight status.
+ *
+ * @param vid
+ * vhost device ID
+ * @param vring_idx
+ * vring index
+ * @param head
+ * inflight entry index
+ * @return
+ * 0 on success, -1 on failure
+ */
+__rte_experimental
+int
+rte_vhost_clr_inflight_desc_packed(int vid, uint16_t vring_idx,
+ uint16_t head);
+
/**
* Notify the guest that used descriptors have been added to the vring. This
* function acts as a memory barrier.
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 5f1d4a75c..bc70bfaa5 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -87,4 +87,10 @@ EXPERIMENTAL {
rte_vdpa_relay_vring_used;
rte_vhost_extern_callback_register;
rte_vhost_driver_set_protocol_features;
+ rte_vhost_set_inflight_desc_split;
+ rte_vhost_set_inflight_desc_packed;
+ rte_vhost_set_last_inflight_io_split;
+ rte_vhost_set_last_inflight_io_packed;
+ rte_vhost_clr_inflight_desc_split;
+ rte_vhost_clr_inflight_desc_packed;
};
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index f8660edbf..b8c14a6ea 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -783,6 +783,279 @@ rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
return 0;
}
+int
+rte_vhost_set_inflight_desc_split(int vid, uint16_t vring_idx,
+ uint16_t idx)
+{
+ struct vhost_virtqueue *vq;
+ struct virtio_net *dev;
+
+ dev = get_device(vid);
+ if (unlikely(!dev))
+ return -1;
+
+ if (unlikely(!(dev->protocol_features &
+ (1ULL << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD))))
+ return 0;
+
+ if (unlikely(vq_is_packed(dev)))
+ return -1;
+
+ if (unlikely(vring_idx >= VHOST_MAX_VRING))
+ return -1;
+
+ vq = dev->virtqueue[vring_idx];
+ if (unlikely(!vq))
+ return -1;
+
+ if (unlikely(!vq->inflight_split))
+ return -1;
+
+ if (unlikely(idx >= vq->size))
+ return -1;
+
+ vq->inflight_split->desc[idx].counter = vq->global_counter++;
+ vq->inflight_split->desc[idx].inflight = 1;
+ return 0;
+}
+
+int
+rte_vhost_set_inflight_desc_packed(int vid, uint16_t vring_idx,
+ uint16_t head, uint16_t last,
+ uint16_t *inflight_entry)
+{
+ struct rte_vhost_inflight_info_packed *inflight_info;
+ struct virtio_net *dev;
+ struct vhost_virtqueue *vq;
+ struct vring_packed_desc *desc;
+ uint16_t old_free_head, free_head;
+
+ dev = get_device(vid);
+ if (unlikely(!dev))
+ return -1;
+
+ if (unlikely(!(dev->protocol_features &
+ (1ULL << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD))))
+ return 0;
+
+ if (unlikely(!vq_is_packed(dev)))
+ return -1;
+
+ if (unlikely(vring_idx >= VHOST_MAX_VRING))
+ return -1;
+
+ vq = dev->virtqueue[vring_idx];
+ if (unlikely(!vq))
+ return -1;
+
+ inflight_info = vq->inflight_packed;
+ if (unlikely(!inflight_info))
+ return -1;
+
+ if (unlikely(head >= vq->size))
+ return -1;
+
+ desc = vq->desc_packed;
+ old_free_head = inflight_info->old_free_head;
+ if (unlikely(old_free_head >= vq->size))
+ return -1;
+
+ free_head = old_free_head;
+
+ /* init header descriptor */
+ inflight_info->desc[old_free_head].num = 0;
+ inflight_info->desc[old_free_head].counter = vq->global_counter++;
+ inflight_info->desc[old_free_head].inflight = 1;
+
+ /* save desc entry in flight entry */
+ while (head != ((last + 1) % vq->size)) {
+ inflight_info->desc[old_free_head].num++;
+ inflight_info->desc[free_head].addr = desc[head].addr;
+ inflight_info->desc[free_head].len = desc[head].len;
+ inflight_info->desc[free_head].flags = desc[head].flags;
+ inflight_info->desc[free_head].id = desc[head].id;
+
+ inflight_info->desc[old_free_head].last = free_head;
+ free_head = inflight_info->desc[free_head].next;
+ inflight_info->free_head = free_head;
+ head = (head + 1) % vq->size;
+ }
+
+ inflight_info->old_free_head = free_head;
+ *inflight_entry = old_free_head;
+
+ return 0;
+}
+
+int
+rte_vhost_clr_inflight_desc_split(int vid, uint16_t vring_idx,
+ uint16_t last_used_idx, uint16_t idx)
+{
+ struct virtio_net *dev;
+ struct vhost_virtqueue *vq;
+
+ dev = get_device(vid);
+ if (unlikely(!dev))
+ return -1;
+
+ if (unlikely(!(dev->protocol_features &
+ (1ULL << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD))))
+ return 0;
+
+ if (unlikely(vq_is_packed(dev)))
+ return -1;
+
+ if (unlikely(vring_idx >= VHOST_MAX_VRING))
+ return -1;
+
+ vq = dev->virtqueue[vring_idx];
+ if (unlikely(!vq))
+ return -1;
+
+ if (unlikely(!vq->inflight_split))
+ return -1;
+
+ if (unlikely(idx >= vq->size))
+ return -1;
+
+ rte_smp_mb();
+
+ vq->inflight_split->desc[idx].inflight = 0;
+
+ rte_smp_mb();
+
+ vq->inflight_split->used_idx = last_used_idx;
+ return 0;
+}
+
+int
+rte_vhost_clr_inflight_desc_packed(int vid, uint16_t vring_idx,
+ uint16_t head)
+{
+ struct rte_vhost_inflight_info_packed *inflight_info;
+ struct virtio_net *dev;
+ struct vhost_virtqueue *vq;
+
+ dev = get_device(vid);
+ if (unlikely(!dev))
+ return -1;
+
+ if (unlikely(!(dev->protocol_features &
+ (1ULL << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD))))
+ return 0;
+
+ if (unlikely(!vq_is_packed(dev)))
+ return -1;
+
+ if (unlikely(vring_idx >= VHOST_MAX_VRING))
+ return -1;
+
+ vq = dev->virtqueue[vring_idx];
+ if (unlikely(!vq))
+ return -1;
+
+ inflight_info = vq->inflight_packed;
+ if (unlikely(!inflight_info))
+ return -1;
+
+ if (unlikely(head >= vq->size))
+ return -1;
+
+ rte_smp_mb();
+
+ inflight_info->desc[head].inflight = 0;
+
+ rte_smp_mb();
+
+ inflight_info->old_free_head = inflight_info->free_head;
+ inflight_info->old_used_idx = inflight_info->used_idx;
+ inflight_info->old_used_wrap_counter = inflight_info->used_wrap_counter;
+
+ return 0;
+}
+
+int
+rte_vhost_set_last_inflight_io_split(int vid, uint16_t vring_idx,
+ uint16_t idx)
+{
+ struct virtio_net *dev;
+ struct vhost_virtqueue *vq;
+
+ dev = get_device(vid);
+ if (unlikely(!dev))
+ return -1;
+
+ if (unlikely(!(dev->protocol_features &
+ (1ULL << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD))))
+ return 0;
+
+ if (unlikely(vq_is_packed(dev)))
+ return -1;
+
+ if (unlikely(vring_idx >= VHOST_MAX_VRING))
+ return -1;
+
+ vq = dev->virtqueue[vring_idx];
+ if (unlikely(!vq))
+ return -1;
+
+ if (unlikely(!vq->inflight_split))
+ return -1;
+
+ vq->inflight_split->last_inflight_io = idx;
+ return 0;
+}
+
+int
+rte_vhost_set_last_inflight_io_packed(int vid, uint16_t vring_idx,
+ uint16_t head)
+{
+ struct rte_vhost_inflight_info_packed *inflight_info;
+ struct virtio_net *dev;
+ struct vhost_virtqueue *vq;
+ uint16_t last;
+
+ dev = get_device(vid);
+ if (unlikely(!dev))
+ return -1;
+
+ if (unlikely(!(dev->protocol_features &
+ (1ULL << VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD))))
+ return 0;
+
+ if (unlikely(!vq_is_packed(dev)))
+ return -1;
+
+ if (unlikely(vring_idx >= VHOST_MAX_VRING))
+ return -1;
+
+ vq = dev->virtqueue[vring_idx];
+ if (unlikely(!vq))
+ return -1;
+
+ inflight_info = vq->inflight_packed;
+ if (unlikely(!inflight_info))
+ return -1;
+
+ if (unlikely(head >= vq->size))
+ return -1;
+
+ last = inflight_info->desc[head].last;
+ if (unlikely(last >= vq->size))
+ return -1;
+
+ inflight_info->desc[last].next = inflight_info->free_head;
+ inflight_info->free_head = head;
+ inflight_info->used_idx += inflight_info->desc[head].num;
+ if (inflight_info->used_idx >= inflight_info->desc_num) {
+ inflight_info->used_idx -= inflight_info->desc_num;
+ inflight_info->used_wrap_counter =
+ !inflight_info->used_wrap_counter;
+ }
+
+ return 0;
+}
+
int
rte_vhost_vring_call(int vid, uint16_t vring_idx)
{
--
2.17.2
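And the packed-ring counterpart, again only an illustrative sketch with
hypothetical names (complete_one_request_packed(), head_idx, last_idx);
writing back the packed used descriptor itself (id/len/flags) is left
to the backend.

/*
 * Illustrative only: packed-ring completion path, assuming the request
 * occupies the descriptor chain [head_idx .. last_idx] in the ring.
 */
#include <stdint.h>
#include <rte_vhost.h>

static int
complete_one_request_packed(int vid, uint16_t vring_idx,
			    uint16_t head_idx, uint16_t last_idx)
{
	uint16_t inflight_entry;

	/* 1. Save the whole descriptor chain into the inflight region
	 *    and remember which inflight entry tracks it.
	 */
	if (rte_vhost_set_inflight_desc_packed(vid, vring_idx, head_idx,
					       last_idx, &inflight_entry) != 0)
		return -1;

	/* ... the request is processed here ... */

	/* 2. Update the inflight free list, used index and wrap counter
	 *    before the used descriptor is exposed to the guest.
	 */
	rte_vhost_set_last_inflight_io_packed(vid, vring_idx, inflight_entry);

	/* ... write back the packed used descriptor here ... */

	/* 3. Clear the inflight flag once the descriptor is fully used. */
	rte_vhost_clr_inflight_desc_packed(vid, vring_idx, inflight_entry);

	/* 4. Notify the guest. */
	return rte_vhost_vring_call(vid, vring_idx);
}

Note that the set-last and clear calls take the inflight entry returned
by rte_vhost_set_inflight_desc_packed(), not the ring index itself.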