From: Yuan Wang <yuanx.wang@intel.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com,
Sunil.Pai.G@intel.com, jiayu.hu@intel.com, xuan.ding@intel.com,
cheng1.jiang@intel.com, wenwux.ma@intel.com,
yvonnex.yang@intel.com, Yuan Wang <yuanx.wang@intel.com>
Subject: [dpdk-dev] [PATCH 2/2] vhost: support thread-safe API for clearing in-flight packets in async vhost
Date: Thu, 9 Sep 2021 06:58:07 +0000
Message-ID: <20210909065807.812145-3-yuanx.wang@intel.com>
In-Reply-To: <20210909065807.812145-1-yuanx.wang@intel.com>
This patch adds a thread-safe version of the API for clearing
in-flight packets in async vhost.
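
For illustration only (not part of the patch), a minimal usage sketch; the
burst size and the helper below are placeholders, and callers must build with
ALLOW_EXPERIMENTAL_API since the API is experimental:

  #include <stdint.h>
  #include <rte_mbuf.h>
  #include <rte_vhost_async.h>

  #define CLEAR_BURST 32	/* placeholder burst size */

  /* Drain completed async packets for one queue and free them.
   * Safe to call from a thread other than the datapath thread.
   */
  static void
  drain_async_completions(int vid, uint16_t queue_id)
  {
  	struct rte_mbuf *pkts[CLEAR_BURST];
  	uint16_t n, i;

  	n = rte_vhost_clear_queue(vid, queue_id, pkts, CLEAR_BURST);
  	for (i = 0; i < n; i++)
  		rte_pktmbuf_free(pkts[i]);
  }
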
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
lib/vhost/rte_vhost_async.h | 21 +++++++++++++++++++++
lib/vhost/version.map | 1 +
lib/vhost/virtio_net.c | 36 ++++++++++++++++++++++++++++++++++++
3 files changed, 58 insertions(+)
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index 5e2429ab70..a418e0a03d 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -261,6 +261,27 @@ int rte_vhost_async_get_inflight(int vid, uint16_t queue_id);
__rte_experimental
uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
struct rte_mbuf **pkts, uint16_t count);
+
+/**
+ * This function checks async completion status and clears the completed
+ * packets for a specific vhost device queue. In-flight packets whose
+ * copies have completed are returned in the given array.
+ *
+ * @param vid
+ * ID of vhost device to clear data
+ * @param queue_id
+ * Queue id to clear data
+ * @param pkts
+ * Blank array to receive the pointers of completed packets
+ * @param count
+ * Size of the packet array
+ * @return
+ * Number of packets returned
+ */
+__rte_experimental
+uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id,
+ struct rte_mbuf **pkts, uint16_t count);
+
/**
* This function tries to receive packets from the guest with offloading
* copies to the async channel. The packets that are transfer completed
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 3d566a6d5f..f78cc89b58 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -88,4 +88,5 @@ EXPERIMENTAL {
# added in 21.11
rte_vhost_async_try_dequeue_burst;
+ rte_vhost_clear_queue;
};
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 7f6183a929..51693a7c35 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2142,6 +2142,42 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
return n_pkts_cpl;
}
+uint16_t
+rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count)
+{
+ struct virtio_net *dev = get_device(vid);
+ struct vhost_virtqueue *vq;
+ uint16_t n_pkts_cpl;
+
+ if (!dev)
+ return 0;
+
+ VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
+
+ vq = dev->virtqueue[queue_id];
+
+ if (unlikely(!vq->async_registered)) {
+ VHOST_LOG_DATA(ERR, "(%d) %s: async not registered for queue id %d.\n",
+ dev->vid, __func__, queue_id);
+ return 0;
+ }
+
+ if (!rte_spinlock_trylock(&vq->access_lock)) {
+ VHOST_LOG_CONFIG(ERR, "Failed to clear async queue, virt queue busy.\n");
+ return 0;
+ }
+
+ if ((queue_id % 2) == 0)
+ n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
+ else
+ n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, queue_id, pkts, count,
+ dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS);
+
+ rte_spinlock_unlock(&vq->access_lock);
+
+ return n_pkts_cpl;
+}
+
static __rte_always_inline uint32_t
virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
struct rte_mbuf **pkts, uint32_t count)
--
2.25.1
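
Usage note (editorial, not part of the patch): unlike
rte_vhost_clear_queue_thread_unsafe(), the new API takes the virtqueue access
lock with a trylock and returns 0 when the queue is currently held by another
thread, so a return of 0 can mean either "queue busy" or "no completed
packets". A control thread can retry with a bounded back-off, for example
(the helper name and retry bound below are placeholders):

  #include <rte_cycles.h>	/* rte_delay_us_sleep() */
  #include <rte_mbuf.h>
  #include <rte_vhost_async.h>

  /* Retry clearing one queue with a bounded back-off.
   * Returns the number of completed packets placed in pkts[].
   */
  static uint16_t
  clear_queue_retry(int vid, uint16_t queue_id,
  		struct rte_mbuf **pkts, uint16_t count)
  {
  	int retries = 8;	/* arbitrary bound */
  	uint16_t n = 0;

  	while (retries-- > 0) {
  		n = rte_vhost_clear_queue(vid, queue_id, pkts, count);
  		if (n > 0)
  			break;			/* got completed packets */
  		rte_delay_us_sleep(10);		/* busy or idle; back off */
  	}
  	return n;
  }
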
Thread overview: 18+ messages
2021-09-09 6:58 [dpdk-dev] [PATCH 0/2] support to clear in-flight packets for async Yuan Wang
2021-09-09 6:58 ` [dpdk-dev] [PATCH 1/2] vhost: support to clear in-flight packets for async dequeue Yuan Wang
2021-09-15 7:02 ` Xia, Chenbo
2021-09-22 2:18 ` Yang, YvonneX
2021-09-09 6:58 ` Yuan Wang [this message]
2021-09-15 7:23 ` [dpdk-dev] [PATCH 2/2] vhost: support thread-safe API for clearing in-flight packets in async vhost Xia, Chenbo
2021-09-22 2:17 ` Yang, YvonneX
2021-09-15 7:00 ` [dpdk-dev] [PATCH 0/2] support to clear in-flight packets for async Xia, Chenbo
2021-09-17 8:12 ` [dpdk-dev] [PATCH v2 " Yuan Wang
2021-09-17 8:12 ` [dpdk-dev] [PATCH v2 1/2] vhost: support to clear in-flight packets for async dequeue Yuan Wang
2021-09-17 8:12 ` [dpdk-dev] [PATCH v2 2/2] vhost: add thread-safe API for clearing in-flight packets in async vhost Yuan Wang
2021-09-22 2:19 ` [dpdk-dev] [PATCH 0/2] support to clear in-flight packets for async Yang, YvonneX
2021-09-22 8:55 ` [dpdk-dev] [PATCH v3 " Yuan Wang
2021-09-22 8:55 ` [dpdk-dev] [PATCH v3 1/2] vhost: support to clear in-flight packets for async dequeue Yuan Wang
2021-09-23 2:43 ` Yang, YvonneX
2021-09-22 8:55 ` [dpdk-dev] [PATCH v3 2/2] vhost: add thread-safe API for clearing in-flight packets in async vhost Yuan Wang
2021-09-23 2:43 ` Yang, YvonneX
2021-09-30 6:54 ` Ding, Xuan