From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, cheng1.jiang@intel.com,
sunil.pai.g@intel.com, liangma@liangbit.com,
Xuan Ding <xuan.ding@intel.com>, Yuan Wang <yuanx.wang@intel.com>
Subject: [PATCH v2 4/5] vhost: support async dequeue for split ring
Date: Mon, 11 Apr 2022 10:00:31 +0000
Message-ID: <20220411100032.114434-5-xuan.ding@intel.com>
In-Reply-To: <20220411100032.114434-1-xuan.ding@intel.com>
From: Xuan Ding <xuan.ding@intel.com>
This patch implements the asynchronous dequeue data path for the vhost
split ring. A new API, rte_vhost_async_try_dequeue_burst(), is introduced.
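As an illustration only (not part of this patch), a data-path loop could
poll the new API roughly as in the sketch below; the helper name and burst
size are made up, and it assumes async has already been enabled on the
virtqueue with rte_vhost_async_channel_register() and the DMA device has
been configured with rte_vhost_async_dma_configure():

    #include <rte_mbuf.h>
    #include <rte_vhost_async.h>

    #define BURST_SZ 32

    /* Poll one virtqueue once: completed copies are returned as mbufs,
     * copies still in flight are picked up by a later call.
     */
    static inline uint16_t
    poll_async_dequeue(int vid, uint16_t queue_id, struct rte_mempool *mp,
            uint16_t dma_id, uint16_t vchan_id)
    {
        struct rte_mbuf *pkts[BURST_SZ];
        int nr_inflight;
        uint16_t nb_rx;

        nb_rx = rte_vhost_async_try_dequeue_burst(vid, queue_id, mp, pkts,
                BURST_SZ, &nr_inflight, dma_id, vchan_id);

        /* A real application would forward the nb_rx packets here; this
         * sketch just frees them. nr_inflight reports the copies still
         * owned by the DMA engine (-1 on error).
         */
        rte_pktmbuf_free_bulk(pkts, nb_rx);

        return nb_rx;
    }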
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
doc/guides/prog_guide/vhost_lib.rst | 7 +
doc/guides/rel_notes/release_22_07.rst | 4 +
lib/vhost/rte_vhost_async.h | 33 +++
lib/vhost/version.map | 3 +
lib/vhost/virtio_net.c | 335 +++++++++++++++++++++++++
5 files changed, 382 insertions(+)
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 886f8f5e72..40cf315170 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -276,6 +276,13 @@ The following is an overview of some key Vhost API functions:
Clear inflight packets which are submitted to DMA engine in vhost async data
path. Completed packets are returned to applications through ``pkts``.
+* ``rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
+ struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
+ int *nr_inflight, uint16_t dma_id, uint16_t vchan_id)``
+
+ Receives (dequeues) ``count`` packets from the guest to the host in the async
+ data path, and stores them in ``pkts``.
+
Vhost-user Implementations
--------------------------
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 42a5f2d990..422a6673cb 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -55,6 +55,10 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added vhost async dequeue API to receive packets from the guest.**
+
+ Added a vhost async dequeue API which can leverage DMA devices to accelerate
+ receiving packets from the guest.
Removed Items
-------------
diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
index f1293c6a9d..23fe1a7316 100644
--- a/lib/vhost/rte_vhost_async.h
+++ b/lib/vhost/rte_vhost_async.h
@@ -187,6 +187,39 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
__rte_experimental
int rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id);
+/**
+ * This function tries to receive packets from the guest by offloading
+ * copies to the async channel. Packets whose copies have completed are
+ * returned in "pkts". Packets whose copies have been submitted to the
+ * async channel but have not yet completed are called "in-flight packets";
+ * this function does not return them until their copies are completed by
+ * the async channel.
+ *
+ * @param vid
+ * ID of vhost device to dequeue data
+ * @param queue_id
+ * ID of virtqueue to dequeue data
+ * @param mbuf_pool
+ * Mbuf pool from which host mbufs are allocated
+ * @param pkts
+ * Blank array to keep successfully dequeued packets
+ * @param count
+ * Size of the packet array
+ * @param nr_inflight
+ * The number of in-flight packets. If an error occurs, its value is set to -1.
+ * @param dma_id
+ * The identifier of DMA device
+ * @param vchan_id
+ * The identifier of virtual DMA channel
+ * @return
+ * Number of successfully dequeued packets
+ */
+__rte_experimental
+uint16_t
+rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
+ struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
+ int *nr_inflight, uint16_t dma_id, uint16_t vchan_id);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index 0a66c5840c..514e3ff6a6 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -87,6 +87,9 @@ EXPERIMENTAL {
# added in 22.03
rte_vhost_async_dma_configure;
+
+ # added in 22.07
+ rte_vhost_async_try_dequeue_burst;
};
INTERNAL {
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 56904ad9a5..514315ef50 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3166,3 +3166,338 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
return count;
}
+
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev, uint16_t queue_id,
+ struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id,
+ uint16_t vchan_id, bool legacy_ol_flags)
+{
+ uint16_t start_idx, from, i;
+ uint16_t nr_cpl_pkts = 0;
+ struct async_inflight_info *pkts_info;
+ struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
+
+ pkts_info = vq->async->pkts_info;
+
+ vhost_async_dma_check_completed(dev, dma_id, vchan_id, VHOST_DMA_MAX_COPY_COMPLETE);
+
+ start_idx = async_get_first_inflight_pkt_idx(vq);
+
+ from = start_idx;
+ while (vq->async->pkts_cmpl_flag[from] && count--) {
+ vq->async->pkts_cmpl_flag[from] = false;
+ from = (from + 1) & (vq->size - 1);
+ nr_cpl_pkts++;
+ }
+
+ if (nr_cpl_pkts == 0)
+ return 0;
+
+ for (i = 0; i < nr_cpl_pkts; i++) {
+ from = (start_idx + i) & (vq->size - 1);
+ pkts[i] = pkts_info[from].mbuf;
+
+ if (virtio_net_with_host_offload(dev))
+ vhost_dequeue_offload(dev, &pkts_info[from].nethdr, pkts[i],
+ legacy_ol_flags);
+ }
+
+ /* write back completed descs to used ring and update used idx */
+ write_back_completed_descs_split(vq, nr_cpl_pkts);
+ __atomic_add_fetch(&vq->used->idx, nr_cpl_pkts, __ATOMIC_RELEASE);
+ vhost_vring_call_split(dev, vq);
+
+ vq->async->pkts_inflight_n -= nr_cpl_pkts;
+
+ return nr_cpl_pkts;
+}
+
+static __rte_always_inline uint16_t
+virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
+ uint16_t queue_id, struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
+ uint16_t count, uint16_t dma_id, uint16_t vchan_id, bool legacy_ol_flags)
+{
+ static bool allocerr_warned;
+ bool dropped = false;
+ uint16_t free_entries;
+ uint16_t pkt_idx, slot_idx = 0;
+ uint16_t nr_done_pkts = 0;
+ uint16_t pkt_err = 0;
+ uint16_t n_xfer;
+ struct vhost_async *async = vq->async;
+ struct async_inflight_info *pkts_info = async->pkts_info;
+ struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST];
+ uint16_t pkts_size = count;
+
+ /**
+ * The ordering between avail index and
+ * desc reads needs to be enforced.
+ */
+ free_entries = __atomic_load_n(&vq->avail->idx, __ATOMIC_ACQUIRE) - vq->last_avail_idx;
+ if (free_entries == 0)
+ goto out;
+
+ rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
+
+ async_iter_reset(async);
+
+ count = RTE_MIN(count, MAX_PKT_BURST);
+ count = RTE_MIN(count, free_entries);
+ VHOST_LOG_DATA(DEBUG, "(%s) about to dequeue %u buffers\n", dev->ifname, count);
+
+ if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count))
+ goto out;
+
+ for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
+ uint16_t head_idx = 0;
+ uint16_t nr_vec = 0;
+ uint16_t to;
+ uint32_t buf_len;
+ int err;
+ struct buf_vector buf_vec[BUF_VECTOR_MAX];
+ struct rte_mbuf *pkt = pkts_prealloc[pkt_idx];
+
+ if (unlikely(fill_vec_buf_split(dev, vq, vq->last_avail_idx,
+ &nr_vec, buf_vec,
+ &head_idx, &buf_len,
+ VHOST_ACCESS_RO) < 0)) {
+ dropped = true;
+ break;
+ }
+
+ err = virtio_dev_pktmbuf_prep(dev, pkt, buf_len);
+ if (unlikely(err)) {
+ /**
+ * mbuf allocation fails for jumbo packets when external
+ * buffer allocation is not allowed and linear buffer
+ * is required. Drop this packet.
+ */
+ if (!allocerr_warned) {
+ VHOST_LOG_DATA(ERR,
+ "(%s) %s: Failed mbuf alloc of size %d from %s\n",
+ dev->ifname, __func__, buf_len, mbuf_pool->name);
+ allocerr_warned = true;
+ }
+ dropped = true;
+ break;
+ }
+
+ slot_idx = (async->pkts_idx + pkt_idx) & (vq->size - 1);
+ err = desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkt, mbuf_pool,
+ legacy_ol_flags, slot_idx, true);
+ if (unlikely(err)) {
+ if (!allocerr_warned) {
+ VHOST_LOG_DATA(ERR,
+ "(%s) %s: Failed to offload copies to async channel.\n",
+ dev->ifname, __func__);
+ allocerr_warned = true;
+ }
+ dropped = true;
+ break;
+ }
+
+ pkts_info[slot_idx].mbuf = pkt;
+
+ /* store used descs */
+ to = async->desc_idx_split & (vq->size - 1);
+ async->descs_split[to].id = head_idx;
+ async->descs_split[to].len = 0;
+ async->desc_idx_split++;
+
+ vq->last_avail_idx++;
+ }
+
+ if (unlikely(dropped))
+ rte_pktmbuf_free_bulk(&pkts_prealloc[pkt_idx], count - pkt_idx);
+
+ n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id, async->pkts_idx,
+ async->iov_iter, pkt_idx);
+
+ async->pkts_inflight_n += n_xfer;
+
+ pkt_err = pkt_idx - n_xfer;
+ if (unlikely(pkt_err)) {
+ VHOST_LOG_DATA(DEBUG,
+ "(%s) %s: failed to transfer data for queue id %d.\n",
+ dev->ifname, __func__, queue_id);
+
+ pkt_idx = n_xfer;
+ /* recover available ring */
+ vq->last_avail_idx -= pkt_err;
+
+ /**
+ * recover async channel copy related structures and free pktmbufs
+ * for error pkts.
+ */
+ async->desc_idx_split -= pkt_err;
+ while (pkt_err-- > 0) {
+ rte_pktmbuf_free(pkts_info[slot_idx & (vq->size - 1)].mbuf);
+ slot_idx--;
+ }
+ }
+
+ async->pkts_idx += pkt_idx;
+ if (async->pkts_idx >= vq->size)
+ async->pkts_idx -= vq->size;
+
+out:
+ /* The DMA device may serve other queues; unconditionally check for completed copies. */
+ nr_done_pkts = async_poll_dequeue_completed_split(dev, queue_id, pkts, pkts_size,
+ dma_id, vchan_id, legacy_ol_flags);
+
+ return nr_done_pkts;
+}
+
+__rte_noinline
+static uint16_t
+virtio_dev_tx_async_split_legacy(struct virtio_net *dev,
+ struct vhost_virtqueue *vq, uint16_t queue_id,
+ struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
+ uint16_t count, uint16_t dma_id, uint16_t vchan_id)
+{
+ return virtio_dev_tx_async_split(dev, vq, queue_id, mbuf_pool,
+ pkts, count, dma_id, vchan_id, true);
+}
+
+__rte_noinline
+static uint16_t
+virtio_dev_tx_async_split_compliant(struct virtio_net *dev,
+ struct vhost_virtqueue *vq, uint16_t queue_id,
+ struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts,
+ uint16_t count, uint16_t dma_id, uint16_t vchan_id)
+{
+ return virtio_dev_tx_async_split(dev, vq, queue_id, mbuf_pool,
+ pkts, count, dma_id, vchan_id, false);
+}
+
+uint16_t
+rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id,
+ struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count,
+ int *nr_inflight, uint16_t dma_id, uint16_t vchan_id)
+{
+ struct virtio_net *dev;
+ struct rte_mbuf *rarp_mbuf = NULL;
+ struct vhost_virtqueue *vq;
+ int16_t success = 1;
+
+ *nr_inflight = -1;
+
+ dev = get_device(vid);
+ if (!dev)
+ return 0;
+
+ if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {
+ VHOST_LOG_DATA(ERR,
+ "(%s) %s: built-in vhost net backend is disabled.\n",
+ dev->ifname, __func__);
+ return 0;
+ }
+
+ if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
+ VHOST_LOG_DATA(ERR,
+ "(%s) %s: invalid virtqueue idx %d.\n",
+ dev->ifname, __func__, queue_id);
+ return 0;
+ }
+
+ if (unlikely(!dma_copy_track[dma_id].vchans ||
+ !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) {
+ VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__,
+ dma_id, vchan_id);
+ return 0;
+ }
+
+ vq = dev->virtqueue[queue_id];
+
+ if (unlikely(rte_spinlock_trylock(&vq->access_lock) == 0))
+ return 0;
+
+ if (unlikely(vq->enabled == 0)) {
+ count = 0;
+ goto out_access_unlock;
+ }
+
+ if (unlikely(!vq->async)) {
+ VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %d.\n",
+ dev->ifname, __func__, queue_id);
+ count = 0;
+ goto out_access_unlock;
+ }
+
+ if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+ vhost_user_iotlb_rd_lock(vq);
+
+ if (unlikely(vq->access_ok == 0))
+ if (unlikely(vring_translate(dev, vq) < 0)) {
+ count = 0;
+ goto out;
+ }
+
+ /*
+ * Construct a RARP broadcast packet and inject it into the "pkts"
+ * array, to make it look like the guest actually sent such a packet.
+ *
+ * Check user_send_rarp() for more information.
+ *
+ * broadcast_rarp shares a cacheline in the virtio_net structure
+ * with some fields that are accessed during enqueue, and
+ * __atomic_compare_exchange_n causes a write if it performs the
+ * compare and exchange. This could result in false sharing between
+ * enqueue and dequeue.
+ *
+ * Prevent unnecessary false sharing by reading broadcast_rarp first
+ * and only performing compare and exchange if the read indicates it
+ * is likely to be set.
+ */
+ if (unlikely(__atomic_load_n(&dev->broadcast_rarp, __ATOMIC_ACQUIRE) &&
+ __atomic_compare_exchange_n(&dev->broadcast_rarp,
+ &success, 0, 0, __ATOMIC_RELEASE, __ATOMIC_RELAXED))) {
+
+ rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
+ if (rarp_mbuf == NULL) {
+ VHOST_LOG_DATA(ERR, "Failed to make RARP packet.\n");
+ count = 0;
+ goto out;
+ }
+ /*
+ * Inject it at the head of the "pkts" array, so that the switch's
+ * MAC learning table gets updated first.
+ */
+ pkts[0] = rarp_mbuf;
+ pkts++;
+ count -= 1;
+ }
+
+ if (unlikely(vq_is_packed(dev))) {
+ static bool not_support_pack_log;
+ if (!not_support_pack_log) {
+ VHOST_LOG_DATA(ERR,
+ "(%s) %s: async dequeue does not support packed ring.\n",
+ dev->ifname, __func__);
+ not_support_pack_log = true;
+ }
+ count = 0;
+ goto out;
+ }
+
+ if (dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS)
+ count = virtio_dev_tx_async_split_legacy(dev, vq, queue_id,
+ mbuf_pool, pkts, count, dma_id, vchan_id);
+ else
+ count = virtio_dev_tx_async_split_compliant(dev, vq, queue_id,
+ mbuf_pool, pkts, count, dma_id, vchan_id);
+
+ *nr_inflight = vq->async->pkts_inflight_n;
+
+out:
+ if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+ vhost_user_iotlb_rd_unlock(vq);
+
+out_access_unlock:
+ rte_spinlock_unlock(&vq->access_lock);
+
+ if (unlikely(rarp_mbuf != NULL))
+ count += 1;
+
+ return count;
+}
--
2.17.1