* [PATCH 0/2] support to clear in-flight packets for async @ 2022-04-13 18:27 Yuan Wang 2022-04-13 18:27 ` [PATCH 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang ` (5 more replies) 0 siblings, 6 replies; 22+ messages in thread From: Yuan Wang @ 2022-04-13 18:27 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia Cc: dev, jiayu.hu, xuan.ding, xingguang.he, yvonnex.yang, sunil.pai.g, yuanx.wang These patches add support for clearing in-flight packets for async dequeue and introduce a thread-safe version of this function. Note: the patches depend on the following patches (https://patches.dpdk.org/project/dpdk/patch/20220411100032.114434-5-xuan.ding@intel.com/) (https://patches.dpdk.org/project/dpdk/patch/20220411100032.114434-6-xuan.ding@intel.com/) Yuan Wang (2): vhost: support clear in-flight packets for async dequeue example/vhost: support to clear in-flight packets for async dequeue doc/guides/prog_guide/vhost_lib.rst | 8 ++- doc/guides/rel_notes/release_22_07.rst | 4 ++ examples/vhost/main.c | 3 - lib/vhost/rte_vhost_async.h | 25 ++++++++ lib/vhost/version.map | 1 + lib/vhost/virtio_net.c | 80 +++++++++++++++++++++++++- 6 files changed, 115 insertions(+), 6 deletions(-) -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 1/2] vhost: support clear in-flight packets for async dequeue 2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang @ 2022-04-13 18:27 ` Yuan Wang 2022-04-13 18:27 ` [PATCH 2/2] example/vhost: support to " Yuan Wang ` (4 subsequent siblings) 5 siblings, 0 replies; 22+ messages in thread From: Yuan Wang @ 2022-04-13 18:27 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia Cc: dev, jiayu.hu, xuan.ding, xingguang.he, yvonnex.yang, sunil.pai.g, yuanx.wang rte_vhost_clear_queue_thread_unsafe() supports clearing in-flight packets for async enqueue only. Now that async dequeue is supported, this API should cover async dequeue as well. This patch also adds a thread-safe version of this API; the difference between the two APIs is that the thread-safe one takes the virtqueue access lock. These APIs may be used to clean up packets in the async channel to prevent packet loss when the device state changes or when the device is destroyed. Signed-off-by: Yuan Wang <yuanx.wang@intel.com> --- doc/guides/prog_guide/vhost_lib.rst | 8 ++- doc/guides/rel_notes/release_22_07.rst | 4 ++ lib/vhost/rte_vhost_async.h | 25 ++++++++ lib/vhost/version.map | 1 + lib/vhost/virtio_net.c | 80 +++++++++++++++++++++++++- 5 files changed, 115 insertions(+), 3 deletions(-) diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst index 40cf315170..967c902703 100644 --- a/doc/guides/prog_guide/vhost_lib.rst +++ b/doc/guides/prog_guide/vhost_lib.rst @@ -273,7 +273,13 @@ The following is an overview of some key Vhost API functions: * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, vchan_id)`` - Clear inflight packets which are submitted to DMA engine in vhost async data + Clear in-flight packets which are submitted to async channel in vhost + async data path without performing any locking. Completed packets are + returned to applications through ``pkts``. 
+ +* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, vchan_id)`` + + Clear in-flight packets which are submitted to async channel in vhost async data path. Completed packets are returned to applications through ``pkts``. * ``rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst index 422a6673cb..6340ab9474 100644 --- a/doc/guides/rel_notes/release_22_07.rst +++ b/doc/guides/rel_notes/release_22_07.rst @@ -60,6 +60,10 @@ New Features Added vhost async dequeue API which can leverage DMA devices to accelerate receiving pkts from guest. +* **Added thread-safe version of inflight packet clear API in vhost library.** + Added an API which can clear the inflight packets submitted to + the async channel in a thread-safe manner in the vhost async data path. + Removed Items ------------- diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h index 23fe1a7316..8a0e4849b9 100644 --- a/lib/vhost/rte_vhost_async.h +++ b/lib/vhost/rte_vhost_async.h @@ -166,6 +166,31 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id); +/** + * This function checks async completion status and clear packets for + * a specific vhost device queue. Packets which are inflight will be + * returned in an array. 
+ * + * @param vid + * ID of vhost device to clear data + * @param queue_id + * Queue id to clear data + * @param pkts + * Blank array to get return packet pointer + * @param count + * Size of the packet array + * @param dma_id + * The identifier of the DMA device + * @param vchan_id + * The identifier of virtual DMA channel + * @return + * Number of packets returned + */ +__rte_experimental +uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id, + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, + uint16_t vchan_id); + /** * The DMA vChannels used in asynchronous data path must be configured * first. So this function needs to be called before enabling DMA diff --git a/lib/vhost/version.map b/lib/vhost/version.map index 514e3ff6a6..531c966c03 100644 --- a/lib/vhost/version.map +++ b/lib/vhost/version.map @@ -90,6 +90,7 @@ EXPERIMENTAL { # added in 22.07 rte_vhost_async_try_dequeue_burst; + rte_vhost_clear_queue; }; INTERNAL { diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 514315ef50..d650b291db 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -26,6 +26,11 @@ #define MAX_BATCH_LEN 256 +static __rte_always_inline uint16_t +async_poll_dequeue_completed_split(struct virtio_net *dev, uint16_t queue_id, + struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, + uint16_t vchan_id, bool legacy_ol_flags); + /* DMA device copy operation tracking array. 
*/ struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX]; @@ -2097,7 +2102,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); - if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { + if (unlikely(queue_id >= dev->nr_vring)) { VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", dev->ifname, __func__, queue_id); return 0; @@ -2118,11 +2123,82 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; } - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id); + if (queue_id % 2 == 0) + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, + pkts, count, dma_id, vchan_id); + else { + if (unlikely(vq_is_packed(dev))) + VHOST_LOG_DATA(ERR, + "(%d) %s: async dequeue does not support packed ring.\n", + dev->vid, __func__); + else + n_pkts_cpl = async_poll_dequeue_completed_split(dev, queue_id, pkts, count, + dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS); + } return n_pkts_cpl; } +uint16_t +rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, + uint16_t count, int16_t dma_id, uint16_t vchan_id) +{ + struct virtio_net *dev = get_device(vid); + struct vhost_virtqueue *vq; + uint16_t n_pkts_cpl = 0; + + if (!dev) + return 0; + + VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); + if (unlikely(queue_id >= dev->nr_vring)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", + dev->ifname, __func__, queue_id); + return 0; + } + + vq = dev->virtqueue[queue_id]; + + if (!rte_spinlock_trylock(&vq->access_lock)) { + VHOST_LOG_DATA(ERR, + "(%d) %s: failed to clear async queue id %d, virtqueue busy.\n", + dev->vid, __func__, queue_id); + return 0; + } + + if (unlikely(!vq->async)) { + VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %d.\n", + dev->ifname, __func__, queue_id); + goto out_access_unlock; + } + + if 
(unlikely(!dma_copy_track[dma_id].vchans || + !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__, + dma_id, vchan_id); + goto out_access_unlock; + } + + if (queue_id % 2 == 0) + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, + pkts, count, dma_id, vchan_id); + else { + if (unlikely(vq_is_packed(dev))) + VHOST_LOG_DATA(ERR, + "(%d) %s: async dequeue does not support packed ring.\n", + dev->vid, __func__); + else + n_pkts_cpl = async_poll_dequeue_completed_split(dev, queue_id, pkts, count, + dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS); + } + +out_access_unlock: + rte_spinlock_unlock(&vq->access_lock); + + return n_pkts_cpl; +} + + static __rte_always_inline uint32_t virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 2/2] example/vhost: support to clear in-flight packets for async dequeue 2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang 2022-04-13 18:27 ` [PATCH 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang @ 2022-04-13 18:27 ` Yuan Wang 2022-05-13 16:35 ` [PATCH v2 0/2] support to clear in-flight packets for async Yuan Wang ` (3 subsequent siblings) 5 siblings, 0 replies; 22+ messages in thread From: Yuan Wang @ 2022-04-13 18:27 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia Cc: dev, jiayu.hu, xuan.ding, xingguang.he, yvonnex.yang, sunil.pai.g, yuanx.wang This patch allows vring_state_changed() to clear in-flight dequeue packets. Signed-off-by: Yuan Wang <yuanx.wang@intel.com> --- examples/vhost/main.c | 3 --- 1 file changed, 3 deletions(-) diff --git a/examples/vhost/main.c b/examples/vhost/main.c index d26e40ab73..04e7821322 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -1767,9 +1767,6 @@ vring_state_changed(int vid, uint16_t queue_id, int enable) if (!vdev) return -1; - if (queue_id != VIRTIO_RXQ) - return 0; - if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) { if (!enable) vhost_clear_queue_thread_unsafe(vdev, queue_id); -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 0/2] support to clear in-flight packets for async 2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang 2022-04-13 18:27 ` [PATCH 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang 2022-04-13 18:27 ` [PATCH 2/2] example/vhost: support to " Yuan Wang @ 2022-05-13 16:35 ` Yuan Wang 2022-05-13 16:35 ` [PATCH v2 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang 2022-05-13 16:35 ` [PATCH v2 2/2] example/vhost: support to " Yuan Wang 2022-05-23 16:13 ` [PATCH v3 0/2] support to clear in-flight packets for async Yuan Wang ` (2 subsequent siblings) 5 siblings, 2 replies; 22+ messages in thread From: Yuan Wang @ 2022-05-13 16:35 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia Cc: dev, jiayu.hu, xuan.ding, sunil.pai.g, yuanx.wang These patches add support for clearing in-flight packets for async dequeue and introduce a thread-safe version of this function. Note: the patches depend on the following patches (http://patches.dpdk.org/project/dpdk/patch/20220513025058.12898-5-xuan.ding@intel.com/) (http://patches.dpdk.org/project/dpdk/patch/20220513025058.12898-6-xuan.ding@intel.com/) v1->v2: * Rebase to latest DPDK * Use the thread-safe version in destroy_device() RFC->v1: * Protect vq access with spinlock Yuan Wang (2): vhost: support clear in-flight packets for async dequeue example/vhost: support to clear in-flight packets for async dequeue doc/guides/prog_guide/vhost_lib.rst | 8 ++- doc/guides/rel_notes/release_22_07.rst | 5 ++ examples/vhost/main.c | 26 +++++++-- lib/vhost/rte_vhost_async.h | 25 ++++++++ lib/vhost/version.map | 1 + lib/vhost/virtio_net.c | 80 +++++++++++++++++++++++++- 6 files changed, 137 insertions(+), 8 deletions(-) -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 1/2] vhost: support clear in-flight packets for async dequeue 2022-05-13 16:35 ` [PATCH v2 0/2] support to clear in-flight packets for async Yuan Wang @ 2022-05-13 16:35 ` Yuan Wang 2022-05-13 16:35 ` [PATCH v2 2/2] example/vhost: support to " Yuan Wang 1 sibling, 0 replies; 22+ messages in thread From: Yuan Wang @ 2022-05-13 16:35 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia Cc: dev, jiayu.hu, xuan.ding, sunil.pai.g, yuanx.wang rte_vhost_clear_queue_thread_unsafe() supports clearing in-flight packets for async enqueue only. Now that async dequeue is supported, this API should cover async dequeue as well. This patch also adds a thread-safe version of this API; the difference between the two APIs is that the thread-safe one takes the virtqueue access lock. These APIs may be used to clean up packets in the async channel to prevent packet loss when the device state changes or when the device is destroyed. Signed-off-by: Yuan Wang <yuanx.wang@intel.com> --- doc/guides/prog_guide/vhost_lib.rst | 8 ++- doc/guides/rel_notes/release_22_07.rst | 5 ++ lib/vhost/rte_vhost_async.h | 25 ++++++++ lib/vhost/version.map | 1 + lib/vhost/virtio_net.c | 80 +++++++++++++++++++++++++- 5 files changed, 116 insertions(+), 3 deletions(-) diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst index 09c1c24b48..543d37e4f4 100644 --- a/doc/guides/prog_guide/vhost_lib.rst +++ b/doc/guides/prog_guide/vhost_lib.rst @@ -279,7 +279,13 @@ The following is an overview of some key Vhost API functions: * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, vchan_id)`` - Clear inflight packets which are submitted to DMA engine in vhost async data + Clear in-flight packets which are submitted to async channel in vhost + async data path without performing any locking. Completed packets are + returned to applications through ``pkts``. 
+ +* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, vchan_id)`` + + Clear in-flight packets which are submitted to async channel in vhost async data path. Completed packets are returned to applications through ``pkts``. * ``rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst index 564d88623e..2696deb8bb 100644 --- a/doc/guides/rel_notes/release_22_07.rst +++ b/doc/guides/rel_notes/release_22_07.rst @@ -75,6 +75,11 @@ New Features Added vhost async dequeue API which can leverage DMA devices to accelerate receiving pkts from guest. +* **Added thread-safe version of inflight packet clear API in vhost library.** + + Added an API which can clear the inflight packets submitted to + the async channel in a thread-safe manner in the vhost async data path. + Removed Items ------------- diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h index 2789492e38..2ecaced7d8 100644 --- a/lib/vhost/rte_vhost_async.h +++ b/lib/vhost/rte_vhost_async.h @@ -183,6 +183,31 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id); +/** + * This function checks async completion status and clear packets for + * a specific vhost device queue. Packets which are inflight will be + * returned in an array. 
+ * + * @param vid + * ID of vhost device to clear data + * @param queue_id + * Queue id to clear data + * @param pkts + * Blank array to get return packet pointer + * @param count + * Size of the packet array + * @param dma_id + * The identifier of the DMA device + * @param vchan_id + * The identifier of virtual DMA channel + * @return + * Number of packets returned + */ +__rte_experimental +uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id, + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, + uint16_t vchan_id); + /** * The DMA vChannels used in asynchronous data path must be configured * first. So this function needs to be called before enabling DMA diff --git a/lib/vhost/version.map b/lib/vhost/version.map index 8c7211bf0d..eeaab77695 100644 --- a/lib/vhost/version.map +++ b/lib/vhost/version.map @@ -91,6 +91,7 @@ EXPERIMENTAL { # added in 22.07 rte_vhost_async_get_inflight_thread_unsafe; rte_vhost_async_try_dequeue_burst; + rte_vhost_clear_queue; }; INTERNAL { diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 8290514e65..36e4d80ea8 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -26,6 +26,11 @@ #define MAX_BATCH_LEN 256 +static __rte_always_inline uint16_t +async_poll_dequeue_completed_split(struct virtio_net *dev, struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id, + uint16_t vchan_id, bool legacy_ol_flags); + /* DMA device copy operation tracking array. 
*/ struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX]; @@ -2102,7 +2107,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); - if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { + if (unlikely(queue_id >= dev->nr_vring)) { VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", dev->ifname, __func__, queue_id); return 0; @@ -2123,11 +2128,82 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; } - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id); + if (queue_id % 2 == 0) + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, + pkts, count, dma_id, vchan_id); + else { + if (unlikely(vq_is_packed(dev))) + VHOST_LOG_DATA(ERR, + "(%d) %s: async dequeue does not support packed ring.\n", + dev->vid, __func__); + else + n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count, + dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS); + } + + return n_pkts_cpl; +} + +uint16_t +rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, + uint16_t count, int16_t dma_id, uint16_t vchan_id) +{ + struct virtio_net *dev = get_device(vid); + struct vhost_virtqueue *vq; + uint16_t n_pkts_cpl = 0; + + if (!dev) + return 0; + + VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); + if (unlikely(queue_id >= dev->nr_vring)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", + dev->ifname, __func__, queue_id); + return 0; + } + + vq = dev->virtqueue[queue_id]; + + if (!rte_spinlock_trylock(&vq->access_lock)) { + VHOST_LOG_DATA(ERR, + "(%d) %s: failed to clear async queue id %d, virtqueue busy.\n", + dev->vid, __func__, queue_id); + return 0; + } + + if (unlikely(!vq->async)) { + VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %d.\n", + dev->ifname, __func__, queue_id); + goto out_access_unlock; + } + + if 
(unlikely(!dma_copy_track[dma_id].vchans || + !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__, + dma_id, vchan_id); + goto out_access_unlock; + } + + if (queue_id % 2 == 0) + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, + pkts, count, dma_id, vchan_id); + else { + if (unlikely(vq_is_packed(dev))) + VHOST_LOG_DATA(ERR, + "(%d) %s: async dequeue does not support packed ring.\n", + dev->vid, __func__); + else + n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count, + dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS); + } + +out_access_unlock: + rte_spinlock_unlock(&vq->access_lock); return n_pkts_cpl; } + static __rte_always_inline uint32_t virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v2 2/2] example/vhost: support to clear in-flight packets for async dequeue 2022-05-13 16:35 ` [PATCH v2 0/2] support to clear in-flight packets for async Yuan Wang 2022-05-13 16:35 ` [PATCH v2 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang @ 2022-05-13 16:35 ` Yuan Wang 1 sibling, 0 replies; 22+ messages in thread From: Yuan Wang @ 2022-05-13 16:35 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia Cc: dev, jiayu.hu, xuan.ding, sunil.pai.g, yuanx.wang This patch allows vring_state_changed() to clear in-flight dequeue packets. It also clears the in-flight packets in a thread-safe way in destroy_device(). Signed-off-by: Yuan Wang <yuanx.wang@intel.com> --- examples/vhost/main.c | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/examples/vhost/main.c b/examples/vhost/main.c index d070391727..a97ac23061 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -1537,6 +1537,25 @@ vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id) } } +static void +vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id) +{ + uint16_t n_pkt = 0; + int pkts_inflight; + + uint16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id; + pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id); + + struct rte_mbuf *m_cpl[pkts_inflight]; + + while (pkts_inflight) { + n_pkt = rte_vhost_clear_queue(vdev->vid, queue_id, m_cpl, + pkts_inflight, dma_id, 0); + free_pkts(m_cpl, n_pkt); + pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id); + } +} + /* * Remove a device from the specific data core linked list and from the * main linked list. 
Synchronization occurs through the use of the @@ -1594,13 +1613,13 @@ destroy_device(int vid) vdev->vid); if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) { - vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ); + vhost_clear_queue(vdev, VIRTIO_RXQ); rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ); dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false; } if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) { - vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ); + vhost_clear_queue(vdev, VIRTIO_TXQ); rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ); dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false; } @@ -1759,9 +1778,6 @@ vring_state_changed(int vid, uint16_t queue_id, int enable) if (!vdev) return -1; - if (queue_id != VIRTIO_RXQ) - return 0; - if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) { if (!enable) vhost_clear_queue_thread_unsafe(vdev, queue_id); -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v3 0/2] support to clear in-flight packets for async 2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang ` (2 preceding siblings ...) 2022-05-13 16:35 ` [PATCH v2 0/2] support to clear in-flight packets for async Yuan Wang @ 2022-05-23 16:13 ` Yuan Wang 2022-05-23 16:13 ` [PATCH v3 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang 2022-05-23 16:13 ` [PATCH v3 2/2] example/vhost: support to " Yuan Wang 2022-06-06 17:45 ` [PATCH v4 0/2] support to clear in-flight packets for async Yuan Wang 2022-06-09 17:34 ` [PATCH v5 0/2] support to clear in-flight packets for async Yuan Wang 5 siblings, 2 replies; 22+ messages in thread From: Yuan Wang @ 2022-05-23 16:13 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia Cc: dev, jiayu.hu, xuan.ding, xingguang.he, sunil.pai.g, yuanx.wang These patches add support for clearing in-flight packets for async dequeue and introduce a thread-safe version of this function. v3: - Rebase to latest DPDK v2: - Rebase to latest DPDK - Use the thread-safe version in destroy_device() v1: - Protect vq access with spinlock Yuan Wang (2): vhost: support clear in-flight packets for async dequeue example/vhost: support to clear in-flight packets for async dequeue doc/guides/prog_guide/vhost_lib.rst | 8 ++- doc/guides/rel_notes/release_22_07.rst | 5 ++ examples/vhost/main.c | 26 ++++++-- lib/vhost/rte_vhost_async.h | 25 ++++++++ lib/vhost/version.map | 1 + lib/vhost/virtio_net.c | 82 +++++++++++++++++++++++++- 6 files changed, 139 insertions(+), 8 deletions(-) -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v3 1/2] vhost: support clear in-flight packets for async dequeue 2022-05-23 16:13 ` [PATCH v3 0/2] support to clear in-flight packets for async Yuan Wang @ 2022-05-23 16:13 ` Yuan Wang 2022-05-23 16:13 ` [PATCH v3 2/2] example/vhost: support to " Yuan Wang 1 sibling, 0 replies; 22+ messages in thread From: Yuan Wang @ 2022-05-23 16:13 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia Cc: dev, jiayu.hu, xuan.ding, xingguang.he, sunil.pai.g, yuanx.wang rte_vhost_clear_queue_thread_unsafe() supports clearing in-flight packets for async enqueue only. Now that async dequeue is supported, this API should cover async dequeue as well. This patch also adds a thread-safe version of this API; the difference between the two APIs is that the thread-safe one takes the virtqueue access lock. These APIs may be used to clean up packets in the async channel to prevent packet loss when the device state changes or when the device is destroyed. Signed-off-by: Yuan Wang <yuanx.wang@intel.com> --- doc/guides/prog_guide/vhost_lib.rst | 8 ++- doc/guides/rel_notes/release_22_07.rst | 5 ++ lib/vhost/rte_vhost_async.h | 25 ++++++++ lib/vhost/version.map | 1 + lib/vhost/virtio_net.c | 82 +++++++++++++++++++++++++- 5 files changed, 118 insertions(+), 3 deletions(-) diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst index 680da504c8..a789f0c26f 100644 --- a/doc/guides/prog_guide/vhost_lib.rst +++ b/doc/guides/prog_guide/vhost_lib.rst @@ -288,7 +288,13 @@ The following is an overview of some key Vhost API functions: * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, vchan_id)`` - Clear inflight packets which are submitted to DMA engine in vhost async data + Clear in-flight packets which are submitted to async channel in vhost + async data path without performing any locking. Completed packets are + returned to applications through ``pkts``. 
+ +* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, vchan_id)`` + + Clear in-flight packets which are submitted to async channel in vhost async data path. Completed packets are returned to applications through ``pkts``. * ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct rte_vhost_stat_name *names, unsigned int size)`` diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst index 70b48d0fec..4c8b0c1b21 100644 --- a/doc/guides/rel_notes/release_22_07.rst +++ b/doc/guides/rel_notes/release_22_07.rst @@ -92,6 +92,11 @@ New Features Added vhost async dequeue API which can leverage DMA devices to accelerate receiving pkts from guest. +* **Added thread-safe version of inflight packet clear API in vhost library.** + + Added an API which can clear the inflight packets submitted to + the async channel in a thread-safe manner in the vhost async data path. + Removed Items ------------- diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h index a1e7f674ed..1db2a10124 100644 --- a/lib/vhost/rte_vhost_async.h +++ b/lib/vhost/rte_vhost_async.h @@ -183,6 +183,31 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id); +/** + * This function checks async completion status and clear packets for + * a specific vhost device queue. Packets which are inflight will be + * returned in an array. 
+ * + * @param vid + * ID of vhost device to clear data + * @param queue_id + * Queue id to clear data + * @param pkts + * Blank array to get return packet pointer + * @param count + * Size of the packet array + * @param dma_id + * The identifier of the DMA device + * @param vchan_id + * The identifier of virtual DMA channel + * @return + * Number of packets returned + */ +__rte_experimental +uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id, + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, + uint16_t vchan_id); + /** * The DMA vChannels used in asynchronous data path must be configured * first. So this function needs to be called before enabling DMA diff --git a/lib/vhost/version.map b/lib/vhost/version.map index bc75d4d724..a1ed3a1205 100644 --- a/lib/vhost/version.map +++ b/lib/vhost/version.map @@ -94,6 +94,7 @@ EXPERIMENTAL { rte_vhost_vring_stats_get; rte_vhost_vring_stats_reset; rte_vhost_async_try_dequeue_burst; + rte_vhost_clear_queue; }; INTERNAL { diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 68a26eb17d..a90ae3cb96 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -26,6 +26,11 @@ #define MAX_BATCH_LEN 256 +static __rte_always_inline uint16_t +async_poll_dequeue_completed_split(struct virtio_net *dev, struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, + uint16_t vchan_id, bool legacy_ol_flags); + /* DMA device copy operation tracking array. 
*/ struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX]; @@ -2155,7 +2160,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); - if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { + if (unlikely(queue_id >= dev->nr_vring)) { VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", dev->ifname, __func__, queue_id); return 0; @@ -2182,7 +2187,18 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; } - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id); + if (queue_id % 2 == 0) + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, + pkts, count, dma_id, vchan_id); + else { + if (unlikely(vq_is_packed(dev))) + VHOST_LOG_DATA(ERR, + "(%d) %s: async dequeue does not support packed ring.\n", + dev->vid, __func__); + else + n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count, + dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS); + } vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl); vq->stats.inflight_completed += n_pkts_cpl; @@ -2190,6 +2206,68 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return n_pkts_cpl; } +uint16_t +rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, + uint16_t count, int16_t dma_id, uint16_t vchan_id) +{ + struct virtio_net *dev = get_device(vid); + struct vhost_virtqueue *vq; + uint16_t n_pkts_cpl = 0; + + if (!dev) + return 0; + + VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); + if (unlikely(queue_id >= dev->nr_vring)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", + dev->ifname, __func__, queue_id); + return 0; + } + + vq = dev->virtqueue[queue_id]; + + if (!rte_spinlock_trylock(&vq->access_lock)) { + VHOST_LOG_DATA(ERR, + "(%d) %s: failed to clear async queue id %d, virtqueue busy.\n", + dev->vid, __func__, queue_id); + return 0; + } + + if (unlikely(!vq->async)) { + 
VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %d.\n", + dev->ifname, __func__, queue_id); + goto out_access_unlock; + } + + if (unlikely(!dma_copy_track[dma_id].vchans || + !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__, + dma_id, vchan_id); + goto out_access_unlock; + } + + if (queue_id % 2 == 0) + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, + pkts, count, dma_id, vchan_id); + else { + if (unlikely(vq_is_packed(dev))) + VHOST_LOG_DATA(ERR, + "(%d) %s: async dequeue does not support packed ring.\n", + dev->vid, __func__); + else + n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count, + dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS); + } + + vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl); + vq->stats.inflight_completed += n_pkts_cpl; + +out_access_unlock: + rte_spinlock_unlock(&vq->access_lock); + + return n_pkts_cpl; +} + static __rte_always_inline uint32_t virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v3 2/2] example/vhost: support to clear in-flight packets for async dequeue 2022-05-23 16:13 ` [PATCH v3 0/2] support to clear in-flight packets for async Yuan Wang 2022-05-23 16:13 ` [PATCH v3 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang @ 2022-05-23 16:13 ` Yuan Wang 1 sibling, 0 replies; 22+ messages in thread From: Yuan Wang @ 2022-05-23 16:13 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia Cc: dev, jiayu.hu, xuan.ding, xingguang.he, sunil.pai.g, yuanx.wang This patch allows vring_state_changed() to clear in-flight dequeue packets. It also clears the in-flight packets in a thread-safe way in destroy_device(). Signed-off-by: Yuan Wang <yuanx.wang@intel.com> --- examples/vhost/main.c | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/examples/vhost/main.c b/examples/vhost/main.c index 5bc34b0c52..a66d6d4d18 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -1539,6 +1539,25 @@ vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id) } } +static void +vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id) +{ + uint16_t n_pkt = 0; + int pkts_inflight; + + int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id; + pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id); + + struct rte_mbuf *m_cpl[pkts_inflight]; + + while (pkts_inflight) { + n_pkt = rte_vhost_clear_queue(vdev->vid, queue_id, m_cpl, + pkts_inflight, dma_id, 0); + free_pkts(m_cpl, n_pkt); + pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id); + } +} + /* * Remove a device from the specific data core linked list and from the * main linked list. 
Synchronization occurs through the use of the @@ -1596,13 +1615,13 @@ destroy_device(int vid) vdev->vid); if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) { - vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ); + vhost_clear_queue(vdev, VIRTIO_RXQ); rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ); dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false; } if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) { - vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ); + vhost_clear_queue(vdev, VIRTIO_TXQ); rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ); dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false; } @@ -1761,9 +1780,6 @@ vring_state_changed(int vid, uint16_t queue_id, int enable) if (!vdev) return -1; - if (queue_id != VIRTIO_RXQ) - return 0; - if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) { if (!enable) vhost_clear_queue_thread_unsafe(vdev, queue_id); -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v4 0/2] support to clear in-flight packets for async
  2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang ` (3 preceding siblings ...)
  2022-05-23 16:13 ` [PATCH v3 0/2] support to clear in-flight packets for async Yuan Wang
@ 2022-06-06 17:45 ` Yuan Wang
  2022-06-06 17:45 ` [PATCH v4 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
  2022-06-06 17:45 ` [PATCH v4 2/2] example/vhost: support to " Yuan Wang
  2022-06-09 17:34 ` [PATCH v5 0/2] support to clear in-flight packets for async Yuan Wang
  5 siblings, 2 replies; 22+ messages in thread
From: Yuan Wang @ 2022-06-06 17:45 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia, dev
  Cc: jiayu.hu, xuan.ding, sunil.pai.g, Yuan Wang

These patches add support for clearing in-flight packets on the async
dequeue path and introduce a thread-safe version of the clear function.

v4:
- Rebase to latest DPDK

v3:
- Rebase to latest DPDK

v2:
- Rebase to latest DPDK
- Use the thread-safe version in destroy_device

v1:
- Protect vq access with splitlock

Yuan Wang (2):
  vhost: support clear in-flight packets for async dequeue
  example/vhost: support to clear in-flight packets for async dequeue

 doc/guides/prog_guide/vhost_lib.rst    |  8 ++-
 doc/guides/rel_notes/release_22_07.rst |  5 ++
 examples/vhost/main.c                  | 26 ++++++--
 lib/vhost/rte_vhost_async.h            | 25 ++++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 82 +++++++++++++++++++++++++-
 6 files changed, 139 insertions(+), 8 deletions(-)

-- 
2.25.1

^ permalink raw reply	[flat|nested] 22+ messages in thread
* [PATCH v4 1/2] vhost: support clear in-flight packets for async dequeue
  2022-06-06 17:45 ` [PATCH v4 0/2] support to clear in-flight packets for async Yuan Wang
@ 2022-06-06 17:45 ` Yuan Wang
  2022-06-09  7:06 ` Hu, Jiayu
  2022-06-06 17:45 ` [PATCH v4 2/2] example/vhost: support to " Yuan Wang
  1 sibling, 1 reply; 22+ messages in thread
From: Yuan Wang @ 2022-06-06 17:45 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia, dev
  Cc: jiayu.hu, xuan.ding, sunil.pai.g, Yuan Wang

rte_vhost_clear_queue_thread_unsafe() supports clearing in-flight
packets for async enqueue only. But after adding support for async
dequeue, this API should cover async dequeue too.

This patch also adds a thread-safe version of this API; the difference
between the two APIs is that the thread-safe one takes the virtqueue
access lock.

These APIs may be used to clean up packets in the async channel to
prevent packet loss when the device state changes or when the device
is destroyed.

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  8 ++-
 doc/guides/rel_notes/release_22_07.rst |  5 ++
 lib/vhost/rte_vhost_async.h            | 25 ++++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 82 +++++++++++++++++++++++++-
 5 files changed, 118 insertions(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index cd3f6caa9a..b9545770d0 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -288,7 +288,13 @@ The following is an overview of some key Vhost API functions:

 * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count,
   dma_id, vchan_id)``

-  Clear inflight packets which are submitted to DMA engine in vhost async data
+  Clear in-flight packets which are submitted to async channel in vhost
+  async data path without performing any locking. Completed packets are
+  returned to applications through ``pkts``.
+ +* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, vchan_id)`` + + Clear in-flight packets which are submitted to async channel in vhost async data path. Completed packets are returned to applications through ``pkts``. * ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct rte_vhost_stat_name *names, unsigned int size)`` diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst index c81383f4a3..2ca06b543c 100644 --- a/doc/guides/rel_notes/release_22_07.rst +++ b/doc/guides/rel_notes/release_22_07.rst @@ -147,6 +147,11 @@ New Features Added vhost async dequeue API which can leverage DMA devices to accelerate receiving pkts from guest. +* **Added thread-safe version of inflight packet clear API in vhost library.** + + Added an API which can clear the inflight packets submitted to + the async channel in a thread-safe manner in the vhost async data path. + Removed Items ------------- diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h index a1e7f674ed..1db2a10124 100644 --- a/lib/vhost/rte_vhost_async.h +++ b/lib/vhost/rte_vhost_async.h @@ -183,6 +183,31 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id); +/** + * This function checks async completion status and clear packets for + * a specific vhost device queue. Packets which are inflight will be + * returned in an array. 
+ * + * @param vid + * ID of vhost device to clear data + * @param queue_id + * Queue id to clear data + * @param pkts + * Blank array to get return packet pointer + * @param count + * Size of the packet array + * @param dma_id + * The identifier of the DMA device + * @param vchan_id + * The identifier of virtual DMA channel + * @return + * Number of packets returned + */ +__rte_experimental +uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id, + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, + uint16_t vchan_id); + /** * The DMA vChannels used in asynchronous data path must be configured * first. So this function needs to be called before enabling DMA diff --git a/lib/vhost/version.map b/lib/vhost/version.map index 4880b9a422..9329f88e79 100644 --- a/lib/vhost/version.map +++ b/lib/vhost/version.map @@ -95,6 +95,7 @@ EXPERIMENTAL { rte_vhost_vring_stats_reset; rte_vhost_async_try_dequeue_burst; rte_vhost_driver_get_vdpa_dev_type; + rte_vhost_clear_queue; }; INTERNAL { diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 68a26eb17d..a90ae3cb96 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -26,6 +26,11 @@ #define MAX_BATCH_LEN 256 +static __rte_always_inline uint16_t +async_poll_dequeue_completed_split(struct virtio_net *dev, struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, + uint16_t vchan_id, bool legacy_ol_flags); + /* DMA device copy operation tracking array. 
*/ struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX]; @@ -2155,7 +2160,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); - if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { + if (unlikely(queue_id >= dev->nr_vring)) { VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", dev->ifname, __func__, queue_id); return 0; @@ -2182,7 +2187,18 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; } - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id); + if (queue_id % 2 == 0) + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, + pkts, count, dma_id, vchan_id); + else { + if (unlikely(vq_is_packed(dev))) + VHOST_LOG_DATA(ERR, + "(%d) %s: async dequeue does not support packed ring.\n", + dev->vid, __func__); + else + n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count, + dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS); + } vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl); vq->stats.inflight_completed += n_pkts_cpl; @@ -2190,6 +2206,68 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return n_pkts_cpl; } +uint16_t +rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, + uint16_t count, int16_t dma_id, uint16_t vchan_id) +{ + struct virtio_net *dev = get_device(vid); + struct vhost_virtqueue *vq; + uint16_t n_pkts_cpl = 0; + + if (!dev) + return 0; + + VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); + if (unlikely(queue_id >= dev->nr_vring)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", + dev->ifname, __func__, queue_id); + return 0; + } + + vq = dev->virtqueue[queue_id]; + + if (!rte_spinlock_trylock(&vq->access_lock)) { + VHOST_LOG_DATA(ERR, + "(%d) %s: failed to clear async queue id %d, virtqueue busy.\n", + dev->vid, __func__, queue_id); + return 0; + } + + if (unlikely(!vq->async)) { + 
VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %d.\n", + dev->ifname, __func__, queue_id); + goto out_access_unlock; + } + + if (unlikely(!dma_copy_track[dma_id].vchans || + !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__, + dma_id, vchan_id); + goto out_access_unlock; + } + + if (queue_id % 2 == 0) + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, + pkts, count, dma_id, vchan_id); + else { + if (unlikely(vq_is_packed(dev))) + VHOST_LOG_DATA(ERR, + "(%d) %s: async dequeue does not support packed ring.\n", + dev->vid, __func__); + else + n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count, + dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS); + } + + vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl); + vq->stats.inflight_completed += n_pkts_cpl; + +out_access_unlock: + rte_spinlock_unlock(&vq->access_lock); + + return n_pkts_cpl; +} + static __rte_always_inline uint32_t virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan_id) -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* RE: [PATCH v4 1/2] vhost: support clear in-flight packets for async dequeue 2022-06-06 17:45 ` [PATCH v4 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang @ 2022-06-09 7:06 ` Hu, Jiayu 2022-06-09 7:51 ` Wang, YuanX 0 siblings, 1 reply; 22+ messages in thread From: Hu, Jiayu @ 2022-06-09 7:06 UTC (permalink / raw) To: Wang, YuanX, maxime.coquelin, Xia, Chenbo, dev; +Cc: Ding, Xuan, Pai G, Sunil Hi Yuan, > -----Original Message----- > From: Wang, YuanX <yuanx.wang@intel.com> > Sent: Tuesday, June 7, 2022 1:45 AM > To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; > dev@dpdk.org > Cc: Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan <xuan.ding@intel.com>; Pai > G, Sunil <sunil.pai.g@intel.com>; Wang, YuanX <yuanx.wang@intel.com> > Subject: [PATCH v4 1/2] vhost: support clear in-flight packets for async > dequeue > > rte_vhost_clear_queue_thread_unsafe() supports to clear in-flight packets > for async enqueue only. But after supporting async dequeue, this API should > support async dequeue too. > > This patch also adds the thread-safe version of this API, the difference > between the two API is that thread safety uses lock. > > These APIs maybe used to clean up packets in the async channel to prevent > packet loss when the device state changes or when the device is destroyed. 
> > Signed-off-by: Yuan Wang <yuanx.wang@intel.com> > --- > doc/guides/prog_guide/vhost_lib.rst | 8 ++- > doc/guides/rel_notes/release_22_07.rst | 5 ++ > lib/vhost/rte_vhost_async.h | 25 ++++++++ > lib/vhost/version.map | 1 + > lib/vhost/virtio_net.c | 82 +++++++++++++++++++++++++- > 5 files changed, 118 insertions(+), 3 deletions(-) > > diff --git a/doc/guides/prog_guide/vhost_lib.rst > b/doc/guides/prog_guide/vhost_lib.rst > index cd3f6caa9a..b9545770d0 100644 > --- a/doc/guides/prog_guide/vhost_lib.rst > +++ b/doc/guides/prog_guide/vhost_lib.rst > @@ -288,7 +288,13 @@ The following is an overview of some key Vhost API > functions: > > * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, > dma_id, vchan_id)`` > > - Clear inflight packets which are submitted to DMA engine in vhost async > data > + Clear in-flight packets which are submitted to async channel in vhost > + async data path without performing any locking. Completed packets are Better specify "without performing locking on virtqueue". > + returned to applications through ``pkts``. > + > +* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, > +vchan_id)`` > + > + Clear in-flight packets which are submitted to async channel in vhost > + async data > path. Completed packets are returned to applications through ``pkts``. > > * ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct > rte_vhost_stat_name *names, unsigned int size)`` diff --git > a/doc/guides/rel_notes/release_22_07.rst > b/doc/guides/rel_notes/release_22_07.rst > index c81383f4a3..2ca06b543c 100644 > --- a/doc/guides/rel_notes/release_22_07.rst > +++ b/doc/guides/rel_notes/release_22_07.rst > @@ -147,6 +147,11 @@ New Features > Added vhost async dequeue API which can leverage DMA devices to > accelerate receiving pkts from guest. 
> > +* **Added thread-safe version of inflight packet clear API in vhost > +library.** > + > + Added an API which can clear the inflight packets submitted to the > + async channel in a thread-safe manner in the vhost async data path. > + > Removed Items > ------------- > > diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h index > a1e7f674ed..1db2a10124 100644 > --- a/lib/vhost/rte_vhost_async.h > +++ b/lib/vhost/rte_vhost_async.h > @@ -183,6 +183,31 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int > vid, uint16_t queue_id, > struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, > uint16_t vchan_id); > > +/** > + * This function checks async completion status and clear packets for > + * a specific vhost device queue. Packets which are inflight will be > + * returned in an array. > + * > + * @param vid > + * ID of vhost device to clear data > + * @param queue_id > + * Queue id to clear data > + * @param pkts > + * Blank array to get return packet pointer > + * @param count > + * Size of the packet array > + * @param dma_id > + * The identifier of the DMA device > + * @param vchan_id > + * The identifier of virtual DMA channel > + * @return > + * Number of packets returned > + */ > +__rte_experimental > +uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id, > + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, > + uint16_t vchan_id); > + > /** > * The DMA vChannels used in asynchronous data path must be configured > * first. 
So this function needs to be called before enabling DMA diff --git > a/lib/vhost/version.map b/lib/vhost/version.map index > 4880b9a422..9329f88e79 100644 > --- a/lib/vhost/version.map > +++ b/lib/vhost/version.map > @@ -95,6 +95,7 @@ EXPERIMENTAL { > rte_vhost_vring_stats_reset; > rte_vhost_async_try_dequeue_burst; > rte_vhost_driver_get_vdpa_dev_type; > + rte_vhost_clear_queue; > }; > > INTERNAL { > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index > 68a26eb17d..a90ae3cb96 100644 > --- a/lib/vhost/virtio_net.c > +++ b/lib/vhost/virtio_net.c > @@ -26,6 +26,11 @@ > > #define MAX_BATCH_LEN 256 > > +static __rte_always_inline uint16_t > +async_poll_dequeue_completed_split(struct virtio_net *dev, struct > vhost_virtqueue *vq, > + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, > + uint16_t vchan_id, bool legacy_ol_flags); > + > /* DMA device copy operation tracking array. */ struct async_dma_info > dma_copy_track[RTE_DMADEV_DEFAULT_MAX]; > > @@ -2155,7 +2160,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, > uint16_t queue_id, > return 0; > > VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); > - if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { > + if (unlikely(queue_id >= dev->nr_vring)) { > VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue > idx %d.\n", > dev->ifname, __func__, queue_id); > return 0; > @@ -2182,7 +2187,18 @@ rte_vhost_clear_queue_thread_unsafe(int vid, > uint16_t queue_id, > return 0; > } > > - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, > count, dma_id, vchan_id); > + if (queue_id % 2 == 0) Replace "%" with "&" should help the performance a bit. 
> + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, > + pkts, count, dma_id, vchan_id); > + else { > + if (unlikely(vq_is_packed(dev))) > + VHOST_LOG_DATA(ERR, > + "(%d) %s: async dequeue does not > support packed ring.\n", > + dev->vid, __func__); > + else > + n_pkts_cpl = > async_poll_dequeue_completed_split(dev, vq, pkts, count, > + dma_id, vchan_id, dev->flags & > VIRTIO_DEV_LEGACY_OL_FLAGS); > + } > > vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl); > vq->stats.inflight_completed += n_pkts_cpl; @@ -2190,6 +2206,68 > @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, > return n_pkts_cpl; > } > > +uint16_t > +rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, > + uint16_t count, int16_t dma_id, uint16_t vchan_id) { > + struct virtio_net *dev = get_device(vid); > + struct vhost_virtqueue *vq; > + uint16_t n_pkts_cpl = 0; > + > + if (!dev) > + return 0; > + > + VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); > + if (unlikely(queue_id >= dev->nr_vring)) { > + VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue > idx %d.\n", > + dev->ifname, __func__, queue_id); > + return 0; > + } > + > + vq = dev->virtqueue[queue_id]; > + > + if (!rte_spinlock_trylock(&vq->access_lock)) { > + VHOST_LOG_DATA(ERR, > + "(%d) %s: failed to clear async queue id %d, > virtqueue busy.\n", > + dev->vid, __func__, queue_id); > + return 0; > + } Failing to acquire the lock shouldn't be treated as an error. > + > + if (unlikely(!vq->async)) { > + VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for > queue id %d.\n", > + dev->ifname, __func__, queue_id); > + goto out_access_unlock; > + } > + > + if (unlikely(!dma_copy_track[dma_id].vchans || > + > !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { > + VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", > dev->ifname, __func__, > + dma_id, vchan_id); > + goto out_access_unlock; > + } Also need to check if dma_id is valid. > + > + if (queue_id % 2 == 0) Ditto. 
Thanks, Jiayu ^ permalink raw reply [flat|nested] 22+ messages in thread
* RE: [PATCH v4 1/2] vhost: support clear in-flight packets for async dequeue 2022-06-09 7:06 ` Hu, Jiayu @ 2022-06-09 7:51 ` Wang, YuanX 0 siblings, 0 replies; 22+ messages in thread From: Wang, YuanX @ 2022-06-09 7:51 UTC (permalink / raw) To: Hu, Jiayu, maxime.coquelin, Xia, Chenbo, dev; +Cc: Ding, Xuan, Pai G, Sunil Hi Jiayu, > -----Original Message----- > From: Hu, Jiayu <jiayu.hu@intel.com> > Sent: Thursday, June 9, 2022 3:06 PM > To: Wang, YuanX <yuanx.wang@intel.com>; maxime.coquelin@redhat.com; > Xia, Chenbo <chenbo.xia@intel.com>; dev@dpdk.org > Cc: Ding, Xuan <xuan.ding@intel.com>; Pai G, Sunil <sunil.pai.g@intel.com> > Subject: RE: [PATCH v4 1/2] vhost: support clear in-flight packets for async > dequeue > > Hi Yuan, > > > -----Original Message----- > > From: Wang, YuanX <yuanx.wang@intel.com> > > Sent: Tuesday, June 7, 2022 1:45 AM > > To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; > > dev@dpdk.org > > Cc: Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan <xuan.ding@intel.com>; > > Pai G, Sunil <sunil.pai.g@intel.com>; Wang, YuanX > > <yuanx.wang@intel.com> > > Subject: [PATCH v4 1/2] vhost: support clear in-flight packets for > > async dequeue > > > > rte_vhost_clear_queue_thread_unsafe() supports to clear in-flight > > packets for async enqueue only. But after supporting async dequeue, > > this API should support async dequeue too. > > > > This patch also adds the thread-safe version of this API, the > > difference between the two API is that thread safety uses lock. > > > > These APIs maybe used to clean up packets in the async channel to > > prevent packet loss when the device state changes or when the device is > destroyed. 
> > > > Signed-off-by: Yuan Wang <yuanx.wang@intel.com> > > --- > > doc/guides/prog_guide/vhost_lib.rst | 8 ++- > > doc/guides/rel_notes/release_22_07.rst | 5 ++ > > lib/vhost/rte_vhost_async.h | 25 ++++++++ > > lib/vhost/version.map | 1 + > > lib/vhost/virtio_net.c | 82 +++++++++++++++++++++++++- > > 5 files changed, 118 insertions(+), 3 deletions(-) > > > > diff --git a/doc/guides/prog_guide/vhost_lib.rst > > b/doc/guides/prog_guide/vhost_lib.rst > > index cd3f6caa9a..b9545770d0 100644 > > --- a/doc/guides/prog_guide/vhost_lib.rst > > +++ b/doc/guides/prog_guide/vhost_lib.rst > > @@ -288,7 +288,13 @@ The following is an overview of some key Vhost > > API > > functions: > > > > * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, > > dma_id, vchan_id)`` > > > > - Clear inflight packets which are submitted to DMA engine in vhost > > async data > > + Clear in-flight packets which are submitted to async channel in > > + vhost async data path without performing any locking. Completed > > + packets are > > Better specify "without performing locking on virtqueue". > > > + returned to applications through ``pkts``. > > + > > +* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, > > +vchan_id)`` > > + > > + Clear in-flight packets which are submitted to async channel in > > + vhost async data > > path. Completed packets are returned to applications through ``pkts``. > > > > * ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, > > struct rte_vhost_stat_name *names, unsigned int size)`` diff --git > > a/doc/guides/rel_notes/release_22_07.rst > > b/doc/guides/rel_notes/release_22_07.rst > > index c81383f4a3..2ca06b543c 100644 > > --- a/doc/guides/rel_notes/release_22_07.rst > > +++ b/doc/guides/rel_notes/release_22_07.rst > > @@ -147,6 +147,11 @@ New Features > > Added vhost async dequeue API which can leverage DMA devices to > > accelerate receiving pkts from guest. 
> > > > +* **Added thread-safe version of inflight packet clear API in vhost > > +library.** > > + > > + Added an API which can clear the inflight packets submitted to the > > + async channel in a thread-safe manner in the vhost async data path. > > + > > Removed Items > > ------------- > > > > diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h > > index > > a1e7f674ed..1db2a10124 100644 > > --- a/lib/vhost/rte_vhost_async.h > > +++ b/lib/vhost/rte_vhost_async.h > > @@ -183,6 +183,31 @@ uint16_t > rte_vhost_clear_queue_thread_unsafe(int > > vid, uint16_t queue_id, > > struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t > > vchan_id); > > > > +/** > > + * This function checks async completion status and clear packets for > > + * a specific vhost device queue. Packets which are inflight will be > > + * returned in an array. > > + * > > + * @param vid > > + * ID of vhost device to clear data > > + * @param queue_id > > + * Queue id to clear data > > + * @param pkts > > + * Blank array to get return packet pointer > > + * @param count > > + * Size of the packet array > > + * @param dma_id > > + * The identifier of the DMA device > > + * @param vchan_id > > + * The identifier of virtual DMA channel > > + * @return > > + * Number of packets returned > > + */ > > +__rte_experimental > > +uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id, struct > > +rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id); > > + > > /** > > * The DMA vChannels used in asynchronous data path must be configured > > * first. 
So this function needs to be called before enabling DMA > > diff --git a/lib/vhost/version.map b/lib/vhost/version.map index > > 4880b9a422..9329f88e79 100644 > > --- a/lib/vhost/version.map > > +++ b/lib/vhost/version.map > > @@ -95,6 +95,7 @@ EXPERIMENTAL { > > rte_vhost_vring_stats_reset; > > rte_vhost_async_try_dequeue_burst; > > rte_vhost_driver_get_vdpa_dev_type; > > +rte_vhost_clear_queue; > > }; > > > > INTERNAL { > > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index > > 68a26eb17d..a90ae3cb96 100644 > > --- a/lib/vhost/virtio_net.c > > +++ b/lib/vhost/virtio_net.c > > @@ -26,6 +26,11 @@ > > > > #define MAX_BATCH_LEN 256 > > > > +static __rte_always_inline uint16_t > > +async_poll_dequeue_completed_split(struct virtio_net *dev, struct > > vhost_virtqueue *vq, > > +struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t > > +vchan_id, bool legacy_ol_flags); > > + > > /* DMA device copy operation tracking array. */ struct > > async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX]; > > > > @@ -2155,7 +2160,7 @@ rte_vhost_clear_queue_thread_unsafe(int vid, > > uint16_t queue_id, return 0; > > > > VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); -if > > (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { > > +if (unlikely(queue_id >= dev->nr_vring)) { > > VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", > > dev->ifname, __func__, queue_id); return 0; @@ -2182,7 +2187,18 @@ > > rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, > > return 0; } > > > > -n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, > > dma_id, vchan_id); > > +if (queue_id % 2 == 0) > > Replace "%" with "&" should help the performance a bit. 
> > > +n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, > count, > > +dma_id, vchan_id); else { if (unlikely(vq_is_packed(dev))) > > +VHOST_LOG_DATA(ERR, > > +"(%d) %s: async dequeue does not > > support packed ring.\n", > > +dev->vid, __func__); > > +else > > +n_pkts_cpl = > > async_poll_dequeue_completed_split(dev, vq, pkts, count, > > +dma_id, vchan_id, dev->flags & > > VIRTIO_DEV_LEGACY_OL_FLAGS); > > +} > > > > vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl); > > vq->stats.inflight_completed += n_pkts_cpl; @@ -2190,6 +2206,68 @@ > > rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, > > return n_pkts_cpl; } > > > > +uint16_t > > +rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf > > +**pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id) { struct > > +virtio_net *dev = get_device(vid); struct vhost_virtqueue *vq; > > +uint16_t n_pkts_cpl = 0; > > + > > +if (!dev) > > +return 0; > > + > > +VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); if > > +(unlikely(queue_id >= dev->nr_vring)) { VHOST_LOG_DATA(ERR, > "(%s) %s: > > +invalid virtqueue > > idx %d.\n", > > +dev->ifname, __func__, queue_id); > > +return 0; > > +} > > + > > +vq = dev->virtqueue[queue_id]; > > + > > +if (!rte_spinlock_trylock(&vq->access_lock)) { VHOST_LOG_DATA(ERR, > > +"(%d) %s: failed to clear async queue id %d, > > virtqueue busy.\n", > > +dev->vid, __func__, queue_id); > > +return 0; > > +} > > Failing to acquire the lock shouldn't be treated as an error. 
> > > + > > +if (unlikely(!vq->async)) { > > +VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for > > queue id %d.\n", > > +dev->ifname, __func__, queue_id); > > +goto out_access_unlock; > > +} > > + > > +if (unlikely(!dma_copy_track[dma_id].vchans || > > + > > !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { > > +VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", > > dev->ifname, __func__, > > +dma_id, vchan_id); > > +goto out_access_unlock; > > +} > > Also need to check if dma_id is valid. > > > + > > +if (queue_id % 2 == 0) > > Ditto. Thanks, will fix them soon. Regards, Yuan > > Thanks, > Jiayu > ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v4 2/2] example/vhost: support to clear in-flight packets for async dequeue 2022-06-06 17:45 ` [PATCH v4 0/2] support to clear in-flight packets for async Yuan Wang 2022-06-06 17:45 ` [PATCH v4 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang @ 2022-06-06 17:45 ` Yuan Wang 1 sibling, 0 replies; 22+ messages in thread From: Yuan Wang @ 2022-06-06 17:45 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia, dev Cc: jiayu.hu, xuan.ding, sunil.pai.g, Yuan Wang This patch allows vring_state_changed() to clear in-flight dequeue packets. It also clears the in-flight packets in a thread-safe way in destroy_device(). Signed-off-by: Yuan Wang <yuanx.wang@intel.com> --- examples/vhost/main.c | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/examples/vhost/main.c b/examples/vhost/main.c index 9aae340c46..1e36c35565 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -1543,6 +1543,25 @@ vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id) } } +static void +vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id) +{ + uint16_t n_pkt = 0; + int pkts_inflight; + + int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id; + pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id); + + struct rte_mbuf *m_cpl[pkts_inflight]; + + while (pkts_inflight) { + n_pkt = rte_vhost_clear_queue(vdev->vid, queue_id, m_cpl, + pkts_inflight, dma_id, 0); + free_pkts(m_cpl, n_pkt); + pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id); + } +} + /* * Remove a device from the specific data core linked list and from the * main linked list. 
Synchronization occurs through the use of the @@ -1600,13 +1619,13 @@ destroy_device(int vid) vdev->vid); if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) { - vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ); + vhost_clear_queue(vdev, VIRTIO_RXQ); rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ); dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false; } if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) { - vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ); + vhost_clear_queue(vdev, VIRTIO_TXQ); rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ); dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false; } @@ -1765,9 +1784,6 @@ vring_state_changed(int vid, uint16_t queue_id, int enable) if (!vdev) return -1; - if (queue_id != VIRTIO_RXQ) - return 0; - if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) { if (!enable) vhost_clear_queue_thread_unsafe(vdev, queue_id); -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v5 0/2] support to clear in-flight packets for async
  2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang
  ` (4 preceding siblings ...)
  2022-06-06 17:45 ` [PATCH v4 0/2] support to clear in-flight packets for async Yuan Wang
@ 2022-06-09 17:34 ` Yuan Wang
  2022-06-09 17:34 ` [PATCH v5 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
  ` (2 more replies)
  5 siblings, 3 replies; 22+ messages in thread
From: Yuan Wang @ 2022-06-09 17:34 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia, dev
  Cc: jiayu.hu, xuan.ding, sunil.pai.g, Yuan Wang

These patches add support for clearing in-flight packets for async dequeue
and introduce a thread-safe version of this function.

v5:
- Add dma_id check

v4:
- Rebase to latest DPDK

v3:
- Rebase to latest DPDK

v2:
- Use the thread-safe version in destroy_device

v1:
- Protect vq access with splitlock

Yuan Wang (2):
  vhost: support clear in-flight packets for async dequeue
  example/vhost: support to clear in-flight packets for async dequeue

 doc/guides/prog_guide/vhost_lib.rst    |  8 ++-
 doc/guides/rel_notes/release_22_07.rst |  4 ++
 examples/vhost/main.c                  | 26 +++++--
 lib/vhost/rte_vhost_async.h            | 25 +++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 93 +++++++++++++++++++++++++-
 6 files changed, 149 insertions(+), 8 deletions(-)

-- 
2.25.1

^ permalink raw reply	[flat|nested] 22+ messages in thread
* [PATCH v5 1/2] vhost: support clear in-flight packets for async dequeue
  2022-06-09 17:34 ` [PATCH v5 0/2] support to clear in-flight packets for async Yuan Wang
@ 2022-06-09 17:34 ` Yuan Wang
  2022-06-14 13:23   ` Maxime Coquelin
  2022-06-14 23:56   ` Hu, Jiayu
  2022-06-09 17:34 ` [PATCH v5 2/2] example/vhost: support to " Yuan Wang
  2022-06-17 14:06 ` [PATCH v5 0/2] support to clear in-flight packets for async Maxime Coquelin
  2 siblings, 2 replies; 22+ messages in thread
From: Yuan Wang @ 2022-06-09 17:34 UTC (permalink / raw)
  To: maxime.coquelin, chenbo.xia, dev
  Cc: jiayu.hu, xuan.ding, sunil.pai.g, Yuan Wang

rte_vhost_clear_queue_thread_unsafe() supports clearing in-flight
packets for async enqueue only. But now that async dequeue is
supported, this API should support async dequeue too.

This patch also adds a thread-safe version of this API; the difference
between the two APIs is that the thread-safe one takes the virtqueue
access lock.

These APIs may be used to clean up packets in the async channel
to prevent packet loss when the device state changes or
when the device is destroyed.

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
---
 doc/guides/prog_guide/vhost_lib.rst    |  8 ++-
 doc/guides/rel_notes/release_22_07.rst |  4 ++
 lib/vhost/rte_vhost_async.h            | 25 +++++++
 lib/vhost/version.map                  |  1 +
 lib/vhost/virtio_net.c                 | 93 +++++++++++++++++++++++++-
 5 files changed, 128 insertions(+), 3 deletions(-)

diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index cd3f6caa9a..606edee940 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -288,7 +288,13 @@ The following is an overview of some key Vhost API functions:
 * ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, vchan_id)``

-  Clear inflight packets which are submitted to DMA engine in vhost async data
+  Clear in-flight packets which are submitted to async channel in vhost
+  async data path without performing locking on virtqueue.
Completed + packets are returned to applications through ``pkts``. + +* ``rte_vhost_clear_queue(vid, queue_id, **pkts, count, dma_id, vchan_id)`` + + Clear in-flight packets which are submitted to async channel in vhost async data path. Completed packets are returned to applications through ``pkts``. * ``rte_vhost_vring_stats_get_names(int vid, uint16_t queue_id, struct rte_vhost_stat_name *names, unsigned int size)`` diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst index d46f773df0..28ad615a66 100644 --- a/doc/guides/rel_notes/release_22_07.rst +++ b/doc/guides/rel_notes/release_22_07.rst @@ -170,6 +170,10 @@ New Features This is a fall-back implementation for platforms that don't support vector operations. +* **Added thread-safe version of inflight packet clear API in vhost library.** + + Added an API which can clear the inflight packets submitted to + the async channel in a thread-safe manner in the vhost async data path. Removed Items ------------- diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h index a1e7f674ed..1db2a10124 100644 --- a/lib/vhost/rte_vhost_async.h +++ b/lib/vhost/rte_vhost_async.h @@ -183,6 +183,31 @@ uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, uint16_t vchan_id); +/** + * This function checks async completion status and clear packets for + * a specific vhost device queue. Packets which are inflight will be + * returned in an array. 
+ * + * @param vid + * ID of vhost device to clear data + * @param queue_id + * Queue id to clear data + * @param pkts + * Blank array to get return packet pointer + * @param count + * Size of the packet array + * @param dma_id + * The identifier of the DMA device + * @param vchan_id + * The identifier of virtual DMA channel + * @return + * Number of packets returned + */ +__rte_experimental +uint16_t rte_vhost_clear_queue(int vid, uint16_t queue_id, + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, + uint16_t vchan_id); + /** * The DMA vChannels used in asynchronous data path must be configured * first. So this function needs to be called before enabling DMA diff --git a/lib/vhost/version.map b/lib/vhost/version.map index 4880b9a422..9329f88e79 100644 --- a/lib/vhost/version.map +++ b/lib/vhost/version.map @@ -95,6 +95,7 @@ EXPERIMENTAL { rte_vhost_vring_stats_reset; rte_vhost_async_try_dequeue_burst; rte_vhost_driver_get_vdpa_dev_type; + rte_vhost_clear_queue; }; INTERNAL { diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 68a26eb17d..4b28f65728 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -26,6 +26,11 @@ #define MAX_BATCH_LEN 256 +static __rte_always_inline uint16_t +async_poll_dequeue_completed_split(struct virtio_net *dev, struct vhost_virtqueue *vq, + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id, + uint16_t vchan_id, bool legacy_ol_flags); + /* DMA device copy operation tracking array. 
*/ struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX]; @@ -2155,12 +2160,18 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); - if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { + if (unlikely(queue_id >= dev->nr_vring)) { VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %d.\n", dev->ifname, __func__, queue_id); return 0; } + if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid dma id %d.\n", + dev->ifname, __func__, dma_id); + return 0; + } + vq = dev->virtqueue[queue_id]; if (unlikely(!rte_spinlock_is_locked(&vq->access_lock))) { @@ -2182,11 +2193,89 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, return 0; } - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan_id); + if ((queue_id & 1) == 0) + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, + pkts, count, dma_id, vchan_id); + else { + if (unlikely(vq_is_packed(dev))) + VHOST_LOG_DATA(ERR, + "(%s) %s: async dequeue does not support packed ring.\n", + dev->ifname, __func__); + else + n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count, + dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS); + } + + vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl); + vq->stats.inflight_completed += n_pkts_cpl; + + return n_pkts_cpl; +} + +uint16_t +rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, + uint16_t count, int16_t dma_id, uint16_t vchan_id) +{ + struct virtio_net *dev = get_device(vid); + struct vhost_virtqueue *vq; + uint16_t n_pkts_cpl = 0; + + if (!dev) + return 0; + + VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__); + if (unlikely(queue_id >= dev->nr_vring)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid virtqueue idx %u.\n", + dev->ifname, __func__, queue_id); + return 0; + } + + if (unlikely(dma_id < 0 || dma_id >= 
RTE_DMADEV_DEFAULT_MAX)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid dma id %d.\n", + dev->ifname, __func__, dma_id); + return 0; + } + + vq = dev->virtqueue[queue_id]; + + if (!rte_spinlock_trylock(&vq->access_lock)) { + VHOST_LOG_DATA(DEBUG, "(%s) %s: virtqueue %u is busy.\n", + dev->ifname, __func__, queue_id); + return 0; + } + + if (unlikely(!vq->async)) { + VHOST_LOG_DATA(ERR, "(%s) %s: async not registered for queue id %u.\n", + dev->ifname, __func__, queue_id); + goto out_access_unlock; + } + + if (unlikely(!dma_copy_track[dma_id].vchans || + !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { + VHOST_LOG_DATA(ERR, "(%s) %s: invalid channel %d:%u.\n", dev->ifname, __func__, + dma_id, vchan_id); + goto out_access_unlock; + } + + if ((queue_id & 1) == 0) + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, + pkts, count, dma_id, vchan_id); + else { + if (unlikely(vq_is_packed(dev))) + VHOST_LOG_DATA(ERR, + "(%s) %s: async dequeue does not support packed ring.\n", + dev->ifname, __func__); + else + n_pkts_cpl = async_poll_dequeue_completed_split(dev, vq, pkts, count, + dma_id, vchan_id, dev->flags & VIRTIO_DEV_LEGACY_OL_FLAGS); + } vhost_queue_stats_update(dev, vq, pkts, n_pkts_cpl); vq->stats.inflight_completed += n_pkts_cpl; +out_access_unlock: + rte_spinlock_unlock(&vq->access_lock); + return n_pkts_cpl; } -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v5 1/2] vhost: support clear in-flight packets for async dequeue 2022-06-09 17:34 ` [PATCH v5 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang @ 2022-06-14 13:23 ` Maxime Coquelin 2022-06-14 23:56 ` Hu, Jiayu 1 sibling, 0 replies; 22+ messages in thread From: Maxime Coquelin @ 2022-06-14 13:23 UTC (permalink / raw) To: Yuan Wang, chenbo.xia, dev; +Cc: jiayu.hu, xuan.ding, sunil.pai.g On 6/9/22 19:34, Yuan Wang wrote: > rte_vhost_clear_queue_thread_unsafe() supports to clear > in-flight packets for async enqueue only. But after > supporting async dequeue, this API should support async dequeue too. > > This patch also adds the thread-safe version of this API, > the difference between the two API is that thread safety uses lock. > > These APIs maybe used to clean up packets in the async channel > to prevent packet loss when the device state changes or > when the device is destroyed. > > Signed-off-by: Yuan Wang <yuanx.wang@intel.com> > --- > doc/guides/prog_guide/vhost_lib.rst | 8 ++- > doc/guides/rel_notes/release_22_07.rst | 4 ++ > lib/vhost/rte_vhost_async.h | 25 +++++++ > lib/vhost/version.map | 1 + > lib/vhost/virtio_net.c | 93 +++++++++++++++++++++++++- > 5 files changed, 128 insertions(+), 3 deletions(-) > Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com> Thanks, Maxime ^ permalink raw reply [flat|nested] 22+ messages in thread
* RE: [PATCH v5 1/2] vhost: support clear in-flight packets for async dequeue 2022-06-09 17:34 ` [PATCH v5 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang 2022-06-14 13:23 ` Maxime Coquelin @ 2022-06-14 23:56 ` Hu, Jiayu 1 sibling, 0 replies; 22+ messages in thread From: Hu, Jiayu @ 2022-06-14 23:56 UTC (permalink / raw) To: Wang, YuanX, maxime.coquelin, Xia, Chenbo, dev; +Cc: Ding, Xuan, Pai G, Sunil > -----Original Message----- > From: Wang, YuanX <yuanx.wang@intel.com> > Sent: Friday, June 10, 2022 1:34 AM > To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; > dev@dpdk.org > Cc: Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan <xuan.ding@intel.com>; Pai > G, Sunil <sunil.pai.g@intel.com>; Wang, YuanX <yuanx.wang@intel.com> > Subject: [PATCH v5 1/2] vhost: support clear in-flight packets for async > dequeue > > rte_vhost_clear_queue_thread_unsafe() supports to clear in-flight packets > for async enqueue only. But after supporting async dequeue, this API should > support async dequeue too. > > This patch also adds the thread-safe version of this API, the difference > between the two API is that thread safety uses lock. > > These APIs maybe used to clean up packets in the async channel to prevent > packet loss when the device state changes or when the device is destroyed. > > Signed-off-by: Yuan Wang <yuanx.wang@intel.com> > --- > doc/guides/prog_guide/vhost_lib.rst | 8 ++- > doc/guides/rel_notes/release_22_07.rst | 4 ++ > lib/vhost/rte_vhost_async.h | 25 +++++++ > lib/vhost/version.map | 1 + > lib/vhost/virtio_net.c | 93 +++++++++++++++++++++++++- > 5 files changed, 128 insertions(+), 3 deletions(-) > Reviewed-by: Jiayu Hu <jiayu.hu@intel.com> Thanks, Jiayu ^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH v5 2/2] example/vhost: support to clear in-flight packets for async dequeue 2022-06-09 17:34 ` [PATCH v5 0/2] support to clear in-flight packets for async Yuan Wang 2022-06-09 17:34 ` [PATCH v5 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang @ 2022-06-09 17:34 ` Yuan Wang 2022-06-14 13:28 ` Maxime Coquelin 2022-06-14 23:56 ` Hu, Jiayu 2022-06-17 14:06 ` [PATCH v5 0/2] support to clear in-flight packets for async Maxime Coquelin 2 siblings, 2 replies; 22+ messages in thread From: Yuan Wang @ 2022-06-09 17:34 UTC (permalink / raw) To: maxime.coquelin, chenbo.xia, dev Cc: jiayu.hu, xuan.ding, sunil.pai.g, Yuan Wang This patch allows vring_state_changed() to clear in-flight dequeue packets. It also clears the in-flight packets in a thread-safe way in destroy_device(). Signed-off-by: Yuan Wang <yuanx.wang@intel.com> --- examples/vhost/main.c | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/examples/vhost/main.c b/examples/vhost/main.c index e7fee5aa1b..a679ef738c 100644 --- a/examples/vhost/main.c +++ b/examples/vhost/main.c @@ -1543,6 +1543,25 @@ vhost_clear_queue_thread_unsafe(struct vhost_dev *vdev, uint16_t queue_id) } } +static void +vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id) +{ + uint16_t n_pkt = 0; + int pkts_inflight; + + int16_t dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id; + pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id); + + struct rte_mbuf *m_cpl[pkts_inflight]; + + while (pkts_inflight) { + n_pkt = rte_vhost_clear_queue(vdev->vid, queue_id, m_cpl, + pkts_inflight, dma_id, 0); + free_pkts(m_cpl, n_pkt); + pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id); + } +} + /* * Remove a device from the specific data core linked list and from the * main linked list. 
Synchronization occurs through the use of the @@ -1600,13 +1619,13 @@ destroy_device(int vid) vdev->vid); if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) { - vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ); + vhost_clear_queue(vdev, VIRTIO_RXQ); rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ); dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false; } if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) { - vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ); + vhost_clear_queue(vdev, VIRTIO_TXQ); rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ); dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false; } @@ -1765,9 +1784,6 @@ vring_state_changed(int vid, uint16_t queue_id, int enable) if (!vdev) return -1; - if (queue_id != VIRTIO_RXQ) - return 0; - if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) { if (!enable) vhost_clear_queue_thread_unsafe(vdev, queue_id); -- 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v5 2/2] example/vhost: support to clear in-flight packets for async dequeue 2022-06-09 17:34 ` [PATCH v5 2/2] example/vhost: support to " Yuan Wang @ 2022-06-14 13:28 ` Maxime Coquelin 2022-06-14 23:56 ` Hu, Jiayu 1 sibling, 0 replies; 22+ messages in thread From: Maxime Coquelin @ 2022-06-14 13:28 UTC (permalink / raw) To: Yuan Wang, chenbo.xia, dev; +Cc: jiayu.hu, xuan.ding, sunil.pai.g On 6/9/22 19:34, Yuan Wang wrote: > This patch allows vring_state_changed() to clear in-flight > dequeue packets. It also clears the in-flight packets in > a thread-safe way in destroy_device(). > > Signed-off-by: Yuan Wang <yuanx.wang@intel.com> > --- > examples/vhost/main.c | 26 +++++++++++++++++++++----- > 1 file changed, 21 insertions(+), 5 deletions(-) > Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com> Thanks, Maxime ^ permalink raw reply [flat|nested] 22+ messages in thread
* RE: [PATCH v5 2/2] example/vhost: support to clear in-flight packets for async dequeue 2022-06-09 17:34 ` [PATCH v5 2/2] example/vhost: support to " Yuan Wang 2022-06-14 13:28 ` Maxime Coquelin @ 2022-06-14 23:56 ` Hu, Jiayu 1 sibling, 0 replies; 22+ messages in thread From: Hu, Jiayu @ 2022-06-14 23:56 UTC (permalink / raw) To: Wang, YuanX, maxime.coquelin, Xia, Chenbo, dev; +Cc: Ding, Xuan, Pai G, Sunil Reviewed-by: Jiayu Hu <jiayu.hu@intel.com> Thanks, Jiayu > -----Original Message----- > From: Wang, YuanX <yuanx.wang@intel.com> > Sent: Friday, June 10, 2022 1:34 AM > To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>; > dev@dpdk.org > Cc: Hu, Jiayu <jiayu.hu@intel.com>; Ding, Xuan <xuan.ding@intel.com>; Pai > G, Sunil <sunil.pai.g@intel.com>; Wang, YuanX <yuanx.wang@intel.com> > Subject: [PATCH v5 2/2] example/vhost: support to clear in-flight packets for > async dequeue > > This patch allows vring_state_changed() to clear in-flight dequeue packets. It > also clears the in-flight packets in a thread-safe way in destroy_device(). 
> > Signed-off-by: Yuan Wang <yuanx.wang@intel.com> > --- > examples/vhost/main.c | 26 +++++++++++++++++++++----- > 1 file changed, 21 insertions(+), 5 deletions(-) > > diff --git a/examples/vhost/main.c b/examples/vhost/main.c index > e7fee5aa1b..a679ef738c 100644 > --- a/examples/vhost/main.c > +++ b/examples/vhost/main.c > @@ -1543,6 +1543,25 @@ vhost_clear_queue_thread_unsafe(struct > vhost_dev *vdev, uint16_t queue_id) > } > } > > +static void > +vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id) { > + uint16_t n_pkt = 0; > + int pkts_inflight; > + > + int16_t dma_id = dma_bind[vid2socketid[vdev- > >vid]].dmas[queue_id].dev_id; > + pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, queue_id); > + > + struct rte_mbuf *m_cpl[pkts_inflight]; > + > + while (pkts_inflight) { > + n_pkt = rte_vhost_clear_queue(vdev->vid, queue_id, m_cpl, > + pkts_inflight, dma_id, 0); > + free_pkts(m_cpl, n_pkt); > + pkts_inflight = rte_vhost_async_get_inflight(vdev->vid, > queue_id); > + } > +} > + > /* > * Remove a device from the specific data core linked list and from the > * main linked list. 
Synchronization occurs through the use of the @@ - > 1600,13 +1619,13 @@ destroy_device(int vid) > vdev->vid); > > if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) { > - vhost_clear_queue_thread_unsafe(vdev, VIRTIO_RXQ); > + vhost_clear_queue(vdev, VIRTIO_RXQ); > rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ); > dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false; > } > > if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) { > - vhost_clear_queue_thread_unsafe(vdev, VIRTIO_TXQ); > + vhost_clear_queue(vdev, VIRTIO_TXQ); > rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ); > dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false; > } > @@ -1765,9 +1784,6 @@ vring_state_changed(int vid, uint16_t queue_id, int > enable) > if (!vdev) > return -1; > > - if (queue_id != VIRTIO_RXQ) > - return 0; > - > if (dma_bind[vid2socketid[vid]].dmas[queue_id].async_enabled) { > if (!enable) > vhost_clear_queue_thread_unsafe(vdev, queue_id); > -- > 2.25.1 ^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v5 0/2] support to clear in-flight packets for async 2022-06-09 17:34 ` [PATCH v5 0/2] support to clear in-flight packets for async Yuan Wang 2022-06-09 17:34 ` [PATCH v5 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang 2022-06-09 17:34 ` [PATCH v5 2/2] example/vhost: support to " Yuan Wang @ 2022-06-17 14:06 ` Maxime Coquelin 2 siblings, 0 replies; 22+ messages in thread From: Maxime Coquelin @ 2022-06-17 14:06 UTC (permalink / raw) To: Yuan Wang, chenbo.xia, dev; +Cc: jiayu.hu, xuan.ding, sunil.pai.g On 6/9/22 19:34, Yuan Wang wrote: > These patches support to clear in-flight packets for async dequeue > and introduce thread-safe version of this function. > > v5: > - Add dma_id check > > v4: > - Rebase to latest DPDK > > v3: > - Rebase to latest DPDK > > v2: > - Use the thread-safe version in destroy_device > > v1: > - Protect vq access with splitlock > > Yuan Wang (2): > vhost: support clear in-flight packets for async dequeue > example/vhost: support to clear in-flight packets for async dequeue > > doc/guides/prog_guide/vhost_lib.rst | 8 ++- > doc/guides/rel_notes/release_22_07.rst | 4 ++ > examples/vhost/main.c | 26 +++++-- > lib/vhost/rte_vhost_async.h | 25 +++++++ > lib/vhost/version.map | 1 + > lib/vhost/virtio_net.c | 93 +++++++++++++++++++++++++- > 6 files changed, 149 insertions(+), 8 deletions(-) > Applied to dpdk-next-virtio/main. Thanks, Maxime ^ permalink raw reply [flat|nested] 22+ messages in thread
end of thread, other threads:[~2022-06-17 14:07 UTC | newest]

Thread overview: 22+ messages
2022-04-13 18:27 [PATCH 0/2] support to clear in-flight packets for async Yuan Wang
2022-04-13 18:27 ` [PATCH 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
2022-04-13 18:27 ` [PATCH 2/2] example/vhost: support to " Yuan Wang
2022-05-13 16:35 ` [PATCH v2 0/2] support to clear in-flight packets for async Yuan Wang
2022-05-13 16:35 ` [PATCH v2 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
2022-05-13 16:35 ` [PATCH v2 2/2] example/vhost: support to " Yuan Wang
2022-05-23 16:13 ` [PATCH v3 0/2] support to clear in-flight packets for async Yuan Wang
2022-05-23 16:13 ` [PATCH v3 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
2022-05-23 16:13 ` [PATCH v3 2/2] example/vhost: support to " Yuan Wang
2022-06-06 17:45 ` [PATCH v4 0/2] support to clear in-flight packets for async Yuan Wang
2022-06-06 17:45 ` [PATCH v4 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
2022-06-09  7:06 ` Hu, Jiayu
2022-06-09  7:51 ` Wang, YuanX
2022-06-06 17:45 ` [PATCH v4 2/2] example/vhost: support to " Yuan Wang
2022-06-09 17:34 ` [PATCH v5 0/2] support to clear in-flight packets for async Yuan Wang
2022-06-09 17:34 ` [PATCH v5 1/2] vhost: support clear in-flight packets for async dequeue Yuan Wang
2022-06-14 13:23 ` Maxime Coquelin
2022-06-14 23:56 ` Hu, Jiayu
2022-06-09 17:34 ` [PATCH v5 2/2] example/vhost: support to " Yuan Wang
2022-06-14 13:28 ` Maxime Coquelin
2022-06-14 23:56 ` Hu, Jiayu
2022-06-17 14:06 ` [PATCH v5 0/2] support to clear in-flight packets for async Maxime Coquelin