From: "Xia, Chenbo" <chenbo.xia@intel.com>
To: "Fu, Patrick" <patrick.fu@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>,
"maxime.coquelin@redhat.com" <maxime.coquelin@redhat.com>
Cc: "Wang, Yinan" <yinan.wang@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] vhost: support async copy free segmentations
Date: Fri, 17 Jul 2020 03:21:02 +0000 [thread overview]
Message-ID: <MN2PR11MB4063F74F2414743C0C5ABE5B9C7C0@MN2PR11MB4063.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20200715111520.2755307-1-patrick.fu@intel.com>
> -----Original Message-----
> From: Fu, Patrick <patrick.fu@intel.com>
> Sent: Wednesday, July 15, 2020 7:15 PM
> To: dev@dpdk.org; maxime.coquelin@redhat.com; Xia, Chenbo
> <chenbo.xia@intel.com>
> Cc: Fu, Patrick <patrick.fu@intel.com>; Wang, Yinan <yinan.wang@intel.com>
> Subject: [PATCH v2] vhost: support async copy free segmentations
>
> From: Patrick Fu <patrick.fu@intel.com>
>
> Vhost async enqueue assumes that all async copies break at packet
> boundaries, i.e. if a packet is split into multiple copy segments, the
> async engine should always report copy completion when the entire
> packet is finished. This patch removes that assumption.
>
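As context for the diff below: the change lets the engine's
check_completed_copies() callback stop counting in the middle of a
packet, with vhost accumulating segment counts across polls. A minimal
sketch of such a callback follows; the parameter list is inferred from
the call site in the diff (see rte_vhost_async.h for the exact
prototype), and my_dma_poll_one() is a hypothetical driver helper, not
part of vhost:

	/* hypothetical driver helper: returns nonzero while another
	 * finished segment copy can be popped from the hardware */
	extern int my_dma_poll_one(int vid, uint16_t queue_id);

	/* report up to max_segs finished segment copies; the count may
	 * now end mid-packet */
	static int
	my_dma_check_completed_copies(int vid, uint16_t queue_id,
			void *opaque_data, uint16_t max_segs)
	{
		int n_segs = 0;

		while (n_segs < max_segs && my_dma_poll_one(vid, queue_id))
			n_segs++;

		return n_segs;
	}
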
> Fixes: cd6760da1076 ("vhost: introduce async enqueue for split ring")
>
> Signed-off-by: Patrick Fu <patrick.fu@intel.com>
> ---
> v2:
> - fix an issue that can stall async polling when the packet buffer is full
> - rename a local variable to better reflect its usage
>
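To make the fix concrete, a worked example (illustrative only) using
the constants from the virtio_net.c hunk below. Suppose a 4-segment
packet is in flight and the engine has finished 3 of its copies:

	poll 1 (packet buffer full, so count == 0):
		n_pkts_cpl = check_completed_copies(vid, queue_id, 0,
				ASYNC_MAX_POLL_SEG - 0) + 0  ->  3
		the while loop consumes nothing (n_pkts_put < count
		fails), so the 3 reported segments are carried over:
		vq->async_last_seg_n = 3

	poll 2 (count > 0, the last segment has finished):
		the callback may report at most 255 - 3 new segments
		and returns 1; n_pkts_cpl = 1 + 3 = 4, which covers
		all 4 segments, so the packet is retired and
		used->idx advances

Without the async_last_seg_n carry, the 3 segments reported in poll 1
could simply be discarded, the packet would never be seen as complete,
and polling would stall, which is the "packet buffer is full" issue
fixed in v2.
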
> lib/librte_vhost/vhost.h | 3 +++
> lib/librte_vhost/virtio_net.c | 17 ++++++++++++-----
> 2 files changed, 15 insertions(+), 5 deletions(-)
>
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 8c01cee42..0f7212f88 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -46,6 +46,8 @@
>
> #define MAX_PKT_BURST 32
>
> +#define ASYNC_MAX_POLL_SEG 255
> +
> #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST * 2)
> #define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 2)
>
> @@ -225,6 +227,7 @@ struct vhost_virtqueue {
> uint64_t *async_pending_info;
> uint16_t async_pkts_idx;
> uint16_t async_pkts_inflight_n;
> + uint16_t async_last_seg_n;
>
> /* vq async features */
> bool async_inorder;
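
The new async_last_seg_n field holds segment completions that the
engine has already reported but that the poll loop could not yet
attribute to a fully finished packet; they are credited back on the
next rte_vhost_poll_enqueue_completed() call (see the worked example
above).
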
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 1d0be3dd4..17808ab29 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1633,6 +1633,7 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> struct vhost_virtqueue *vq;
> uint16_t n_pkts_cpl, n_pkts_put = 0, n_descs = 0;
> uint16_t start_idx, pkts_idx, vq_size;
> + uint16_t n_inflight;
> uint64_t *async_pending_info;
>
> VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
> @@ -1646,28 +1647,32 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
>
> rte_spinlock_lock(&vq->access_lock);
>
> + n_inflight = vq->async_pkts_inflight_n;
> pkts_idx = vq->async_pkts_idx;
> async_pending_info = vq->async_pending_info;
> vq_size = vq->size;
> start_idx = virtio_dev_rx_async_get_info_idx(pkts_idx,
> vq_size, vq->async_pkts_inflight_n);
>
> - n_pkts_cpl =
> - vq->async_ops.check_completed_copies(vid, queue_id, 0, count);
> + n_pkts_cpl = vq->async_ops.check_completed_copies(vid, queue_id,
> + 0, ASYNC_MAX_POLL_SEG - vq->async_last_seg_n) +
> + vq->async_last_seg_n;
>
> rte_smp_wmb();
>
> - while (likely(((start_idx + n_pkts_put) & (vq_size - 1)) != pkts_idx)) {
> + while (likely((n_pkts_put < count) && n_inflight)) {
> uint64_t info = async_pending_info[
> (start_idx + n_pkts_put) & (vq_size - 1)];
> uint64_t n_segs;
> n_pkts_put++;
> + n_inflight--;
> n_descs += info & ASYNC_PENDING_INFO_N_MSK;
> n_segs = info >> ASYNC_PENDING_INFO_N_SFT;
>
> if (n_segs) {
> - if (!n_pkts_cpl || n_pkts_cpl < n_segs) {
> + if (unlikely(n_pkts_cpl < n_segs)) {
> n_pkts_put--;
> + n_inflight++;
> n_descs -= info & ASYNC_PENDING_INFO_N_MSK;
> if (n_pkts_cpl) {
> async_pending_info[
> @@ -1684,8 +1689,10 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> }
> }
>
> + vq->async_last_seg_n = n_pkts_cpl;
> +
> if (n_pkts_put) {
> - vq->async_pkts_inflight_n -= n_pkts_put;
> + vq->async_pkts_inflight_n = n_inflight;
> __atomic_add_fetch(&vq->used->idx, n_descs, __ATOMIC_RELEASE);
>
> vhost_vring_call_split(dev, vq);
> --
> 2.18.4
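
For reference, the path this patch touches is driven from the
application roughly as follows (a minimal sketch, assuming an async
channel was already registered for the queue and omitting error
handling; see rte_vhost_async.h for the exact prototypes):

	struct rte_mbuf *done[MAX_PKT_BURST];
	uint16_t n_done;

	/* hand a burst of mbufs to the DMA engine for copying */
	rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, nb_pkts);

	/* later, reap packets whose copies have all finished; with this
	 * patch, a multi-segment packet is only returned once every one
	 * of its segments has completed */
	n_done = rte_vhost_poll_enqueue_completed(vid, queue_id,
			done, MAX_PKT_BURST);
	rte_pktmbuf_free_bulk(done, n_done);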
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Thread overview: 10+ messages
2020-07-15 7:46 [dpdk-dev] [PATCH v1] " patrick.fu
2020-07-15 11:15 ` [dpdk-dev] [PATCH v2] " patrick.fu
2020-07-17 3:21 ` Xia, Chenbo [this message]
2020-07-17 11:52 ` Ferruh Yigit
2020-07-20 14:58 ` Maxime Coquelin
2020-07-20 16:49 ` Ferruh Yigit
2020-07-21 5:52 ` Fu, Patrick
2020-07-21 5:47 ` [dpdk-dev] [PATCH v3] vhost: fix wrong async completion of multi-seg packets patrick.fu
2020-07-21 8:40 ` Maxime Coquelin
2020-07-21 14:57 ` Ferruh Yigit