DPDK patches and discussions
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: patrick.fu@intel.com, dev@dpdk.org, chenbo.xia@intel.com
Cc: yinan.wang@intel.com
Subject: Re: [dpdk-dev] [PATCH v2] vhost: support async copy free segmentations
Date: Mon, 20 Jul 2020 16:58:07 +0200
Message-ID: <ad2568c1-51a8-3d51-1546-44048d4d8443@redhat.com>
In-Reply-To: <20200715111520.2755307-1-patrick.fu@intel.com>

Hi Patrick,

On 7/15/20 1:15 PM, patrick.fu@intel.com wrote:
> From: Patrick Fu <patrick.fu@intel.com>
> 
> Vhost async enqueue assumes that all async copies should break at packet
> boundary, i.e. if a packet is split into multiple copy segments, the
> async engine should always report copy completion when the entire packet
> is finished. This patch removes the assumption.

Could you please rework the commit message and title?
It is hard to understand what the patch is doing and why.

Thanks in advance,
Maxime

> Fixes: cd6760da1076 ("vhost: introduce async enqueue for split ring")
> 
> Signed-off-by: Patrick Fu <patrick.fu@intel.com>
> ---
> v2:
>  - fix an issue that can stall async polling when the packet buffer is full
>  - rename a local variable to better reflect its usage
> 
>  lib/librte_vhost/vhost.h      |  3 +++
>  lib/librte_vhost/virtio_net.c | 17 ++++++++++++-----
>  2 files changed, 15 insertions(+), 5 deletions(-)
> 
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 8c01cee42..0f7212f88 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -46,6 +46,8 @@
>  
>  #define MAX_PKT_BURST 32
>  
> +#define ASYNC_MAX_POLL_SEG 255
> +
>  #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST * 2)
>  #define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 2)
>  
> @@ -225,6 +227,7 @@ struct vhost_virtqueue {
>  	uint64_t	*async_pending_info;
>  	uint16_t	async_pkts_idx;
>  	uint16_t	async_pkts_inflight_n;
> +	uint16_t	async_last_seg_n;
>  
>  	/* vq async features */
>  	bool		async_inorder;
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 1d0be3dd4..17808ab29 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1633,6 +1633,7 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
>  	struct vhost_virtqueue *vq;
>  	uint16_t n_pkts_cpl, n_pkts_put = 0, n_descs = 0;
>  	uint16_t start_idx, pkts_idx, vq_size;
> +	uint16_t n_inflight;
>  	uint64_t *async_pending_info;
>  
>  	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
> @@ -1646,28 +1647,32 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
>  
>  	rte_spinlock_lock(&vq->access_lock);
>  
> +	n_inflight = vq->async_pkts_inflight_n;
>  	pkts_idx = vq->async_pkts_idx;
>  	async_pending_info = vq->async_pending_info;
>  	vq_size = vq->size;
>  	start_idx = virtio_dev_rx_async_get_info_idx(pkts_idx,
>  		vq_size, vq->async_pkts_inflight_n);
>  
> -	n_pkts_cpl =
> -		vq->async_ops.check_completed_copies(vid, queue_id, 0, count);
> +	n_pkts_cpl = vq->async_ops.check_completed_copies(vid, queue_id,
> +		0, ASYNC_MAX_POLL_SEG - vq->async_last_seg_n) +
> +		vq->async_last_seg_n;
>  
>  	rte_smp_wmb();
>  
> -	while (likely(((start_idx + n_pkts_put) & (vq_size - 1)) != pkts_idx)) {
> +	while (likely((n_pkts_put < count) && n_inflight)) {
>  		uint64_t info = async_pending_info[
>  			(start_idx + n_pkts_put) & (vq_size - 1)];
>  		uint64_t n_segs;
>  		n_pkts_put++;
> +		n_inflight--;
>  		n_descs += info & ASYNC_PENDING_INFO_N_MSK;
>  		n_segs = info >> ASYNC_PENDING_INFO_N_SFT;
>  
>  		if (n_segs) {
> -			if (!n_pkts_cpl || n_pkts_cpl < n_segs) {
> +			if (unlikely(n_pkts_cpl < n_segs)) {
>  				n_pkts_put--;
> +				n_inflight++;
>  				n_descs -= info & ASYNC_PENDING_INFO_N_MSK;
>  				if (n_pkts_cpl) {
>  					async_pending_info[
> @@ -1684,8 +1689,10 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
>  		}
>  	}
>  
> +	vq->async_last_seg_n = n_pkts_cpl;
> +
>  	if (n_pkts_put) {
> -		vq->async_pkts_inflight_n -= n_pkts_put;
> +		vq->async_pkts_inflight_n = n_inflight;
>  		__atomic_add_fetch(&vq->used->idx, n_descs, __ATOMIC_RELEASE);
>  
>  		vhost_vring_call_split(dev, vq);
> 
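
For context on the change above: the commit message says the async engine may
now report copy completion at segment rather than packet granularity, so the
completion poll must retire a packet only once all of its segments are
accounted for, and carry any surplus completed segments over to the next poll.
The following minimal sketch illustrates that carry-over idea only; the struct,
function and parameter names (poll_state, retire_completed_pkts, seg_per_pkt)
are illustrative assumptions, not the actual DPDK implementation.

#include <stdint.h>

/* Hypothetical per-queue state; the field names mirror the patch, but this
 * struct exists only for illustration. */
struct poll_state {
	uint16_t last_seg_n;    /* completed segments carried from the last poll */
	uint16_t inflight_pkts; /* packets still owned by the async engine */
};

/*
 * Retire as many whole packets as the completed-segment budget allows.
 * new_seg_cpl: segments the async engine reported completed this poll
 * seg_per_pkt: number of copy segments making up each in-flight packet,
 *              oldest first (assumed bookkeeping; the real code keeps this
 *              in the async_pending_info array)
 */
static uint16_t
retire_completed_pkts(struct poll_state *st, uint16_t new_seg_cpl,
		      const uint16_t *seg_per_pkt, uint16_t max_pkts)
{
	uint16_t budget = (uint16_t)(st->last_seg_n + new_seg_cpl);
	uint16_t done = 0;

	while (done < max_pkts && st->inflight_pkts > 0 &&
	       budget >= seg_per_pkt[done]) {
		budget -= seg_per_pkt[done]; /* every segment of this packet is done */
		done++;
		st->inflight_pkts--;
	}

	/* A partially completed packet is not reported; its finished segments
	 * stay in the budget and are picked up by the next poll. */
	st->last_seg_n = budget;

	return done;
}

In the patch itself the per-poll query is additionally capped at
ASYNC_MAX_POLL_SEG minus the carried-over count, so the running total of
completed segments held in async_last_seg_n never exceeds 255.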


Thread overview: 10+ messages
2020-07-15  7:46 [dpdk-dev] [PATCH v1] " patrick.fu
2020-07-15 11:15 ` [dpdk-dev] [PATCH v2] " patrick.fu
2020-07-17  3:21   ` Xia, Chenbo
2020-07-17 11:52     ` Ferruh Yigit
2020-07-20 14:58   ` Maxime Coquelin [this message]
2020-07-20 16:49     ` Ferruh Yigit
2020-07-21  5:52     ` Fu, Patrick
2020-07-21  5:47 ` [dpdk-dev] [PATCH v3] vhost: fix wrong async completion of multi-seg packets patrick.fu
2020-07-21  8:40   ` Maxime Coquelin
2020-07-21 14:57     ` Ferruh Yigit
