From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id B8888A0A02;
	Thu, 15 Apr 2021 04:02:57 +0200 (CEST)
Received: from [217.70.189.124] (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 3CE56161EEE;
	Thu, 15 Apr 2021 04:02:57 +0200 (CEST)
Received: from mga07.intel.com (mga07.intel.com [134.134.136.100])
 by mails.dpdk.org (Postfix) with ESMTP id 03DE3161EE9
 for <dev@dpdk.org>; Thu, 15 Apr 2021 04:02:54 +0200 (CEST)
IronPort-SDR: 6crPmQV3rrdhZv+GFoEWa8l4YARBYoS2qcZfaSOaMe6JDvMq696d7YuxrUMvUifY/sYgG3Gdhx
 rftaDs2eV3zg==
X-IronPort-AV: E=McAfee;i="6200,9189,9954"; a="258736870"
X-IronPort-AV: E=Sophos;i="5.82,223,1613462400"; d="scan'208";a="258736870"
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 14 Apr 2021 19:02:53 -0700
IronPort-SDR: G5BJhAcu13m+lwImbUHS1sT4+DKbm+9fiqeyhPqTc8l0wwG1s+t4+XOYmOz6aaFac7bxisVWT+
 q7kM621BWFwg==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.82,223,1613462400"; d="scan'208";a="452721494"
Received: from fmsmsx606.amr.corp.intel.com ([10.18.126.86])
 by fmsmga002.fm.intel.com with ESMTP; 14 Apr 2021 19:02:52 -0700
Received: from shsmsx606.ccr.corp.intel.com (10.109.6.216) by
 fmsmsx606.amr.corp.intel.com (10.18.126.86) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2106.2; Wed, 14 Apr 2021 19:02:51 -0700
Received: from shsmsx606.ccr.corp.intel.com (10.109.6.216) by
 SHSMSX606.ccr.corp.intel.com (10.109.6.216) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2106.2; Thu, 15 Apr 2021 10:02:49 +0800
Received: from shsmsx606.ccr.corp.intel.com ([10.109.6.216]) by
 SHSMSX606.ccr.corp.intel.com ([10.109.6.216]) with mapi id 15.01.2106.013;
 Thu, 15 Apr 2021 10:02:49 +0800
From: "Hu, Jiayu" <jiayu.hu@intel.com>
To: "Jiang, Cheng1" <cheng1.jiang@intel.com>, "maxime.coquelin@redhat.com"
 <maxime.coquelin@redhat.com>, "Xia, Chenbo" <chenbo.xia@intel.com>
CC: "dev@dpdk.org" <dev@dpdk.org>, "Yang, YvonneX" <yvonnex.yang@intel.com>,
 "Wang, Yinan" <yinan.wang@intel.com>, "Liu, Yong" <yong.liu@intel.com>
Thread-Topic: [PATCH v7 2/4] vhost: add support for packed ring in async vhost
Thread-Index: AQHXMPcwRVp2gR2nmUSLyBYlTWuWW6q0zOAQ
Date: Thu, 15 Apr 2021 02:02:48 +0000
Message-ID: <ca2a2909f03f441da2c45a57305c1e14@intel.com>
References: <20210317085426.10119-1-Cheng1.jiang@intel.com>
 <20210414061343.54919-1-Cheng1.jiang@intel.com>
 <20210414061343.54919-3-Cheng1.jiang@intel.com>
In-Reply-To: <20210414061343.54919-3-Cheng1.jiang@intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
dlp-reaction: no-action
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
x-originating-ip: [10.239.127.36]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v7 2/4] vhost: add support for packed ring in
 async vhost
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

Hi Cheng,

> -----Original Message-----
> From: Jiang, Cheng1 <cheng1.jiang@intel.com>
> Sent: Wednesday, April 14, 2021 2:14 PM
> To: maxime.coquelin@redhat.com; Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Hu, Jiayu <jiayu.hu@intel.com>; Yang, YvonneX
> <yvonnex.yang@intel.com>; Wang, Yinan <yinan.wang@intel.com>; Liu,
> Yong <yong.liu@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>
> Subject: [PATCH v7 2/4] vhost: add support for packed ring in async vhost
>
> For now async vhost data path only supports split ring. This patch
> enables packed ring in async vhost data path to make async vhost
> compatible with virtio 1.1 spec.
>
> Signed-off-by: Cheng Jiang <Cheng1.jiang@intel.com>
> ---
>  lib/librte_vhost/rte_vhost_async.h |   1 +
>  lib/librte_vhost/vhost.c           |  49 ++--
>  lib/librte_vhost/vhost.h           |  15 +-
>  lib/librte_vhost/virtio_net.c      | 432 +++++++++++++++++++++++++++--
>  4 files changed, 456 insertions(+), 41 deletions(-)
>
> diff --git a/lib/librte_vhost/rte_vhost_async.h
> b/lib/librte_vhost/rte_vhost_async.h
> index c855ff875..6faa31f5a 100644
> --- a/lib/librte_vhost/rte_vhost_async.h
> +++ b/lib/librte_vhost/rte_vhost_async.h
> @@ -89,6 +89,7 @@ struct rte_vhost_async_channel_ops {
>  struct async_inflight_info {
>  	struct rte_mbuf *mbuf;
>  	uint16_t descs; /* num of descs inflight */
> +	uint16_t nr_buffers; /* num of buffers inflight for packed ring */
>  };
>
>  /**
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index a70fe01d8..f509186c6 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -338,19 +338,22 @@ cleanup_device(struct virtio_net *dev, int destroy)
>  }
>
>  static void
> -vhost_free_async_mem(struct vhost_virtqueue *vq)
> +vhost_free_async_mem(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  {
> -	if (vq->async_pkts_info)
> -		rte_free(vq->async_pkts_info);
> -	if (vq->async_descs_split)
> +	rte_free(vq->async_pkts_info);
> +
> +	if (vq_is_packed(dev)) {
> +		rte_free(vq->async_buffers_packed);
> +		vq->async_buffers_packed = NULL;
> +	} else {
>  		rte_free(vq->async_descs_split);
> -	if (vq->it_pool)
> -		rte_free(vq->it_pool);
> -	if (vq->vec_pool)
> -		rte_free(vq->vec_pool);
> +		vq->async_descs_split = NULL;
> +	}
> +
> +	rte_free(vq->it_pool);
> +	rte_free(vq->vec_pool);
>
>  	vq->async_pkts_info = NULL;
> -	vq->async_descs_split = NULL;
>  	vq->it_pool = NULL;
>  	vq->vec_pool = NULL;
>  }
> @@ -360,10 +363,10 @@ free_vq(struct virtio_net *dev, struct
> vhost_virtqueue *vq)
>  {
>  	if (vq_is_packed(dev))
>  		rte_free(vq->shadow_used_packed);
> -	else {
> +	else
>  		rte_free(vq->shadow_used_split);
> -		vhost_free_async_mem(vq);
> -	}
> +
> +	vhost_free_async_mem(dev, vq);
>  	rte_free(vq->batch_copy_elems);
>  	if (vq->iotlb_pool)
>  		rte_mempool_free(vq->iotlb_pool);
> @@ -1626,10 +1629,9 @@ int rte_vhost_async_channel_register(int vid,
> uint16_t queue_id,
>  	if (unlikely(vq == NULL || !dev->async_copy))
>  		return -1;
>
> -	/* packed queue is not supported */
> -	if (unlikely(vq_is_packed(dev) || !f.async_inorder)) {
> +	if (unlikely(!f.async_inorder)) {
>  		VHOST_LOG_CONFIG(ERR,
> -			"async copy is not supported on packed queue or
> non-inorder mode "
> +			"async copy is not supported on non-inorder mode "
>  			"(vid %d, qid: %d)\n", vid, queue_id);
>  		return -1;
>  	}
> @@ -1667,12 +1669,19 @@ int rte_vhost_async_channel_register(int vid,
> uint16_t queue_id,
>  	vq->vec_pool = rte_malloc_socket(NULL,
>  			VHOST_MAX_ASYNC_VEC * sizeof(struct iovec),
>  			RTE_CACHE_LINE_SIZE, node);
> -	vq->async_descs_split = rte_malloc_socket(NULL,
> +	if (vq_is_packed(dev)) {
> +		vq->async_buffers_packed = rte_malloc_socket(NULL,
> +			vq->size * sizeof(struct vring_used_elem_packed),
> +			RTE_CACHE_LINE_SIZE, node);
> +	} else {
> +		vq->async_descs_split = rte_malloc_socket(NULL,
>  			vq->size * sizeof(struct vring_used_elem),
>  			RTE_CACHE_LINE_SIZE, node);
> -	if (!vq->async_descs_split || !vq->async_pkts_info ||
> -		!vq->it_pool || !vq->vec_pool) {
> -		vhost_free_async_mem(vq);
> +	}
> +
> +	if (!vq->async_buffers_packed || !vq->async_descs_split ||
async_buffers_packed and async_descs_split are members of the same union.
As is done in vhost_free_async_mem(), do you think it's better to check
each of them for NULL in its own if-else branch, depending on the ring type?
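For example, a rough sketch of what I mean (not tested; "alloc_fail" is just
a local name I made up for illustration):

	bool alloc_fail;

	if (vq_is_packed(dev))
		alloc_fail = !vq->async_buffers_packed;
	else
		alloc_fail = !vq->async_descs_split;

	if (alloc_fail || !vq->async_pkts_info || !vq->it_pool || !vq->vec_pool) {
		vhost_free_async_mem(dev, vq);
		/* same error log and return path as in the patch */
		...
	}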

> +		!vq->async_pkts_info || !vq->it_pool || !vq->vec_pool) {
> +		vhost_free_async_mem(dev, vq);
>  		VHOST_LOG_CONFIG(ERR,
>  				"async register failed: cannot allocate
> memory for vq data "
>  				"(vid %d, qid: %d)\n", vid, queue_id);
> @@ -1728,7 +1737,7 @@ int rte_vhost_async_channel_unregister(int vid,
> uint16_t queue_id)
>  		goto out;
>  	}
>
> -	vhost_free_async_mem(vq);
> +	vhost_free_async_mem(dev, vq);
>
>  	vq->async_ops.transfer_data = NULL;
>  	vq->async_ops.check_completed_copies = NULL;
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index f628714c2..673335217 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -201,9 +201,18 @@ struct vhost_virtqueue {
>  	uint16_t	async_pkts_idx;
>  	uint16_t	async_pkts_inflight_n;
>  	uint16_t	async_last_pkts_n;
> -	struct vring_used_elem  *async_descs_split;
> -	uint16_t async_desc_idx;
> -	uint16_t last_async_desc_idx;
> +	union {
> +		struct vring_used_elem  *async_descs_split;
> +		struct vring_used_elem_packed *async_buffers_packed;
> +	};
> +	union {
> +		uint16_t async_desc_idx;
> +		uint16_t async_packed_buffer_idx;
> +	};
> +	union {
> +		uint16_t last_async_desc_idx;
> +		uint16_t last_async_buffer_idx;
> +	};
>
>  	/* vq async features */
>  	bool		async_inorder;
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 438bdafd1..54e11e3a5 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -363,14 +363,14 @@
> vhost_shadow_dequeue_single_packed_inorder(struct vhost_virtqueue *vq,
>  }
>
>  static __rte_always_inline void
> -vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
> -				   struct vhost_virtqueue *vq,
> -				   uint32_t len[],
> -				   uint16_t id[],
> -				   uint16_t count[],
> +vhost_shadow_enqueue_packed(struct vhost_virtqueue *vq,
> +				   uint32_t *len,
> +				   uint16_t *id,
> +				   uint16_t *count,
>  				   uint16_t num_buffers)
>  {
>  	uint16_t i;
> +
>  	for (i = 0; i < num_buffers; i++) {
>  		/* enqueue shadow flush action aligned with batch num */
>  		if (!vq->shadow_used_idx)
> @@ -382,6 +382,17 @@ vhost_shadow_enqueue_single_packed(struct
> virtio_net *dev,
>  		vq->shadow_aligned_idx += count[i];
>  		vq->shadow_used_idx++;
>  	}
> +}
> +
> +static __rte_always_inline void
> +vhost_shadow_enqueue_single_packed(struct virtio_net *dev,
> +				   struct vhost_virtqueue *vq,
> +				   uint32_t *len,
> +				   uint16_t *id,
> +				   uint16_t *count,
> +				   uint16_t num_buffers)
> +{
> +	vhost_shadow_enqueue_packed(vq, len, id, count, num_buffers);
>
>  	if (vq->shadow_aligned_idx >= PACKED_BATCH_SIZE) {
>  		do_data_copy_enqueue(dev, vq);
> @@ -1474,6 +1485,23 @@ store_dma_desc_info_split(struct
> vring_used_elem *s_ring, struct vring_used_elem
>  	}
>  }
>
> +static __rte_always_inline void
> +store_dma_desc_info_packed(struct vring_used_elem_packed *s_ring,
> +		struct vring_used_elem_packed *d_ring,
> +		uint16_t ring_size, uint16_t s_idx, uint16_t d_idx, uint16_t
> count)
> +{
> +	uint16_t elem_size = sizeof(struct vring_used_elem_packed);
> +
> +	if (d_idx + count <= ring_size) {
> +		rte_memcpy(d_ring + d_idx, s_ring + s_idx, count *
> elem_size);
> +	} else {
> +		uint16_t size = ring_size - d_idx;
> +
> +		rte_memcpy(d_ring + d_idx, s_ring + s_idx, size * elem_size);
> +		rte_memcpy(d_ring, s_ring + s_idx + size, (count - size) *
> elem_size);
> +	}
> +}
> +
>  static __rte_noinline uint32_t
>  virtio_dev_rx_async_submit_split(struct virtio_net *dev,
>  	struct vhost_virtqueue *vq, uint16_t queue_id,
> @@ -1641,6 +1669,330 @@ virtio_dev_rx_async_submit_split(struct
> virtio_net *dev,
>  	return pkt_idx;
>  }
>
> +static __rte_always_inline void
> +vhost_update_used_packed(struct vhost_virtqueue *vq,
> +			struct vring_used_elem_packed *shadow_ring,
> +			uint16_t count)
> +{
> +	int i;
> +	uint16_t used_idx = vq->last_used_idx;
> +	uint16_t head_idx = vq->last_used_idx;
> +	uint16_t head_flags = 0;
> +
> +	if (count == 0)
> +		return;
> +
> +	/* Split loop in two to save memory barriers */
> +	for (i = 0; i < count; i++) {
> +		vq->desc_packed[used_idx].id = shadow_ring[i].id;
> +		vq->desc_packed[used_idx].len = shadow_ring[i].len;
> +
> +		used_idx += shadow_ring[i].count;
> +		if (used_idx >= vq->size)
> +			used_idx -= vq->size;
> +	}
> +
> +	/* The ordering for storing desc flags needs to be enforced. */
> +	rte_atomic_thread_fence(__ATOMIC_RELEASE);
> +
> +	for (i = 0; i < count; i++) {
> +		uint16_t flags;
> +
> +		if (vq->shadow_used_packed[i].len)
> +			flags = VRING_DESC_F_WRITE;
> +		else
> +			flags = 0;
> +
> +		if (vq->used_wrap_counter) {
> +			flags |= VRING_DESC_F_USED;
> +			flags |= VRING_DESC_F_AVAIL;
> +		} else {
> +			flags &= ~VRING_DESC_F_USED;
> +			flags &= ~VRING_DESC_F_AVAIL;
> +		}
> +
> +		if (i > 0) {
> +			vq->desc_packed[vq->last_used_idx].flags = flags;
> +
No need for the blank line above.

> +		} else {
> +			head_idx = vq->last_used_idx;
> +			head_flags = flags;
> +		}
> +
> +		vq_inc_last_used_packed(vq, shadow_ring[i].count);
> +	}
> +
> +	vq->desc_packed[head_idx].flags = head_flags;
> +}
> +
> +static __rte_always_inline int
> +vhost_enqueue_async_single_packed(struct virtio_net *dev,
> +			    struct vhost_virtqueue *vq,
> +			    struct rte_mbuf *pkt,
> +			    struct buf_vector *buf_vec,
> +			    uint16_t *nr_descs,
> +			    uint16_t *nr_buffers,
> +			    struct vring_packed_desc *async_descs,
> +			    struct iovec *src_iovec, struct iovec *dst_iovec,
> +			    struct rte_vhost_iov_iter *src_it,
> +			    struct rte_vhost_iov_iter *dst_it)
> +{
> +	uint16_t nr_vec = 0;
> +	uint16_t avail_idx = vq->last_avail_idx;
> +	uint16_t max_tries, tries = 0;
> +	uint16_t buf_id = 0;
> +	uint32_t len = 0;
> +	uint16_t desc_count = 0;
> +	uint32_t size = pkt->pkt_len + sizeof(struct
> virtio_net_hdr_mrg_rxbuf);
> +	uint32_t buffer_len[vq->size];
> +	uint16_t buffer_buf_id[vq->size];
> +	uint16_t buffer_desc_count[vq->size];
> +	*nr_buffers = 0;
> +
> +	if (rxvq_is_mergeable(dev))
> +		max_tries = vq->size - 1;
> +	else
> +		max_tries = 1;
> +
> +	while (size > 0) {
> +		/*
> +		 * if we tried all available ring items, and still
> +		 * can't get enough buf, it means something abnormal
> +		 * happened.
> +		 */
> +		if (unlikely(++tries > max_tries))
> +			return -1;
> +
> +		if (unlikely(fill_vec_buf_packed(dev, vq, avail_idx,
> &desc_count, buf_vec, &nr_vec,
> +						&buf_id, &len,
> VHOST_ACCESS_RW) < 0))
> +			return -1;
> +
> +		len = RTE_MIN(len, size);
> +		size -= len;
> +
> +		buffer_len[*nr_buffers] = len;
> +		buffer_buf_id[*nr_buffers] = buf_id;
> +		buffer_desc_count[*nr_buffers] = desc_count;
> +		*nr_buffers += 1;
> +
> +		*nr_descs += desc_count;
> +		avail_idx += desc_count;
> +		if (avail_idx >= vq->size)
> +			avail_idx -= vq->size;
> +	}
> +
> +	if (async_mbuf_to_desc(dev, vq, pkt, buf_vec, nr_vec, *nr_buffers,
> src_iovec, dst_iovec,
> +			src_it, dst_it) < 0)
> +		return -1;
> +	/* store descriptors for DMA */
> +	if (avail_idx >= *nr_descs) {
> +		rte_memcpy(async_descs, &vq->desc_packed[vq-
> >last_avail_idx],
> +			*nr_descs * sizeof(struct vring_packed_desc));
> +	} else {
> +		uint16_t nr_copy = vq->size - vq->last_avail_idx;
> +		rte_memcpy(async_descs, &vq->desc_packed[vq-
> >last_avail_idx],
> +			nr_copy * sizeof(struct vring_packed_desc));
> +		rte_memcpy(async_descs + nr_copy, vq->desc_packed,
> +			(*nr_descs - nr_copy) * sizeof(struct
> vring_packed_desc));
> +	}
> +
> +	vhost_shadow_enqueue_packed(vq, buffer_len, buffer_buf_id,
> buffer_desc_count, *nr_buffers);
> +
> +	return 0;
> +}
> +
> +static __rte_always_inline int16_t
> +virtio_dev_rx_async_single_packed(struct virtio_net *dev, struct
> vhost_virtqueue *vq,
> +			    struct rte_mbuf *pkt, uint16_t *nr_descs, uint16_t
> *nr_buffers,
> +			    struct vring_packed_desc *async_descs,
> +			    struct iovec *src_iovec, struct iovec *dst_iovec,
> +			    struct rte_vhost_iov_iter *src_it, struct
> rte_vhost_iov_iter *dst_it)
> +{
> +	struct buf_vector buf_vec[BUF_VECTOR_MAX];
> +	*nr_descs = 0;
> +	*nr_buffers = 0;
> +
> +	if (unlikely(vhost_enqueue_async_single_packed(dev, vq, pkt,
> buf_vec, nr_descs, nr_buffers,
> +						 async_descs, src_iovec,
> dst_iovec,
> +						 src_it, dst_it) < 0)) {
> +		VHOST_LOG_DATA(DEBUG, "(%d) failed to get enough desc
> from vring\n", dev->vid);
> +		return -1;
> +	}
> +
> +	VHOST_LOG_DATA(DEBUG, "(%d) current index %d | end
> index %d\n",
> +			dev->vid, vq->last_avail_idx, vq->last_avail_idx +
> *nr_descs);
> +
> +	return 0;
> +}
> +
> +static __rte_always_inline void
> +dma_error_handler_packed(struct vhost_virtqueue *vq, struct
> vring_packed_desc *async_descs,
> +			uint16_t async_descs_idx, uint16_t slot_idx, uint32_t
> nr_err,
> +			uint32_t *pkt_idx, uint32_t *num_async_pkts,
> uint32_t *num_done_pkts)
> +{
> +	uint16_t descs_err = 0;
> +	uint16_t buffers_err = 0;
> +	struct async_inflight_info *pkts_info = vq->async_pkts_info;
> +
> +	*num_async_pkts -= nr_err;
> +	*pkt_idx -= nr_err;
> +	/* calculate the sum of buffers and descs of DMA-error packets. */
> +	while (nr_err-- > 0) {
> +		descs_err += pkts_info[slot_idx % vq->size].descs;
I notice several places use "%" to wrap the index around, but the existing
code uses "& (vq->size - 1)" instead. I think it's better to keep this
consistent.
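E.g. for the line above, something like the below (just a sketch; it assumes
vq->size is a power of two, which is what the "& (vq->size - 1)" pattern
relies on):

	/* wrap the inflight slot index with a mask instead of a modulo */
	descs_err += pkts_info[slot_idx & (vq->size - 1)].descs;

The mask form avoids a per-packet division and matches the split-ring path.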

Thanks,
Jiayu