DPDK patches and discussions
From: patrick.fu@intel.com
To: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: Patrick Fu <patrick.fu@intel.com>
Subject: [dpdk-dev] [PATCH v3] vhost: fix wrong async completion of multi-seg packets
Date: Tue, 21 Jul 2020 13:47:20 +0800	[thread overview]
Message-ID: <20200721054720.3417804-1-patrick.fu@intel.com> (raw)
In-Reply-To: <20200715074650.2375332-1-patrick.fu@intel.com>

From: Patrick Fu <patrick.fu@intel.com>

In async enqueue copy, a packet could be split into multiple copy
segments. When polling the copy completion status, the current async
data path assumes that the async device callbacks are aware of packet
boundaries and return completed segments only when all segments
belonging to the same packet are done. Such an assumption does not hold
for common async devices and may degrade copy performance if the async
callbacks have to implement it in software.
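
To illustrate the point, here is a minimal standalone sketch (hypothetical
names and sizes, not the actual vhost async callback API): under the old
contract a copy device would have to keep per-packet bookkeeping of its
own just to hold back segments of partially copied packets.

	/* Illustrative only: names are hypothetical, not vhost API. */
	#include <stdint.h>

	#define MAX_PKTS 256

	struct boundary_tracker {
		uint16_t segs_total[MAX_PKTS]; /* segments submitted per packet */
		uint16_t segs_done[MAX_PKTS];  /* segments finished so far */
	};

	/* Report completed segments, stopping at the first unfinished packet. */
	static uint16_t
	report_whole_packets_only(struct boundary_tracker *t,
				  uint16_t first, uint16_t n_pkts)
	{
		uint16_t n_segs = 0;
		uint16_t i;

		for (i = 0; i < n_pkts; i++) {
			uint16_t p = (uint16_t)(first + i) % MAX_PKTS;

			if (t->segs_done[p] < t->segs_total[p])
				break;	/* packet not fully copied yet */
			n_segs += t->segs_total[p];
		}
		return n_segs;
	}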

This patch adds tracking of completed copy segments on the vhost side.
If the async copy device reports partial completion of a packet, only
the vhost internal record is updated and the vring status is left
unchanged until the remaining segments of the packet are also finished.
The async copy device no longer needs to care about packet boundaries.
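
The idea of the accounting, shown as a simplified standalone sketch
(hypothetical names and ring size; the real logic lives in
rte_vhost_poll_enqueue_completed() in the diff below): the device may now
report a raw count of finished segments, and vhost only marks whole
packets complete, carrying any remainder over to the next poll.

	#include <stdint.h>

	#define PENDING_RING_SZ 256	/* illustrative, power of two */

	/* Per-slot record: how many copy segments the packet was split into. */
	static uint16_t pending_segs[PENDING_RING_SZ];
	static uint16_t leftover_segs;	/* completed segs carried to next poll */

	/*
	 * Consume newly completed segments (as reported by the copy device)
	 * plus the leftover from the previous poll, and return how many whole
	 * packets can be completed in the vring.  A partially finished packet
	 * only updates the pending record; the vring is left untouched.
	 */
	static uint16_t
	complete_whole_packets(uint16_t start, uint16_t n_inflight,
			       uint16_t n_segs_cpl)
	{
		uint16_t n_pkts_done = 0;

		n_segs_cpl += leftover_segs;

		while (n_inflight--) {
			uint16_t slot = (start + n_pkts_done) &
					(PENDING_RING_SZ - 1);

			if (n_segs_cpl < pending_segs[slot]) {
				/* partial: remember the remainder only */
				pending_segs[slot] -= n_segs_cpl;
				n_segs_cpl = 0;
				break;
			}
			n_segs_cpl -= pending_segs[slot];
			n_pkts_done++;
		}

		/* completed segments of packets not scanned in this call */
		leftover_segs = n_segs_cpl;
		return n_pkts_done;
	}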

Fixes: cd6760da1076 ("vhost: introduce async enqueue for split ring")

Signed-off-by: Patrick Fu <patrick.fu@intel.com>
---
v2:
 - fix an issue that could stall async polling when the packet buffer is full
v3:
 - revise commit message and title
 - rename a local variable to better reflect its usage

 lib/librte_vhost/vhost.h      |  3 +++
 lib/librte_vhost/virtio_net.c | 27 +++++++++++++++++----------
 2 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 8c01cee42..0f7212f88 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -46,6 +46,8 @@
 
 #define MAX_PKT_BURST 32
 
+#define ASYNC_MAX_POLL_SEG 255
+
 #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST * 2)
 #define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 2)
 
@@ -225,6 +227,7 @@ struct vhost_virtqueue {
 	uint64_t	*async_pending_info;
 	uint16_t	async_pkts_idx;
 	uint16_t	async_pkts_inflight_n;
+	uint16_t	async_last_seg_n;
 
 	/* vq async features */
 	bool		async_inorder;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 1d0be3dd4..635113cb0 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1631,8 +1631,9 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 {
 	struct virtio_net *dev = get_device(vid);
 	struct vhost_virtqueue *vq;
-	uint16_t n_pkts_cpl, n_pkts_put = 0, n_descs = 0;
+	uint16_t n_segs_cpl, n_pkts_put = 0, n_descs = 0;
 	uint16_t start_idx, pkts_idx, vq_size;
+	uint16_t n_inflight;
 	uint64_t *async_pending_info;
 
 	VHOST_LOG_DATA(DEBUG, "(%d) %s\n", dev->vid, __func__);
@@ -1646,46 +1647,52 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 
 	rte_spinlock_lock(&vq->access_lock);
 
+	n_inflight = vq->async_pkts_inflight_n;
 	pkts_idx = vq->async_pkts_idx;
 	async_pending_info = vq->async_pending_info;
 	vq_size = vq->size;
 	start_idx = virtio_dev_rx_async_get_info_idx(pkts_idx,
 		vq_size, vq->async_pkts_inflight_n);
 
-	n_pkts_cpl =
-		vq->async_ops.check_completed_copies(vid, queue_id, 0, count);
+	n_segs_cpl = vq->async_ops.check_completed_copies(vid, queue_id,
+		0, ASYNC_MAX_POLL_SEG - vq->async_last_seg_n) +
+		vq->async_last_seg_n;
 
 	rte_smp_wmb();
 
-	while (likely(((start_idx + n_pkts_put) & (vq_size - 1)) != pkts_idx)) {
+	while (likely((n_pkts_put < count) && n_inflight)) {
 		uint64_t info = async_pending_info[
 			(start_idx + n_pkts_put) & (vq_size - 1)];
 		uint64_t n_segs;
 		n_pkts_put++;
+		n_inflight--;
 		n_descs += info & ASYNC_PENDING_INFO_N_MSK;
 		n_segs = info >> ASYNC_PENDING_INFO_N_SFT;
 
 		if (n_segs) {
-			if (!n_pkts_cpl || n_pkts_cpl < n_segs) {
+			if (unlikely(n_segs_cpl < n_segs)) {
 				n_pkts_put--;
+				n_inflight++;
 				n_descs -= info & ASYNC_PENDING_INFO_N_MSK;
-				if (n_pkts_cpl) {
+				if (n_segs_cpl) {
 					async_pending_info[
 						(start_idx + n_pkts_put) &
 						(vq_size - 1)] =
-					((n_segs - n_pkts_cpl) <<
+					((n_segs - n_segs_cpl) <<
 					 ASYNC_PENDING_INFO_N_SFT) |
 					(info & ASYNC_PENDING_INFO_N_MSK);
-					n_pkts_cpl = 0;
+					n_segs_cpl = 0;
 				}
 				break;
 			}
-			n_pkts_cpl -= n_segs;
+			n_segs_cpl -= n_segs;
 		}
 	}
 
+	vq->async_last_seg_n = n_segs_cpl;
+
 	if (n_pkts_put) {
-		vq->async_pkts_inflight_n -= n_pkts_put;
+		vq->async_pkts_inflight_n = n_inflight;
 		__atomic_add_fetch(&vq->used->idx, n_descs, __ATOMIC_RELEASE);
 
 		vhost_vring_call_split(dev, vq);
-- 
2.18.4



Thread overview: 10+ messages
2020-07-15  7:46 [dpdk-dev] [PATCH v1] vhost: support async copy free segmentations patrick.fu
2020-07-15 11:15 ` [dpdk-dev] [PATCH v2] " patrick.fu
2020-07-17  3:21   ` Xia, Chenbo
2020-07-17 11:52     ` Ferruh Yigit
2020-07-20 14:58   ` Maxime Coquelin
2020-07-20 16:49     ` Ferruh Yigit
2020-07-21  5:52     ` Fu, Patrick
2020-07-21  5:47 ` patrick.fu [this message]
2020-07-21  8:40   ` [dpdk-dev] [PATCH v3] vhost: fix wrong async completion of multi-seg packets Maxime Coquelin
2020-07-21 14:57     ` Ferruh Yigit
