From mboxrd@z Thu Jan  1 00:00:00 1970
From: patrick.fu@intel.com
To: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: Patrick Fu <patrick.fu@intel.com>
Date: Wed, 15 Jul 2020 15:46:50 +0800
Message-Id: <20200715074650.2375332-1-patrick.fu@intel.com>
X-Mailer: git-send-email 2.18.4
Subject: [dpdk-dev] [PATCH v1] vhost: support async copy free segmentations

From: Patrick Fu <patrick.fu@intel.com>

Vhost async enqueue assumes that all async copies break at packet
boundaries, i.e. if a packet is split into multiple copy segments, the
async engine always reports copy completion only when the entire packet
is finished. This patch removes that assumption: the engine may report
completions at copy segment granularity, and the segment count of a
partially completed packet is carried over to the next completion poll.
Fixes: cd6760da1076 ("vhost: introduce async enqueue for split ring")

Signed-off-by: Patrick Fu <patrick.fu@intel.com>
---
 lib/librte_vhost/vhost.h      |  3 +++
 lib/librte_vhost/virtio_net.c | 12 ++++++++----
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 8c01cee42..0f7212f88 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -46,6 +46,8 @@
 
 #define MAX_PKT_BURST 32
 
+#define ASYNC_MAX_POLL_SEG 255
+
 #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST * 2)
 #define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 2)
 
@@ -225,6 +227,7 @@ struct vhost_virtqueue {
 	uint64_t	*async_pending_info;
 	uint16_t	async_pkts_idx;
 	uint16_t	async_pkts_inflight_n;
+	uint16_t	async_last_seg_n;
 
 	/* vq async features */
 	bool		async_inorder;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 1d0be3dd4..c6fa33f37 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1652,12 +1652,14 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 	start_idx = virtio_dev_rx_async_get_info_idx(pkts_idx,
 		vq_size, vq->async_pkts_inflight_n);
 
-	n_pkts_cpl =
-		vq->async_ops.check_completed_copies(vid, queue_id, 0, count);
+	n_pkts_cpl = vq->async_ops.check_completed_copies(vid, queue_id,
+		0, ASYNC_MAX_POLL_SEG - vq->async_last_seg_n) +
+		vq->async_last_seg_n;
 
 	rte_smp_wmb();
 
-	while (likely(((start_idx + n_pkts_put) & (vq_size - 1)) != pkts_idx)) {
+	while (likely((n_pkts_put < count) &&
+		(((start_idx + n_pkts_put) & (vq_size - 1)) != pkts_idx))) {
 		uint64_t info = async_pending_info[
 			(start_idx + n_pkts_put) & (vq_size - 1)];
 		uint64_t n_segs;
@@ -1666,7 +1668,7 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 		n_segs = info >> ASYNC_PENDING_INFO_N_SFT;
 
 		if (n_segs) {
-			if (!n_pkts_cpl || n_pkts_cpl < n_segs) {
+			if (unlikely(n_pkts_cpl < n_segs)) {
 				n_pkts_put--;
 				n_descs -= info & ASYNC_PENDING_INFO_N_MSK;
 				if (n_pkts_cpl) {
@@ -1684,6 +1686,8 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 		}
 	}
 
+	vq->async_last_seg_n = n_pkts_cpl;
+
 	if (n_pkts_put) {
 		vq->async_pkts_inflight_n -= n_pkts_put;
 		__atomic_add_fetch(&vq->used->idx, n_descs, __ATOMIC_RELEASE);
-- 
2.18.4
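
For readers following along, below is a minimal standalone sketch of the
carry-over bookkeeping this patch introduces. It is not DPDK API: the mock
engine, the packet table, and every name other than async_last_seg_n and
ASYNC_MAX_POLL_SEG are hypothetical stand-ins. It only models how segment
completions reported by one poll spill over into the next, so a packet is
retired exactly when all of its copy segments have completed.

/*
 * Sketch of segment-granular completion polling with carry-over,
 * under the assumptions stated above.
 */
#include <stdint.h>
#include <stdio.h>

#define ASYNC_MAX_POLL_SEG 255	/* mirrors the macro added in vhost.h */

/* Segment count of each in-flight packet, in submission order. */
static const uint16_t pending_segs[] = { 2, 5, 1 };

/* Hypothetical engine: finishes up to 3 segments per poll, ignoring
 * packet boundaries, never exceeding the caller's max_segs budget. */
static uint16_t
mock_check_completed_copies(uint16_t max_segs)
{
	static uint16_t segs_left = 2 + 5 + 1;
	uint16_t done = segs_left < 3 ? segs_left : 3;

	if (done > max_segs)
		done = max_segs;
	segs_left -= done;
	return done;
}

int
main(void)
{
	uint16_t async_last_seg_n = 0;	/* carry-over, as in the patch */
	uint16_t pkts_inflight = 3;
	unsigned int pkt = 0;

	while (pkts_inflight > 0) {
		/* Poll with the remaining budget, then add back the
		 * segments left over from the previous poll. */
		uint16_t n_segs_cpl = mock_check_completed_copies(
			ASYNC_MAX_POLL_SEG - async_last_seg_n) +
			async_last_seg_n;

		/* Retire only packets whose segments all completed. */
		while (pkts_inflight > 0 &&
				n_segs_cpl >= pending_segs[pkt]) {
			n_segs_cpl -= pending_segs[pkt];
			printf("packet %u complete\n", pkt);
			pkt++;
			pkts_inflight--;
		}

		/* Whatever remains belongs to a partially copied packet;
		 * carry it into the next poll. */
		async_last_seg_n = n_segs_cpl;
	}
	return 0;
}

With the engine completing 3 segments per poll, packet 0 (2 segments)
retires on the first poll with 1 segment carried over, and packets 1 and
2 retire on the third poll, which matches the behavior the patched
rte_vhost_poll_enqueue_completed() implements with async_last_seg_n.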