From mboxrd@z Thu Jan  1 00:00:00 1970
From: Patrick Fu <patrick.fu@intel.com>
To: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: zhihong.wang@intel.com, cheng1.jiang@intel.com, patrick.fu@intel.com
Date: Tue, 29 Sep 2020 17:29:54 +0800
Message-Id: <20200929092955.2848419-4-patrick.fu@intel.com>
X-Mailer: git-send-email 2.18.4
In-Reply-To: <20200929092955.2848419-1-patrick.fu@intel.com>
References: <20200911015316.1903181-1-patrick.fu@intel.com> <20200929092955.2848419-1-patrick.fu@intel.com>
Subject: [dpdk-dev] [PATCH v3 3/4] vhost: fix async vector buffer overrun
List-Id: DPDK patches and discussions
Add a check on async vector buffer usage to prevent buffer overrun. If
the unused vector buffer is not sufficient to prepare the next packet's
iov creation, an async transfer is triggered immediately to free the
vector buffer.

Fixes: 78639d54563a ("vhost: introduce async enqueue registration API")

Signed-off-by: Patrick Fu <patrick.fu@intel.com>
---
 lib/librte_vhost/vhost.h      |  2 +-
 lib/librte_vhost/virtio_net.c | 13 ++++++++++++-
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index f0ee00c73..34197ce99 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -47,7 +47,7 @@
 #define MAX_PKT_BURST 32
 
 #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST * 2)
-#define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 2)
+#define VHOST_MAX_ASYNC_VEC (BUF_VECTOR_MAX * 4)
 
 #define PACKED_DESC_ENQUEUE_USED_FLAG(w) \
 	((w) ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED | VRING_DESC_F_WRITE) : \
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 68ead9a71..60cd5e2b1 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1492,6 +1492,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 	struct rte_vhost_iov_iter *dst_it = it_pool + 1;
 	uint16_t n_free_slot, slot_idx;
 	uint16_t pkt_err = 0;
+	uint16_t segs_await = 0;
 	struct async_inflight_info *pkts_info = vq->async_pkts_info;
 	int n_pkts = 0;
 
@@ -1540,6 +1541,7 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 			dst_iovec += dst_it->nr_segs;
 			src_it += 2;
 			dst_it += 2;
+			segs_await += src_it->nr_segs;
 		} else {
 			pkts_info[slot_idx].info = num_buffers;
 			vq->async_pkts_inflight_n++;
@@ -1547,14 +1549,23 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
 		vq->last_avail_idx += num_buffers;
 
+		/*
+		 * conditions to trigger async device transfer:
+		 * - buffered packet number reaches transfer threshold
+		 * - this is the last packet in the burst enqueue
+		 * - unused async iov number is less than max vhost vector
+		 */
 		if (pkt_burst_idx >= VHOST_ASYNC_BATCH_THRESHOLD ||
-			(pkt_idx == count - 1 && pkt_burst_idx)) {
+			(pkt_idx == count - 1 && pkt_burst_idx) ||
+			(VHOST_MAX_ASYNC_VEC / 2 - segs_await <
+			BUF_VECTOR_MAX)) {
 			n_pkts = vq->async_ops.transfer_data(dev->vid,
 					queue_id, tdes, 0, pkt_burst_idx);
 
 			src_iovec = vec_pool;
 			dst_iovec = vec_pool + (VHOST_MAX_ASYNC_VEC >> 1);
 			src_it = it_pool;
 			dst_it = it_pool + 1;
+			segs_await = 0;
 			vq->async_pkts_inflight_n += n_pkts;
 
 			if (unlikely(n_pkts < (int)pkt_burst_idx)) {
-- 
2.18.4